Chapter 3
Signal Processing

3.1. Signal and System Classifications

In general, electrical signals can represent either current or voltage, and may be classified into two main categories: energy signals and power signals. Energy signals can be deterministic or random, while power signals can be periodic or random. A signal is said to be random if it is a function of a random parameter (such as random phase or random amplitude). Additionally, signals may be divided into low pass and band pass signals. Signals that contain very low frequencies (close to DC) are called low pass signals; otherwise, they are referred to as band pass signals. Through modulation, low pass signals can be mapped into band pass signals.

The average power $P$ for the current or voltage signal $x(t)$ over the interval $(t_1, t_2)$ across a $1\,\Omega$ resistor is

$$P = \frac{1}{t_2 - t_1} \int_{t_1}^{t_2} |x(t)|^2 \, dt \tag{3.1}$$

The signal $x(t)$ is said to be a power signal over a very large interval $T = t_2 - t_1$ if and only if it has finite power; it must satisfy

$$0 < \lim_{T \to \infty} \frac{1}{T} \int_{-T/2}^{T/2} |x(t)|^2 \, dt < \infty \tag{3.2}$$

Using Parseval's theorem, the energy $E$ dissipated by the current or voltage signal $x(t)$ across a $1\,\Omega$ resistor, over the interval $(t_1, t_2)$, is

$$E = \int_{t_1}^{t_2} |x(t)|^2 \, dt \tag{3.3}$$

The signal $x(t)$ is said to be an energy signal if and only if it has finite energy,

$$E = \int_{-\infty}^{\infty} |x(t)|^2 \, dt < \infty \tag{3.4}$$

A signal $x(t)$ is said to be periodic with period $T$ if and only if

$$x(t) = x(t + nT) \quad \text{for all } t \tag{3.5}$$

where $n$ is an integer.

Example 3.1: Classify each of the following signals as an energy signal, as a power signal, or as neither. All signals are defined over the interval $(-\infty < t < \infty)$: $x_1(t) = \cos t + \cos 2t$, $x_2(t) = \exp(-\alpha^2 t^2)$.

Solution:

$$P_{x_1} = \frac{1}{T} \int_{T} (\cos t + \cos 2t)^2 \, dt = 1 \ \Rightarrow \ \text{power signal}$$

Note that since the cosine function is periodic, the limit is not necessary.

$$E_{x_2} = \int_{-\infty}^{\infty} \left( e^{-\alpha^2 t^2} \right)^2 dt = 2 \int_{0}^{\infty} e^{-2\alpha^2 t^2} \, dt = \frac{1}{\alpha} \sqrt{\frac{\pi}{2}} \ \Rightarrow \ \text{energy signal}$$

Electrical systems can be linear or nonlinear. Furthermore, linear systems may be divided into continuous or discrete. A system is linear if, when the input signal $x_1(t)$ produces $y_1(t)$ and $x_2(t)$ produces $y_2(t)$, then for arbitrary constants $a_1$ and $a_2$ the input signal $a_1 x_1(t) + a_2 x_2(t)$ produces the output $a_1 y_1(t) + a_2 y_2(t)$. A linear system is said to be shift invariant (or time invariant) if a time shift at its input produces the same shift at its output. More precisely, if the input signal $x(t)$ produces $y(t)$, then the delayed signal $x(t - t_0)$ produces the output $y(t - t_0)$. The impulse response of a Linear Time Invariant (LTI) system, $h(t)$, is defined to be the system's output when the input is an impulse (delta function).
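
The classification in Example 3.1 can be checked numerically. The following minimal Python/NumPy sketch (the value of $\alpha$ is an arbitrary illustrative choice, not from the text) approximates the energy integral of $x_2(t)$ on a truncated grid and compares it with the closed form $\frac{1}{\alpha}\sqrt{\pi/2}$ derived above:

```python
import numpy as np

alpha = 2.0                       # arbitrary test value
dt = 1e-4
t = np.arange(-10.0, 10.0, dt)    # wide enough that the Gaussian tails vanish
x2 = np.exp(-alpha**2 * t**2)

E_numeric = np.sum(np.abs(x2)**2) * dt   # Riemann sum for Eq. (3.4)
E_closed = np.sqrt(np.pi / 2) / alpha    # value derived in Example 3.1

print(E_numeric, E_closed)  # both ~0.62666: finite, so x2 is an energy signal
```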

3.2. The Fourier Transform

The Fourier Transform (FT) of the signal $x(t)$ is

$$F\{x(t)\} = X(\omega) = \int_{-\infty}^{\infty} x(t)\, e^{-j\omega t} \, dt \tag{3.6}$$

or

$$F\{x(t)\} = X(f) = \int_{-\infty}^{\infty} x(t)\, e^{-j2\pi f t} \, dt \tag{3.7}$$

and the Inverse Fourier Transform (IFT) is

$$x(t) = F^{-1}\{X(\omega)\} = \frac{1}{2\pi} \int_{-\infty}^{\infty} X(\omega)\, e^{j\omega t} \, d\omega \tag{3.8}$$

or

$$x(t) = F^{-1}\{X(f)\} = \int_{-\infty}^{\infty} X(f)\, e^{j2\pi f t} \, df \tag{3.9}$$

where, in general, $t$ represents time, while $\omega = 2\pi f$ and $f$ represent frequency in radians per second and Hertz, respectively. In this book we will use both notations for the transform, as appropriate (i.e., $X(\omega)$ and $X(f)$). A detailed table of FT pairs is listed in Appendix C. The FT properties are (the proofs are left as an exercise):

1. Linearity:

$$F\{a_1 x_1(t) + a_2 x_2(t)\} = a_1 X_1(\omega) + a_2 X_2(\omega) \tag{3.10}$$

2. Symmetry: If $F\{x(t)\} = X(\omega)$, then

$$2\pi x(-\omega) = \int_{-\infty}^{\infty} X(t)\, e^{-j\omega t} \, dt \tag{3.11}$$

3. Shifting: For any real time $t_0$,

$$F\{x(t \pm t_0)\} = e^{\pm j\omega t_0} X(\omega) \tag{3.12}$$

4. Scaling: If $F\{x(t)\} = X(\omega)$, then

$$F\{x(at)\} = \frac{1}{|a|}\, X\!\left(\frac{\omega}{a}\right) \tag{3.13}$$

5. Central Ordinate:

$$X(0) = \int_{-\infty}^{\infty} x(t) \, dt \tag{3.14}$$

$$x(0) = \frac{1}{2\pi} \int_{-\infty}^{\infty} X(\omega) \, d\omega \tag{3.15}$$

6. Frequency Shift: If $F\{x(t)\} = X(\omega)$, then

$$F\{e^{\pm j\omega_0 t}\, x(t)\} = X(\omega \mp \omega_0) \tag{3.16}$$

7. Modulation: If $F\{x(t)\} = X(\omega)$, then

$$F\{x(t)\cos(\omega_0 t)\} = \frac{1}{2}\left[ X(\omega + \omega_0) + X(\omega - \omega_0) \right] \tag{3.17}$$

$$F\{x(t)\sin(\omega_0 t)\} = \frac{1}{2j}\left[ X(\omega - \omega_0) - X(\omega + \omega_0) \right] \tag{3.18}$$

8. Derivatives:

$$F\left\{ \frac{d^n}{dt^n}\, x(t) \right\} = (j\omega)^n X(\omega) \tag{3.19}$$

9. Time Convolution: If $x(t)$ and $h(t)$ have Fourier transforms $X(\omega)$ and $H(\omega)$, respectively, then

$$F\left\{ \int_{-\infty}^{\infty} x(\tau)\, h(t - \tau) \, d\tau \right\} = X(\omega) H(\omega) \tag{3.20}$$

10. Frequency Convolution:

$$F\{x(t)\, h(t)\} = \frac{1}{2\pi} \int_{-\infty}^{\infty} X(\tau)\, H(\omega - \tau) \, d\tau \tag{3.21}$$

11. Autocorrelation:

$$F\left\{ \int_{-\infty}^{\infty} x(\tau)\, x^{*}(\tau - t) \, d\tau \right\} = X(\omega) X^{*}(\omega) = |X(\omega)|^2 \tag{3.22}$$

12. Parseval's Theorem: The energy associated with the signal $x(t)$ is

$$E = \int_{-\infty}^{\infty} |x(t)|^2 \, dt = \frac{1}{2\pi} \int_{-\infty}^{\infty} |X(\omega)|^2 \, d\omega \tag{3.23}$$

13. Moments: The $n$th moment is

$$m_n = \int_{0}^{\infty} t^n x(t) \, dt = \left. j^n\, \frac{d^n X(\omega)}{d\omega^n} \right|_{\omega = 0} \tag{3.24}$$
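
Parseval's theorem (property 12) is easy to verify numerically. The sketch below is a rough discrete approximation using NumPy's FFT and a hypothetical Gaussian test signal; it works in the $X(f)$ notation of Eqs. (3.7) and (3.9), in which the $1/2\pi$ factor disappears:

```python
import numpy as np

dt = 1e-3
t = np.arange(-5.0, 5.0, dt)
x = np.exp(-t**2)                    # hypothetical finite-energy test signal

# Discrete approximation of X(f) = integral of x(t) exp(-j 2 pi f t) dt
X = np.fft.fft(x) * dt
df = 1.0 / (len(x) * dt)

E_time = np.sum(np.abs(x)**2) * dt   # energy computed in the time domain
E_freq = np.sum(np.abs(X)**2) * df   # energy computed as integral of |X(f)|^2

print(E_time, E_freq)                # both ~1.2533 (= sqrt(pi/2))
```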

3.3. The Fourier Series

A set of functions $S = \{\varphi_n(t);\ n = 1, \ldots, N\}$ is said to be orthogonal over the interval $(t_1, t_2)$ if and only if

$$\int_{t_1}^{t_2} \varphi_i(t)\, \varphi_j^{*}(t)\,dt = \int_{t_1}^{t_2} \varphi_i^{*}(t)\, \varphi_j(t)\,dt = \begin{cases} 0 & i \ne j \\ \lambda_i & i = j \end{cases} \tag{3.25}$$

where the asterisk indicates complex conjugate and the $\lambda_i$ are constants. If $\lambda_i = 1$ for all $i$, then the set $S$ is said to be an orthonormal set.

An electrical signal $x(t)$ can be expressed over the interval $(t_1, t_2)$ as a weighted sum of a set of orthogonal functions as

$$x(t) \approx \sum_{n=1}^{N} X_n\, \varphi_n(t) \tag{3.26}$$

where the $X_n$ are, in general, complex constants, and the orthogonal functions $\varphi_n(t)$ are called basis functions. If the integral-square error over the interval $(t_1, t_2)$ is equal to zero as $N$ approaches infinity, i.e.,

$$\lim_{N \to \infty} \int_{t_1}^{t_2} \left| x(t) - \sum_{n=1}^{N} X_n\, \varphi_n(t) \right|^2 dt = 0 \tag{3.27}$$

then the set $S = \{\varphi_n(t)\}$ is said to be complete, and Eq. (3.26) becomes an equality. The constants $X_n$ are computed as

$$X_n = \frac{\displaystyle\int_{t_1}^{t_2} x(t)\, \varphi_n^{*}(t)\,dt}{\displaystyle\int_{t_1}^{t_2} |\varphi_n(t)|^2\,dt} \tag{3.28}$$

Let the signal $x(t)$ be periodic with period $T$, and let the complete orthogonal set $S$ be

$$S = \left\{ e^{j2\pi n t / T};\ n = -\infty, \ldots, \infty \right\} \tag{3.29}$$

Then the complex exponential Fourier series of $x(t)$ is

$$x(t) = \sum_{n=-\infty}^{\infty} X_n\, e^{j2\pi n t / T} \tag{3.30}$$

Using Eq. (3.28) yields

$$X_n = \frac{1}{T} \int_{-T/2}^{T/2} x(t)\, e^{-j2\pi n t / T}\,dt \tag{3.31}$$

The FT of Eq. (3.30) is given by

$$X(\omega) = 2\pi \sum_{n=-\infty}^{\infty} X_n\, \delta\!\left( \omega - \frac{2\pi n}{T} \right) \tag{3.32}$$

where $\delta(\cdot)$ is the delta function. When the signal $x(t)$ is real we can compute its trigonometric Fourier series from Eq. (3.30) as

$$x(t) = \frac{a_0}{2} + \sum_{n=1}^{\infty} a_n \cos\frac{2\pi n t}{T} + \sum_{n=1}^{\infty} b_n \sin\frac{2\pi n t}{T} \tag{3.33}$$

$$a_0 = 2X_0 \qquad a_n = \frac{2}{T} \int_{-T/2}^{T/2} x(t) \cos\frac{2\pi n t}{T}\,dt \qquad b_n = \frac{2}{T} \int_{-T/2}^{T/2} x(t) \sin\frac{2\pi n t}{T}\,dt \tag{3.34}$$

The coefficients $a_n$ are all zeros when the signal $x(t)$ is an odd function of time. Alternatively, when the signal is an even function of time, all the $b_n$ are equal to zero.

Consider the periodic signal defined in Eq. (3.33). The total average power associated with this signal over one period is then given by

$$P = \frac{1}{T} \int_{t_0}^{t_0 + T} |x(t)|^2\,dt = \frac{a_0^2}{4} + \sum_{n=1}^{\infty} \left( \frac{a_n^2}{2} + \frac{b_n^2}{2} \right) \tag{3.35}$$
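
The coefficient formulas in Eq. (3.34), and the even/odd symmetry remarks above, can be illustrated with a short numerical sketch (the square wave and its period are arbitrary choices for illustration):

```python
import numpy as np

T = 2.0                                   # arbitrary period
t = np.linspace(-T/2, T/2, 20000, endpoint=False)
dt = t[1] - t[0]
x = np.where(np.abs(t) < T/4, 1.0, 0.0)   # even square wave, 50% duty cycle

for n in range(1, 6):
    an = (2.0/T) * np.sum(x * np.cos(2*np.pi*n*t/T)) * dt   # Eq. (3.34)
    bn = (2.0/T) * np.sum(x * np.sin(2*np.pi*n*t/T)) * dt
    print(n, round(an, 4), round(bn, 4))  # bn ~ 0 because x(t) is even
```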

πnt xt () a 0 a n ----------- cos T πnt + + b sin n ----------- T n n (3.33) T a 0 X 0 a n -- πnt xt () ----------- T cos T dt T T b n -- πnt xt () ----------- T sin T dt T (3.34) The coefficients a n are all zeros when the signal xt () is an odd function of time. Alternatively, when the signal is an even function of time, then all b n are equal to zero. Consider the periodic energy signal defined in Eq. (3.33). The total energy associated with this signal is then given by E t + T -- xt () dt T t 0 a ---- 0 4 + n a ---- n b n + ---- (3.35) 3.4. Convolution and Correlation Integrals The convolution φ xh () t between the signals xt () and ht () is defined by φ xh () t xt () ht () x( τ)ht ( τ) dτ (3.36) where τ is a dummy variable, and the operator is used to symbolically describe the convolution integral. Convolution is commutative, associative, and distributive. More precisely, (3.37) For the convolution integral to be finite at least one of the two signals must be an energy signal. The convolution between two signals can be computed using the FT xt () ht () ht () xt () xt () ht () gt () ( xt () ht ()) gt () xt () ( ht () gt ()) φ xh () t F { X( ω)h( ω) } (3.38)

Consider an LTI system with impulse response ht () and input signal xt (). It follows that the output signal yt () is equal to the convolution between the input signal and the system impulse response, yt () x( τ)ht ( τ) dτ h( τ)xt ( τ) dτ (3.39) The cross-correlation function between the signals xt () and gt () is defined as (3.40) Again, at least one of the two signals should be an energy signal for the correlation integral to be finite. The cross-correlation function measures the similarity between the two signals. The peak value of R xg () t and its spread around this peak are an indication of how good this similarity is. The cross-correlation integral can be computed as R xg () t x ( τ)gt ( + τ) dτ When xt () gt () R xg () t F { X ( ω)g( ω) } we get the autocorrelation integral, (3.4) R x () t x ( τ)xt ( + τ) dτ (3.4) Note that the autocorrelation function is denoted by R x () t rather than R xx () t. When the signals xt () and gt () are power signals, the correlation integral becomes infinite and thus, time averaging must be included. More precisely, Rxg() t lim T -- T T T x ( τ)gt ( + τ) dτ (3.43) 3.5. Energy and Power Spectrum Densities Consider an energy signal associated with this signal is xt (). From Parseval s theorem, the total energy
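
Equations (3.38) and (3.39) suggest a simple numerical cross-check: convolve two signals directly and through the transform domain. The pulse shapes below are hypothetical; zero-padding is used so the FFT's cyclic convolution matches the linear convolution integral:

```python
import numpy as np

dt = 1e-2
t = np.arange(0.0, 5.0, dt)
x = np.exp(-t)                    # hypothetical energy signal
h = np.where(t < 1.0, 1.0, 0.0)   # hypothetical impulse response (a gate)

# Direct Riemann-sum evaluation of Eq. (3.39)
y_direct = np.convolve(x, h) * dt

# Same result via the transform domain, Eq. (3.38)
N = len(x) + len(h) - 1           # pad so the circular wrap-around vanishes
y_fft = np.real(np.fft.ifft(np.fft.fft(x, N) * np.fft.fft(h, N))) * dt

print(np.max(np.abs(y_direct - y_fft)))   # ~1e-16: the two paths agree
```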

3.5. Energy and Power Spectrum Densities

Consider an energy signal $x(t)$. From Parseval's theorem, the total energy associated with this signal is

$$E = \int_{-\infty}^{\infty} |x(t)|^2\,dt = \frac{1}{2\pi} \int_{-\infty}^{\infty} |X(\omega)|^2\,d\omega \tag{3.44}$$

When $x(t)$ is a voltage signal, the amount of energy dissipated by this signal when applied across a network of resistance $R$ is

$$E = \frac{1}{R} \int_{-\infty}^{\infty} |x(t)|^2\,dt = \frac{1}{2\pi R} \int_{-\infty}^{\infty} |X(\omega)|^2\,d\omega \tag{3.45}$$

Alternatively, when $x(t)$ is a current signal we get

$$E = R \int_{-\infty}^{\infty} |x(t)|^2\,dt = \frac{R}{2\pi} \int_{-\infty}^{\infty} |X(\omega)|^2\,d\omega \tag{3.46}$$

The quantity $|X(\omega)|^2\,d\omega$ represents the amount of energy spread per unit frequency across a $1\,\Omega$ resistor; therefore, the Energy Spectrum Density (ESD) function for the energy signal $x(t)$ is defined as

$$\mathrm{ESD} = |X(\omega)|^2 \tag{3.47}$$

The ESD at the output of an LTI system when $x(t)$ is at its input is

$$|Y(\omega)|^2 = |X(\omega)|^2\, |H(\omega)|^2 \tag{3.48}$$

where $H(\omega)$ is the FT of the system impulse response, $h(t)$. It follows that the energy present at the output of the system is

$$E_y = \frac{1}{2\pi} \int_{-\infty}^{\infty} |X(\omega)|^2\, |H(\omega)|^2\,d\omega \tag{3.49}$$

Example 3.2: The voltage signal $x(t) = e^{-5t}$, $t \ge 0$, is applied to the input of a low pass LTI system. The system bandwidth is 5 Hz, and its input resistance is $5\,\Omega$. If $|H(\omega)| = 1$ over the interval $(-10\pi < \omega < 10\pi)$ and zero elsewhere, compute the energy at the output.

Solution: From Eqs. (3.45) and (3.49) we get

$$E_y = \frac{1}{2\pi R} \int_{-10\pi}^{10\pi} |X(\omega)|^2\, |H(\omega)|^2\,d\omega$$

Using Fourier transform tables and substituting $R = 5$ yield

$$E_y = \frac{1}{5\pi} \int_{0}^{10\pi} \frac{1}{\omega^2 + 25}\,d\omega$$

Completing the integration yields

$$E_y = \frac{1}{25\pi} \left[ \arctan(2\pi) - \arctan(0) \right] = 0.01799 \ \text{Joules}$$

Note that an infinite bandwidth would give $E_y = 0.02$, only about 11% larger.

The total power associated with a power signal $g(t)$ is

$$P = \lim_{T \to \infty} \frac{1}{T} \int_{-T/2}^{T/2} |g(t)|^2\,dt \tag{3.50}$$

Define the Power Spectrum Density (PSD) function $S_g(\omega)$ for the signal $g(t)$ as

$$P = \lim_{T \to \infty} \frac{1}{T} \int_{-T/2}^{T/2} |g(t)|^2\,dt = \frac{1}{2\pi} \int_{-\infty}^{\infty} S_g(\omega)\,d\omega \tag{3.51}$$

It can be shown (see Problem 3.13) that

$$S_g(\omega) = \lim_{T \to \infty} \frac{|G(\omega)|^2}{T} \tag{3.52}$$

Let the signals $x(t)$ and $g(t)$ be two periodic signals with period $T$. The complex exponential Fourier series expansions for those signals are, respectively, given by

$$x(t) = \sum_{n=-\infty}^{\infty} X_n\, e^{j2\pi n t / T} \tag{3.53}$$

$$g(t) = \sum_{m=-\infty}^{\infty} G_m\, e^{j2\pi m t / T} \tag{3.54}$$

The power cross-correlation function was given in Eq. (3.43), and is repeated here as Eq. (3.55):

$$R_{gx}(t) = \frac{1}{T} \int_{-T/2}^{T/2} g^{*}(\tau)\, x(t + \tau)\,d\tau \tag{3.55}$$

Note that because both signals are periodic, the limit is no longer necessary. Substituting Eqs. (3.53) and (3.54) into Eq. (3.55), collecting terms, and using the definition of orthogonality, we get

$$R_{gx}(t) = \sum_{n=-\infty}^{\infty} G_n^{*}\, X_n\, e^{j2\pi n t / T} \tag{3.56}$$

When $x(t) = g(t)$, Eq. (3.56) becomes the power autocorrelation function,

$$R_x(t) = \sum_{n=-\infty}^{\infty} |X_n|^2\, e^{j2\pi n t / T} = |X_0|^2 + 2 \sum_{n=1}^{\infty} |X_n|^2 \cos\frac{2\pi n t}{T} \tag{3.57}$$

where the second equality holds for real $x(t)$. The power spectrum and cross-power spectrum density functions are then computed as the FT of Eqs. (3.57) and (3.56), respectively. More precisely,

$$S_x(\omega) = 2\pi \sum_{n=-\infty}^{\infty} |X_n|^2\, \delta\!\left( \omega - \frac{2\pi n}{T} \right) \qquad S_{gx}(\omega) = 2\pi \sum_{n=-\infty}^{\infty} G_n^{*}\, X_n\, \delta\!\left( \omega - \frac{2\pi n}{T} \right) \tag{3.58}$$

The line (or discrete) power spectrum is defined as the plot of $|X_n|^2$ versus $n$, where the lines are $\Delta f = 1/T$ apart. The DC power is $|X_0|^2$, and the total power is $\sum_{n=-\infty}^{\infty} |X_n|^2$.
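
Revisiting Example 3.2, the output-energy integral can be evaluated numerically as a sanity check on the closed form (a rough Riemann-sum sketch):

```python
import numpy as np

R = 5.0
w = np.linspace(-10*np.pi, 10*np.pi, 2_000_001)
dw = w[1] - w[0]
X2 = 1.0 / (w**2 + 25.0)     # |X(w)|^2 for x(t) = exp(-5t), t >= 0

E_y = np.sum(X2) * dw / (2*np.pi*R)           # Eq. (3.49) with the 1/R factor
E_closed = np.arctan(2*np.pi) / (25*np.pi)    # closed form from the example

print(E_y, E_closed)   # both ~0.01799 J; infinite bandwidth gives 1/50 = 0.02 J
```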

3.6. Random Variables

Consider an experiment with outcomes defined by a certain sample space. The rule or functional relationship that maps each point in this sample space into a real number is called a random variable. Random variables are designated by capital letters (e.g., $X$, $Y$, ...), and a particular value of a random variable is denoted by a lowercase letter (e.g., $x$, $y$, ...).

The Cumulative Distribution Function (cdf) associated with the random variable $X$ is denoted as $F_X(x)$, and is interpreted as the total probability that the random variable $X$ is less than or equal to the value $x$. More precisely,

$$F_X(x) = \Pr\{X \le x\} \tag{3.59}$$

The probability that the random variable $X$ is in the interval $(x_1, x_2)$ is then given by

$$F_X(x_2) - F_X(x_1) = \Pr\{x_1 \le X \le x_2\} \tag{3.60}$$

The cdf has the following properties:

$$0 \le F_X(x) \le 1 \qquad F_X(-\infty) = 0 \qquad F_X(\infty) = 1 \qquad F_X(x_1) \le F_X(x_2) \ \ \text{for} \ x_1 \le x_2 \tag{3.61}$$

It is often practical to describe a random variable by the derivative of its cdf, which is called the Probability Density Function (pdf). The pdf of the random variable $X$ is

$$f_X(x) = \frac{d}{dx}\, F_X(x) \tag{3.62}$$

or, equivalently,

$$F_X(x) = \Pr\{X \le x\} = \int_{-\infty}^{x} f_X(\lambda)\,d\lambda \tag{3.63}$$

The probability that a random variable $X$ has values in the interval $(x_1, x_2)$ is

$$F_X(x_2) - F_X(x_1) = \Pr\{x_1 \le X \le x_2\} = \int_{x_1}^{x_2} f_X(x)\,dx \tag{3.64}$$

Define the $n$th moment for the random variable $X$ as

$$E[X^n] = \overline{X^n} = \int_{-\infty}^{\infty} x^n f_X(x)\,dx \tag{3.65}$$

The first moment, $E[X]$, is called the mean value, while the second moment, $E[X^2]$, is called the mean squared value. When the random variable $X$ represents an electrical signal across a $1\,\Omega$ resistor, then $E[X]$ is the DC component, and $E[X^2]$ is the total average power.

The $n$th central moment is defined as

$$E[(X - \bar{X})^n] = \overline{(X - \bar{X})^n} = \int_{-\infty}^{\infty} (x - \bar{x})^n f_X(x)\,dx \tag{3.66}$$

and thus the first central moment is zero. The second central moment is called the variance and is denoted by the symbol $\sigma_X^2$,

$$\sigma_X^2 = \overline{(X - \bar{X})^2} \tag{3.67}$$

Appendix E has some common pdfs and their means and variances.

In practice, the random nature of an electrical signal may need to be described by more than one random variable. In this case, the joint cdf and pdf functions need to be considered. The joint cdf and pdf for the two random variables $X$ and $Y$ are, respectively, defined by

$$F_{XY}(x, y) = \Pr\{X \le x;\ Y \le y\} \tag{3.68}$$

$$f_{XY}(x, y) = \frac{\partial^2}{\partial x\, \partial y}\, F_{XY}(x, y) \tag{3.69}$$

The marginal cdfs are obtained as follows:

$$F_X(x) = \int_{-\infty}^{x} \int_{-\infty}^{\infty} f_{UV}(u, v)\,dv\,du = F_{XY}(x, \infty) \qquad F_Y(y) = \int_{-\infty}^{y} \int_{-\infty}^{\infty} f_{UV}(u, v)\,du\,dv = F_{XY}(\infty, y) \tag{3.70}$$

If the two random variables are statistically independent, then the joint cdfs and pdfs are, respectively, given by

$$F_{XY}(x, y) = F_X(x)\, F_Y(y) \tag{3.71}$$

$$f_{XY}(x, y) = f_X(x)\, f_Y(y) \tag{3.72}$$
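
The moment definitions in Eqs. (3.65) through (3.67) can be exercised numerically on any concrete pdf. As a hypothetical example, the sketch below uses an exponential pdf $f_X(x) = a e^{-ax}$, $x \ge 0$, whose mean and variance are known to be $1/a$ and $1/a^2$:

```python
import numpy as np

a = 2.0                              # hypothetical rate parameter
x = np.linspace(0.0, 40.0, 400001)
dx = x[1] - x[0]
pdf = a * np.exp(-a * x)             # f_X(x) = a exp(-ax), x >= 0

mean = np.sum(x * pdf) * dx          # first moment, Eq. (3.65)
msq  = np.sum(x**2 * pdf) * dx       # second moment
var  = msq - mean**2                 # variance via Eq. (3.67)

print(mean, var)                     # ~0.5 and ~0.25, i.e. 1/a and 1/a^2
```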

Let us now consider the case where the two random variables $X$ and $Y$ are mapped into two new variables $U$ and $V$ through the transformations $T_1$ and $T_2$ defined by

$$U = T_1(X, Y) \qquad V = T_2(X, Y) \tag{3.73}$$

The joint pdf, $f_{UV}(u, v)$, may be computed based on the invariance of probability under the transformation. One must first compute the matrix of derivatives; then the new joint pdf is computed as

$$f_{UV}(u, v) = f_{XY}(x, y)\, |J| \tag{3.74}$$

$$J = \begin{vmatrix} \dfrac{\partial x}{\partial u} & \dfrac{\partial x}{\partial v} \\[2mm] \dfrac{\partial y}{\partial u} & \dfrac{\partial y}{\partial v} \end{vmatrix} \tag{3.75}$$

where the determinant $J$ of the matrix of derivatives is called the Jacobian.

The characteristic function for the random variable $X$ is defined as

$$C_X(\omega) = E[e^{j\omega X}] = \int_{-\infty}^{\infty} f_X(x)\, e^{j\omega x}\,dx \tag{3.76}$$

The characteristic function can be used to compute the pdf for a sum of independent random variables. More precisely, let the random variable $Y$ be equal to

$$Y = X_1 + X_2 + \cdots + X_N \tag{3.77}$$

where $\{X_i;\ i = 1, \ldots, N\}$ is a set of independent random variables. It can be shown that

$$C_Y(\omega) = C_{X_1}(\omega)\, C_{X_2}(\omega) \cdots C_{X_N}(\omega) \tag{3.78}$$

and the pdf $f_Y(y)$ is computed as the inverse Fourier transform of $C_Y(\omega)$ (with the sign of $y$ reversed),

$$f_Y(y) = \frac{1}{2\pi} \int_{-\infty}^{\infty} C_Y(\omega)\, e^{-j\omega y}\,d\omega \tag{3.79}$$

The characteristic function may also be used to compute the $n$th moment for the random variable $X$ as

$$E[X^n] = (-j)^n \left. \frac{d^n C_X(\omega)}{d\omega^n} \right|_{\omega = 0} \tag{3.80}$$

3.7. Multivariate Gaussian Distribution

Consider a joint probability for $m$ random variables, $X_1, X_2, \ldots, X_m$. These variables can be represented as components of an $m \times 1$ random column vector, $\mathbf{X}$. More precisely,

$$\mathbf{X}^t = \begin{bmatrix} X_1 & X_2 & \cdots & X_m \end{bmatrix} \tag{3.81}$$

where the superscript $t$ indicates the transpose operation. The joint pdf for the vector $\mathbf{X}$ is

$$f_x(\mathbf{x}) = f_{x_1, x_2, \ldots, x_m}(x_1, x_2, \ldots, x_m) \tag{3.82}$$

The mean vector is defined as

$$\boldsymbol{\mu}_x = \begin{bmatrix} E[X_1] & E[X_2] & \cdots & E[X_m] \end{bmatrix}^t \tag{3.83}$$

and the covariance is an $m \times m$ matrix given by

$$\mathbf{C}_x = E[\mathbf{X}\mathbf{X}^t] - \boldsymbol{\mu}_x \boldsymbol{\mu}_x^t \tag{3.84}$$

Note that if the elements of the vector $\mathbf{X}$ are independent, then the covariance matrix is a diagonal matrix. By definition, a random vector $\mathbf{X}$ is multivariate Gaussian if its pdf has the form

$$f_x(\mathbf{x}) = \left[ (2\pi)^{m/2}\, |\mathbf{C}_x|^{1/2} \right]^{-1} \exp\left( -\frac{1}{2} (\mathbf{x} - \boldsymbol{\mu}_x)^t\, \mathbf{C}_x^{-1}\, (\mathbf{x} - \boldsymbol{\mu}_x) \right) \tag{3.85}$$

where $\boldsymbol{\mu}_x$ is the mean vector, $\mathbf{C}_x$ is the covariance matrix, $\mathbf{C}_x^{-1}$ is the inverse of the covariance matrix, $|\mathbf{C}_x|$ is its determinant, and $\mathbf{X}$ is of dimension $m$. If $\mathbf{A}$ is a $k \times m$ matrix of rank $k$, then the random vector $\mathbf{Y} = \mathbf{A}\mathbf{X}$ is a $k$-variate Gaussian vector with

$$\boldsymbol{\mu}_y = \mathbf{A}\boldsymbol{\mu}_x \tag{3.86}$$

and

$$\mathbf{C}_y = \mathbf{A}\,\mathbf{C}_x\,\mathbf{A}^t \tag{3.87}$$

The characteristic function for a multivariate Gaussian pdf is defined by

$$C_{\mathbf{X}}(\boldsymbol{\omega}) = E[\exp\{ j(\omega_1 X_1 + \omega_2 X_2 + \cdots + \omega_m X_m) \}] = \exp\left( j\, \boldsymbol{\mu}_x^t \boldsymbol{\omega} - \frac{1}{2}\, \boldsymbol{\omega}^t \mathbf{C}_x \boldsymbol{\omega} \right) \tag{3.88}$$

Then the moments for the joint distribution can be obtained by partial differentiation. For example,

$$E[X_1 X_2 X_3] = (-j)^3\, \frac{\partial^3}{\partial \omega_1\, \partial \omega_2\, \partial \omega_3}\, C_{\mathbf{X}}(\omega_1, \omega_2, \omega_3) \quad \text{at } \boldsymbol{\omega} = \mathbf{0} \tag{3.89}$$

Example 3.3: The vector $\mathbf{X}$ is a 4-variate Gaussian with

$$\boldsymbol{\mu}_x = \begin{bmatrix} 2 & 2 & 1 & 0 \end{bmatrix}^t \qquad \mathbf{C}_x = \begin{bmatrix} 6 & 3 & 2 & 1 \\ 3 & 4 & 3 & 2 \\ 2 & 3 & 4 & 3 \\ 1 & 2 & 3 & 3 \end{bmatrix}$$

Define

$$\mathbf{X}^1 = \begin{bmatrix} X_1 \\ X_2 \end{bmatrix} \qquad \mathbf{X}^2 = \begin{bmatrix} X_3 \\ X_4 \end{bmatrix}$$

Find the distribution of $\mathbf{X}^1$ and the distribution of

$$\mathbf{Y} = \begin{bmatrix} 2X_1 \\ X_1 + X_2 \\ X_3 + X_4 \end{bmatrix}$$

Solution: $\mathbf{X}^1$ has a bivariate Gaussian distribution with

$$\boldsymbol{\mu}_{x^1} = \begin{bmatrix} 2 \\ 2 \end{bmatrix} \qquad \mathbf{C}_{x^1} = \begin{bmatrix} 6 & 3 \\ 3 & 4 \end{bmatrix}$$

The vector $\mathbf{Y}$ can be expressed as

$$\mathbf{Y} = \begin{bmatrix} 2 & 0 & 0 & 0 \\ 1 & 1 & 0 & 0 \\ 0 & 0 & 1 & 1 \end{bmatrix} \begin{bmatrix} X_1 \\ X_2 \\ X_3 \\ X_4 \end{bmatrix} = \mathbf{A}\mathbf{X}$$

It follows that

$$\boldsymbol{\mu}_y = \mathbf{A}\boldsymbol{\mu}_x = \begin{bmatrix} 4 & 4 & 1 \end{bmatrix}^t \qquad \mathbf{C}_y = \mathbf{A}\,\mathbf{C}_x\,\mathbf{A}^t = \begin{bmatrix} 24 & 18 & 6 \\ 18 & 16 & 8 \\ 6 & 8 & 13 \end{bmatrix}$$
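
The linear-transformation results of Example 3.3 follow directly from Eqs. (3.86) and (3.87) and can be checked with a few lines of NumPy:

```python
import numpy as np

mu_x = np.array([2.0, 2.0, 1.0, 0.0])
C_x = np.array([[6.0, 3.0, 2.0, 1.0],
                [3.0, 4.0, 3.0, 2.0],
                [2.0, 3.0, 4.0, 3.0],
                [1.0, 2.0, 3.0, 3.0]])
A = np.array([[2.0, 0.0, 0.0, 0.0],    # Y1 = 2 X1
              [1.0, 1.0, 0.0, 0.0],    # Y2 = X1 + X2
              [0.0, 0.0, 1.0, 1.0]])   # Y3 = X3 + X4

mu_y = A @ mu_x        # Eq. (3.86)
C_y = A @ C_x @ A.T    # Eq. (3.87)

print(mu_y)            # [4. 4. 1.]
print(C_y)             # [[24. 18. 6.], [18. 16. 8.], [6. 8. 13.]]
```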

3.8. Random Processes

A random variable $X$ is by definition a mapping of all possible outcomes of a random experiment to numbers. When the random variable becomes a function of both the outcomes of the experiment as well as time, it is called a random process and is denoted by $X(t)$. Thus, one can view a random process as an ensemble of time domain functions that are the outcome of a certain random experiment, as compared to single real numbers in the case of a random variable. Since the cdf and pdf of a random process are time dependent, we will denote them as $F_X(x; t)$ and $f_X(x; t)$, respectively. The $n$th moment for the random process $X(t)$ is

$$E[X^n(t)] = \int_{-\infty}^{\infty} x^n f_X(x; t)\,dx \tag{3.90}$$

A random process $X(t)$ is referred to as stationary to order one if all its statistical properties do not change with time. Consequently, $E[X(t)] = \bar{X}$, where $\bar{X}$ is a constant. A random process $X(t)$ is called stationary to order two (or wide sense stationary) if

$$f_X(x_1, x_2;\ t_1, t_2) = f_X(x_1, x_2;\ t_1 + \Delta t,\ t_2 + \Delta t) \tag{3.91}$$

for all $t_1$, $t_2$, and $\Delta t$.

Define the statistical autocorrelation function for the random process $X(t)$ as

$$R_X(t_1, t_2) = E[X(t_1)\, X(t_2)] \tag{3.92}$$

The correlation $E[X(t_1)X(t_2)]$ is, in general, a function of $(t_1, t_2)$. As a consequence of the wide sense stationary definition, the autocorrelation function depends on the time difference $\tau = t_2 - t_1$, rather than on absolute time; thus, for a wide sense stationary process we have

$$E[X(t)] = \bar{X} \qquad R_X(\tau) = E[X(t)\, X(t + \tau)] \tag{3.93}$$

If the time average and time correlation functions are equal to the statistical average and statistical correlation functions, the random process is referred to as an ergodic random process. The following is true for an ergodic process:

$$\lim_{T \to \infty} \frac{1}{T} \int_{-T/2}^{T/2} x(t)\,dt = E[X(t)] = \bar{X} \tag{3.94}$$

$$\lim_{T \to \infty} \frac{1}{T} \int_{-T/2}^{T/2} x^{*}(t)\, x(t + \tau)\,dt = R_X(\tau) \tag{3.95}$$

The covariance of two random processes $X(t)$ and $Y(t)$ is defined by

$$C_{XY}(t, t + \tau) = E\big[ \{X(t) - E[X(t)]\}\, \{Y(t + \tau) - E[Y(t + \tau)]\} \big] \tag{3.96}$$

which can be written as

$$C_{XY}(t, t + \tau) = R_{XY}(\tau) - \bar{X}\bar{Y} \tag{3.97}$$
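
For an ergodic process, Eq. (3.95) says a single long realization suffices to estimate $R_X(\tau)$. Below is a minimal sketch, assuming the classical random-phase sinusoid $X(t) = \cos(\omega_0 t + \Theta)$ with $\Theta$ uniform on $(0, 2\pi)$, whose ensemble autocorrelation is $\frac{1}{2}\cos(\omega_0 \tau)$; all numeric values are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
dt, N, f0 = 1e-3, 2_000_000, 5.0
t = np.arange(N) * dt
theta = rng.uniform(0, 2*np.pi)        # one realization of the random phase
x = np.cos(2*np.pi*f0*t + theta)       # X(t) = cos(w0 t + Theta)

for lag in (0, 25, 50, 100):           # tau = lag * dt
    tau = lag * dt
    R_hat = np.mean(x[:N-lag] * x[lag:])     # time-average estimate, Eq. (3.95)
    print(tau, round(R_hat, 4), round(0.5*np.cos(2*np.pi*f0*tau), 4))
```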

3.9. Sampling Theorem

Most modern communication and radar systems are designed to process discrete samples of signals bearing information. In general, we would like to determine the necessary condition such that a signal can be fully reconstructed from its samples by filtering, or data processing in general. The answer to this question lies in the sampling theorem, which may be stated as follows: let the signal $x(t)$ be real-valued and band-limited with bandwidth $B$; this signal can be fully reconstructed from its samples if the time interval between samples is no greater than $1/(2B)$.

Fig. 3.1 illustrates the sampling process concept: the signal $x(t)$, with $X(\omega) = 0$ for $|\omega| > 2\pi B$, is multiplied by the sampling signal $p(t)$ to produce $x_s(t)$, and an LPF recovers $P_0 x(t)$. The sampling signal $p(t)$ is periodic with period $T_s$, which is called the sampling interval. The Fourier series expansion of $p(t)$ is

$$p(t) = \sum_{n=-\infty}^{\infty} P_n\, e^{j2\pi n t / T_s} \tag{3.98}$$

The sampled signal $x_s(t)$ is then given by

$$x_s(t) = \sum_{n=-\infty}^{\infty} x(t)\, P_n\, e^{j2\pi n t / T_s} \tag{3.99}$$

Taking the FT of Eq. (3.99) yields

$$X_s(\omega) = \sum_{n=-\infty}^{\infty} P_n\, X\!\left( \omega - \frac{2\pi n}{T_s} \right) = P_0\, X(\omega) + \sum_{\substack{n = -\infty \\ n \ne 0}}^{\infty} P_n\, X\!\left( \omega - \frac{2\pi n}{T_s} \right) \tag{3.100}$$

where $X(\omega)$ is the FT of $x(t)$.

Figure 3.1. Concept of sampling.

Therefore, we conclude that the spectral density, $X_s(\omega)$, consists of replicas of $X(\omega)$ spaced $2\pi / T_s$ apart and scaled by the Fourier series coefficients $P_n$. A Low Pass Filter (LPF) of bandwidth $B$ can then be used to recover the original signal $x(t)$.

When the sampling rate is increased (i.e., $T_s$ decreases), the replicas of $X(\omega)$ move farther apart from each other. Alternatively, when the sampling rate is decreased (i.e., $T_s$ increases), the replicas get closer to one another. The value of $T_s$ such that the replicas are tangent to one another defines the minimum required sampling rate so that $x(t)$ can be recovered from its samples by using an LPF. It follows that

$$\frac{2\pi}{T_s} = 2\pi (2B) \ \Leftrightarrow \ T_s = \frac{1}{2B} \tag{3.101}$$

The sampling rate defined by Eq. (3.101) is known as the Nyquist sampling rate. When $T_s > 1/(2B)$, the replicas of $X(\omega)$ overlap and, thus, $x(t)$ cannot be recovered cleanly from its samples. This is known as aliasing. In practice, an ideal LPF cannot be implemented; hence, practical systems tend to over-sample in order to avoid aliasing.

Example 3.4: Assume that the sampling signal $p(t)$ is given by

$$p(t) = \sum_{n=-\infty}^{\infty} \delta(t - n T_s)$$

Compute an expression for $X_s(\omega)$.

Solution: The signal $p(t)$ is called the Comb function. Its exponential Fourier series is

$$p(t) = \sum_{n=-\infty}^{\infty} \frac{1}{T_s}\, e^{j2\pi n t / T_s}$$

It follows that

$$x_s(t) = \sum_{n=-\infty}^{\infty} x(t)\, \frac{1}{T_s}\, e^{j2\pi n t / T_s}$$

Taking the Fourier transform of this equation yields

$$X_s(\omega) = \frac{1}{T_s} \sum_{n=-\infty}^{\infty} X\!\left( \omega - \frac{2\pi n}{T_s} \right)$$
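
The Nyquist condition of Eq. (3.101) can be illustrated by sampling a tone above and below $f_s/2$ and locating the strongest DFT bin (all values below are arbitrary illustrative choices):

```python
import numpy as np

def apparent_freq(f0, fs, N=4096):
    """Return the frequency of the strongest FFT bin of a sampled tone."""
    n = np.arange(N)
    x = np.cos(2*np.pi*f0*n/fs)
    X = np.abs(np.fft.rfft(x * np.hanning(N)))
    return np.argmax(X) * fs / N

fs = 100.0                        # hypothetical sampling rate, Hz
print(apparent_freq(30.0, fs))    # ~30 Hz: below fs/2, recovered correctly
print(apparent_freq(70.0, fs))    # ~30 Hz: 70 Hz aliases to fs - 70 = 30 Hz
```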

Before proceeding to the next section, we will establish the following notation: samples of the signal $x(t)$ are denoted by $x(n)$ and referred to as a discrete time domain sequence, or simply a sequence. If the signal $x(t)$ is periodic, we will denote its samples by the periodic sequence $\tilde{x}(n)$.

3.10. The Z-Transform

The Z-transform is a transformation that maps samples of a discrete time domain sequence into a new domain known as the z-domain. It is defined as

$$Z\{x(n)\} = X(z) = \sum_{n=-\infty}^{\infty} x(n)\, z^{-n} \tag{3.102}$$

where $z = r e^{j\omega}$, and for most cases $r = 1$. It follows that Eq. (3.102) can be rewritten as

$$X(e^{j\omega}) = \sum_{n=-\infty}^{\infty} x(n)\, e^{-jn\omega} \tag{3.103}$$

In the z-domain, the region over which $X(z)$ is finite is called the Region of Convergence (ROC). Appendix D has a list of the most common Z-transform pairs. The Z-transform properties are (the proofs are left as an exercise):

1. Linearity:

$$Z\{a x_1(n) + b x_2(n)\} = a X_1(z) + b X_2(z) \tag{3.104}$$

2. Right-Shifting Property:

$$Z\{x(n - k)\} = z^{-k} X(z) \tag{3.105}$$

3. Left-Shifting Property:

$$Z\{x(n + k)\} = z^{k} X(z) - \sum_{n=0}^{k-1} x(n)\, z^{k - n} \tag{3.106}$$

4. Time Scaling:

$$Z\{a^n x(n)\} = X(a^{-1} z) = \sum_{n=0}^{\infty} (a^{-1} z)^{-n}\, x(n) \tag{3.107}$$

5. Periodic Sequences:

$$Z\{\tilde{x}(n)\} = \frac{z^N}{z^N - 1}\, Z\{x(n)\} \tag{3.108}$$

where $N$ is the period and $x(n)$ is one period of $\tilde{x}(n)$.

6. Multiplication by $n$:

$$Z\{n\, x(n)\} = -z\, \frac{d}{dz}\, X(z) \tag{3.109}$$

7. Division by $n + a$, where $a$ is a real number:

$$Z\left\{ \frac{x(n)}{n + a} \right\} = \sum_{n=0}^{\infty} x(n)\, z^{a} \int_{0}^{z^{-1}} u^{n + a - 1}\,du \tag{3.110}$$

8. Initial Value:

$$x(n_0) = \lim_{z \to \infty} z^{n_0}\, X(z) \tag{3.111}$$

where $n_0$ is the index of the first nonzero sample.

9. Final Value:

$$\lim_{n \to \infty} x(n) = \lim_{z \to 1} (z - 1)\, X(z) \tag{3.112}$$

10. Convolution:

$$Z\left\{ \sum_{k=0}^{\infty} h(n - k)\, x(k) \right\} = H(z)\, X(z) \tag{3.113}$$

11. Bilateral Convolution:

$$Z\left\{ \sum_{k=-\infty}^{\infty} h(n - k)\, x(k) \right\} = H(z)\, X(z) \tag{3.114}$$

Example 3.5: Prove Eq. (3.109).

Solution: Starting with the definition of the Z-transform,

$$X(z) = \sum_{n=-\infty}^{\infty} x(n)\, z^{-n}$$

Taking the derivative, with respect to $z$, of the above equation yields

$$\frac{d}{dz}\, X(z) = \sum_{n=-\infty}^{\infty} x(n)\, (-n)\, z^{-n-1} = (-z^{-1}) \sum_{n=-\infty}^{\infty} n\, x(n)\, z^{-n}$$

It follows that

$$Z\{n\, x(n)\} = -z\, \frac{d}{dz}\, X(z)$$

In general, a discrete LTI system has a transfer function $H(z)$ which describes how the system operates on its input sequence $x(n)$ in order to produce the output sequence $y(n)$. The output sequence $y(n)$ is computed from the discrete convolution between the sequences $x(n)$ and $h(n)$,

$$y(n) = \sum_{m=-\infty}^{\infty} x(m)\, h(n - m) \tag{3.115}$$

However, since practical systems require that the sequence $x(n)$ be of finite length, we can rewrite Eq. (3.115) as

$$y(n) = \sum_{m=0}^{N-1} x(m)\, h(n - m) \tag{3.116}$$

where $N$ denotes the input sequence length. Taking the Z-transform of Eq. (3.116) yields

$$Y(z) = X(z)\, H(z) \tag{3.117}$$

and the discrete system transfer function is

$$H(z) = \frac{Y(z)}{X(z)} \tag{3.118}$$

Finally, the transfer function $H(z)$ can be written as

$$H(z)\big|_{z = e^{j\omega}} = H(e^{j\omega}) = \left| H(e^{j\omega}) \right| e^{j \angle H(e^{j\omega})} \tag{3.119}$$

where $|H(e^{j\omega})|$ is the amplitude response and $\angle H(e^{j\omega})$ is the phase response.
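
The equivalence between Eqs. (3.116) and (3.117) has a direct numerical counterpart: convolving two finite sequences is the same as multiplying the coefficient polynomials of their z-transforms. A small sketch with hypothetical sequences:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])     # hypothetical input sequence x(n)
h = np.array([1.0, -1.0, 0.5])    # hypothetical impulse response h(n)

# y(n) = sum over m of x(m) h(n - m), Eq. (3.116)
y = np.convolve(x, h)

# Y(z) = X(z)H(z), Eq. (3.117): multiplying the coefficient polynomials
# of X(z) and H(z) (in powers of z^-1) produces the same sequence.
Y = np.polymul(x, h)

print(y)   # [ 1.   1.   1.5 -2.   1.5]
print(Y)   # identical
```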

3.11. The Discrete Fourier Transform

The Discrete Fourier Transform (DFT) is a mathematical operation that transforms a discrete sequence, usually from the time domain into the frequency domain, in order to explicitly determine the spectral information for the sequence. The time domain sequence can be real or complex. The DFT has finite length $N$, and is periodic with period equal to $N$. The DFT for the finite sequence $x(n)$ is defined by

$$X(k) = \sum_{n=0}^{N-1} x(n)\, e^{-j2\pi n k / N} \qquad k = 0, \ldots, N - 1 \tag{3.120}$$

The inverse DFT is given by

$$x(n) = \frac{1}{N} \sum_{k=0}^{N-1} X(k)\, e^{j2\pi n k / N} \qquad n = 0, \ldots, N - 1 \tag{3.121}$$

The Fast Fourier Transform (FFT) is not a new kind of transform, different from the DFT. Instead, it is an algorithm used to compute the DFT more efficiently. There are numerous FFT algorithms that can be found in the literature. In this book we will use the DFT and the FFT interchangeably to mean the same thing. Furthermore, we will assume a radix-2 FFT algorithm, where the FFT size is equal to $N = 2^m$ for some integer $m$.

3.12. Discrete Power Spectrum

Practical discrete systems utilize DFTs of finite length as a means of numerical approximation for the Fourier transform. It follows that input signals must be truncated to a finite duration (denoted by $T$) before they are sampled. This is necessary so that a finite length sequence is generated prior to signal processing. Unfortunately, this truncation process may cause some serious problems.

To demonstrate this difficulty, consider the time domain signal $x(t) = \sin(2\pi f_0 t)$. The spectrum of $x(t)$ consists of two spectral lines at $\pm f_0$. When $x(t)$ is truncated to length $T$ seconds and sampled at a rate $T_s = T / N$, where $N$ is the number of desired samples, we produce the sequence $\{x(n);\ n = 0, 1, \ldots, N - 1\}$. The spectrum of $x(n)$ would still be composed of the same spectral lines if $T$ is an integer multiple of $T_s$ and if $f_0$ is an integer multiple of the DFT frequency resolution $\Delta f$. Unfortunately, those two conditions are rarely met and, as a consequence, the spectrum of $x(n)$ spreads over several lines (normally the spread may extend up to three lines). This is known as spectral leakage. Since $f_0$ is normally unknown, this discontinuity caused by an arbitrary choice of $T$ cannot be avoided. Windowing techniques can be used to mitigate the effect of this discontinuity by applying smaller weights to samples close to the edges.

A truncated sequence $x(n)$ can be viewed as one period of some periodic sequence $\tilde{x}(n)$ with period $N$. The discrete Fourier series expansion of $\tilde{x}(n)$ is

$$\tilde{x}(n) = \sum_{k=0}^{N-1} \tilde{X}_k\, e^{j2\pi n k / N} \tag{3.122}$$

It can be shown that the coefficients $\tilde{X}_k$ are given by

$$\tilde{X}_k = \frac{1}{N} \sum_{n=0}^{N-1} \tilde{x}(n)\, e^{-j2\pi n k / N} = \frac{1}{N}\, X(k) \tag{3.123}$$

where $X(k)$ is the DFT of $x(n)$. Therefore, the Discrete Power Spectrum (DPS) for the band limited sequence $x(n)$ is the plot of $|\tilde{X}_k|^2$ versus $k$, where the lines are $\Delta f$ apart:

$$\begin{aligned} P_0 &= \frac{1}{N^2}\, |X(0)|^2 \\ P_k &= \frac{1}{N^2} \left\{ |X(k)|^2 + |X(N - k)|^2 \right\} \qquad k = 1, 2, \ldots, \frac{N}{2} - 1 \\ P_{N/2} &= \frac{1}{N^2} \left| X\!\left(\frac{N}{2}\right) \right|^2 \end{aligned} \tag{3.124}$$
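
Spectral leakage is easy to reproduce numerically: a tone that falls exactly on a DFT bin stays confined to that bin, while a tone midway between bins spreads across many. A minimal sketch (the sizes and frequencies are arbitrary illustrative choices):

```python
import numpy as np

N, fs = 128, 128.0               # 1 Hz resolution, so bin k corresponds to k Hz
n = np.arange(N)

for f0 in (10.0, 10.5):          # on-bin tone vs. a tone midway between bins
    x = np.sin(2*np.pi*f0*n/fs)
    X = np.abs(np.fft.rfft(x))
    spread = np.sum(X > 0.05*X.max())   # bins holding visible energy
    print(f0, spread)            # 10.0 Hz -> 1 bin; 10.5 Hz -> many bins
```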

Before proceeding to the next section, we will show how to select the FFT parameters. For this purpose, consider a band limited signal $x(t)$ with bandwidth $B$. If the signal is not band limited, a LPF can be used to eliminate frequencies greater than $B$. In order to satisfy the sampling theorem, one must choose a sampling frequency $f_s = 1 / T_s$ such that

$$f_s \ge 2B \tag{3.125}$$

The truncated sequence duration $T$ and the total number of samples $N$ are related by

$$T = N T_s \tag{3.126}$$

or, equivalently,

$$f_s = \frac{N}{T} \tag{3.127}$$

It follows that

$$f_s = \frac{N}{T} \ge 2B \tag{3.128}$$

and the frequency resolution is

$$\Delta f = \frac{f_s}{N} = \frac{1}{N T_s} = \frac{1}{T} \ge \frac{2B}{N} \tag{3.129}$$

3.13. Windowing Techniques

Truncation of the sequence $x(n)$ can be accomplished by computing the product

$$x_w(n) = x(n)\, w(n) \tag{3.130}$$

where

$$w(n) = \begin{cases} f(n) & n = 0, 1, \ldots, N - 1 \\ 0 & \text{otherwise} \end{cases} \tag{3.131}$$

with $f(n) \le 1$. The finite sequence $w(n)$ is called a windowing sequence, or simply a window. The windowing process should not impact the phase response of the truncated sequence. Consequently, the sequence $w(n)$ must retain linear phase. This can be accomplished by making the window symmetrical with respect to its central point.

If $f(n) = 1$ for all $n$, we have what is known as the rectangular window. It leads to the Gibbs phenomenon, which manifests itself as an overshoot and a ripple before and after a discontinuity. Fig. 3.2 shows the amplitude spectrum of a rectangular window. Note that the first side lobe is about 13.46 dB below the main lobe. Windows that place smaller weights on the samples near the edges will have lesser overshoot at the discontinuity points (lower side lobes); hence, they are more desirable than a rectangular window. However, side lobe reduction is offset by a widening of the main lobe (loss of resolution). Therefore, the proper choice of a windowing sequence is a continuous trade-off between side lobe reduction and main lobe widening.

Figure 3.2. Normalized amplitude spectrum for the rectangular window.

The multiplication process defined in Eq. (3.130) is equivalent to cyclic convolution in the frequency domain. It follows that $X_w(k)$ is a smeared (distorted) version of $X(k)$. To minimize this distortion, we would seek windows that have a narrow main lobe and small side lobes. Additionally, using a window other than a rectangular window reduces the power by a factor $P_w$, where

$$P_w = \frac{1}{N} \sum_{n=0}^{N-1} w^2(n) \tag{3.132}$$

It follows that the DPS for the windowed sequence $x_w(n)$ is now given by

$$\begin{aligned} P_0^w &= \frac{1}{P_w N^2}\, |X_w(0)|^2 \\ P_k^w &= \frac{1}{P_w N^2} \left\{ |X_w(k)|^2 + |X_w(N - k)|^2 \right\} \qquad k = 1, 2, \ldots, \frac{N}{2} - 1 \\ P_{N/2}^w &= \frac{1}{P_w N^2} \left| X_w\!\left(\frac{N}{2}\right) \right|^2 \end{aligned} \tag{3.133}$$

where $P_w$ is defined in Eq. (3.132). Table 3.1 lists some common windows. Figs. 3.3 through 3.5 show the frequency domain characteristics for these windows.

TABLE 3.1. Some common windows ($n = 0, \ldots, N - 1$).

| Window | Expression | First side lobe | Main lobe width (relative) |
| --- | --- | --- | --- |
| rectangular | $w(n) = 1$ | $-13.46$ dB | $1$ |
| Hamming | $w(n) = 0.54 - 0.46 \cos\dfrac{2\pi n}{N - 1}$ | $-42$ dB | $2$ |
| Hanning | $w(n) = 0.5\left( 1 - \cos\dfrac{2\pi n}{N - 1} \right)$ | $-31$ dB | $2$ |
| Kaiser | $w(n) = \dfrac{I_0\!\left[ \beta \sqrt{1 - (2n/N - 1)^2} \right]}{I_0(\beta)}$ | $-46$ dB for $\beta = 2\pi$ | $5$ for $\beta = 2\pi$ |

where $I_0$ is the zero-order modified Bessel function of the first kind.
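
The side-lobe column of Table 3.1 can be reproduced approximately with NumPy's built-in window functions (assuming their definitions match the table's; measured values such as the Hamming window's $-42.7$ dB vary by a dB or so across textbook tabulations):

```python
import numpy as np

def first_sidelobe_db(w, pad=64):
    """Level of the first side lobe relative to the main lobe, in dB."""
    W = np.abs(np.fft.fft(w, pad*len(w)))
    W = W / W.max()
    k = 1
    while W[k+1] < W[k]:     # walk down the main lobe to its first null
        k += 1
    return 20*np.log10(W[k:k+4*pad].max())   # peak of the first side lobe

N = 128
for name, w in (("rectangular", np.ones(N)),
                ("Hamming", np.hamming(N)),
                ("Hanning", np.hanning(N)),
                ("Kaiser, beta=2*pi", np.kaiser(N, 2*np.pi))):
    print(name, round(first_sidelobe_db(w), 1))
# roughly -13.3, -42.7, -31.5, and -46 dB, in line with Table 3.1
```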

Figure 3.3. Normalized amplitude spectrum for the Hamming window.

Figure 3.4. Normalized amplitude spectrum for the Hanning window.

Figure 3.5. Normalized amplitude spectrum for the Kaiser window (parameter $\beta = 2\pi$).

Problems

3.1. Classify each of the following signals as an energy signal, as a power signal, or as neither. (a) $x(t) = e^{-0.5t}$ ($t \ge 0$); (b) $x(t) = e^{0.5t}$ ($t \ge 0$); (c) $x(t) = \cos t + \cos 2t$ ($-\infty < t < \infty$); (d) $x(t) = e^{-at}$ ($a > 0$).

3.2. Compute the energy associated with the signal $x(t) = A\,\mathrm{Rect}(t / \tau)$.

3.3. (a) Prove that $\varphi_1(t)$ and $\varphi_2(t)$, shown in Fig. P3.3, are orthogonal over the interval $(-2 \le t \le 2)$. (b) Express the signal $x(t) = t$ as a weighted sum of $\varphi_1(t)$ and $\varphi_2(t)$ over the same time interval.

3.4. A periodic signal $x_p(t)$ is formed by repeating the pulse $x(t) = \Delta((t - 3)/5)$ every 10 seconds. (a) What is the Fourier transform of $x(t)$? (b) Compute the complex Fourier series of $x_p(t)$. (c) Give an expression for the autocorrelation function $R_{x_p}(t)$ and the power spectrum density $S_{x_p}(\omega)$.

Figure P3.3. The waveforms $\varphi_1(t)$ and $\varphi_2(t)$.

3.5. If the Fourier series of $x(t)$ is

$$x(t) = \sum_{n=-\infty}^{\infty} X_n\, e^{j2\pi n t / T}$$

define $y(t) = x(t - t_0)$. Compute an expression for the complex Fourier series expansion of $y(t)$.

3.6. Show that (a) $R_x(-t) = R_x^{*}(t)$; (b) if $x(t) = f(t) + m_1$ and $y(t) = g(t) + m_2$, then $R_{xy}(t) = m_1 m_2$, where the average values for $f(t)$ and $g(t)$ are zeros.

3.7. What is the power spectral density for the signal $x(t) = A \cos(2\pi f_0 t + \theta_0)$?

3.8. A certain radar system uses linear frequency modulated waveforms of the form

$$x(t) = \mathrm{Rect}\!\left(\frac{t}{\tau}\right) \cos\!\left( \omega_0 t + \frac{\mu}{2}\, t^2 \right)$$

What are the quadrature components? Give an expression for both the modulation and instantaneous frequencies.

3.9. Consider the signal $x(t) = \mathrm{Rect}(t / \tau) \cos(\omega_0 t - B t^2 / \tau)$, and let $\tau = 5\,\mu s$ and $B = 10\,\mathrm{MHz}$. What are the quadrature components?

3.10. Determine the quadrature components for the signal

$$h(t) = \delta(t) - \frac{\omega_0^2}{\omega_d}\, e^{-t}\, \sin(\omega_0 t)\, u(t)$$

3.11. If $x(t) = x_1(t) - x_1(t - 5) + x_1(t - 10)$, determine the autocorrelation functions $R_{x_1}(t)$ and $R_x(t)$ when $x_1(t) = \exp(-t^2)$.

3.12. Write an expression for the autocorrelation function $R_y(t)$, where

$$y(t) = \sum_{n} Y_n\, \mathrm{Rect}\!\left( \frac{t - 5n}{5} \right)$$

and $\{Y_n\} = \{0.8, 0.8, \ldots\}$. Give an expression for the density function $S_y(\omega)$.

3.13. Derive Eq. (3.52).

3.14. An LTI system has impulse response

$$h(t) = \begin{cases} 5 e^{-t} & t \ge 0 \\ 0 & t < 0 \end{cases}$$

(a) Find the autocorrelation function $R_h(\tau)$. (b) Assume the input of this system is $x(t) = 3\cos(100t)$. What is the output?

3.15. Suppose you want to determine an unknown DC voltage $v_{dc}$ in the presence of additive white Gaussian noise $n(t)$ of zero mean and variance $\sigma_n^2$. The measured signal is $x(t) = v_{dc} + n(t)$. An estimate of $v_{dc}$ is computed by making three independent measurements of $x(t)$ and computing the arithmetic mean, $\hat{v}_{dc} = (x_1 + x_2 + x_3)/3$. (a) Find the mean and variance of the random variable $\hat{v}_{dc}$. (b) Does the estimate of $v_{dc}$ get better by using ten measurements instead of three? Why?

3.16. Consider the network shown in Fig. P3.6, where $x(t)$ is a random voltage with zero mean and autocorrelation function $R_x(\tau) = 1 + e^{-a|\tau|}$. Find the power spectrum $S_x(\omega)$. What is the transfer function? Find the power spectrum $S_v(\omega)$.

Figure P3.6. An RLC network with source $x(t)$, elements $L$, $R$, and $C$, and output voltage $v(t)$.

3.17. A random voltage $v(t)$ has an exponential distribution function $f_V(v) = a\, e^{-av}$, where $a > 0$ and $0 \le v < \infty$. The expected value is $E[V] = 0.5$. Determine $\Pr\{V > 0.5\}$.

3.18. Assume the $X$ and $Y$ miss distances of darts thrown at a bull's-eye dart board are Gaussian with zero mean and variance $\sigma^2$. (a) Determine the probability that a dart will fall between $0.8\sigma$ and $1.2\sigma$. (b) Determine the radius of a circle about the bull's-eye that contains 80% of the darts thrown. (c) Consider a square with side $s$ in the first quadrant of the board. Determine $s$ so that the probability that a dart will fall within the square is 0.07.

3.19. Let $S_X(\omega)$ be the PSD function for the stationary random process $X(t)$. Compute an expression for the PSD function of $Y(t) = X(t) - X(t - T)$.

3.20. Let $X$ be a random variable with

$$f_X(x) = \begin{cases} \dfrac{x^3}{2\sigma^4}\, e^{-x^2 / (2\sigma^2)} & x \ge 0 \\ 0 & \text{elsewhere} \end{cases}$$

(a) Determine the characteristic function $C_X(\omega)$. (b) Using $C_X(\omega)$, validate that $f_X(x)$ is a proper pdf. (c) Use $C_X(\omega)$ to determine the first two moments of $X$. (d) Calculate the variance of $X$.

3.21. Let $X(t)$ be a stationary random process with $E[X(t)] = 2$ and autocorrelation $R_x(\tau) = 3 + e^{-|\tau|}$. Define a new random variable

$$Y = \int_{0}^{2} x(t)\,dt$$

Compute $E[Y]$ and $\sigma_Y^2$.

3.22. In Fig. 3.1, let

$$p(t) = \sum_{n=-\infty}^{\infty} A\, \mathrm{Rect}\!\left( \frac{t - nT}{\tau} \right)$$

Give an expression for $X_s(\omega)$.

3.23. Compute the Z-transform for (a) $x_1(n) = \dfrac{1}{n!}\, u(n)$; (b) $x_2(n) = \dfrac{1}{(2n)!}\, u(n)$.

3.24. (a) Write an expression for the Fourier transform of $x(t) = \mathrm{Rect}(t / 3)$. (b) Assume that you want to compute the modulus of the Fourier transform using a DFT of size 512 with a sampling interval of 1 second. Evaluate the modulus at frequency $(80/512)\,\mathrm{Hz}$. Compare your answer to the theoretical value and compute the error.

3.25. A certain band-limited signal has bandwidth $B = 10\,\mathrm{KHz}$. Find the FFT size required so that the frequency resolution is $\Delta f = 50\,\mathrm{Hz}$. Assume a radix-2 FFT and a record length of 1 second.

3.26. Assume that a certain sequence is determined by its FFT. If the record length is 1 ms and the sampling frequency is $f_s = 10\,\mathrm{KHz}$, find $N$.