Chapter 5 Random Variables and Processes


Chapter 5 Random Variables and Processes Wireless Information Transmission System Lab. Institute of Communications Engineering National Sun Yat-sen University

Table of Contents
5.1 Introduction
5.2 Probability
5.3 Random Variables
5.4 Statistical Averages
5.5 Random Processes
5.6 Mean, Correlation and Covariance Functions
5.7 Transmission of a Random Process through a Linear Filter
5.8 Power Spectral Density
5.9 Gaussian Process
5.10 Noise
5.11 Narrowband Noise

5.1 Introduction Fourier transform is a mathematical tool for the representation of deterministic signals. Deterministic signals: the class of signals that may be modeled as completely specified functions of time. A signal is random if it is not possible to predict its precise value in advance. A random process consists of an ensemble (family) of sample functions, each of which varies randomly with time. A random variable is obtained by observing a random process at a fixed instant of time.

5.2 Probability Probability theory is rooted in phenomena that, explicitly or implicitly, can be modeled by an experiment with an outcome that is subject to chance. Example: the experiment may be the observation of the result of tossing a fair coin. In this experiment, the possible outcomes of a trial are heads or tails. If an experiment has K possible outcomes, then for the kth possible outcome we have a point called the sample point, which we denote by s_k. With this basic framework, we make the following definitions: The set of all possible outcomes of the experiment is called the sample space, which we denote by S. An event corresponds to either a single sample point or a set of sample points in the space S.

5.2 Probability A single sample point is called an elementary event. The entire sample space S is called the sure event; and the null set φ is called the null or impossible event. Two events are mutually exclusive if the occurrence of one event precludes the occurrence of the other event. A probability measure P is a function that assigns a non-negative number to an event A in the sample space S and satisfies the following three properties (axioms):
1. 0 ≤ P[A] ≤ 1   (5.1)
2. P[S] = 1   (5.2)
3. If A and B are two mutually exclusive events, then P[A ∪ B] = P[A] + P[B]   (5.3)


5.2 Probability The following properties of probability measure P may be derived from the above axioms:
1. P[A̅] = 1 − P[A]   (5.4), where A̅ is the complement of A.
2. When events A and B are not mutually exclusive: P[A ∪ B] = P[A] + P[B] − P[A ∩ B]   (5.5)
3. If A₁, A₂, ..., A_m are mutually exclusive events that include all possible outcomes of the random experiment, then P[A₁] + P[A₂] + ... + P[A_m] = 1   (5.6)

5.2 Probability Let P[B|A] denote the probability of event B, given that event A has occurred. The probability P[B|A] is called the conditional probability of B given A. P[B|A] is defined by
P[B|A] = P[A ∩ B] / P[A]   (5.7)
We may write Eq. (5.7) as
P[A ∩ B] = P[B|A] P[A]   (5.8)
It is apparent that we may also write
P[A ∩ B] = P[A|B] P[B]   (5.9)
From Eqs. (5.8) and (5.9), provided P[A] ≠ 0, we may determine P[B|A] by using Bayes' rule:
P[B|A] = P[A|B] P[B] / P[A]   (5.10)

5.2 Conditional Probability Suppose that the conditional probability P[B|A] is simply equal to the elementary probability of occurrence of event B, that is
P[B|A] = P[B]
so that
P[A ∩ B] = P[A] P[B]
and
P[A|B] = P[A ∩ B] / P[B] = P[A] P[B] / P[B] = P[A]   (5.13)
Events A and B that satisfy this condition are said to be statistically independent.

5.2 Conditional Probability Example 5.1 Binary Symmetric Channel. This channel is said to be discrete in that it is designed to handle discrete messages. The channel is memoryless in the sense that the channel output at any time depends only on the channel input at that time. The channel is symmetric, which means that the probability of receiving symbol 1 when 0 is sent is the same as the probability of receiving symbol 0 when symbol 1 is sent.

5.2 Conditional Probability Example 5.1 Binary Symmetric Channel (continued)
The a priori probabilities of sending binary symbols 0 and 1:
P[A₀] = p₀,  P[A₁] = p₁
The conditional probabilities of error:
P[B₁|A₀] = P[B₀|A₁] = p
The probability of receiving symbol 0 is given by:
P[B₀] = P[B₀|A₀] P[A₀] + P[B₀|A₁] P[A₁] = (1 − p) p₀ + p p₁
The probability of receiving symbol 1 is given by:
P[B₁] = P[B₁|A₀] P[A₀] + P[B₁|A₁] P[A₁] = p p₀ + (1 − p) p₁

5.2 Conditional Probability Example 5.1 Binary Symmetric Channel (continued)
The a posteriori probabilities P[A₀|B₀] and P[A₁|B₁]:
P[A₀|B₀] = P[B₀|A₀] P[A₀] / P[B₀] = (1 − p) p₀ / [(1 − p) p₀ + p p₁]
P[A₁|B₁] = P[B₁|A₁] P[A₁] / P[B₁] = (1 − p) p₁ / [p p₀ + (1 − p) p₁]
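
As a quick numerical illustration of Example 5.1, the following minimal sketch evaluates the total and a posteriori probabilities via Bayes' rule; the values chosen for p₀, p₁ and p are illustrative assumptions, not values from the text.

```python
# Sketch: Bayes' rule for the binary symmetric channel (Example 5.1).
# p0, p1 (a priori probabilities) and p (crossover probability) are assumed values.
p0, p1 = 0.6, 0.4
p = 0.1

P_B0 = (1 - p) * p0 + p * p1          # total probability of receiving 0
P_B1 = p * p0 + (1 - p) * p1          # total probability of receiving 1

P_A0_given_B0 = (1 - p) * p0 / P_B0   # a posteriori probability, Eq. (5.10)
P_A1_given_B1 = (1 - p) * p1 / P_B1

print(P_B0, P_B1)
print(P_A0_given_B0, P_A1_given_B1)
```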

5.3 Random Variables We denote the random variable as X(s) or just X. X is a function. A random variable may be discrete or continuous. Consider the random variable X and the probability of the event X ≤ x. We denote this probability by P[X ≤ x]. To simplify our notation, we write
F_X(x) = P[X ≤ x]   (5.15)
The function F_X(x) is called the cumulative distribution function (cdf) or simply the distribution function of the random variable X. The distribution function F_X(x) has the following properties:
1. 0 ≤ F_X(x) ≤ 1
2. F_X(x₁) ≤ F_X(x₂) if x₁ < x₂

5.3 Random Variables There may be more than one random variable associated with the same random experiment.

5.3 Random Variables If the distribution function is continuously differentiable, then
f_X(x) = dF_X(x)/dx   (5.17)
f_X(x) is called the probability density function (pdf) of the random variable X.
Probability of the event x₁ < X ≤ x₂ equals
P[x₁ < X ≤ x₂] = P[X ≤ x₂] − P[X ≤ x₁] = F_X(x₂) − F_X(x₁) = ∫_{x₁}^{x₂} f_X(ξ) dξ   (5.19)
The probability density function must always be a nonnegative function, with a total area of one.

5.3 Random Variables Example 5.2 Uniform Distribution
f_X(x) = 0 for x ≤ a;  1/(b − a) for a < x ≤ b;  0 for x > b
F_X(x) = 0 for x ≤ a;  (x − a)/(b − a) for a < x ≤ b;  1 for x > b

5.3 Random Variables Several Random Variables. Consider two random variables X and Y. We define the joint distribution function F_{X,Y}(x,y) as the probability that the random variable X is less than or equal to a specified value x and that the random variable Y is less than or equal to a specified value y.
F_{X,Y}(x,y) = P[X ≤ x, Y ≤ y]   (5.23)
Suppose that the joint distribution function F_{X,Y}(x,y) is continuous everywhere, and that the partial derivative
f_{X,Y}(x,y) = ∂²F_{X,Y}(x,y) / ∂x∂y   (5.24)
exists and is continuous everywhere. We call the function f_{X,Y}(x,y) the joint probability density function of the random variables X and Y.

5.3 Random Variables Several Random Variables. The joint distribution function F_{X,Y}(x,y) is a monotone-nondecreasing function of both x and y.
∫_{−∞}^{∞} ∫_{−∞}^{∞} f_{X,Y}(ξ,η) dξ dη = 1
Marginal density f_X(x):
F_X(x) = ∫_{−∞}^{x} ∫_{−∞}^{∞} f_{X,Y}(ξ,η) dη dξ,  so  f_X(x) = ∫_{−∞}^{∞} f_{X,Y}(x,η) dη   (5.27)
Suppose that X and Y are two continuous random variables with joint probability density function f_{X,Y}(x,y). The conditional probability density function of Y given that X = x is defined by
f_Y(y|x) = f_{X,Y}(x,y) / f_X(x)   (5.28)

5.3 Random Variables Several Random Variables. If the random variables X and Y are statistically independent, then knowledge of the outcome of X can in no way affect the distribution of Y.
f_Y(y|x) = f_Y(y),  so by (5.28)  f_{X,Y}(x,y) = f_X(x) f_Y(y)   (5.32)
P[X ∈ A, Y ∈ B] = P[X ∈ A] P[Y ∈ B]   (5.33)

5.3 Random Variables Example 5.3 Binomial Random Variable. Consider a sequence of coin-tossing experiments where the probability of a head is p, and let X_n be the Bernoulli random variable representing the outcome of the nth toss. Let Y be the number of heads that occur in N tosses of the coin:
Y = Σ_{n=1}^{N} X_n
P[Y = y] = (N choose y) p^y (1 − p)^{N−y},  where (N choose y) = N! / [y!(N − y)!]
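
A minimal sketch of the binomial probabilities in Example 5.3, using Python's math.comb; the values of N and p are illustrative assumptions.

```python
from math import comb

# Sketch: binomial pmf P[Y = y] for N tosses with head probability p (assumed values).
N, p = 10, 0.3
pmf = [comb(N, y) * p**y * (1 - p)**(N - y) for y in range(N + 1)]

print(sum(pmf))        # probabilities sum to 1 (up to rounding)
print(pmf[3])          # P[Y = 3]
```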

5.4 Statistical Averages The expected value or mean of a random variable X is defined by
μ_X = E[X] = ∫_{−∞}^{∞} x f_X(x) dx   (5.36)
Function of a Random Variable: let X denote a random variable, and let g(X) denote a real-valued function defined on the real line. We denote
Y = g(X)   (5.37)
To find the expected value of the random variable Y:
E[Y] = ∫_{−∞}^{∞} y f_Y(y) dy = E[g(X)] = ∫_{−∞}^{∞} g(x) f_X(x) dx   (5.38)

5.4 Statistical Averages Example 5.4 Cosinusoidal Random Variable. Let Y = g(X) = cos(X), where X is a random variable uniformly distributed in the interval (−π, π):
f_X(x) = 1/(2π) for −π < x < π;  0 otherwise
E[Y] = ∫_{−π}^{π} (cos x) (1/(2π)) dx = (1/(2π)) [sin x]_{x=−π}^{π} = 0

5.4 Statistical Averages Moments. For the special case of g(X) = X^n, we obtain the nth moment of the probability distribution of the random variable X; that is
E[X^n] = ∫_{−∞}^{∞} x^n f_X(x) dx   (5.39)
Mean-square value of X:
E[X²] = ∫_{−∞}^{∞} x² f_X(x) dx   (5.40)
The nth central moment is
E[(X − μ_X)^n] = ∫_{−∞}^{∞} (x − μ_X)^n f_X(x) dx   (5.41)

5.4 Statistical Averages For n = 2 the second central moment is referred to as the variance of the random variable X, written as
var[X] = E[(X − μ_X)²] = ∫_{−∞}^{∞} (x − μ_X)² f_X(x) dx   (5.42)
The variance of a random variable X is commonly denoted as σ_X². The square root of the variance is called the standard deviation of the random variable X.
σ_X² = var[X] = E[(X − μ_X)²] = E[X² − 2μ_X X + μ_X²] = E[X²] − 2μ_X E[X] + μ_X² = E[X²] − μ_X²   (5.44)

5.4 Statistical Averages Chebyshev inequality. Suppose X is an arbitrary random variable with finite mean m_x and finite variance σ_x². For any positive number δ:
P(|X − m_x| ≥ δ) ≤ σ_x² / δ²
Proof:
σ_x² = ∫_{−∞}^{∞} (x − m_x)² p(x) dx ≥ ∫_{|x−m_x|≥δ} (x − m_x)² p(x) dx ≥ δ² ∫_{|x−m_x|≥δ} p(x) dx = δ² P(|X − m_x| ≥ δ)

5.4 Statistical Averages Chebyshev inequality. Another way to view the Chebyshev bound is to work with the zero-mean random variable Y = X − m_x. Define a function g(Y) as:
g(Y) = 1 for |Y| ≥ δ;  0 for |Y| < δ,  so that  E[g(Y)] = P(|Y| ≥ δ)
Upper-bound g(Y) by the quadratic (Y/δ)², i.e. g(Y) ≤ (Y/δ)².
The tail probability is then
E[g(Y)] ≤ E[Y²/δ²] = σ_y²/δ² = σ_x²/δ²

5.4 Statistical Averages Chebyshev inequality. A quadratic upper bound on g(Y) is used in obtaining the tail probability (Chebyshev bound). For many practical applications, the Chebyshev bound is extremely loose.
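
The looseness of the Chebyshev bound can be seen numerically. The sketch below compares the bound σ_x²/δ² with the actual tail probability for a Gaussian random variable; the choice of distribution and all parameter values are assumptions made only for illustration.

```python
import numpy as np

# Sketch: empirical tail probability vs. the Chebyshev bound for an assumed Gaussian X.
rng = np.random.default_rng(0)
m_x, sigma = 0.0, 1.0
x = rng.normal(m_x, sigma, 1_000_000)

for delta in (1.0, 2.0, 3.0):
    empirical = np.mean(np.abs(x - m_x) >= delta)   # P(|X - m_x| >= delta)
    bound = sigma**2 / delta**2                     # Chebyshev bound
    print(delta, empirical, bound)
```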

5.4 Statistical Averages The characteristic function ψ_X(jυ) is defined as the expectation of the complex exponential function exp(jυX), as shown by
ψ_X(jυ) = E[exp(jυX)] = ∫_{−∞}^{∞} f_X(x) exp(jυx) dx   (5.45)
In other words, the characteristic function ψ_X(jυ) is the Fourier transform of the probability density function f_X(x).
Analogous with the inverse Fourier transform:
f_X(x) = (1/2π) ∫_{−∞}^{∞} ψ_X(jυ) exp(−jυx) dυ   (5.46)

5.4 Statistical Averages Characteristic functions. First moment (mean) can be obtained by:
E[X] = m_x = −j dψ(jv)/dv |_{v=0}
Since the differentiation process can be repeated, the n-th moment can be calculated by:
E[X^n] = (−j)^n d^nψ(jv)/dv^n |_{v=0}

5.4 Statistical Averages Characteristic functions. Determining the PDF of a sum of statistically independent random variables: let
Y = Σ_{i=1}^{n} X_i
Then
ψ_Y(jv) = E[exp(jvY)] = E[exp(jv Σ_{i=1}^{n} X_i)] = ∫ ... ∫ p(x₁, x₂, ..., x_n) Π_{i=1}^{n} exp(jvx_i) dx₁ dx₂ ... dx_n
Since the random variables are statistically independent, p(x₁, x₂, ..., x_n) = p(x₁) p(x₂) ... p(x_n), so
ψ_Y(jv) = Π_{i=1}^{n} ψ_{X_i}(jv)
If the X_i are iid (independent and identically distributed), then
ψ_Y(jv) = [ψ_X(jv)]^n

5.4 Statistical Averages Characteristic functions. The PDF of Y is determined from the inverse Fourier transform of ψ_Y(jv). Since the characteristic function of the sum of n statistically independent random variables is equal to the product of the characteristic functions of the individual random variables, it follows that the PDF of Y is the n-fold convolution of the PDFs of the X_i. Usually, the n-fold convolution is more difficult to perform than the characteristic-function method in determining the PDF of Y.
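
The convolution statement can be checked numerically. The sketch below convolves two uniform pdfs on (0,1) (an illustrative choice) and recovers the triangular pdf of their sum.

```python
import numpy as np

# Sketch: pdf of Y = X1 + X2 for independent X1, X2 uniform on (0, 1),
# obtained by numerically convolving the two pdfs.
dx = 0.001
x = np.arange(0.0, 1.0, dx)
f = np.ones_like(x)                 # uniform pdf on (0, 1)

f_sum = np.convolve(f, f) * dx      # convolution of the two pdfs
y = np.arange(f_sum.size) * dx

print(np.trapz(f_sum, y))                  # integrates to ~1
print(f_sum[np.argmin(np.abs(y - 1.0))])   # peak ~1 at y = 1 (apex of the triangle)
```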

5.4 Statistical Averages Example 5.5 Gaussian Random Variable. The probability density function of such a Gaussian random variable is defined by:
f_X(x) = (1/(√(2π) σ)) exp(−(x − μ)²/(2σ²)),  −∞ < x < ∞
The characteristic function of a Gaussian random variable with mean m_x and variance σ² is (Problem 5.1):
ψ(jv) = ∫_{−∞}^{∞} e^{jvx} (1/(√(2π) σ)) e^{−(x − m_x)²/(2σ²)} dx = e^{jv m_x − v²σ²/2}
It can be shown that the central moments of a Gaussian random variable are given by:
E[(X − m_x)^k] = μ_k = 1·3·5···(k − 1) σ^k for even k;  0 for odd k

5.4 Statistical Averages Example 5.5 Gaussian Random Variable (cont.). The sum of n statistically independent Gaussian random variables is also a Gaussian random variable.
Proof: let Y = Σ_{i=1}^{n} X_i. Then
ψ_Y(jv) = Π_{i=1}^{n} ψ_{X_i}(jv) = Π_{i=1}^{n} e^{jv m_i − v²σ_i²/2} = e^{jv m_y − v²σ_y²/2}
where m_y = Σ_{i=1}^{n} m_i and σ_y² = Σ_{i=1}^{n} σ_i².
Therefore, Y is Gaussian-distributed with mean m_y and variance σ_y².

5.4 Statistical Averages Joint Moments. Consider next a pair of random variables X and Y. A set of statistical averages of importance in this case are the joint moments, namely, the expected value of X^i Y^k, where i and k may assume any positive integer values. We may thus write
E[X^i Y^k] = ∫_{−∞}^{∞} ∫_{−∞}^{∞} x^i y^k f_{X,Y}(x,y) dx dy   (5.51)
A joint moment of particular importance is the correlation defined by E[XY], which corresponds to i = k = 1.
Covariance of X and Y:
cov[XY] = E[(X − E[X])(Y − E[Y])] = E[XY] − μ_X μ_Y   (5.53)

5.4 Statistical Averages Correlation coefficient of X and Y:
ρ = cov[XY] / (σ_X σ_Y)   (5.54)
σ_X and σ_Y denote the standard deviations of X and Y. We say X and Y are uncorrelated if and only if cov[XY] = 0. Note that if X and Y are statistically independent, then they are uncorrelated. The converse of the above statement is not necessarily true. We say X and Y are orthogonal if and only if E[XY] = 0.

5.4 Statistical Averages Example 5.6 Moments of a Bernoulli Random Variable. Consider the coin-tossing experiment where the probability of a head is p. Let X be a random variable that takes the value 0 if the result is a tail and 1 if it is a head. We say that X is a Bernoulli random variable.
P(X = x) = 1 − p for x = 0;  p for x = 1;  0 otherwise
E[X] = Σ_{k=0}^{1} k P[X = k] = 0·(1 − p) + 1·p = p
σ_X² = E[(X − μ_X)²] = Σ_{k=0}^{1} (k − p)² P[X = k] = (0 − p)²(1 − p) + (1 − p)² p = p(1 − p)
For two tosses j and k: E[X_j X_k] = E[X_j] E[X_k] = p² for j ≠ k, and E[X_j X_k] = E[X_j²] = p for j = k, where E[X_j] = Σ_{k=0}^{1} k P[X = k].

5.5 Random Processes An ensemble of sample functions. For a fixed time instant t_k,
{x₁(t_k), x₂(t_k), ..., x_n(t_k)} = {X(t_k, s₁), X(t_k, s₂), ..., X(t_k, s_n)}
constitutes a random variable.

5.5 Random Processes At any given time instant, the value of a stochastic process is a random variable indexed by the parameter t. We denote such a process by X(t). In general, the parameter t is continuous, whereas X may be either continuous or discrete, depending on the characteristics of the source that generates the stochastic process. The noise voltage generated by a single resistor or a single information source represents a single realization of the stochastic process. It is called a sample function.

5.5 Random Processes The set of all possible sample functions constitutes an ensemble of sample functions or, equivalently, the stochastic process X(t). In general, the number of sample functions in the ensemble is assumed to be extremely large; often it is infinite. Having defined a stochastic process X(t) as an ensemble of sample functions, we may consider the values of the process at any set of time instants t₁ > t₂ > t₃ > ... > t_n, where n is any positive integer. In general, the random variables X_{t_i} ≡ X(t_i), i = 1, 2, ..., n, are characterized statistically by their joint PDF p(x_{t₁}, x_{t₂}, ..., x_{t_n}).

5.5 Random Processes Stationary stochastic processes. Consider another set of n random variables X_{t_i + t} ≡ X(t_i + t), i = 1, 2, ..., n, where t is an arbitrary time shift. These random variables are characterized by the joint PDF p(x_{t₁+t}, x_{t₂+t}, ..., x_{t_n+t}).
The joint PDFs of the random variables X_{t_i} and X_{t_i + t}, i = 1, 2, ..., n, may or may not be identical. When they are identical, i.e., when
p(x_{t₁}, x_{t₂}, ..., x_{t_n}) = p(x_{t₁+t}, x_{t₂+t}, ..., x_{t_n+t})
for all t and all n, it is said to be stationary in the strict sense (SSS). When the joint PDFs are different, the stochastic process is non-stationary.

5.5 Random Processes Averages for a stochastic process are called ensemble averages. The nth moment of the random variable X_{t_i} (t_i: time instant) is defined as
E[X_{t_i}^n] = ∫_{−∞}^{∞} x_{t_i}^n p(x_{t_i}) dx_{t_i}
In general, the value of the nth moment will depend on the time instant t_i, since the PDF of X_{t_i} depends on t_i. When the process is stationary, p(x_{t_i + t}) = p(x_{t_i}) for all t. Therefore, the PDF is independent of time, and, as a consequence, the nth moment is independent of time.

5.5 Random Processes Two random variables: X_{t_i} ≡ X(t_i), i = 1, 2. The correlation is measured by the joint moment:
E[X_{t₁} X_{t₂}] = ∫∫ x_{t₁} x_{t₂} p(x_{t₁}, x_{t₂}) dx_{t₁} dx_{t₂}
Since this joint moment depends on the time instants t₁ and t₂, it is denoted by R_X(t₁, t₂). R_X(t₁, t₂) is called the autocorrelation function of the stochastic process.
For a stationary stochastic process, the joint moment depends only on the time difference τ = t₁ − t₂:
E[X_{t₁} X_{t₂}] = R_X(t₁, t₂) = R_X(t₁ − t₂) = R_X(τ)
Moreover, with t₁' = t₁ + τ,
R_X(−τ) = E[X_{t₁} X_{t₁+τ}] = E[X_{t₁'−τ} X_{t₁'}] = R_X(τ)
Average power in the process X(t): R_X(0) = E[X_t²].

5.5 Random Processes Wide-sense stationary (WSS). A wide-sense stationary process has the property that the mean value of the process is independent of time (a constant) and that the autocorrelation function satisfies the condition R_X(t₁, t₂) = R_X(t₁ − t₂). Wide-sense stationarity is a less stringent condition than strict-sense stationarity.

5.5 Random Processes Auto-covariance function. The auto-covariance function of a stochastic process is defined as:
μ(t₁, t₂) = E{[X(t₁) − m(t₁)][X(t₂) − m(t₂)]} = R_X(t₁, t₂) − m(t₁) m(t₂)
When the process is stationary, the auto-covariance function simplifies to:
μ(t₁, t₂) = μ(t₁ − t₂) = μ(τ) = R_X(τ) − m²
For a Gaussian random process, higher-order moments can be expressed in terms of first and second moments. Consequently, a Gaussian random process is completely characterized by its first two moments.

5.6 Mean, Correlation and Covariance Functions Consider a random process X(t). We define the mean of the process X(t) as the expectation of the random variable obtained by observing the process at some time t, as shown by
μ_X(t) = E[X(t)] = ∫_{−∞}^{∞} x f_{X(t)}(x) dx   (5.57)
A random process is said to be stationary to first order if the distribution function (and therefore density function) of X(t) does not vary with time:
f_{X(t₁)}(x) = f_{X(t₂)}(x) for all t₁ and t₂,  and  μ_X(t) = μ_X for all t   (5.59)
The mean of the random process is a constant. The variance of such a process is also constant.

5.6 Mean, Correlation and Covariance Functions We define the autocorrelation function of the process X(t) as the expectation of the product of two random variables X(t₁) and X(t₂):
R_X(t₁, t₂) = E[X(t₁) X(t₂)] = ∫∫ x₁ x₂ f_{X(t₁),X(t₂)}(x₁, x₂) dx₁ dx₂   (5.60)
We say a random process X(t) is stationary to second order if the joint distribution f_{X(t₁),X(t₂)}(x₁, x₂) depends only on the difference between the observation times t₁ and t₂:
R_X(t₁, t₂) = R_X(t₂ − t₁) for all t₁ and t₂   (5.61)
The autocovariance function of a stationary random process X(t) is written as
C_X(t₁, t₂) = E[(X(t₁) − μ_X)(X(t₂) − μ_X)] = R_X(t₂ − t₁) − μ_X²   (5.62)

5.6 Mean, Correlation and Covariance Functions For convenience of notation, we redefine the autocorrelation function of a stationary process X(t) as
R_X(τ) = E[X(t + τ) X(t)] for all t   (5.63)
This autocorrelation function has several important properties:
1. R_X(0) = E[X²(t)]   (5.64)
2. R_X(τ) = R_X(−τ)   (5.65)
3. |R_X(τ)| ≤ R_X(0)   (5.67)
Proof of (5.64) can be obtained from (5.63) by putting τ = 0.

5.6 Mean, Correlation and Covariance Functions Proof of (5.65):
R_X(τ) = E[X(t + τ) X(t)] = E[X(t) X(t + τ)] = R_X(−τ)
Proof of (5.67):
E[(X(t + τ) ± X(t))²] ≥ 0
⟹ E[X²(t + τ)] ± 2E[X(t + τ) X(t)] + E[X²(t)] ≥ 0
⟹ 2R_X(0) ± 2R_X(τ) ≥ 0
⟹ −R_X(0) ≤ R_X(τ) ≤ R_X(0),  i.e.,  |R_X(τ)| ≤ R_X(0)

5.6 Mean, Correlation and Covariance Functions The physical significance of the autocorrelation function R_X(τ) is that it provides a means of describing the interdependence of two random variables obtained by observing a random process X(t) at times τ seconds apart.

5.6 Mean, Correlation and Covariance Functions Example 5.7 Sinusoidal Signal with Random Phase. Consider a sinusoidal signal with random phase:
X(t) = A cos(2πf_c t + Θ),  f_Θ(θ) = 1/(2π) for −π ≤ θ ≤ π;  0 elsewhere
R_X(τ) = E[X(t + τ) X(t)]
= (A²/2) E[cos(4πf_c t + 2πf_c τ + 2Θ)] + (A²/2) E[cos(2πf_c τ)]
= (A²/2) ∫_{−π}^{π} (1/(2π)) cos(4πf_c t + 2πf_c τ + 2θ) dθ + (A²/2) cos(2πf_c τ)
= (A²/2) cos(2πf_c τ)
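
A Monte Carlo check of Example 5.7: averaging X(t+τ)X(t) over many random phases reproduces (A²/2)cos(2πf_cτ). The values of A, f_c, t and τ below are assumptions made only for illustration.

```python
import numpy as np

# Sketch: ensemble estimate of R_X(tau) for X(t) = A*cos(2*pi*fc*t + Theta),
# Theta uniform on (-pi, pi); A, fc, t, tau are assumed values.
rng = np.random.default_rng(1)
A, fc, t, tau = 2.0, 5.0, 0.37, 0.02
theta = rng.uniform(-np.pi, np.pi, 500_000)

x_t = A * np.cos(2 * np.pi * fc * t + theta)
x_t_tau = A * np.cos(2 * np.pi * fc * (t + tau) + theta)

print(np.mean(x_t * x_t_tau))                      # ensemble estimate of R_X(tau)
print((A**2 / 2) * np.cos(2 * np.pi * fc * tau))   # theoretical value
```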

5.6 Mean, Correlation and Covariance Functions Averages for joint stochastic processes. Let X(t) and Y(t) denote two stochastic processes and let X_{t_i} ≡ X(t_i), i = 1, 2, ..., n, and Y_{t_j'} ≡ Y(t_j'), j = 1, 2, ..., m, represent the random variables at times t₁ > t₂ > t₃ > ... > t_n and t₁' > t₂' > t₃' > ... > t_m', respectively. The two processes are characterized statistically by their joint PDF:
p(x_{t₁}, x_{t₂}, ..., x_{t_n}, y_{t₁'}, y_{t₂'}, ..., y_{t_m'})
The cross-correlation function of X(t) and Y(t), denoted by R_xy(t₁, t₂), is defined as the joint moment:
R_xy(t₁, t₂) = E[X_{t₁} Y_{t₂}] = ∫∫ x_{t₁} y_{t₂} p(x_{t₁}, y_{t₂}) dx_{t₁} dy_{t₂}
The cross-covariance is:
μ_xy(t₁, t₂) = R_xy(t₁, t₂) − m_x(t₁) m_y(t₂)

5.6 Mean, Correlation and Covariance Functions Averages for joint stochastic processes. When the processes are jointly and individually stationary, we have R_xy(t₁, t₂) = R_xy(t₁ − t₂) and μ_xy(t₁, t₂) = μ_xy(t₁ − t₂). In this case
R_xy(τ) = E[X_{t₁} Y_{t₁−τ}] = E[X_{t₁'+τ} Y_{t₁'}] = R_yx(−τ)
The stochastic processes X(t) and Y(t) are said to be statistically independent if and only if
p(x_{t₁}, x_{t₂}, ..., x_{t_n}, y_{t₁'}, y_{t₂'}, ..., y_{t_m'}) = p(x_{t₁}, x_{t₂}, ..., x_{t_n}) p(y_{t₁'}, y_{t₂'}, ..., y_{t_m'})
for all choices of t_i and t_j' and for all positive integers n and m.
The processes are said to be uncorrelated if
R_xy(t₁, t₂) = E[X_{t₁}] E[Y_{t₂}],  i.e.,  μ_xy(t₁, t₂) = 0

5.6 Mean, Correlation and Covariance Functions Example 5.9 Quadrature-Modulated Processes. Consider a pair of quadrature-modulated processes X₁(t) and X₂(t):
X₁(t) = X(t) cos(2πf_c t + Θ)
X₂(t) = X(t) sin(2πf_c t + Θ)
R₁₂(τ) = E[X₁(t) X₂(t − τ)]
= E[X(t) X(t − τ) cos(2πf_c t + Θ) sin(2πf_c t − 2πf_c τ + Θ)]
= E[X(t) X(t − τ)] E[cos(2πf_c t + Θ) sin(2πf_c t − 2πf_c τ + Θ)]
= (1/2) R_X(τ) E[sin(4πf_c t − 2πf_c τ + 2Θ) − sin(2πf_c τ)]
= −(1/2) R_X(τ) sin(2πf_c τ)
In particular, R₁₂(0) = E[X₁(t) X₂(t)] = 0.

5.6 Mean, Correlation and Covariance Functions Ergodic Processes. In many instances, it is difficult or impossible to observe all sample functions of a random process at a given time. It is often more convenient to observe a single sample function for a long period of time. For a sample function x(t), the time average of the mean value over an observation period 2T is
μ_{x,T} = (1/(2T)) ∫_{−T}^{T} x(t) dt   (5.84)
For many stochastic processes of interest in communications, the time averages and ensemble averages are equal, a property known as ergodicity. This property implies that whenever an ensemble average is required, we may estimate it by using a time average.
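
A sketch of the idea of ergodicity of the mean: for a process whose ensemble mean is a constant, the time average of a single (sufficiently long) sample function estimates that mean. The process below, a DC level plus a random-phase sinusoid, and all parameter values are illustrative assumptions.

```python
import numpy as np

# Sketch: time average of one sample function vs. the ensemble mean.
rng = np.random.default_rng(2)
fs, T, fc, mu = 1000.0, 100.0, 5.0, 1.5   # assumed sampling rate, duration, frequency, mean
t = np.arange(0.0, T, 1.0 / fs)

theta = rng.uniform(-np.pi, np.pi)              # one realization of the random phase
x = mu + np.cos(2 * np.pi * fc * t + theta)     # one sample function

print(np.mean(x))   # time average over (0, T)
print(mu)           # ensemble mean E[X(t)]
```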

5.6 Mean, Correlation and Covariance Functions Cyclostationary Processes (in the wide sense). There is another important class of random processes commonly encountered in practice, the mean and autocorrelation function of which exhibit periodicity:
μ_X(t₁ + T) = μ_X(t₁)
R_X(t₁ + T, t₂ + T) = R_X(t₁, t₂)
for all t₁ and t₂. Modeling the process X(t) as cyclostationary adds a new dimension, namely, period T, to the partial description of the process.

5.7 Transmission of a Random Process Through a Linear Filter Suppose that a random process X(t) is applied as input to a linear time-invariant filter of impulse response h(t), producing a new random process Y(t) at the filter output. Assume that X(t) is a wide-sense stationary random process. The mean of the output random process Y(t) is given by
μ_Y(t) = E[Y(t)] = E[∫_{−∞}^{∞} h(τ₁) X(t − τ₁) dτ₁]
= ∫_{−∞}^{∞} h(τ₁) E[X(t − τ₁)] dτ₁
= ∫_{−∞}^{∞} h(τ₁) μ_X(t − τ₁) dτ₁   (5.86)

5.7 Transmission of a Random Process Through a Linear Filter When the input random process X(t) is wide-sense stationary, the mean μ_X(t) is a constant μ_X, so the mean μ_Y(t) is also a constant μ_Y:
μ_Y = μ_X ∫_{−∞}^{∞} h(τ₁) dτ₁ = μ_X H(0)   (5.87)
where H(0) is the zero-frequency (dc) response of the system.
The autocorrelation function of the output random process Y(t) is given by:
R_Y(t, u) = E[Y(t) Y(u)] = E[∫ h(τ₁) X(t − τ₁) dτ₁ ∫ h(τ₂) X(u − τ₂) dτ₂]
= ∫ dτ₁ h(τ₁) ∫ dτ₂ h(τ₂) E[X(t − τ₁) X(u − τ₂)]
= ∫ dτ₁ h(τ₁) ∫ dτ₂ h(τ₂) R_X(t − τ₁, u − τ₂)

5.7 Transmission of a Random Process Through a Linear Filter When the input X(t) is a wide-sense stationary random process, the autocorrelation function of X(t) is only a function of the difference between the observation times:
R_Y(τ) = ∫∫ h(τ₁) h(τ₂) R_X(τ − τ₁ + τ₂) dτ₁ dτ₂   (5.90)
If the input to a stable linear time-invariant filter is a wide-sense stationary random process, then the output of the filter is also a wide-sense stationary random process.

5.8 Power Spectral Density The Fourier transform of the autocorrelation function R_X(τ) is called the power spectral density S_X(f) of the random process X(t):
S_X(f) = ∫_{−∞}^{∞} R_X(τ) exp(−j2πfτ) dτ   (5.91)
R_X(τ) = ∫_{−∞}^{∞} S_X(f) exp(j2πfτ) df   (5.92)
Equations (5.91) and (5.92) are basic relations in the theory of spectral analysis of random processes, and together they constitute what are usually called the Einstein-Wiener-Khintchine relations.
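
A numerical check of Eq. (5.91) under an assumed autocorrelation: for R_X(τ) = exp(−|τ|), the standard Fourier-transform pair gives S_X(f) = 2/(1 + (2πf)²), and direct numerical integration reproduces it.

```python
import numpy as np

# Sketch: evaluating Eq. (5.91) numerically for the assumed R_X(tau) = exp(-|tau|).
dtau = 0.001
tau = np.arange(-50.0, 50.0, dtau)
R = np.exp(-np.abs(tau))

for f in (0.0, 0.1, 0.5):
    S_numeric = np.trapz(R * np.exp(-2j * np.pi * f * tau), tau).real
    S_exact = 2.0 / (1.0 + (2 * np.pi * f) ** 2)   # known transform pair
    print(f, S_numeric, S_exact)
```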

5.8 Power Spectral Density Properties of the Power Spectral Density
Property 1: S_X(0) = ∫_{−∞}^{∞} R_X(τ) dτ   (5.93). Proof: let f = 0 in Eq. (5.91).
Property 2: E[X²(t)] = ∫_{−∞}^{∞} S_X(f) df   (5.94). Proof: let τ = 0 in Eq. (5.92) and note that R_X(0) = E[X²(t)].
Property 3: S_X(f) ≥ 0 for all f   (5.95).
Property 4: S_X(−f) = S_X(f)   (5.96). Proof: from (5.91),
S_X(−f) = ∫ R_X(τ) exp(j2πfτ) dτ = ∫ R_X(−τ) exp(−j2πfτ) dτ = S_X(f)
using R_X(−τ) = R_X(τ).

Proof of Eq. (5.95). It can be shown (see Eq. (5.106)) that
S_Y(f) = S_X(f) |H(f)|²
R_Y(τ) = ∫ S_Y(f) exp(j2πfτ) df = ∫ S_X(f) |H(f)|² exp(j2πfτ) df
R_Y(0) = E[Y²(t)] = ∫ S_X(f) |H(f)|² df ≥ 0 for any H(f)   (by (5.64))
Suppose we let |H(f)| = 1 for an arbitrarily small interval f₁ ≤ f ≤ f₂, and H(f) = 0 outside this interval. Then, we have:
∫_{f₁}^{f₂} S_X(f) df ≥ 0
This is possible if and only if S_X(f) ≥ 0 for all f.
Conclusion: S_X(f) ≥ 0 for all f.

5.8 Power Spectral Density Example 5.10 Sinusoidal Signal with Random Phase. Consider the random process X(t) = A cos(2πf_c t + Θ), where Θ is a uniformly distributed random variable over the interval (−π, π). The autocorrelation function of this random process is given in Example 5.7:
R_X(τ) = (A²/2) cos(2πf_c τ)   (5.74)
Taking the Fourier transform of both sides of this relation:
S_X(f) = (A²/4) [δ(f − f_c) + δ(f + f_c)]   (5.97)

5.8 Power Spectral Density Example 5.12 Mixing of a Random Process with a Sinusoidal Process. A situation that often arises in practice is that of mixing (i.e., multiplication) of a WSS random process X(t) with a sinusoidal signal cos(2πf_c t + Θ), where the phase Θ is a random variable that is uniformly distributed over the interval (0, 2π). We wish to determine the power spectral density of the random process Y(t) defined by:
Y(t) = X(t) cos(2πf_c t + Θ)   (5.101)
We note that the random variable Θ is independent of X(t).

5.8 Power Spectral Density Example 5.12 Mixing of a Random Process with a Sinusoidal Process (continued). The autocorrelation function of Y(t) is given by:
R_Y(τ) = E[Y(t + τ) Y(t)]
= E[X(t + τ) cos(2πf_c t + 2πf_c τ + Θ) X(t) cos(2πf_c t + Θ)]
= E[X(t + τ) X(t)] E[cos(2πf_c t + 2πf_c τ + Θ) cos(2πf_c t + Θ)]
= (1/2) R_X(τ) E[cos(2πf_c τ) + cos(4πf_c t + 2πf_c τ + 2Θ)]
= (1/2) R_X(τ) cos(2πf_c τ)
Taking the Fourier transform:
S_Y(f) = (1/4) [S_X(f − f_c) + S_X(f + f_c)]   (5.103)

5.8 Power Spectral Density Relation among the Power Spectral Densities of the Input and Output Random Processes. Let S_Y(f) denote the power spectral density of the output random process Y(t) obtained by passing the random process X(t) through a linear filter of transfer function H(f). Starting from (5.90),
R_Y(τ) = ∫∫ h(τ₁) h(τ₂) R_X(τ − τ₁ + τ₂) dτ₁ dτ₂
S_Y(f) = ∫ R_Y(τ) e^{−j2πfτ} dτ
= ∫∫∫ h(τ₁) h(τ₂) R_X(τ − τ₁ + τ₂) e^{−j2πfτ} dτ₁ dτ₂ dτ
Let τ₀ = τ − τ₁ + τ₂:
= ∫∫∫ h(τ₁) h(τ₂) R_X(τ₀) e^{−j2πf(τ₀ + τ₁ − τ₂)} dτ₁ dτ₂ dτ₀
= ∫ h(τ₁) e^{−j2πfτ₁} dτ₁ ∫ h(τ₂) e^{j2πfτ₂} dτ₂ ∫ R_X(τ₀) e^{−j2πfτ₀} dτ₀
= H(f) H*(f) S_X(f) = |H(f)|² S_X(f)   (5.106)
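
Relation (5.106) can be checked in discrete time with a crude periodogram. The sketch below passes unit-variance white noise (flat S_X) through an assumed 2-tap filter h = [1, −1] and compares band averages of the output periodogram with |H(f)|² S_X; the filter and all parameters are illustrative assumptions.

```python
import numpy as np

# Sketch: numerical check of S_Y(f) = |H(f)|^2 * S_X(f) for white noise through h = [1, -1].
rng = np.random.default_rng(3)
n = 2**16
x = rng.normal(0.0, 1.0, n)             # white input: S_X(f) ~ 1 per sample
h = np.array([1.0, -1.0])               # assumed FIR filter
y = np.convolve(x, h, mode="same")      # filter output

S_y_est = np.abs(np.fft.rfft(y))**2 / n   # crude periodogram of the output
H = np.fft.rfft(h, n)                     # filter frequency response on the same grid

for band in (slice(1, 2000), slice(30000, 32000)):   # a low band and a band near f = 0.5
    print(np.mean(S_y_est[band]), np.mean(np.abs(H[band])**2))
```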

5.8 Power Spectral Density Example 5.13 Comb Filter. Consider the filter of Figure (a), consisting of a delay line and a summing device. We wish to evaluate the power spectral density of the filter output Y(t).

5.8 Power Spectral Density Example 5.13 Comb Filter (continued). The transfer function of this filter is
H(f) = 1 − exp(−j2πfT) = 1 − cos(2πfT) + j sin(2πfT)
|H(f)|² = [1 − cos(2πfT)]² + sin²(2πfT) = 2[1 − cos(2πfT)] = 4 sin²(πfT)
Because of the periodic form of this frequency response (Fig. (b)), the filter is sometimes referred to as a comb filter. The power spectral density of the filter output is:
S_Y(f) = |H(f)|² S_X(f) = 4 sin²(πfT) S_X(f)
If πfT is very small: S_Y(f) ≈ 4π²f²T² S_X(f)   (5.107)

5.9 Gaussian Process A random variable Y is defined as a linear functional of X(t):
Y = ∫₀^T g(t) X(t) dt
(Compare this with a linear function of random variables, Y = Σ_{i=1}^{N} a_i X_i, where the a_i are constants and the X_i are random variables; there, Y is a linear function of the X_i.)
If the weighting function g(t) is such that the mean-square value of the random variable Y is finite, and if the random variable Y is a Gaussian-distributed random variable for every g(t) in this class of functions, then the process X(t) is said to be a Gaussian process. In other words, the process X(t) is a Gaussian process if every linear functional of X(t) is a Gaussian random variable.
The Gaussian process has many properties that make analytic results possible. The random processes produced by physical phenomena are often such that a Gaussian model is appropriate.

5.9 Gaussian Process The random variable Y has a Gaussian distribution if its probability density function has the form
f_Y(y) = (1/(√(2π) σ_Y)) exp(−(y − μ_Y)²/(2σ_Y²))
μ_Y: the mean of the random variable Y;  σ_Y²: the variance of the random variable Y.
If the Gaussian random variable Y is normalized to have a mean of zero and a variance of one, such a normalized Gaussian distribution is commonly written as N(0,1):
f_Y(y) = (1/√(2π)) exp(−y²/2)

5.9 Gaussian Process Central Limit Theorem. Let X_i, i = 1, 2, ..., N, be a set of random variables that satisfies the following requirements: the X_i are statistically independent, and the X_i have the same probability distribution with mean μ_X and variance σ_X². The X_i so described are said to constitute a set of independent and identically distributed (i.i.d.) random variables. Define the normalized variables
Y_i = (X_i − μ_X)/σ_X,  i = 1, 2, ..., N,  so that  E[Y_i] = 0 and var[Y_i] = 1
and the sum
V_N = (1/√N) Σ_{i=1}^{N} Y_i
The central limit theorem states that the probability distribution of V_N approaches a normalized Gaussian distribution N(0,1) in the limit as N approaches infinity.
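
A sketch of the central limit theorem using standardized uniform random variables (an illustrative choice of distribution): the normalized sum V_N has mean ≈ 0, variance ≈ 1, and Gaussian-like tail probabilities.

```python
import numpy as np

# Sketch: V_N = (1/sqrt(N)) * sum(Y_i) for standardized uniform Y_i approaches N(0, 1).
rng = np.random.default_rng(4)
N, trials = 50, 200_000
u = rng.uniform(0.0, 1.0, (trials, N))
y = (u - 0.5) / np.sqrt(1.0 / 12.0)     # Y_i: zero mean, unit variance
v = y.sum(axis=1) / np.sqrt(N)          # V_N

print(v.mean(), v.var())                # ~0 and ~1
print(np.mean(np.abs(v) <= 1.96))       # ~0.95, as for a standard Gaussian
```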

5.9 Gaussian Process Property 1: If a Gaussian process X(t) is applied to a stable linear filter, then the output random process Y(t) is also Gaussian.
Property 2: Consider the set of random variables or samples X(t₁), X(t₂), ..., X(t_n), obtained by observing a random process X(t) at times t₁, t₂, ..., t_n. If the process X(t) is Gaussian, then this set of random variables is jointly Gaussian for any n, with their n-fold joint probability density function being completely determined by specifying the set of means
μ_X(t_i) = E[X(t_i)],  i = 1, 2, ..., n
and the set of auto-covariance functions
C_X(t_k, t_i) = E[(X(t_k) − μ_X(t_k))(X(t_i) − μ_X(t_i))],  k, i = 1, 2, ..., n
Consider the composite set of random variables X(t₁), X(t₂), ..., X(t_n), Y(u₁), Y(u₂), ..., Y(u_m). We say that the processes X(t) and Y(t) are jointly Gaussian if this composite set of random variables is jointly Gaussian for any n and m.

5.9 Gaussian Process Property 3: If a Gaussian process is wide-sense stationary, then the process is also stationary in the strict sense.
Property 4: If the random variables X(t₁), X(t₂), ..., X(t_n) are uncorrelated, that is
E[(X(t_k) − μ_X(t_k))(X(t_i) − μ_X(t_i))] = 0,  i ≠ k
then these random variables are statistically independent. The implication of this property is that the joint probability density function of the set of random variables X(t₁), X(t₂), ..., X(t_n) can be expressed as the product of the probability density functions of the individual random variables in the set.

5.10 Noise The sources of noise may be external to the system (e.g., atmospheric noise, galactic noise, man-made noise), or internal to the system. The second category includes an important type of noise that arises from spontaneous fluctuations of current or voltage in electrical circuits. This type of noise represents a basic limitation on the transmission or detection of signals in communication systems involving the use of electronic devices. The two most common examples of spontaneous fluctuations in electrical circuits are shot noise and thermal noise.

5.10 Noise Shot Noise. Shot noise arises in electronic devices such as diodes and transistors because of the discrete nature of current flow in these devices. For example, in a photodetector circuit a current pulse is generated every time an electron is emitted by the cathode due to incident light from a source of constant intensity. The electrons are naturally emitted at random times denoted by τ_k. If the random emissions of electrons have been going on for a long time, then the total current flowing through the photodetector may be modeled as an infinite sum of current pulses, as shown by
X(t) = Σ_{k=−∞}^{∞} h(t − τ_k)
where h(t − τ_k) is the current pulse generated at time τ_k. The process X(t) is a stationary process, called shot noise.

5.10 Noise Shot Noise. The number of electrons, N(t), emitted in the time interval (0, t) constitutes a discrete stochastic process, the value of which increases by one each time an electron is emitted (Fig. 5.17: sample function of a Poisson counting process).
Let the mean value of the number of electrons, ν, emitted between times t and t + t₀ be
E[ν] = λt₀
where λ is a constant called the rate of the process. The total number of electrons emitted in the interval (t, t + t₀),
ν = N(t + t₀) − N(t)
follows a Poisson distribution with a mean value equal to λt₀. The probability that k electrons are emitted in the interval (t, t + t₀) is
P[ν = k] = (λt₀)^k e^{−λt₀} / k!,  k = 0, 1, 2, ...

5.10 Noise Thermal Noise. Thermal noise is the name given to the electrical noise arising from the random motion of electrons in a conductor. The mean-square value of the thermal noise voltage V_TN, appearing across the terminals of a resistor, measured in a bandwidth of Δf Hertz, is given by:
E[V_TN²] = 4kTR Δf volts²
k: Boltzmann's constant = 1.38 × 10⁻²³ joules per degree Kelvin.
T: absolute temperature in degrees Kelvin.
R: the resistance in ohms.

5.10 Noise White Noise. The noise analysis is customarily based on an idealized form of noise called white noise, the power spectral density of which is independent of the operating frequency. White is used in the sense that white light contains equal amounts of all frequencies within the visible band of electromagnetic radiation. We express the power spectral density of white noise, with a sample function denoted by w(t), as
S_W(f) = N₀/2,  where N₀ = kT_e
The dimensions of N₀ are in watts per Hertz, k is Boltzmann's constant and T_e is the equivalent noise temperature of the receiver.

5.10 Noise White Noise. The equivalent noise temperature of a system is defined as the temperature at which a noisy resistor has to be maintained such that, by connecting the resistor to the input of a noiseless version of the system, it produces the same available noise power at the output of the system as that produced by all the sources of noise in the actual system. The autocorrelation function is the inverse Fourier transform of the power spectral density:
R_W(τ) = (N₀/2) δ(τ)
Any two different samples of white noise, no matter how closely together in time they are taken, are uncorrelated. If the white noise w(t) is also Gaussian, then the two samples are statistically independent.

5.10 Noise Example 5.14 Ideal Low-Pass Filtered White Noise. Suppose that a white Gaussian noise w(t) of zero mean and power spectral density N₀/2 is applied to an ideal low-pass filter of bandwidth B and passband amplitude response of one. The power spectral density of the noise n(t) is
S_N(f) = N₀/2 for −B < f < B;  0 for |f| > B
The autocorrelation function of n(t) is
R_N(τ) = ∫_{−B}^{B} (N₀/2) exp(j2πfτ) df = N₀B sinc(2Bτ)
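
A numerical check of Example 5.14 via Eq. (5.92): integrating the flat PSD over (−B, B) reproduces R_N(τ) = N₀B sinc(2Bτ). The values of N₀ and B are illustrative assumptions; note that numpy's sinc is the normalized sinc, matching the convention used here.

```python
import numpy as np

# Sketch: R_N(tau) from the rectangular PSD of Example 5.14 (assumed N0, B).
N0, B = 2.0, 5.0
f = np.arange(-B, B, 0.001)

for tau in (0.0, 0.05, 0.1):
    R_numeric = np.trapz((N0 / 2) * np.exp(2j * np.pi * f * tau), f).real   # Eq. (5.92)
    R_exact = N0 * B * np.sinc(2 * B * tau)    # np.sinc(x) = sin(pi*x)/(pi*x)
    print(tau, R_numeric, R_exact)
```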

5.11 Narrowband Noise The receiver of a communication system usually includes some provision for preprocessing the received signal. The preprocessing may take the form of a narrowband filter whose bandwidth is just large enough to pass the modulated component of the received signal essentially undistorted but not so large as to admit excessive noise through the receiver. The noise process appearing at the output of such a filter is called narrowband noise.
Fig. 5.4 (a): power spectral density of narrowband noise. (b): sample function of narrowband noise, which appears somewhat similar to a sine wave of frequency f_c, which undulates slowly in both amplitude and phase.

Representation of Narrowband Noise in Terms of In-phase and Quadrature Components Consider a narrowband noise n(t) of bandwidth 2B centered on frequency f_c. It can be represented as
n(t) = n_I(t) cos(2πf_c t) − n_Q(t) sin(2πf_c t)
n_I(t): in-phase component of n(t);  n_Q(t): quadrature component of n(t). Both n_I(t) and n_Q(t) are low-pass signals.
Fig. 5.5 (a): extraction of the in-phase and quadrature components of a narrowband process. (b): generation of a narrowband process from its in-phase and quadrature components.

Representation of Narrowband Noise in Terms of In-phase and Quadrature Components The components n_I(t) and n_Q(t) of a narrowband noise n(t) have some important properties:
1) n_I(t) and n_Q(t) have zero mean.
2) If n(t) is Gaussian, then n_I(t) and n_Q(t) are jointly Gaussian.
3) If n(t) is stationary, then n_I(t) and n_Q(t) are jointly stationary.
4) Both n_I(t) and n_Q(t) have the same power spectral density, which is related to the power spectral density S_N(f) of n(t) as
S_{N_I}(f) = S_{N_Q}(f) = S_N(f − f_c) + S_N(f + f_c) for −B ≤ f ≤ B;  0 otherwise
5) n_I(t) and n_Q(t) have the same variance as the narrowband noise n(t).
6) The cross-spectral density of n_I(t) and n_Q(t) of n(t) is purely imaginary:
S_{N_I N_Q}(f) = −S_{N_Q N_I}(f) = j[S_N(f + f_c) − S_N(f − f_c)] for −B ≤ f ≤ B;  0 otherwise
7) If n(t) is Gaussian and its power spectral density S_N(f) is symmetric about the mid-band frequency f_c, then n_I(t) and n_Q(t) are statistically independent.
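
The block diagrams of Fig. 5.5 can be sketched in discrete time: mix the narrowband noise with 2cos(2πf_c t) and −2sin(2πf_c t), low-pass filter to obtain n_I(t) and n_Q(t), and then rebuild n(t) from them. The synthetic band-pass noise, the brick-wall filters and all parameter values below are simplifying assumptions made only for illustration.

```python
import numpy as np

# Sketch of Fig. 5.5: I/Q extraction and reconstruction of a narrowband noise.
rng = np.random.default_rng(5)
fs, fc, B = 10_000.0, 1_000.0, 100.0      # assumed sampling rate, center frequency, bandwidth
t = np.arange(0.0, 2.0, 1.0 / fs)

# crude band-pass Gaussian noise: white noise filtered in the frequency domain
W = np.fft.rfft(rng.normal(0.0, 1.0, t.size))
f = np.fft.rfftfreq(t.size, 1.0 / fs)
W[np.abs(f - fc) > B] = 0.0
n = np.fft.irfft(W, t.size)

def lowpass(x, cutoff):
    # brick-wall low-pass filter implemented in the frequency domain (idealized)
    X = np.fft.rfft(x)
    X[np.fft.rfftfreq(x.size, 1.0 / fs) > cutoff] = 0.0
    return np.fft.irfft(X, x.size)

n_i = lowpass(2 * n * np.cos(2 * np.pi * fc * t), B)    # in-phase component
n_q = lowpass(-2 * n * np.sin(2 * np.pi * fc * t), B)   # quadrature component

# reconstruction, Fig. 5.5(b): n(t) = n_I(t) cos(2*pi*fc*t) - n_Q(t) sin(2*pi*fc*t)
n_rec = n_i * np.cos(2 * np.pi * fc * t) - n_q * np.sin(2 * np.pi * fc * t)
print(np.max(np.abs(n - n_rec)))    # small reconstruction error
```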

Representation of Narrowband Noise in Terms of In-phase and Quadrature Components Example 5.17 Ideal Band-Pass Filtered White Noise. Consider a white Gaussian noise of zero mean and power spectral density N₀/2, which is passed through an ideal band-pass filter of passband magnitude response equal to one, mid-band frequency f_c, and bandwidth 2B. The power spectral density characteristic of the filtered noise n(t) is shown in Fig. (a). The power spectral density characteristics of n_I(t) and n_Q(t) are shown in Fig. (c).

Representation of Narrowband Noise in Terms of In-phase and Quadrature Components Example 5.17 Ideal Band-Pass Filtered White Noise (continued). The autocorrelation function of n(t) is the inverse Fourier transform of the power spectral density characteristic:
R_N(τ) = ∫_{−f_c−B}^{−f_c+B} (N₀/2) exp(j2πfτ) df + ∫_{f_c−B}^{f_c+B} (N₀/2) exp(j2πfτ) df
= N₀B sinc(2Bτ) [exp(−j2πf_cτ) + exp(j2πf_cτ)]
= 2N₀B sinc(2Bτ) cos(2πf_cτ)
The autocorrelation functions of n_I(t) and n_Q(t) are given by:
R_{N_I}(τ) = R_{N_Q}(τ) = 2N₀B sinc(2Bτ)

Representation of Narrowband Noise in Terms of In-phase and Quadrature Components The narrowband noise n(t) can be represented in terms of its envelope and phase components:
n(t) = r(t) cos[2πf_c t + ψ(t)]
r(t) = [n_I²(t) + n_Q²(t)]^{1/2}
ψ(t) = tan⁻¹[n_Q(t) / n_I(t)]
r(t): envelope of n(t);  ψ(t): phase of n(t).
Both r(t) and ψ(t) are sample functions of low-pass random processes. The probability distributions of r(t) and ψ(t) may be obtained from those of n_I(t) and n_Q(t).

Representation of Narrowband Noise in Terms of In-phase and Quadrature Components Let N_I and N_Q denote the random variables obtained by observing the random processes represented by the sample functions n_I(t) and n_Q(t), respectively. N_I and N_Q are independent Gaussian random variables of zero mean and variance σ². Their joint probability density function is given by:
f_{N_I,N_Q}(n_I, n_Q) = (1/(2πσ²)) exp(−(n_I² + n_Q²)/(2σ²))
Define n_I = r cos ψ and n_Q = r sin ψ. We have dn_I dn_Q = r dr dψ. The joint probability density function of R and Ψ is:
f_{R,Ψ}(r, ψ) = (r/(2πσ²)) exp(−r²/(2σ²))
The Ψ is uniformly distributed inside the range 0 to 2π.

Representation of Narrowband Noise in Terms of In-phase and Quadrature Components The probability density function of the random variable R is:
f_R(r) = (r/σ²) exp(−r²/(2σ²)) for r ≥ 0;  0 elsewhere   (5.150)
A random variable having the probability density function of (5.150) is said to be Rayleigh distributed. The Rayleigh distribution in the normalized form (υ = r/σ):
f_V(υ) = υ exp(−υ²/2) for υ ≥ 0;  0 elsewhere
Fig. 5.8: normalized Rayleigh distribution.
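
A sketch confirming the Rayleigh model (5.150): the envelope of two independent zero-mean Gaussian components of common variance σ² has the Rayleigh CDF 1 − exp(−r²/(2σ²)). The value of σ and the test point r₀ are illustrative assumptions.

```python
import numpy as np

# Sketch: envelope of two independent zero-mean Gaussians is Rayleigh distributed.
rng = np.random.default_rng(6)
sigma = 1.5                                   # assumed value
n_i = rng.normal(0.0, sigma, 1_000_000)
n_q = rng.normal(0.0, sigma, 1_000_000)
r = np.sqrt(n_i**2 + n_q**2)                  # envelope samples

r0 = 2.0
print(np.mean(r <= r0))                       # empirical P(R <= r0)
print(1.0 - np.exp(-r0**2 / (2 * sigma**2)))  # Rayleigh CDF at r0
```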

Representation of Narrowband Noise in Terms of In-phase and Quadrature Components Example 5.18 Sinusoidal Signal Plus Narrowband Noise. A sample function of the sinusoidal signal A cos(2πf_c t) plus narrowband noise n(t) is given by:
x(t) = A cos(2πf_c t) + n(t)
Representing n(t) in terms of its in-phase and quadrature components around the carrier frequency f_c:
x(t) = n_I'(t) cos(2πf_c t) − n_Q(t) sin(2πf_c t),  where n_I'(t) = A + n_I(t)
Assume that n(t) is Gaussian with zero mean and variance σ². Both n_I(t) and n_Q(t) are Gaussian and statistically independent. The mean of n_I'(t) is A and that of n_Q(t) is zero. The variance of both n_I'(t) and n_Q(t) is σ².

Representation of Narrowband Noise in Terms of In-phase and Quadrature Components The joint probability density function of the random variables N_I' and N_Q, corresponding to n_I'(t) and n_Q(t), is
f_{N_I',N_Q}(n_I', n_Q) = (1/(2πσ²)) exp(−[(n_I' − A)² + n_Q²]/(2σ²))
Let r(t) denote the envelope of x(t) and ψ(t) denote its phase:
r(t) = {[n_I'(t)]² + [n_Q(t)]²}^{1/2},  ψ(t) = tan⁻¹[n_Q(t) / n_I'(t)]
The joint probability density function of the random variables R and Ψ is given by
f_{R,Ψ}(r, ψ) = (r/(2πσ²)) exp(−(r² + A² − 2Ar cos ψ)/(2σ²))

Representation of Narrowband Noise in Terms of In-phase and Quadrature Components The function f_{R,Ψ}(r,ψ) cannot be expressed as a product f_R(r) f_Ψ(ψ). This is because we now have a term involving the values of both random variables multiplied together as r cos ψ.
Rician distribution:
f_R(r) = ∫₀^{2π} f_{R,Ψ}(r, ψ) dψ
= (r/(2πσ²)) exp(−(r² + A²)/(2σ²)) ∫₀^{2π} exp((Ar/σ²) cos ψ) dψ
= (r/σ²) exp(−(r² + A²)/(2σ²)) I₀(Ar/σ²)
where I₀ is the modified Bessel function of the first kind of zeroth order.
The Rician distribution reduces to the Rayleigh distribution for small a (where a = A/σ), and reduces to an approximate Gaussian distribution when a is large.
Fig. 5.9: normalized Rician distribution.

Joint Density of Two Jointly Gaussian RVs

Joint Density of n Jointly Gaussian RVs The joint pdf of the n jointly Gaussian random variables is defined as
p(x₁, x₂, ..., x_n) = 1/((2π)^{n/2} (det M)^{1/2}) exp(−(1/2)(x − m_x)ᵀ M⁻¹ (x − m_x))
where M denotes the n × n covariance matrix, and m_x denotes the column vector of mean values.
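
A direct numpy evaluation of the joint Gaussian pdf above for an assumed 2-dimensional mean vector and covariance matrix; all numbers below are illustrative.

```python
import numpy as np

# Sketch: evaluating the jointly Gaussian pdf for assumed m_x and M.
m_x = np.array([1.0, -1.0])                 # assumed mean vector
M = np.array([[2.0, 0.5],
              [0.5, 1.0]])                  # assumed covariance matrix
x = np.array([1.5, -0.5])                   # point at which to evaluate the pdf

n = m_x.size
d = x - m_x
quad = d @ np.linalg.inv(M) @ d             # (x - m_x)' M^{-1} (x - m_x)
pdf = np.exp(-0.5 * quad) / ((2 * np.pi) ** (n / 2) * np.sqrt(np.linalg.det(M)))
print(pdf)
```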