Lecture Presentation 8 Aykut HOCANIN Dept. of Electrical and Electronic Engineering 1/14
Chapter 3: Representation of Random Processes

3.2 Deterministic Functions: Orthogonal Representations

For a finite-energy signal x(t) defined over [0, T],

    E_x = \int_0^T x^2(t)\, dt < \infty.    (1)

The orthonormal expansion is given by

    x(t) = \sum_{i=1}^{\infty} x_i \phi_i(t).    (2)

The coefficients x_i which minimize the mean-square approximation error for a given N are

    x_i = \int_0^T x(t)\, \phi_i(t)\, dt.    (3)

As N \to \infty, the approximation error goes to zero. We say that the \phi_i(t), i = 1, 2, \ldots, form a complete orthonormal (CON) set.
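As a numerical sketch of equations (1)-(3): the choice of a sine basis on [0, T] and the example signal x(t) = t(T - t) are assumptions for illustration, not from the notes. The truncated expansion is computed for several N to show the mean-square error shrinking.

```python
import numpy as np

T, n = 1.0, 2000
dt = T / n
t = np.arange(n) * dt                  # time grid on [0, T)

def phi(i):
    """Orthonormal sine basis on [0, T]: phi_i(t) = sqrt(2/T) sin(i pi t / T)."""
    return np.sqrt(2.0 / T) * np.sin(i * np.pi * t / T)

x = t * (T - t)                        # example finite-energy signal (assumed)
Ex = np.sum(x**2) * dt                 # signal energy, eq. (1)

mses = {}
for N in (1, 5, 20):
    coeffs = [np.sum(x * phi(i)) * dt for i in range(1, N + 1)]      # eq. (3)
    x_hat = sum(c * phi(i) for i, c in enumerate(coeffs, start=1))   # truncated eq. (2)
    mses[N] = np.sum((x - x_hat)**2) * dt   # mean-square approximation error
    print(N, mses[N])
```

The printed error decreases rapidly with N, consistent with the claim that the error goes to zero as N grows for a CON set.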
Substituting the expansion (2) into (1) and using orthonormality, it is observed that

    E_x = \sum_{i=1}^{\infty} x_i^2,    (4)

which is Parseval's theorem. It is possible to generate the coefficients using two different approaches:

1. correlation operation (Figure 1)
2. filter operation (Figure 2)

3.3-3.8 Random Process Characterization

This topic was discussed in detail in the probability review at the beginning of the semester. Please see the textbook for alternative interpretations and examples.
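A minimal sketch of the two coefficient-generation approaches, with an assumed basis function phi_3 and an arbitrary test signal: the correlator integrates x(t)phi(t) over [0, T], while the filter has impulse response phi(T - tau) and its output is sampled at t = T. The two outputs coincide.

```python
import numpy as np

T, n = 1.0, 2000
dt = T / n
t = np.arange(n) * dt                  # time grid on [0, T)

phi = np.sqrt(2.0 / T) * np.sin(3 * np.pi * t / T)   # one basis function (assumed: phi_3)
x = np.exp(-t) * np.sin(5 * np.pi * t)               # arbitrary finite-energy signal

# 1) Correlation operation (Figure 1): multiply and integrate over [0, T].
x_corr = np.sum(x * phi) * dt

# 2) Filter operation (Figure 2): pass x(t) through h(tau) = phi(T - tau)
#    and sample the output at t = T.
h = phi[::-1]                          # impulse response phi(T - tau)
y = np.convolve(x, h) * dt             # filter output on the grid
x_filt = y[n - 1]                      # sample at t = T

print(x_corr, x_filt)
```

On the grid the two numbers agree exactly, since y(T) = \int_0^T x(\tau)\,\phi(T - (T - \tau))\, d\tau = \int_0^T x(\tau)\,\phi(\tau)\, d\tau.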
Chapter 4: Detection of Signals - Estimation of Signal Parameters

Detection

The classical theory is extended to include observations which consist of continuous waveforms. Thermal noise can be modelled as a sample function from a Gaussian random process. In most systems, the spectrum of the thermal noise is flat over the frequency range of interest (spectral height N_0/2 joules). Figure 3 shows a case of known signals in the presence of additive white Gaussian noise.

In digital communication systems:
- The two types of error (deciding 1 when 0 was sent, and vice versa) are usually of equal importance.
- A signal is present under both hypotheses.
- The probability of error is sufficient to measure system performance.
- Error correction is possible.

In radar/sonar systems:
- The errors have different importance.
- A signal is present under only one hypothesis.
- The ROC is needed to assess performance.
- Error correction is not possible.

Estimation

The problem of estimating signal parameters is encountered frequently in both the communications and radar/sonar areas. The purpose of the receiver is to estimate the values of the successive A_i and use these estimates to reconstruct the message (Figure 4).
The approach in this chapter involves:

1. The observation consists of a waveform r(t) and hence may be infinite-dimensional. We therefore map the received signal into a convenient decision or estimation space.
2. In detection, decision regions are selected and the ROC or P(\epsilon) is computed. In estimation, the variance or the mean-square error is computed.
3. The results are examined for possible improvement of the design.

Detection and Estimation in White Gaussian Noise

In the simple binary detection problem the following hypotheses are given:

    H_1: r(t) = \sqrt{E}\, s(t) + w(t),  0 \le t \le T
    H_0: r(t) = w(t),                    0 \le t \le T    (5)

It is assumed that the signal s(t) has unit energy. The problem is to observe r(t) over the interval [0, T] and decide whether H_0 or H_1 is true.
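A discrete-time Monte Carlo sketch of the hypothesis pair (5), under assumed values E = 9 and N_0 = 2: white noise is discretized with per-sample variance N_0/(2 dt) so the correlator statistic has variance N_0/2, and for equally likely hypotheses the decision threshold on the correlator output is \sqrt{E}/2.

```python
import numpy as np

rng = np.random.default_rng(0)
T, n = 1.0, 500
dt = T / n
t = np.arange(n) * dt

N0 = 2.0                               # spectral height N0/2 = 1 (assumed value)
E = 9.0                                # signal energy (assumed value)
s = np.sqrt(2.0 / T) * np.sin(np.pi * t / T)    # unit-energy signal s(t) (assumed shape)

trials = 20000
errors = 0
for _ in range(trials):
    h1 = rng.random() < 0.5                         # true hypothesis, equally likely
    w = rng.normal(0.0, np.sqrt(N0 / (2 * dt)), n)  # discretized white Gaussian noise
    r = (np.sqrt(E) * s if h1 else 0.0) + w
    r1 = np.sum(r * s) * dt                         # correlate r(t) with s(t)
    decide_h1 = r1 > np.sqrt(E) / 2                 # threshold test
    errors += int(decide_h1 != h1)

pe = errors / trials
print(pe)
```

The statistic is N(0, N_0/2) under H_0 and N(\sqrt{E}, N_0/2) under H_1, so the estimated error rate should sit near Q(\sqrt{E/(2 N_0)} \cdot \sqrt{2}) = Q(1.5) \approx 0.067 for these assumed values.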
The observation is a continuous-time random waveform, and hence the first step is to reduce it to a (possibly countably infinite) set of random variables (Figure 5):

    r(t) = \lim_{K \to \infty} \sum_{i=1}^{K} r_i \phi_i(t),  0 \le t \le T.    (6)

The receiver may take the form of a correlation receiver or, equivalently, a matched-filter receiver. The distance between the two signals in the general binary detection-in-Gaussian-noise problem is given by

    d^2 = \frac{2}{N_0} \left( E_1 + E_0 - 2\rho \sqrt{E_0 E_1} \right).    (7)

For fixed energies the best performance is obtained by making \rho = -1; hence

    s_0(t) = -s_1(t).    (8)

It should be noted that the signal shape itself is not important.
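Equation (7) can be evaluated directly to compare signal correlations, using P(error) = Q(d/2) for equally likely hypotheses; the energies E_0 = E_1 = 4 and N_0 = 2 are assumed values for illustration.

```python
from math import erfc, sqrt

def Q(x):
    """Gaussian tail probability Q(x)."""
    return 0.5 * erfc(x / sqrt(2.0))

N0 = 2.0
E0 = E1 = 4.0                          # equal signal energies (assumed values)

def d_squared(rho):
    # eq. (7): d^2 = (2/N0) (E1 + E0 - 2 rho sqrt(E0 E1))
    return (2.0 / N0) * (E1 + E0 - 2.0 * rho * sqrt(E0 * E1))

for rho in (-1.0, 0.0, 1.0):
    d2 = d_squared(rho)
    print(rho, d2, Q(sqrt(d2) / 2.0))  # P(error) = Q(d/2), equally likely hypotheses
```

Antipodal signals (rho = -1) double d^2 relative to orthogonal signals (rho = 0), a 3 dB advantage, while identical signals (rho = 1) give d = 0 and P(error) = 1/2.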
Linear Estimation

The received waveform in additive white noise is given by

    r(t) = s(t, A) + w(t),  0 \le t \le T,    (9)

where w(t) is a sample function from a white Gaussian noise process with spectral height N_0/2. We wish to estimate A:

- If A is random, we assume that the a priori density is known and use a Bayesian estimation procedure.
- If A is a nonrandom variable, we use ML estimation.

If s(t, A) is a linear mapping (superposition holds), the system is referred to as a linear signaling system, and the estimator is linear for the various criteria of interest. For a linear system, equation (9) becomes

    r(t) = A \sqrt{E}\, s(t) + w(t),  0 \le t \le T.    (10)
Using the linearity property, the estimators are readily computed. Let

    r_1 = \int_0^T r(t)\, s(t)\, dt.    (11)

The probability density of r_1 given a = A is Gaussian, N(A\sqrt{E},\, N_0/2), so

    \hat{a}_{ML}(R_1) = \frac{R_1}{\sqrt{E}}.    (12)

If A is a random variable with probability density p_a(A), then the MAP estimate is the value of A at which

    l_p(A) = -\frac{(R_1 - A\sqrt{E})^2}{2 (N_0/2)} + \ln p_a(A)    (13)

is a maximum. For a Gaussian prior with variance \sigma_a^2,

    \hat{a}_{MAP}(R_1) = \frac{2E/N_0}{2E/N_0 + 1/\sigma_a^2} \cdot \frac{R_1}{\sqrt{E}}.    (14)

It should be noted that the only difference between the two estimators is the gain. The MAP estimate is also the Bayes estimate for a large class of other criteria (e.g., a squared-error cost function) as long as the a posteriori density is Gaussian.
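A Monte Carlo sketch of the ML and MAP estimators (12) and (14), under the linear model (10); the values E = 4, N_0 = 2, \sigma_a = 1 and the sinusoidal s(t) are assumptions for illustration. A is drawn from its Gaussian prior, both estimates are formed from the same R_1, and the empirical mean-square errors are compared.

```python
import numpy as np

rng = np.random.default_rng(1)
T, n = 1.0, 500
dt = T / n
t = np.arange(n) * dt

N0 = 2.0
E = 4.0
sigma_a = 1.0                          # prior standard deviation of A (assumed)
s = np.sqrt(2.0 / T) * np.sin(np.pi * t / T)    # unit-energy s(t) (assumed shape)

gain = (2 * E / N0) / (2 * E / N0 + 1 / sigma_a**2)   # shrinkage factor in eq. (14)

trials = 5000
err_ml = err_map = 0.0
for _ in range(trials):
    A = rng.normal(0.0, sigma_a)                    # draw A from the Gaussian prior
    w = rng.normal(0.0, np.sqrt(N0 / (2 * dt)), n)  # discretized white Gaussian noise
    r = A * np.sqrt(E) * s + w
    R1 = np.sum(r * s) * dt            # eq. (11)
    a_ml = R1 / np.sqrt(E)             # eq. (12)
    a_map = gain * R1 / np.sqrt(E)     # eq. (14): same statistic, smaller gain
    err_ml += (a_ml - A) ** 2
    err_map += (a_map - A) ** 2

mse_ml = err_ml / trials
mse_map = err_map / trials
print(mse_ml, mse_map)
```

For these assumed values the theoretical mean-square errors are (N_0/2)/E = 0.25 for ML and 1/(2E/N_0 + 1/\sigma_a^2) = 0.20 for MAP: the prior knowledge buys a smaller error, and the only difference between the estimators is the gain.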
Figure 1: Generation of expansion coefficients by correlation operation: x(t) is multiplied by each \phi_i(t) and integrated over [0, T] to produce x_i, i = 1, ..., N.
Figure 2: Generation of expansion coefficients by filter operation: x(t) is passed through filters with impulse responses \phi_i(T - \tau), and the outputs are sampled at t = T to produce x_i, i = 1, ..., N.
Figure 3: A digital communications system: the source bits (e.g., 0, 1, 1, 0) drive the transmitter producing s(t); noise n(t) is added; the receiver delivers the bits to the user.

Figure 4: A parameter transmission system: a(t) is sampled, the transmitter sends s(t, A_i) (PAM or PFM), noise n(t) is added, and the receiver processes r(t).
Figure 5: Generation of sufficient statistics: r(t) is decomposed into coordinates (an infinite-dimensional vector r), the coordinates are rotated to obtain the sufficient statistic l, which is fed to the decision device.