3F1 Random Processes Examples Paper (for all 6 lectures)

1. Three factories make the same electrical component. Factory A supplies half of the total number of components to the central depot, while factories B and C each supply a quarter of the total number. It is known that 5% of components from factory A are defective, while 4% and 2% are defective from factories B and C, respectively. Define a suitable Sample Space (set of possible outcomes) for this problem and mark these outcomes as regions on a Venn Diagram. Identify the regions "Made at Factory C" and "Component is faulty" on the Venn Diagram. A randomly selected component at the depot is found to be faulty. What is the probability that the component was made by factory C?

2. In a digital communication system, the messages are encoded into binary symbols 0 and 1. Because of noise in the system, the incorrect symbol is sometimes received. Suppose the probability of a 0 being transmitted is 0.4 and the probability of a 1 being transmitted is 0.6. Further suppose that the probability of a transmitted 0 being received as a 1 is 0.08 and the probability of a transmitted 1 being received as a 0 is 0.05. Find:
(a) The probability that a received 0 was transmitted as a 0.
(b) The probability that a received 1 was transmitted as a 1.
(c) The probability that any symbol is received in error.

3. A random variable has a Cumulative Distribution Function (CDF) given by:
F_X(x) = { 0 for x < 0;  1 − e^{−x} for 0 ≤ x < ∞ }
Find:
(a) The probability that X > 0.5
(b) The probability that X ≤ 0.25
(c) The probability that 0.3 < X ≤ 0.7
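The Bayes' theorem calculations in questions 1 and 2 can be checked numerically; a short Python sketch (the supply fractions, defect rates and channel parameters below are those stated in the questions):

```python
# Q1: supply fractions and defect rates for the three factories
p_factory = {"A": 0.5, "B": 0.25, "C": 0.25}
p_defect = {"A": 0.05, "B": 0.04, "C": 0.02}

p_faulty = sum(p_factory[f] * p_defect[f] for f in p_factory)  # total P(faulty)
p_C_given_faulty = p_defect["C"] * p_factory["C"] / p_faulty   # Bayes' theorem

# Q2: binary channel, P(0 sent) = 0.4, P(0 -> 1 error) = 0.08, P(1 -> 0 error) = 0.05
p_t0, p_t1 = 0.4, 0.6
p_r1_t0, p_r0_t1 = 0.08, 0.05

p_r0 = (1 - p_r1_t0) * p_t0 + p_r0_t1 * p_t1        # P(0 received)
p_t0_given_r0 = (1 - p_r1_t0) * p_t0 / p_r0          # part (a)
p_r1 = (1 - p_r0_t1) * p_t1 + p_r1_t0 * p_t0         # P(1 received)
p_t1_given_r1 = (1 - p_r0_t1) * p_t1 / p_r1          # part (b)
p_error = p_r1_t0 * p_t0 + p_r0_t1 * p_t1            # part (c)

print(p_C_given_faulty, p_t0_given_r0, p_t1_given_r1, p_error)
```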
4. Sketch the Probability Density Function (PDF) and Cumulative Distribution Function (CDF) for each of the following distributions (in parts (b) and (c) you will first need to calculate the normalising constants c, and you may use a computer for these plots if you wish). Determine the mean µ and standard deviation σ of each, and mark µ and µ ± σ on the PDF sketches. Determine also the modes (locations of maximum probability) for the continuous pdfs (b) and (c).
(a) Discrete Random Variable with three possible values: Pr{X = −0.5} = 1/5, Pr{X = 0} = 2/5 and Pr{X = +1} = 2/5
(b) Laplace Distribution: f_X(x) = c exp(−|x|), −∞ < x < +∞
(c) Rayleigh Distribution: f_X(x) = c x exp(−x²/2), 0 ≤ x < +∞

5. A real-valued random variable U is distributed as a Gaussian with mean zero and variance σ². A new random variable X = g(U) is defined as a function of U. Determine and sketch the probability density function of X when g(.) takes the following forms:
(a) g(U) = |U|
(b) g(U) = U²
(c) g(U) = { U + a for U < −a;  0 for −a ≤ U ≤ a;  U − a for U > a }   (a is a positive constant)

6. A darts player, aiming at the bullseye, measures the accuracy of shots in terms of x and y coordinates relative to the centre of the dartboard. It is found from many measurements that the x- and y-values of the shots are independent and normally distributed with mean zero and standard deviation σ. Write down the joint probability density function for the x and y measurements and write down an integral expression which gives the probability that x lies between a and b and y lies between c and d. Show that the Cumulative Distribution Function (CDF) for R, the radial distance of shots from the centre of the dartboard, is given by:
F_R(r) = 1 − exp(−r²/(2σ²)),  0 ≤ r < ∞
Determine the Probability Density Function for R. What type of distribution is this? If σ = 3 cm, what is the probability that the player's next shot hits the bullseye, which has a diameter of 2 cm?
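The normalising constants and moments asked for in question 4 can be checked by simple numerical integration (a midpoint-rule sketch; the expected values c = 0.5 and c = 1 follow from the worked solutions):

```python
import math

def integrate(f, a, b, n=100000):
    # midpoint-rule numerical integration
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

# (b) Laplace: f(x) = c*exp(-|x|); area with c = 1 gives 1/c
area_lap = integrate(lambda x: math.exp(-abs(x)), -20, 20)
c_laplace = 1 / area_lap                      # expect 0.5
var_lap = c_laplace * integrate(lambda x: x * x * math.exp(-abs(x)), -20, 20)

# (c) Rayleigh: f(x) = c*x*exp(-x^2/2)
area_ray = integrate(lambda x: x * math.exp(-x * x / 2), 0, 20)
c_rayleigh = 1 / area_ray                     # expect 1
mean_ray = integrate(lambda x: x * x * math.exp(-x * x / 2), 0, 20)  # expect sqrt(pi/2)

print(c_laplace, var_lap, c_rayleigh, mean_ray)
```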
7. Two random variables X and Y have a joint Probability Density Function given by:
f_XY(x, y) = { kxy for 0 ≤ x ≤ 1, 0 ≤ y ≤ 1;  0 otherwise }
Determine:
(a) The value of k which makes this a valid Probability Density Function.
(b) The probability of the event X ≤ 1/2 AND Y > 1/2.
(c) The marginal densities f_X(x) and f_Y(y).
(d) The conditional density f_{Y|X}(y|x).
(e) Whether X and Y are independent.

8. A particular Random Process {X(t)} has an ensemble which consists of four sample functions:
X(t, α₁) = 1,  X(t, α₂) = −1,  X(t, α₃) = sin(2πt),  X(t, α₄) = cos(2πt)
where {α₁, α₂, α₃, α₄} is the set of possible outcomes in some underlying Random Experiment, each of which occurs with equal probability. Determine the first order Probability Density Function (PDF) f_{X(t)}(x) and the expected value of the process E{X(t)}. Is the process Strict Sense Stationary?

9. For each of the following random processes, sketch a few members of the ensemble, determine the expected values and autocorrelation functions of the processes and state which of them are wide sense stationary (WSS):
(a) X(t) = A;  A is uniformly distributed between 0 and 1.
(b) X(t) = cos(2πf₀t + Φ);  f₀ is fixed; Φ is uniformly distributed between 0 and φmax. This process is in general non-stationary. Are there any values of φmax for which the process is WSS?
(c) X(t) = A cos(2πf₀t + Φ);  f₀ is fixed; Φ is uniformly distributed between 0 and 2π; A is uniformly distributed between 0 and 1; Φ and A are statistically independent.
Is the process in part (a) Mean Ergodic?
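The joint-density results of question 7 can be verified by brute-force summation over a fine grid (a sketch, assuming f_XY = kxy on the unit square):

```python
n = 1000
h = 1.0 / n
xs = [(i + 0.5) * h for i in range(n)]   # midpoints of grid cells

total = sum(x * y for x in xs for y in xs) * h * h   # integral of x*y over the square
k = 1 / total                                        # expect 4

# P(X <= 1/2 and Y > 1/2)
p_event = sum(k * x * y for x in xs if x <= 0.5 for y in xs if y > 0.5) * h * h

def fX_at(x):
    # marginal density f_X(x) = integral over y; expect 2x
    return sum(k * x * y for y in xs) * h

print(k, p_event, fX_at(0.25))
```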
10. Show the following results for a wide-sense stationary (WSS), real-valued random process {X(t)} with autocorrelation function r_XX(τ) and power spectrum S_X(ω):
(a) r_XX(−τ) = r_XX(+τ);
(b) If {X(t)} represents a random voltage across a 1 Ω resistance, the average power dissipated is P_av = r_XX(0);
(c) S_X(ω) = S_X(−ω);
(d) S_X(ω) is real-valued;
(e) P_av = (1/2π) ∫_{−∞}^{∞} S_X(ω) dω.

11. A white noise process {X(t)} is a Wide Sense Stationary, zero mean process with autocorrelation function:
r_XX(τ) = σ² δ(τ)
where δ(τ) is the delta-function centred on τ = 0 whose area is unity and width is zero. Sketch the Power Spectrum for this process. A sample function X(t) from such a white noise process is applied as the input to a linear system whose impulse response is h(t). The output is Y(t). Derive expressions for the output autocorrelation function r_YY(τ) = E[Y(t) Y(t+τ)] and the cross-correlation function between the input and output r_XY(τ) = E[X(t) Y(t+τ)]. Hence obtain an expression for the output power spectrum S_Y(ω) in terms of σ² and the frequency response H(ω) of the linear system. If the process is correlation ergodic, suggest in block diagram form a scheme for measurement of r_XY(τ). What is a possible application for such a scheme?

12. It is desired to predict a future value X(t+T) of a WSS random process {X(t)} from the current value X(t) using the formula:
X̂(t+T) = c X(t)
where c is a constant to be determined. If the process has autocorrelation function r_XX(τ), show that the value of c which leads to minimum mean squared error between X(t+T) and X̂(t+T) is
c = r_XX(T) / r_XX(0)
Hence obtain an expression for the expected mean squared error in this case. If {X(t)} has non-zero mean, suggest an improved formula for X̂(t+T), giving reasons.

Nick Kingsbury; October 9
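The one-coefficient predictor of question 12 can be sanity-checked by Monte-Carlo simulation. A sketch, assuming the WSS process X(t) = cos(ω₀t + Φ) with Φ uniform on [0, 2π), for which r_XX(τ) = ½cos(ω₀τ) and hence c = cos(ω₀T):

```python
import math
import random

random.seed(0)
w0, T = 2 * math.pi, 0.1
c = math.cos(w0 * T)   # optimal coefficient r_XX(T)/r_XX(0) for this process

def mse(coeff, trials=200000):
    # Monte-Carlo estimate of E[(X(t+T) - coeff*X(t))^2]
    err = 0.0
    for _ in range(trials):
        phi = random.uniform(0, 2 * math.pi)
        t = random.uniform(0, 1)
        x_now = math.cos(w0 * t + phi)
        x_fut = math.cos(w0 * (t + T) + phi)
        err += (x_fut - coeff * x_now) ** 2
    return err / trials

# theoretical minimum MSE: r_XX(0) - r_XX(T)^2 / r_XX(0)
mse_theory = 0.5 - (0.5 * math.cos(w0 * T)) ** 2 / 0.5
print(mse(c), mse_theory)
```

Perturbing the coefficient away from c in either direction should increase the measured MSE, confirming that c is the minimiser.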
Tripos questions: This 3F1 course has been given since 2003 and is similar in content to the Random Signals course from the old third-year paper E4, and questions on random processes and probability from earlier E4 Tripos papers will also be relevant.

Answers:
1. 1/8
2. (a) 0.925 (b) 0.947 (c) 0.062
3. (a) 0.607 (b) 0.221 (c) 0.244
4. (a) µ = 0.3; σ = 0.6.
(b) c = 0.5; µ = 0; σ = 1.414; mode 0.
(c) c = 1; µ = √(π/2) ≈ 1.2533; σ = √(2 − π/2) ≈ 0.655; mode 1.
5. (a) f_X(x) = (2/√(2πσ²)) exp(−x²/(2σ²)) for 0 ≤ x < ∞; 0 elsewhere.
(b) f_X(x) = (1/√(2πxσ²)) exp(−x/(2σ²)) for 0 < x < ∞; 0 elsewhere.
(c) f_X(x) = (1/√(2πσ²)) exp(−(x−a)²/(2σ²)) for −∞ < x < 0;
(2Φ(a/σ) − 1) δ(x) for x = 0, where Φ(x) = (1/√(2π)) ∫_{−∞}^{x} exp(−u²/2) du;
(1/√(2πσ²)) exp(−(x+a)²/(2σ²)) for 0 < x < ∞.
6. f_R(r) = (r/σ²) exp(−r²/(2σ²)) for 0 ≤ r < ∞; 0 elsewhere. Pr(bullseye) = 0.054.
7. (a) 4 (b) 3/16
(c) f_X(x) = { 2x for 0 ≤ x ≤ 1; 0 elsewhere },  f_Y(y) = { 2y for 0 ≤ y ≤ 1; 0 elsewhere }
(d) f_{Y|X}(y|x) = { 2y for 0 ≤ y ≤ 1; 0 elsewhere }
(e) Yes, X and Y are independent.
8. f_{X(t)}(x) = (1/4)[δ(x − 1) + δ(x + 1) + δ(x − sin(2πt)) + δ(x − cos(2πt))]
E[X(t)] = (1/4)[sin(2πt) + cos(2πt)]
9. (a) E[X(t)] = 1/2;  E[X(t₁)X(t₂)] = 1/3. WSS.
(b) E[X(t)] = [sin(ω₀t + φmax) − sin(ω₀t)] / φmax;
E[X(t₁)X(t₂)] = [sin(ω₀(t₁+t₂) + 2φmax) − sin(ω₀(t₁+t₂)) + 2φmax cos(ω₀(t₁−t₂))] / (4φmax)
Not WSS unless φmax = 2nπ.
(c) E[X(t)] = 0;  E[X(t₁)X(t₂)] = (1/6) cos(ω₀(t₁−t₂)). WSS.
11. S_X(ω) = σ²
r_YY(τ) = σ² h(τ) ∗ h(−τ)
r_XY(τ) = σ² h(τ)
S_Y(ω) = σ² |H(ω)|²   (∗ is convolution)
12. Minimum MSE = r_XX(0) − r_XX(T)² / r_XX(0)
3F1 Random Processes Examples Paper Solutions

1. Sample space: {AF, AF̄, BF, BF̄, CF, CF̄} where {A, B, C} are the factories and {F, F̄} are the faulty / not-faulty states.

[Venn diagram: the six regions AF, AF̄, BF, BF̄, CF, CF̄ arranged in a grid, with areas which may be drawn proportional to probabilities; the regions "Made at C" and "Component is faulty" are marked.]

Probability that a faulty component was made by C:
P(C|F) = P(F|C) P(C) / P(F)
where
P(F) = P(F|A) P(A) + P(F|B) P(B) + P(F|C) P(C)
Hence
P(C|F) = (0.02 × 0.25) / (0.05 × 0.5 + 0.04 × 0.25 + 0.02 × 0.25) = 0.005 / (0.025 + 0.01 + 0.005) = 0.005/0.04 = 1/8

2. Let {t₀, t₁} and {r₀, r₁} be the pairs of transmit and receive states for the symbols 0 and 1. Then:
P(t₀) = 0.4, P(t₁) = 0.6 and P(r₁|t₀) = 0.08, P(r₀|t₁) = 0.05
(a) P(t₀|r₀) = P(r₀|t₀) P(t₀) / P(r₀)
= (1 − P(r₁|t₀)) P(t₀) / [(1 − P(r₁|t₀)) P(t₀) + P(r₀|t₁) P(t₁)]
= (1 − 0.08) × 0.4 / [(1 − 0.08) × 0.4 + 0.05 × 0.6]
= 0.368 / (0.368 + 0.03) = 0.925
(b) P(t₁|r₁) = P(r₁|t₁) P(t₁) / P(r₁)
= (1 − P(r₀|t₁)) P(t₁) / [(1 − P(r₀|t₁)) P(t₁) + P(r₁|t₀) P(t₀)]
= (1 − 0.05) × 0.6 / [(1 − 0.05) × 0.6 + 0.08 × 0.4]
= 0.57 / (0.57 + 0.032) = 0.947
(c) P(error) = P(r₁|t₀) P(t₀) + P(r₀|t₁) P(t₁) = 0.08 × 0.4 + 0.05 × 0.6 = 0.062

3. (a) Pr{X > 0.5} = 1 − F_X(0.5) = e^{−0.5} = 0.607
(b) Pr{X ≤ 0.25} = F_X(0.25) = 1 − e^{−0.25} = 0.221
(c) Pr{0.3 < X ≤ 0.7} = F_X(0.7) − F_X(0.3) = e^{−0.3} − e^{−0.7} = 0.244

4. (a) E[X] = Σᵢ xᵢ pᵢ = (−0.5)(1/5) + (0)(2/5) + (1)(2/5) = 0.3
E[X²] = Σᵢ xᵢ² pᵢ = (0.25)(1/5) + (0)(2/5) + (1)(2/5) = 0.45
σ² = E[X²] − (E[X])² = 0.45 − 0.09 = 0.36,  so σ = 0.6

[Plots: (a) pdf of the discrete RV, (b) cdf of the discrete RV, with µ and µ ± σ marked on the pdf.]

(b) The normalising constant c must make the area under the pdf unity:
∫_{−∞}^{∞} f_X(x) dx = ∫_{−∞}^{∞} c e^{−|x|} dx = 2c ∫₀^∞ e^{−x} dx = 2c [−e^{−x}]₀^∞ = 2c = 1,  ∴ c = 0.5
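The discrete-RV moments in solution 4(a) can be reproduced directly (values and probabilities as in the question):

```python
# values and probabilities of the discrete RV in Q4(a)
values = [-0.5, 0.0, 1.0]
probs = [1/5, 2/5, 2/5]

mean = sum(v * p for v, p in zip(values, probs))      # E[X], expect 0.3
ex2 = sum(v * v * p for v, p in zip(values, probs))   # E[X^2], expect 0.45
sigma = (ex2 - mean ** 2) ** 0.5                      # expect 0.6
print(mean, ex2, sigma)
```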
[Plots: (c) pdf of the Laplace distribution, (d) cdf of the Laplace distribution, with µ and µ ± σ marked on the pdf.]

E[X] = ∫ x f_X(x) dx = 0, from the symmetry of f_X.
σ² = E[X²] = ∫_{−∞}^{∞} x² c e^{−|x|} dx = 2c ∫₀^∞ x² e^{−x} dx
Use integration by parts (twice):
∫₀^∞ x² e^{−x} dx = [−x² e^{−x}]₀^∞ + 2 ∫₀^∞ x e^{−x} dx = 0 + 2([−x e^{−x}]₀^∞ + ∫₀^∞ e^{−x} dx) = 2(0 + [−e^{−x}]₀^∞) = 2
∴ σ² = 2c × 2 = 2,  so σ = √2 ≈ 1.414.
The mode is where f_X is maximum; i.e. at x = 0 (by inspection).

(c) The normalising constant c must make the area under the pdf unity:
∫₀^∞ c x e^{−x²/2} dx = c ∫₀^∞ e^{−u} du  (with u = x²/2) = c [−e^{−u}]₀^∞ = c = 1,  ∴ c = 1

[Plots: (e) pdf of the Rayleigh distribution, (f) cdf of the Rayleigh distribution, with µ and µ ± σ marked on the pdf.]
E[X] = ∫₀^∞ x (x e^{−x²/2}) dx = √2 ∫₀^∞ u^{1/2} e^{−u} du  (with u = x²/2) = √2 Γ(3/2) = √2 (√π/2) = √(π/2) ≈ 1.2533
E[X²] = ∫₀^∞ x² (x e^{−x²/2}) dx = 2 ∫₀^∞ u e^{−u} du = 2 Γ(2) = 2
σ² = E[X²] − (E[X])² = 2 − π/2,  so σ ≈ 0.655
The mode is where the gradient of the pdf is zero:
d f_X/dx = e^{−x²/2} − x (x e^{−x²/2}) = (1 − x²) e^{−x²/2} = 0 when x = 1
Hence the mode is at x = 1.

5. pdf of U:  f_U(u) = (1/√(2πσ²)) e^{−u²/(2σ²)}
cdf of U:  F_U(u) = ∫_{−∞}^{u} f_U(α) dα = 1 − Q(u/σ)
where Q(x) = ∫_x^∞ (1/√(2π)) e^{−α²/2} dα is the Gaussian Error Integral function.

(a) X = g(U) = |U|:
F_X(x) = P({u : |u| ≤ x}) = P({u : −x ≤ u ≤ x}) = F_U(x) − F_U(−x)  for 0 ≤ x < ∞
= F_U(x) − [1 − F_U(x)] = 2 F_U(x) − 1
∴ f_X(x) = dF_X/dx = 2 dF_U/dx = 2 f_U(x) = (2/√(2πσ²)) e^{−x²/(2σ²)}  for 0 ≤ x < ∞

[Plots: (a) pdf of X = g(U) = |U|, (b) pdf of X = g(U) = U², both against x/σ.]
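The result of solution 5(a), that X = |U| has cdf 2F_U(x) − 1, can be checked by sampling (a sketch with σ = 1 for convenience, using the identity F_U(x) = ½(1 + erf(x/(σ√2)))):

```python
import math
import random

random.seed(1)
sigma = 1.0
# draw |U| for Gaussian U with mean 0, std dev sigma
samples = [abs(random.gauss(0, sigma)) for _ in range(200000)]

x = 0.5
emp = sum(1 for s in samples if s <= x) / len(samples)   # empirical P(X <= x)
theory = math.erf(x / (sigma * math.sqrt(2)))            # 2*F_U(x) - 1
print(emp, theory)
```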
(b) X = g(U) = U²:
F_X(x) = P({u : u² ≤ x}) = P({u : −√x ≤ u ≤ √x}) = F_U(√x) − F_U(−√x)  for 0 ≤ x < ∞
= 2 F_U(√x) − 1
∴ f_X(x) = dF_X/dx = 2 dF_U(√x)/dx = (1/√x) f_U(√x) = (1/√(2πxσ²)) e^{−x/(2σ²)}  for 0 < x < ∞

(c) X = g(U) = { U + a for U < −a;  0 for −a ≤ U ≤ a;  U − a for U > a }
Consider the 3 regions separately:
i. U < −a, X < 0:
F_X(x) = P({u : u + a ≤ x}) = P({u : u ≤ x − a}) = F_U(x − a)
∴ f_X(x) = f_U(x − a) = (1/√(2πσ²)) e^{−(x−a)²/(2σ²)}  for x < 0
ii. −a ≤ U ≤ a, X = 0:
Pr{X = 0} = P({u : −a ≤ u ≤ a}) = F_U(a) − F_U(−a) = 2 F_U(a) − 1
This is a probability mass at x = 0, so
f_X(x) = [2 F_U(a) − 1] δ(x) = [1 − 2 Q(a/σ)] δ(x)  for x = 0
iii. U > a, X > 0:
F_X(x) = P({u : u − a ≤ x}) = P({u : u ≤ x + a}) = F_U(x + a)
∴ f_X(x) = f_U(x + a) = (1/√(2πσ²)) e^{−(x+a)²/(2σ²)}  for x > 0

[Plot: (c) pdf of X = g(U) for the clipped form of part (c), shown for a = σ, against x/σ.]
6. X and Y each have a zero-mean Gaussian pdf given by
f_X(x) = (1/√(2πσ²)) e^{−x²/(2σ²)}  and  f_Y(y) = (1/√(2πσ²)) e^{−y²/(2σ²)}
Since X and Y are independent, the joint pdf is given by
f_{X,Y}(x, y) = f_X(x) f_Y(y) = (1/(2πσ²)) e^{−(x²+y²)/(2σ²)}
Hence
Pr{a < X ≤ b, c < Y ≤ d} = ∫_{x=a}^{b} ∫_{y=c}^{d} f_{X,Y}(x, y) dy dx = ∫_{x=a}^{b} ∫_{y=c}^{d} (1/(2πσ²)) e^{−(x²+y²)/(2σ²)} dy dx
In polar coordinates x = r cos θ and y = r sin θ, so the above expression converts to
Pr{r₁ < R ≤ r₂, θ₁ < θ ≤ θ₂} = ∫_{r=r₁}^{r₂} ∫_{θ=θ₁}^{θ₂} (1/(2πσ²)) e^{−r²/(2σ²)} r dθ dr
To get the cdf for R, we integrate over all angles so that θ₂ − θ₁ = 2π, and we integrate from r′ = 0 to r′ = r to get
F_R(r) = Pr{0 < R ≤ r, −π < θ ≤ π} = ∫₀^r (2π/(2πσ²)) r′ e^{−r′²/(2σ²)} dr′ = [−e^{−r′²/(2σ²)}]₀^r = 1 − e^{−r²/(2σ²)}  for 0 ≤ r < ∞
f_R(r) = dF_R/dr = (r/σ²) e^{−r²/(2σ²)}
This is a Rayleigh distribution.
For a 2 cm diameter bullseye, the max radius is 1 cm, and if σ = 3 cm:
Pr{bullseye} = Pr{R ≤ 1 cm} = F_R(1) = 1 − e^{−1/(2×9)} = 0.054
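The bullseye probability can be confirmed by Monte-Carlo simulation of the dart throws (a sketch with σ = 3 cm and bullseye radius 1 cm, as in the solution above):

```python
import math
import random

random.seed(2)
sigma, radius = 3.0, 1.0
n = 200000
# each throw: independent zero-mean Gaussian x and y errors; hit if inside radius
hits = sum(1 for _ in range(n)
           if random.gauss(0, sigma) ** 2 + random.gauss(0, sigma) ** 2 <= radius ** 2)
p_emp = hits / n
p_theory = 1 - math.exp(-radius ** 2 / (2 * sigma ** 2))   # Rayleigh CDF at r = 1
print(p_emp, p_theory)
```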
7. (a) For a valid pdf:
∫₀¹ ∫₀¹ kxy dx dy = k [x²/2]₀¹ [y²/2]₀¹ = k/4 = 1,  ∴ k = 4
(b) Pr{X ≤ 1/2, Y > 1/2} = ∫₀^{1/2} ∫_{1/2}^{1} 4xy dy dx = ∫₀^{1/2} 4x [y²/2]_{1/2}^{1} dx = ∫₀^{1/2} (3/2) x dx = (3/2)[x²/2]₀^{1/2} = 3/16
(c) f_X(x) = ∫₀¹ 4xy dy = 4x [y²/2]₀¹ = 2x for 0 ≤ x ≤ 1;  0 for x < 0 and x > 1
f_Y(y) = 2y for 0 ≤ y ≤ 1 (by symmetry);  0 for y < 0 and y > 1
(d) f_{Y|X}(y|x) = f_XY(x, y) / f_X(x) = 4xy / 2x = 2y  for 0 ≤ y ≤ 1;  0 elsewhere
(e) f_X(x) f_Y(y) = 2x × 2y = 4xy = f_XY(x, y)
Therefore X and Y are independent, by definition.

8. For any particular t, X(t) can take one of the 4 values X(t, αᵢ), each with equal probability. Hence:
f_{X(t)}(x) = (1/4)[δ(x − 1) + δ(x + 1) + δ(x − sin(2πt)) + δ(x − cos(2πt))]
∴ E[X(t)] = ∫ x f_{X(t)}(x) dx = (1/4)[1 − 1 + sin(2πt) + cos(2πt)] = (1/4)[sin(2πt) + cos(2πt)]
Since f_{X(t)}(x) and E[X(t)] depend on t, the process is not SSS.
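The ensemble average in solution 8 can be estimated by repeatedly drawing one of the four sample functions at a fixed t (a sketch, assuming the two constant sample functions are +1 and −1; their contributions cancel in the mean either way):

```python
import math
import random

random.seed(3)

def sample(t):
    # draw one of the four equally likely sample functions, evaluated at time t
    return random.choice([1.0, -1.0,
                          math.sin(2 * math.pi * t),
                          math.cos(2 * math.pi * t)])

t = 0.15
n = 200000
est = sum(sample(t) for _ in range(n)) / n
theory = 0.25 * (math.sin(2 * math.pi * t) + math.cos(2 * math.pi * t))
print(est, theory)
```

Repeating this at a different t gives a different mean, illustrating why the process is not stationary.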
9. (a) X(t) = A:
E[X(t)] = ∫ X(t, a) f_A(a) da = ∫₀¹ a da = [a²/2]₀¹ = 1/2
r_XX(t₁, t₂) = E[X(t₁) X(t₂)] = ∫ X(t₁, a) X(t₂, a) f_A(a) da = ∫₀¹ a² da = 1/3
These are independent of t, hence the process is WSS.
(b) X(t) = cos(ω₀t + Φ), where ω₀ = 2πf₀:
E[X(t)] = ∫ X(t, φ) f_Φ(φ) dφ = (1/φmax) ∫₀^{φmax} cos(ω₀t + φ) dφ
= [sin(ω₀t + φ)]₀^{φmax} / φmax = [sin(ω₀t + φmax) − sin(ω₀t)] / φmax
r_XX(t₁, t₂) = E[X(t₁) X(t₂)] = (1/φmax) ∫₀^{φmax} cos(ω₀t₁ + φ) cos(ω₀t₂ + φ) dφ
= (1/(2φmax)) ∫₀^{φmax} [cos(ω₀(t₁ + t₂) + 2φ) + cos(ω₀(t₁ − t₂))] dφ
= [sin(ω₀(t₁ + t₂) + 2φmax) − sin(ω₀(t₁ + t₂)) + 2φmax cos(ω₀(t₁ − t₂))] / (4φmax)
Mean and r_XX depend on t, so the process is not in general WSS. However if φmax = 2nπ (integer n), the mean is zero and hence independent of t. If φmax = nπ, r_XX = (1/2) cos(ω₀(t₁ − t₂)), which depends only on τ = t₁ − t₂. If both of the above are satisfied then the process is WSS. Hence the process is not WSS unless φmax = 2nπ (integer n).
(c) X(t) = A cos(ω₀t + Φ): the ensemble is now any sample function from (a) times any from (b). Think now of a multivariate sample space which represents pairs {A, Φ}. Since A and Φ are independent:
f_{A,Φ}(a, φ) = f_A(a) f_Φ(φ)
Hence
E[X(t)] = ∬ X(t, a, φ) f_{A,Φ}(a, φ) da dφ = ∬ a cos(ω₀t + φ) f_A(a) f_Φ(φ) da dφ
= ∫₀¹ a da × (1/(2π)) ∫₀^{2π} cos(ω₀t + φ) dφ = (1/2) × (1/(2π)) [sin(ω₀t + 2π) − sin(ω₀t)] = 0
Similarly, using results from (a) and (b):
E[X(t₁) X(t₂)] = ∬ a² cos(ω₀t₁ + φ) cos(ω₀t₂ + φ) f_A(a) f_Φ(φ) da dφ
= ∫₀¹ a² da × (1/(2π)) ∫₀^{2π} cos(ω₀t₁ + φ) cos(ω₀t₂ + φ) dφ
= (1/3) × (1/2) cos(ω₀(t₁ − t₂)) = (1/6) cos(ω₀τ),  τ = t₁ − t₂
Hence this process is WSS.
Is the process in (a) Mean Ergodic? The time average is the mean over t for a single realisation a:
E_t[X(t, a)] = lim_{T→∞} (1/2T) ∫_{−T}^{T} X(t, a) dt = lim_{T→∞} (1/2T) ∫_{−T}^{T} a dt = lim_{T→∞} (2Ta)/(2T) = a
The ensemble average is the mean over A for a single time instant t:
E_A[X(t, a)] = ∫ X(t, a) f_A(a) da = ∫₀¹ a da = 1/2,  from (a)
These two averages are not the same, so the process in (a) is not ergodic (even though it is stationary).
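The non-ergodicity of the process X(t) = A can be illustrated numerically: the time average of one realisation is just its constant value a, while the ensemble average is 1/2 (a sketch):

```python
import random

random.seed(4)
a = random.uniform(0, 1)   # one realisation of A: the whole sample function is the constant a

# time average over any interval of the constant function is just a
time_avg = sum(a for _ in range(1000)) / 1000

# ensemble average: mean of A over many independent realisations
ensemble_avg = sum(random.uniform(0, 1) for _ in range(200000)) / 200000
print(time_avg, ensemble_avg)
```

In general time_avg ≠ ensemble_avg (it depends on which realisation was drawn), which is exactly the failure of mean ergodicity.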
3F Random Processes Examples Paper Solutions. (a) r XX (τ) E[X(t) X(t + τ)] E[X(t + τ) X(t)] Let t t + τ: E[X(t ) X(t τ)] r XX ( τ) (b) Instantaneous power V /R X (t)/ Average power, P av E[X (t)] r XX () (c) S X (ω) Let τ τ: from (a): S X ( ω) r XX (τ) e jωτ dτ r XX ( τ ) e jωτ dτ r XX (τ ) e jωτ dτ (d) S X (ω) τ τ in st integral: from (a): r XX (τ) e jωτ dτ r XX (τ) e jωτ dτ + r XX ( τ) e jωτ dτ + r XX (τ) [e jωτ + e jωτ ] dτ r XX (τ) cos(ωτ) dτ All terms in the integral are real, so the result is real. r XX (τ) e jωτ dτ r XX (τ) e jωτ dτ Real (e) From (b): P av r XX () S X (ω) e jω dω S X (ω) dω π π
3F Random Processes Examples Paper Solutions. Power Spectrum: S X (ω) r XX (τ) e jωτ dτ σ δ(τ) e jωτ dτ σ e jω σ Hence the power spectrum is a constant, σ, over all frequencies ω. For the linear system: Y (t) h(t) X(t) h(β) X(t β) dβ (convolution) Autocorrelation function: r Y Y (τ) E[Y (t)y (t + τ)] [( ) ( E h(β )X(t β ) dβ )] h(β )X(t + τ β ) dβ [ E ] h(β ) h(β ) X(t β ) X(t + τ β ) dβ dβ h(β ) h(β ) E [X(t β ) X(t + τ β )] dβ dβ h(β ) h(β ) r XX (τ + β β ) dβ dβ h(β ) h(β ) σ δ(τ + β β ) dβ dβ σ h(β τ) h(β ) dβ σ h( τ) h(τ) Cross-correlation function: r XY (τ) E[X(t)Y (t + τ)] [ ( )] E X(t) h(β)x(t + τ β) dβ [ ] E h(β) X(t) X(t + τ β) dβ h(β) E [X(t) X(t + τ β)] dβ h(β) r XX (τ β) dβ h(β) σ δ(τ β) dβ σ h(τ) Taking Fourier Transforms of the above expression for r Y Y (τ): since H (ω) is the transform of h( τ). S Y (ω) σ H (ω) H(ω) σ H(ω)
If the process is Correlation Ergodic, the correlations may be estimated from expectations over time:
r_XY(τ) ≈ (1/2T) ∫_{−T}^{T} X(t) Y(t + τ) dt = (1/2T) ∫_{−T}^{T} X(t − τ) Y(t) dt
for some suitably large choice of T.
The integration over 2T can be approximated by a lowpass filter with an impulse response whose main lobe is of duration roughly 2T between its half-amplitude points. The block diagram should therefore show X(t) being passed through a delay of τ, then multiplied by Y(t), and the result being passed through the lowpass filter to produce r̂_XY(τ), which is the estimate of r_XY(τ). This is known as a Correlometer. If the delay τ is varied slowly compared with the 2T response time of the filter, then the full function r̂_XY(τ) can be obtained. The main application of this scheme is System Identification, in which the impulse response of the system is estimated from ĥ(τ) = r̂_XY(τ) / σ².

12. Instantaneous error:
ε(t) = X(t + T) − X̂(t + T) = X(t + T) − c X(t)
Mean square error (MSE):
P_ε = E[ε²(t)] = E[X²(t + T) − 2c X(t + T) X(t) + c² X²(t)]
Since X is WSS:
P_ε = (1 + c²) r_XX(0) − 2c r_XX(T)
Differentiating to find the minimum:
dP_ε/dc = 2c r_XX(0) − 2 r_XX(T) = 0 at the minimum,  ∴ c_min = r_XX(T) / r_XX(0)
So the minimum MSE is:
P_ε,min = (1 + c_min²) r_XX(0) − 2 c_min r_XX(T)
= r_XX(0) + r_XX(T)²/r_XX(0) − 2 r_XX(T)²/r_XX(0)
= r_XX(0) − r_XX(T)²/r_XX(0)
To see the effect of a non-zero mean, assume that X(t) is zero-mean, and that the non-zero mean process is Y(t) = X(t) + µ. Hence
r_YY(τ) = E[Y(t + τ) Y(t)] = E[(X(t + τ) + µ)(X(t) + µ)] = r_XX(τ) + µ²
Using the given formula to predict Y(t + T) from Y(t) results in an MSE of
P_ε,min = r_YY(0) − r_YY(T)²/r_YY(0) = (r_YY(0) + r_YY(T))(r_YY(0) − r_YY(T)) / r_YY(0)
= (r_XX(0) + µ² + r_XX(T) + µ²)(r_XX(0) − r_XX(T)) / (r_XX(0) + µ²)
= (1 + (r_XX(T) + µ²)/(r_XX(0) + µ²)) (r_XX(0) − r_XX(T))
Because r_XX(T) < r_XX(0), we see that the addition of µ² to r_XX will always increase P_ε,min if the original predictor is used. Hence we get the minimum MSE if our predictor is applied to a zero-mean process. To optimally predict the non-zero mean process Y(t), we should therefore subtract the mean µ and then apply the predictor to the zero-mean remainder X(t), to give:
Ŷ(t + T) = µ + c_min (Y(t) − µ)   or   Ŷ(t + T) = c_min Y(t) + (1 − c_min) µ
where
c_min = r_XX(T)/r_XX(0) = (r_YY(T) − µ²)/(r_YY(0) − µ²)
This will then leave P_ε,min unaffected by changes in µ, the mean component of Y.