CHAPTER 14

Problem 14.1 :

Based on the information about the scattering function we know that the multipath spread is $T_m = 1$ ms and the Doppler spread is $B_d = 0.2$ Hz.

(a)
(i) $T_m = 10^{-3}$ sec
(ii) $B_d = 0.2$ Hz
(iii) $(\Delta t)_c \approx 1/B_d = 5$ sec
(iv) $(\Delta f)_c \approx 1/T_m = 1000$ Hz
(v) $T_m B_d = 2 \times 10^{-4}$

(b)
(i) Frequency non-selective channel : the signal transmitted over the channel has a bandwidth less than $(\Delta f)_c = 1000$ Hz.
(ii) Slowly fading channel : the signaling interval $T$ satisfies $T \ll (\Delta t)_c = 5$ sec.
(iii) The channel is frequency selective : the signal transmitted over the channel has a bandwidth greater than 1000 Hz.

(c) The signal design problem does not have a unique solution. We should use orthogonal $M = 4$ FSK with a symbol rate of 50 symbols/sec, hence $T = 1/50$ sec. For signal orthogonality, we select the frequencies with relative separation $\Delta f = 1/T = 50$ Hz. With this separation we obtain $10000/50 = 200$ frequencies in the 10 kHz allocation. Since four frequencies are required to transmit 2 bits, we have up to 50th-order diversity available. We may use simple repetition-type diversity or a more efficient block or convolutional code of rate $1/50$. The demodulator may use square-law combining.
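As a quick numerical cross-check of parts (a) and (c), the parameters can be recomputed with a short script. This is only a sketch: the 10 kHz allocation and the 100 bit/s data rate are assumptions carried over from the problem statement, not derived here.

```python
# Channel parameters assumed above: Tm = 1 ms multipath spread, Bd = 0.2 Hz Doppler spread.
Tm, Bd = 1e-3, 0.2
coherence_bw   = 1.0 / Tm        # (Delta f)_c ~ 1/Tm = 1000 Hz
coherence_time = 1.0 / Bd        # (Delta t)_c ~ 1/Bd = 5 s
spread_factor  = Tm * Bd         # 2e-4 (underspread channel)

# Frequency-diversity design assumed in part (c): 10 kHz allocation, 100 bit/s, 4-FSK.
W, Rb, M = 10e3, 100, 4
Rs      = Rb / 2                 # 2 bits/symbol -> 50 symbols/s
T       = 1 / Rs                 # 20 ms symbol interval
df      = 1 / T                  # orthogonal tone spacing, 50 Hz
n_tones = int(W // df)           # 200 tones available
L_max   = n_tones // M           # up to 50th-order diversity

print(coherence_bw, coherence_time, spread_factor, n_tones, L_max)
```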
Problem 14.2 :

(a) $P_h = p^3 + 3p^2(1-p)$, where $p = \frac{1}{2+\bar\gamma_c}$ and $\bar\gamma_c$ is the received SNR per channel (cell).

(b) For $\bar\gamma_c = 100$ : $P_h \approx 10^{-6} + 3\cdot 10^{-4} \approx 3\cdot 10^{-4}$. For $\bar\gamma_c = 1000$ : $P_h \approx 10^{-9} + 3\cdot 10^{-6} \approx 3\cdot 10^{-6}$.

(c) Since $\bar\gamma_c \gg 1$, we may use the approximation :
$$P_s \approx \binom{2L-1}{L}\left(\frac{1}{\bar\gamma_c}\right)^L$$
where $L$ is the order of diversity. For $L = 3$ we have :
$$P_s \approx \frac{10}{\bar\gamma_c^{\,3}} \;\Rightarrow\; \begin{cases} P_s = 10^{-5}, & \bar\gamma_c = 100 \\ P_s = 10^{-8}, & \bar\gamma_c = 1000 \end{cases}$$

(d) For hard-decision decoding :
$$P_h = \sum_{k=(L+1)/2}^{L}\binom{L}{k}p^k(1-p)^{L-k} \le \left[4p(1-p)\right]^{L/2}$$
where the latter is the Chernoff bound, $L$ is odd, and $p = \frac{1}{2+\bar\gamma_c}$. For soft-decision decoding :
$$P_s \approx \binom{2L-1}{L}\left(\frac{1}{\bar\gamma_c}\right)^L$$

Problem 14.3 :

(a) For a fixed channel gain $a$, the probability of error is $P_e(a) = Q\left(\sqrt{2a^2E/N_0}\right)$. We now average this conditional error probability over the possible values of $a$, which are $a = 0$ with probability $0.1$ and $a = 2$ with probability $0.9$. Thus :
$$P_e = 0.1\,Q(0) + 0.9\,Q\left(\sqrt{8E/N_0}\right) = 0.05 + 0.9\,Q\left(\sqrt{8E/N_0}\right)$$

(b) As $E/N_0 \to \infty$, $P_e \to 0.05$.

(c) When the channel gains $a_1, a_2$ are fixed, the probability of error is :
$$P_e(a_1,a_2) = Q\left(\sqrt{2\left(a_1^2+a_2^2\right)E/N_0}\right)$$
Averaging over the probability density function $p(a_1,a_2) = p(a_1)\,p(a_2)$, we obtain the average probability of error :
$$P_e = (0.1)^2\,Q(0) + 2(0.9)(0.1)\,Q\left(\sqrt{8E/N_0}\right) + (0.9)^2\,Q\left(\sqrt{16E/N_0}\right) = 0.005 + 0.18\,Q\left(\sqrt{8E/N_0}\right) + 0.81\,Q\left(\sqrt{16E/N_0}\right)$$

(d) As $E/N_0 \to \infty$, $P_e \to 0.005$.
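The order-of-magnitude values quoted in Problem 14.2(b)-(c) can be checked numerically. The following sketch evaluates the exact hard-decision expression and the high-SNR soft-decision approximation; the function names are mine, not the text's.

```python
# Sketch: numerical check of the diversity error probabilities quoted in Problem 14.2.
from math import comb

def p_hard(gc, L=3):
    """Exact hard-decision error probability with L-fold diversity (L odd)."""
    p = 1.0 / (2.0 + gc)
    return sum(comb(L, k) * p**k * (1 - p)**(L - k) for k in range((L + 1) // 2, L + 1))

def p_soft_approx(gc, L=3):
    """High-SNR approximation C(2L-1, L) / gc**L for soft-decision combining."""
    return comb(2 * L - 1, L) / gc**L

for gc in (100.0, 1000.0):
    print(gc, p_hard(gc), p_soft_approx(gc))
# p_hard(100) ~ 3e-4 and p_hard(1000) ~ 3e-6, matching the approximations above.
```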
Problem 14.4 :

(a) $T_m = 1$ sec $\Rightarrow (\Delta f)_c \approx 1/T_m = 1$ Hz ; $B_d = 0.01$ Hz $\Rightarrow (\Delta t)_c \approx 1/B_d = 100$ sec.

(b) Since $W = 5$ Hz and $(\Delta f)_c \approx 1$ Hz, the channel is frequency selective.

(c) Since $T = 10$ sec $< (\Delta t)_c$, the channel is slowly fading.

(d) The desired data rate is not specified in this problem, and must be assumed. Note that with a pulse duration of $T = 10$ sec, the binary PSK signals can be spaced at $1/T = 0.1$ Hz apart. With a bandwidth of $W = 5$ Hz, we can form 50 subchannels (carrier frequencies). On the other hand, the amount of diversity available in the channel is $W/(\Delta f)_c = 5$. Suppose the desired data rate is 1 bit/sec. Then ten adjacent carriers can be used to transmit the data in parallel, and the information is repeated five times using the total number of 50 subcarriers to achieve 5th-order diversity. A subcarrier separation of 1 Hz is maintained to achieve independent fading of the subcarriers carrying the same information.

(e) We use the approximation :
$$P_2 \approx \binom{2L-1}{L}\left(\frac{1}{4\bar\gamma_c}\right)^L$$
where $L = 5$. For $P_2 = 10^{-6}$, the SNR required is obtained from :
$$126\left(\frac{1}{4\bar\gamma_c}\right)^5 = 10^{-6} \;\Rightarrow\; \bar\gamma_c = 10.4\ (10.2\ \mathrm{dB})$$

(f) The tap spacing between adjacent taps is $1/W = 1/5 = 0.2$ sec and the total multipath spread is $T_m = 1$ sec. Hence, we employ a RAKE receiver with at least 5 taps.

(g) Since the fading is slow relative to the pulse duration, in principle we can employ a coherent receiver with pre-detection combining.

(h) For an error rate of $10^{-6}$ we have :
$$P_2 \approx \binom{2L-1}{L}\left(\frac{1}{\bar\gamma_c}\right)^L = 10^{-6} \;\Rightarrow\; \bar\gamma_c = 41.6\ (16.2\ \mathrm{dB})$$
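The SNR values quoted in parts (e) and (h) of Problem 14.4 follow from inverting the high-SNR diversity approximation. A minimal sketch, assuming the same $L = 5$ and target error rate $10^{-6}$; the coherent_psk flag simply switches between the $1/(4\bar\gamma_c)$ and $1/\bar\gamma_c$ forms used above.

```python
# Sketch: solving C(2L-1, L) * x**L = target for the required SNR per branch,
# as used in parts (e) and (h) of Problem 14.4 (L = 5, target error 1e-6).
from math import comb, log10

def required_snr(L, target, coherent_psk=True):
    """Invert the high-SNR diversity approximation for the average branch SNR."""
    c = comb(2 * L - 1, L)
    x = (target / c) ** (1.0 / L)        # x = 1/(4*gc) for PSK, 1/gc for square-law
    gc = 1.0 / (4.0 * x) if coherent_psk else 1.0 / x
    return gc, 10 * log10(gc)

print(required_snr(5, 1e-6, coherent_psk=True))    # ~ (10.4, 10.2 dB)
print(required_snr(5, 1e-6, coherent_psk=False))   # ~ (41.7, 16.2 dB)
```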
Problem 14.5 :

(a) $p(n_1,n_2) = \frac{1}{2\pi\sigma^2}\,e^{-(n_1^2+n_2^2)/2\sigma^2}$. With
$$U_1 = 2E + N_1,\quad U_2 = N_1 + N_2 \;\Rightarrow\; N_1 = U_1 - 2E,\quad N_2 = U_2 - U_1 + 2E$$
where we assume that $s(t)$ was transmitted. The Jacobian of the transformation is $J = 1$, and :
$$p(u_1,u_2) = \frac{1}{2\pi\sigma^2}\exp\left[-\frac{1}{2\sigma^2}\left((U_1-2E)^2 + \left(U_2-(U_1-2E)\right)^2\right)\right] = \frac{1}{2\pi\sigma^2}\exp\left[-\frac{1}{2\sigma^2}\left(2(U_1-2E)^2 + U_2^2 - 2U_2(U_1-2E)\right)\right]$$
The derivation is exactly the same for the case when $-s(t)$ is transmitted, with the sole difference that $U_1 = -2E + N_1$.

(b) The likelihood ratio is :
$$\Lambda = \frac{p(u_1,u_2\,|\,+s(t))}{p(u_1,u_2\,|\,-s(t))} = \exp\left[-\frac{1}{2\sigma^2}\left(-8EU_1 + 4EU_2\right)\right] \;\mathop{\gtrless}_{-s(t)}^{+s(t)}\; 1$$
or :
$$\ln\Lambda = \frac{8E}{2\sigma^2}\left(U_1 - \frac{1}{2}U_2\right) \;\mathop{\gtrless}_{-s(t)}^{+s(t)}\; 0 \;\Rightarrow\; U_1 - \frac{1}{2}U_2 \;\mathop{\gtrless}_{-s(t)}^{+s(t)}\; 0$$
Hence $\beta = 1/2$.

(c) With $U = U_1 - \frac{1}{2}U_2 = 2E + \frac{1}{2}(N_1 - N_2)$ :
$$\mathrm{E}[U] = 2E,\qquad \sigma_U^2 = \frac{1}{4}\left(\sigma_{n_1}^2+\sigma_{n_2}^2\right) = EN_0,\qquad p(u) = \frac{1}{\sqrt{2\pi EN_0}}\,e^{-(u-2E)^2/2EN_0}$$

(d)
$$P_e = P(U<0) = \int_{-\infty}^{0}\frac{1}{\sqrt{2\pi EN_0}}\,e^{-(u-2E)^2/2EN_0}\,du = Q\left(\frac{2E}{\sqrt{EN_0}}\right) = Q\left(\sqrt{\frac{4E}{N_0}}\right)$$

(e) If only $U_1$ is used in reaching a decision, then we have the usual binary PSK probability of error $P_e = Q\left(\sqrt{2E/N_0}\right)$, hence a loss of 3 dB relative to the optimum combiner.
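A short Monte Carlo run can confirm the 3 dB relation claimed in parts (d)-(e) of Problem 14.5. This sketch assumes the noise variance $\sigma^2 = 2EN_0$ used in the solution; the parameter values are illustrative.

```python
# Sketch: Monte Carlo check of Problem 14.5. The combiner U = U1 - U2/2 achieves
# Q(sqrt(4E/N0)), while using U1 alone gives Q(sqrt(2E/N0)), a 3 dB difference.
import numpy as np

rng = np.random.default_rng(0)
E, N0, n_trials = 1.0, 0.5, 2_000_000
sigma = np.sqrt(2 * E * N0)              # assumed per-sample noise std

N1 = rng.normal(0.0, sigma, n_trials)
N2 = rng.normal(0.0, sigma, n_trials)
U1 = 2 * E + N1                          # assuming +s(t) transmitted
U2 = N1 + N2

pe_combined = np.mean(U1 - U2 / 2 < 0)   # optimum combiner
pe_u1_only  = np.mean(U1 < 0)            # decision on U1 alone
print(pe_combined, pe_u1_only)           # ~ Q(sqrt(4E/N0)) vs Q(sqrt(2E/N0))
```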
Problem 14.6 :

(a) $U = \mathrm{Re}\left[\sum_{k=1}^{L}\beta_kU_k\right] \gtrless 0$, where $U_k = 2Ea_ke^{-j\phi_k} + \nu_k$ and $\nu_k$ is zero-mean Gaussian with variance $2EN_{0k}$. Hence, $U$ is Gaussian with :
$$\mathrm{E}[U] = \mathrm{Re}\left[\sum_{k=1}^{L}\beta_k\,\mathrm{E}(U_k)\right] = 2E\,\mathrm{Re}\left[\sum_{k=1}^{L}a_k\beta_ke^{-j\phi_k}\right] = 2E\sum_{k=1}^{L}a_k|\beta_k|\cos(\theta_k-\phi_k) \equiv m_U$$
where $\beta_k = |\beta_k|e^{j\theta_k}$. Also :
$$\sigma_U^2 = 2E\sum_{k=1}^{L}|\beta_k|^2N_{0k}$$
Hence :
$$p(u) = \frac{1}{\sqrt{2\pi\sigma_U^2}}\,e^{-(u-m_U)^2/2\sigma_U^2}$$

(b) The probability of error is :
$$P_2 = \int_{-\infty}^{0}p(u)\,du = Q\left(\sqrt{2\gamma}\right),\qquad \gamma = \frac{E\left[\sum_{k=1}^{L}a_k|\beta_k|\cos(\theta_k-\phi_k)\right]^2}{\sum_{k=1}^{L}|\beta_k|^2N_{0k}}$$

(c) To minimize $P_2$, we maximize $\gamma$. It is clear that $\gamma$ is maximized with respect to $\{\theta_k\}$ by selecting $\theta_k = \phi_k$ for $k = 1,2,\dots,L$. Then we have :
$$\gamma = \frac{E\left[\sum_{k=1}^{L}a_k|\beta_k|\right]^2}{\sum_{k=1}^{L}|\beta_k|^2N_{0k}}$$
Now :
$$\frac{d\gamma}{d|\beta_l|} = 0 \;\Rightarrow\; \left(\sum_{k=1}^{L}|\beta_k|^2N_{0k}\right)a_l = \left(\sum_{k=1}^{L}a_k|\beta_k|\right)|\beta_l|N_{0l}$$
Consequently :
$$|\beta_l| = \frac{a_l}{N_{0l}}$$
and :
$$\gamma = E\,\frac{\left[\sum_{k=1}^{L}a_k^2/N_{0k}\right]^2}{\sum_{k=1}^{L}a_k^2N_{0k}/N_{0k}^2} = E\sum_{k=1}^{L}\frac{a_k^2}{N_{0k}}$$
The above represents maximal ratio combining.
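The combining rule derived in Problem 14.6 is easy to exercise numerically: with weights $\beta_k = a_k/N_{0k}$ the output SNR equals $E\sum_k a_k^2/N_{0k}$, and any other weighting (e.g., equal gain) does worse. A sketch, with illustrative branch gains and noise levels:

```python
# Sketch: maximal ratio combining from Problem 14.6, with weights beta_k = a_k/N0k.
import numpy as np

def mrc_output_snr(E, a, N0):
    """Output SNR gamma for a given weighting; compare MRC against equal-gain weights."""
    def gamma(beta):
        return E * np.sum(a * beta) ** 2 / np.sum(beta ** 2 * N0)
    beta_mrc   = a / N0                # optimum weights from the derivation above
    beta_equal = np.ones_like(a)       # equal-gain combining for comparison
    return gamma(beta_mrc), gamma(beta_equal), E * np.sum(a ** 2 / N0)

rng = np.random.default_rng(1)
a  = rng.rayleigh(scale=1.0, size=4)   # example branch gains
N0 = np.array([0.5, 1.0, 2.0, 4.0])    # unequal noise levels per branch
print(mrc_output_snr(E=1.0, a=a, N0=N0))
# The first and third numbers coincide (gamma_MRC = E*sum(a_k^2/N0k)); equal gain is lower.
```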
Problem 14.7 :

(a)
$$p(u_1) = \frac{1}{(2\sigma_1^2)^L(L-1)!}\,u_1^{L-1}e^{-u_1/2\sigma_1^2},\qquad \sigma_1^2 = 2EN_0(1+\bar\gamma_c)$$
$$p(u_2) = \frac{1}{(2\sigma_2^2)^L(L-1)!}\,u_2^{L-1}e^{-u_2/2\sigma_2^2},\qquad \sigma_2^2 = 2EN_0$$
$$P_2 = P(U_2 > U_1) = \int_0^{\infty}P(U_2 > U_1\,|\,u_1)\,p(u_1)\,du_1$$
But :
$$P(U_2 > U_1\,|\,u_1) = \int_{u_1}^{\infty}p(u_2)\,du_2$$
Carrying out the integration by parts repeatedly, we obtain :
$$P(U_2 > U_1\,|\,u_1) = e^{-u_1/2\sigma_2^2}\sum_{k=0}^{L-1}\frac{(u_1/2\sigma_2^2)^k}{k!}$$
Then :
$$P_2 = \sum_{k=0}^{L-1}\frac{1}{k!\,(2\sigma_2^2)^k(2\sigma_1^2)^L(L-1)!}\int_0^{\infty}u_1^{L-1+k}\,e^{-u_1\left(1/\sigma_1^2+1/\sigma_2^2\right)/2}\,du_1$$
The integral inside the summation equals (again by repeated integration by parts) :
$$\int_0^{\infty}u_1^{L-1+k}e^{-au_1}\,du_1 = \frac{(L-1+k)!}{a^{L+k}},\qquad a = \frac{1}{2}\left(\frac{1}{\sigma_1^2}+\frac{1}{\sigma_2^2}\right) = \frac{\sigma_1^2+\sigma_2^2}{2\sigma_1^2\sigma_2^2}$$
Hence :
$$P_2 = \sum_{k=0}^{L-1}\frac{(L-1+k)!}{k!\,(L-1)!\,(2\sigma_2^2)^k(2\sigma_1^2)^L}\left(\frac{2\sigma_1^2\sigma_2^2}{\sigma_1^2+\sigma_2^2}\right)^{L+k} = \sum_{k=0}^{L-1}\binom{L-1+k}{k}\frac{\sigma_1^{2k}\sigma_2^{2L}}{(\sigma_1^2+\sigma_2^2)^{L+k}}$$
$$= \sum_{k=0}^{L-1}\binom{L-1+k}{k}\frac{\left(2EN_0(1+\bar\gamma_c)\right)^k\left(2EN_0\right)^L}{\left(2EN_0(1+\bar\gamma_c)+2EN_0\right)^{L+k}} = \left(\frac{1}{2+\bar\gamma_c}\right)^L\sum_{k=0}^{L-1}\binom{L-1+k}{k}\left(\frac{1+\bar\gamma_c}{2+\bar\gamma_c}\right)^k$$
which is the desired expression (14-4-15) with $\mu = \frac{\bar\gamma_c}{2+\bar\gamma_c}$.
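The two forms of the Problem 14.7 result, the $\sigma$-based sum and expression (14-4-15) in terms of $\mu$, can be checked against each other numerically. A minimal sketch (the common factor $2EN_0$ cancels and is omitted):

```python
# Sketch: the Problem 14.7 error probability evaluated two equivalent ways.
from math import comb

def p2_sigma_form(gc, L):
    s1, s2 = (1.0 + gc), 1.0            # sigma1^2 and sigma2^2 up to the common 2*E*N0
    return sum(comb(L - 1 + k, k) * s1**k * s2**L / (s1 + s2)**(L + k)
               for k in range(L))

def p2_mu_form(gc, L):
    mu = gc / (2.0 + gc)
    return ((1 - mu) / 2)**L * sum(comb(L - 1 + k, k) * ((1 + mu) / 2)**k
                                   for k in range(L))

for gc, L in [(10.0, 2), (100.0, 4)]:
    print(p2_sigma_form(gc, L), p2_mu_form(gc, L))   # identical values
```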
Problem 14.8 :

$$T(D,N,J)\Big|_{J=1} = \frac{J^3ND^6}{1-JND^2(1+J)}\bigg|_{J=1} = \frac{ND^6}{1-2ND^2}$$
$$\frac{dT(D,N)}{dN}\bigg|_{N=1} = \frac{(1-2ND^2)D^6 + 2ND^6D^2}{(1-2ND^2)^2}\bigg|_{N=1} = \frac{D^6}{(1-2D^2)^2}$$

(a) For hard-decision decoding :
$$P_b \le \sum_{d=d_{free}}^{\infty}\beta_dP_2(d) \le \frac{dT(D,N)}{dN}\bigg|_{N=1,\,D=\sqrt{4p(1-p)}}$$
where $p = \frac{1}{2}\left[1-\sqrt{\frac{\bar\gamma_c}{1+\bar\gamma_c}}\right]$ for coherent PSK (14-3-7). Thus :
$$P_b \le \frac{\left[4p(1-p)\right]^3}{\left[1-8p(1-p)\right]^2}$$

(b) For soft-decision decoding, the error probability is upper-bounded as
$$P_b \le \sum_{d=d_{free}}^{\infty}\beta_dP_2(d)$$
where $d_{free} = 6$, $\{\beta_d\}$ are the coefficients in the expansion of the derivative of $T(D,N)$ evaluated at $N = 1$, and :
$$P_2(d) = \left(\frac{1-\mu}{2}\right)^d\sum_{k=0}^{d-1}\binom{d-1+k}{k}\left(\frac{1+\mu}{2}\right)^k,\qquad \mu = \sqrt{\frac{\bar\gamma_c}{1+\bar\gamma_c}}$$
as obtained from (14-4-15). [Figure: the resulting bit-error probabilities $P_b$ are plotted versus the SNR $\bar\gamma_c$ (in dB) for soft-decision and hard-decision decoding.]
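The bounds of Problem 14.8 can be evaluated as functions of SNR, which is essentially what the missing figure shows. A sketch; note that only the leading $d_{free} = 6$ term of the soft-decision union bound is kept here, which is an assumption of this sketch rather than the full bound.

```python
# Sketch: evaluating the two Problem 14.8 bounds versus the average SNR.
from math import sqrt, comb

def hard_decision_bound(gc):
    p = 0.5 * (1 - sqrt(gc / (1 + gc)))
    z = 4 * p * (1 - p)                      # z = D^2 at the hard-decision operating point
    return z**3 / (1 - 2 * z)**2

def soft_decision_leading_term(gc, d=6):
    mu = sqrt(gc / (1 + gc))
    return ((1 - mu) / 2)**d * sum(comb(d - 1 + k, k) * ((1 + mu) / 2)**k for k in range(d))

for snr_db in (5, 10, 15, 20):
    gc = 10 ** (snr_db / 10)
    print(snr_db, hard_decision_bound(gc), soft_decision_leading_term(gc))
```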
Problem 14.9 :

$$U = \sum_{k=1}^{L}U_k$$

(a) $U_k = 2Ea_k + \nu_k$, where $\nu_k$ is Gaussian with $\mathrm{E}[\nu_k] = 0$ and $\sigma_\nu^2 = 2EN_0$. Hence, for fixed $\{a_k\}$, $U$ is also Gaussian with :
$$\mathrm{E}[U] = \sum_{k=1}^{L}\mathrm{E}(U_k) = 2E\sum_{k=1}^{L}a_k,\qquad \sigma_U^2 = L\sigma_\nu^2 = 2LEN_0$$
Since $U$ is Gaussian, the probability of error conditioned on a fixed set of gains $\{a_k\}$ is :
$$P_b(a_1,a_2,\dots,a_L) = Q\left(\frac{2E\sum_{k=1}^{L}a_k}{\sqrt{2LEN_0}}\right) = Q\left(\sqrt{\frac{2E\left(\sum_{k=1}^{L}a_k\right)^2}{LN_0}}\right)$$

(b) The average probability of error for the fading channel is the conditional error probability averaged over the $\{a_k\}$. Hence :
$$P_b = \int\!\!\cdots\!\!\int P_b(a_1,a_2,\dots,a_L)\,p(a_1)p(a_2)\cdots p(a_L)\,da_1\,da_2\cdots da_L$$
where $p(a_k) = \frac{a_k}{\sigma^2}\exp\left(-a_k^2/2\sigma^2\right)$, and $\sigma^2$ is the variance of the Gaussian RVs associated with the Rayleigh distribution of the $\{a_k\}$ (not to be confused with the variance of the noise terms). Since $P_b(a_1,a_2,\dots,a_L)$ depends on the $\{a_k\}$ only through their sum, we may let :
$$X = \sum_{k=1}^{L}a_k$$
and thus the conditional error probability is $P_b(X) = Q\left(\sqrt{2EX^2/(LN_0)}\right)$. The average error probability is :
$$P_b = \int_0^{\infty}P_b(X)\,p(X)\,dX$$
The problem is to determine $p(X)$. Unfortunately, there is no closed-form expression for the pdf of a sum of Rayleigh distributed RVs. Therefore, we cannot proceed any further analytically.
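Although $p(X)$ has no closed form, the average error probability of Problem 14.9 is easy to estimate by Monte Carlo: draw the Rayleigh gains and average the conditional error probability. A sketch with illustrative values of $L$, $E/N_0$ and the Rayleigh parameter $\sigma$:

```python
# Sketch: Monte Carlo estimate of the average error probability in Problem 14.9.
import numpy as np
from math import erfc, sqrt

def Q(x):
    return 0.5 * erfc(x / sqrt(2.0))

def avg_error_prob(L=3, EbN0=10.0, sigma=1.0 / sqrt(2.0), n_trials=200_000, seed=0):
    rng = np.random.default_rng(seed)
    a = rng.rayleigh(scale=sigma, size=(n_trials, L))    # channel gains a_k
    X = a.sum(axis=1)                                    # the sum whose pdf has no closed form
    pb_cond = [Q(sqrt(2.0 * EbN0 * x**2 / L)) for x in X]
    return float(np.mean(pb_cond))

print(avg_error_prob())
```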
Problem 14.10 :

(a) The plot of $g(\bar\gamma_c)$ as a function of $\bar\gamma_c$ is given below. [Figure: $g(\bar\gamma_c)$ versus $\bar\gamma_c$.] The maximum value of $g(\bar\gamma_c)$ is approximately $0.215$ and occurs when $\bar\gamma_c \approx 3$.

(b) $\bar\gamma_c = \bar\gamma_b/L$. Hence, for a given $\bar\gamma_b$, the optimum order of diversity is $L = \bar\gamma_b/\bar\gamma_c = \bar\gamma_b/3$.

(c) For the optimum diversity we have :
$$P_2(L_{opt}) < \frac{1}{2}\,2^{-0.215\bar\gamma_b} = \frac{1}{2}\,e^{-0.215\ln 2\cdot\bar\gamma_b} = \frac{1}{2}\,e^{-0.15\bar\gamma_b} = e^{-\left(0.15\bar\gamma_b+\ln 2\right)}$$
For the non-fading channel : $P_2 = \frac{1}{2}e^{-0.5\bar\gamma_b}$. Hence, for large SNR ($\bar\gamma_b \gg 1$), the penalty in SNR is :
$$10\log_{10}\frac{0.5}{0.15} = 5.3\ \mathrm{dB}$$
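The location and value of the maximum quoted in Problem 14.10(a) can be found numerically. The sketch below assumes $g(\bar\gamma_c)$ is the exponent obtained when the Chernoff bound $\frac{1}{2}[4p(1-p)]^L$, with $p = 1/(2+\bar\gamma_c)$, is written as $\frac{1}{2}\,2^{-\bar\gamma_b g(\bar\gamma_c)}$ with $\bar\gamma_b = L\bar\gamma_c$; that is the form used in part (c).

```python
# Sketch: locating the maximum of g(gamma_c) for square-law-combined orthogonal FSK.
from math import log2

def g(gc):
    p = 1.0 / (2.0 + gc)
    return -log2(4.0 * p * (1.0 - p)) / gc

# crude grid search around the maximum
grid = [i / 1000.0 for i in range(2000, 4001)]
gc_opt = max(grid, key=g)
print(gc_opt, g(gc_opt))     # ~ 3 and ~ 0.215
```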
Problem 14.11 :

The radio signal propagates at the speed of light, $c = 3\times 10^8$ m/sec. The difference in propagation delay for a distance of 300 meters is :
$$T_d = \frac{300}{3\times 10^8} = 1\ \mu\mathrm{sec}$$
The minimum bandwidth of a DS spread spectrum signal required to resolve the propagation paths is $W = 1$ MHz. Hence, the minimum chip rate is $10^6$ chips per second.

Problem 14.12 :

(a) The dimensionality of the signal space is two. An orthonormal basis set for the signal space is formed by the signals :
$$f_1(t) = \begin{cases}\sqrt{2/T}, & 0\le t < T/2\\ 0, & \text{otherwise}\end{cases}\qquad f_2(t) = \begin{cases}\sqrt{2/T}, & T/2\le t < T\\ 0, & \text{otherwise}\end{cases}$$

(b) The optimal receiver is shown in the next figure. [Figure: $r(t)$ is applied to the two matched filters $f_1(T/2-t)$ and $f_2(T-t)$; their outputs are sampled at $t = T/2$ and $t = T$ to form $r_1$ and $r_2$, and the detector selects the largest.]
(c) Assuming that the signal $s_1(t)$ is transmitted, the received vector at the output of the samplers is :
$$\mathbf{r} = \left[A\sqrt{T/2}+n_1,\ n_2\right]$$
where $n_1, n_2$ are zero-mean Gaussian random variables with variance $N_0/2$. The probability of error $P(e|s_1)$ is :
$$P(e|s_1) = P\left(n_2-n_1 > A\sqrt{T/2}\right) = \frac{1}{\sqrt{2\pi N_0}}\int_{A\sqrt{T/2}}^{\infty}e^{-x^2/2N_0}\,dx = Q\left(\sqrt{\frac{A^2T}{2N_0}}\right)$$
where we have used the fact that $n = n_2-n_1$ is a zero-mean Gaussian random variable with variance $N_0$. Similarly we find that $P(e|s_2) = Q\left(\sqrt{\frac{A^2T}{2N_0}}\right)$, so that :
$$P(e) = \frac{1}{2}P(e|s_1) + \frac{1}{2}P(e|s_2) = Q\left(\sqrt{\frac{A^2T}{2N_0}}\right)$$

(d) The signal waveform $f_1(T/2-t)$ matched to $f_1(t)$ is exactly the same as the signal waveform $f_2(T-t)$ matched to $f_2(t)$. That is :
$$f_1(T/2-t) = f_2(T-t) = f_1(t) = \begin{cases}\sqrt{2/T}, & 0\le t < T/2\\ 0, & \text{otherwise}\end{cases}$$
Thus, the optimal receiver can be implemented by using just one filter followed by a sampler which samples the output of the matched filter at $t = T/2$ and $t = T$ to produce the random variables $r_1$ and $r_2$, respectively.
(e) If the signal $s_1(t)$ is transmitted, then the received signal $r(t)$ is :
$$r(t) = s_1(t) + \frac{1}{2}s_1(t-T/4) + n(t)$$
The outputs of the sampler at $t = T/2$ and $t = T$ are given by :
$$r_1 = A\sqrt{\frac{2}{T}}\,\frac{T}{4} + \frac{3A}{2}\sqrt{\frac{2}{T}}\,\frac{T}{4} + n_1 = \frac{5A\sqrt{2T}}{8} + n_1,\qquad r_2 = \frac{A}{2}\sqrt{\frac{2}{T}}\,\frac{T}{4} + n_2 = \frac{A\sqrt{2T}}{8} + n_2$$
If the optimal receiver uses a threshold $V$ to base its decisions, that is :
$$r_1 - r_2 \;\mathop{\gtrless}_{s_2}^{s_1}\; V$$
then the probability of error $P(e|s_1)$ is :
$$P(e|s_1) = P\left(n_2-n_1 > \frac{5A\sqrt{2T}}{8} - \frac{A\sqrt{2T}}{8} - V\right) = Q\left(\frac{A\sqrt{2T}/2 - V}{\sqrt{N_0}}\right)$$
If $s_2(t)$ is transmitted, then :
$$r(t) = s_2(t) + \frac{1}{2}s_2(t-T/4) + n(t)$$
The outputs of the sampler at $t = T/2$ and $t = T$ are given by :
$$r_1 = n_1,\qquad r_2 = A\sqrt{\frac{2}{T}}\,\frac{T}{4} + \frac{3A}{2}\sqrt{\frac{2}{T}}\,\frac{T}{4} + n_2 = \frac{5A\sqrt{2T}}{8} + n_2$$
The probability of error $P(e|s_2)$ is :
$$P(e|s_2) = P\left(n_1-n_2 > \frac{5A\sqrt{2T}}{8} + V\right) = Q\left(\frac{5A\sqrt{2T}/8 + V}{\sqrt{N_0}}\right)$$
Thus, the average probability of error is given by :
$$P(e) = \frac{1}{2}P(e|s_1) + \frac{1}{2}P(e|s_2) = \frac{1}{2}Q\left(\frac{A\sqrt{2T}/2 - V}{\sqrt{N_0}}\right) + \frac{1}{2}Q\left(\frac{5A\sqrt{2T}/8 + V}{\sqrt{N_0}}\right)$$
The optimal value of $V$ can be found by setting $\frac{\partial P(e)}{\partial V}$ equal to zero. Using the Leibniz rule to differentiate the definite integrals, we obtain :
$$\frac{\partial P(e)}{\partial V} = 0 \;\Rightarrow\; \frac{A\sqrt{2T}}{2} - V = \frac{5A\sqrt{2T}}{8} + V$$
or, solving in terms of $V$ :
$$V = -\frac{A\sqrt{2T}}{16}$$
(f) Let the echo coefficient $a$ be fixed to some value between 0 and 1. Then, if we argue as in part (e), we obtain :
$$P(e|s_1,a) = P\left(n_2-n_1 > \frac{A\sqrt{2T}}{2} - V(a)\right),\qquad P(e|s_2,a) = P\left(n_1-n_2 > \frac{(a+2)A\sqrt{2T}}{4} + V(a)\right)$$
and the probability of error is :
$$P(e|a) = \frac{1}{2}P(e|s_1,a) + \frac{1}{2}P(e|s_2,a)$$
For a given $a$, the optimal value of $V(a)$ is found by setting $\frac{\partial P(e|a)}{\partial V(a)}$ equal to zero. By doing so we find that :
$$V(a) = -\frac{aA\sqrt{2T}}{8}$$
The mean-square estimate of $V(a)$ is its mean. Assuming that $a$ is uniformly distributed in $[0,1]$, so that $f(a) = 1$ :
$$\bar V = \int_0^1 V(a)f(a)\,da = -\frac{A\sqrt{2T}}{8}\int_0^1 a\,da = -\frac{A\sqrt{2T}}{16}$$
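The threshold optimization in Problem 14.12(e)-(f) can be verified numerically: compute the two conditional error probabilities as functions of $V$ and search for the minimizing threshold. A sketch under the same assumptions as above (echo delay $T/4$, echo coefficient $a$); the parameter values are illustrative.

```python
# Sketch: numerical check of the optimal threshold in Problem 14.12(e)-(f).
import numpy as np
from math import erfc, sqrt

def Q(x):
    return 0.5 * erfc(x / sqrt(2.0))

def analyze(a=0.5, A=1.0, T=1.0, N0=0.1):
    s = A * sqrt(2 * T)                      # convenient scale factor A*sqrt(2T)
    c1 = s / 2                               # signal part of r1 - r2 given s1
    c2 = (a + 2) * s / 4                     # magnitude of the signal part given s2
    def pe(V):
        return 0.5 * Q((c1 - V) / sqrt(N0)) + 0.5 * Q((c2 + V) / sqrt(N0))
    grid = np.linspace(-s, s, 20001)
    V_opt = grid[np.argmin([pe(V) for V in grid])]
    return V_opt, -a * s / 8, pe(V_opt)      # numerical vs analytical threshold, min error
print(analyze())
```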
Problem 14.13 :

(a) [Figure: receiver block diagram. Each diversity branch $r_1(t)$, $r_2(t)$ is correlated with $\cos 2\pi f_1t$, $\sin 2\pi f_1t$, $\cos 2\pi f_2t$ and $\sin 2\pi f_2t$; the matched-filter outputs are sampled at $t = kT$, squared and summed per frequency, and the detector selects the larger of the two square-law combined outputs.]

(b) The probability of error for binary FSK with square-law combining for $L = 2$ is given in Figure 14-4-7, where the corresponding probability of error for $L = 1$ is also given. Note that an increase in SNR by a factor of 10 reduces the error probability by a factor of 10 when $L = 1$, and by a factor of 100 when $L = 2$.
Problem 14.14 :

(a) The noise-free received waveforms $\{r_i(t)\}$ are given by $r_i(t) = h(t)\star s_i(t)$, $i = 1,2$, and they are shown in the following figure. [Figure: sketches of $r_1(t)$ and $r_2(t)$ over $0\le t\le T$, with peak values of the order of $\pm 4A$.]

(b) The optimum receiver employs two matched filters $g_i(t) = r_i(T-t)$, and after each matched filter there is a sampler working at a rate of $1/T$. The equivalent lowpass responses $g_i(t)$ of the two matched filters are given in the following figure. [Figure: sketches of $g_1(t)$ and $g_2(t)$, the time-reversed versions of $r_1(t)$ and $r_2(t)$.]

Problem 14.15 :

Since $a$ follows the Nakagami-$m$ distribution :
$$p_a(a) = \frac{2}{\Gamma(m)}\left(\frac{m}{\Omega}\right)^m a^{2m-1}\exp\left(-ma^2/\Omega\right),\qquad a\ge 0$$
where $\Omega = \mathrm{E}(a^2)$. The pdf of the random variable $\gamma = a^2E_b/N_0$ is found using the usual method for a function of a random variable :
$$a = \sqrt{\frac{\gamma N_0}{E_b}},\qquad \frac{d\gamma}{da} = 2a\frac{E_b}{N_0} = 2\sqrt{\frac{\gamma E_b}{N_0}}$$
Hence :
$$p_\gamma(\gamma) = \frac{p_a\!\left(\sqrt{\gamma N_0/E_b}\right)}{d\gamma/da} = \frac{1}{2\sqrt{\gamma E_b/N_0}}\,\frac{2}{\Gamma(m)}\left(\frac{m}{\Omega}\right)^m\left(\frac{\gamma N_0}{E_b}\right)^{m-\frac{1}{2}}\exp\left(-\frac{m\gamma N_0}{\Omega E_b}\right)$$
$$= \frac{m^m}{\Gamma(m)}\,\frac{\gamma^{m-1}}{\Omega^m(E_b/N_0)^m}\exp\left(-\frac{m\gamma}{\Omega E_b/N_0}\right) = \frac{m^m\gamma^{m-1}}{\Gamma(m)\,\bar\gamma^m}\exp\left(-m\gamma/\bar\gamma\right)$$
where $\bar\gamma = \mathrm{E}(a^2)\,E_b/N_0 = \Omega E_b/N_0$.
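The change of variables in Problem 14.15 can be sanity-checked by simulation: draw Nakagami-$m$ samples, form $\gamma = a^2E_b/N_0$, and compare a histogram against the derived pdf. A sketch; generating Nakagami-$m$ samples as the square root of a Gamma variate and the specific $m$, $\Omega$, $E_b/N_0$ values are assumptions of this sketch.

```python
# Sketch: empirical check that gamma = a^2*Eb/N0 follows the pdf derived in Problem 14.15.
import numpy as np
from math import gamma as gamma_fn

m, Omega, EbN0 = 2.0, 1.5, 4.0
rng = np.random.default_rng(0)
a = np.sqrt(rng.gamma(shape=m, scale=Omega / m, size=500_000))   # Nakagami-m draws
g = a**2 * EbN0                                                   # instantaneous SNR

gbar = Omega * EbN0
x = 3.0                                                           # test point
pdf_formula = m**m * x**(m - 1) / (gamma_fn(m) * gbar**m) * np.exp(-m * x / gbar)
hist, edges = np.histogram(g, bins=200, range=(0, 20), density=True)
pdf_empirical = hist[np.searchsorted(edges, x) - 1]
print(pdf_formula, pdf_empirical)    # the two values should be close
```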
Problem 14.16 :

(a) By taking the conjugate of $r_2 = -h_1s_2^* + h_2s_1^* + n_2$ :
$$\begin{bmatrix}r_1\\ r_2^*\end{bmatrix} = \begin{bmatrix}h_1 & h_2\\ h_2^* & -h_1^*\end{bmatrix}\begin{bmatrix}s_1\\ s_2\end{bmatrix} + \begin{bmatrix}n_1\\ n_2^*\end{bmatrix}$$
Hence, the soft-decision estimates of the transmitted symbols $(s_1,s_2)$ will be :
$$\begin{bmatrix}\hat s_1\\ \hat s_2\end{bmatrix} = \frac{1}{|h_1|^2+|h_2|^2}\begin{bmatrix}h_1^* & h_2\\ h_2^* & -h_1\end{bmatrix}\begin{bmatrix}r_1\\ r_2^*\end{bmatrix} = \frac{1}{|h_1|^2+|h_2|^2}\begin{bmatrix}h_1^*r_1 + h_2r_2^*\\ h_2^*r_1 - h_1r_2^*\end{bmatrix}$$
which corresponds to dual-diversity reception for each $s_i$.

(b) The bit error probability for dual-diversity reception of binary PSK is given by Equation (14-4-15), with $L = 2$ and $\mu = \sqrt{\frac{\bar\gamma_c}{1+\bar\gamma_c}}$, where the average SNR per channel is $\bar\gamma_c = \frac{E}{2N_0}\mathrm{E}\left[|h_1|^2\right] = \frac{E}{2N_0}$. Then (14-4-15) becomes :
$$P_2 = \left[\frac{1}{2}(1-\mu)\right]^2\left\{1 + 2\left[\frac{1}{2}(1+\mu)\right]\right\} = \left[\frac{1}{2}(1-\mu)\right]^2\left[2+\mu\right]$$
When $\bar\gamma_c \gg 1$, then $\frac{1}{2}(1-\mu) \approx \frac{1}{4\bar\gamma_c}$ and $\mu \approx 1$. Hence, for large SNR the bit error probability for binary PSK can be approximated as :
$$P_2 \approx 3\left(\frac{1}{4\bar\gamma_c}\right)^2$$

(c) The bit error probability for dual-diversity reception of binary PSK is also given by Equation (14-4-14), with $L = 2$ and $\mu$ as above. Substituting, we get :
$$P_2 = \frac{1}{2}\left[1 - \mu\left(1 + \frac{1-\mu^2}{2}\right)\right]$$
which agrees with the expression obtained in (b).
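The combining step in Problem 14.16(a) is compact in code: conjugate the second received sample, apply the matrix above, and each symbol sees dual-diversity reception. A sketch with illustrative symbols, channel gains and noise:

```python
# Sketch: the Problem 14.16 (Alamouti-type) combining rule for one block of two symbols.
import numpy as np

rng = np.random.default_rng(3)
s = np.array([1.0 + 0j, -1.0 + 0j])                               # two BPSK symbols s1, s2
h = (rng.normal(size=2) + 1j * rng.normal(size=2)) / np.sqrt(2)   # channel gains h1, h2
n = 0.1 * (rng.normal(size=2) + 1j * rng.normal(size=2))          # illustrative noise

r1 = h[0] * s[0] + h[1] * s[1] + n[0]                             # first signaling interval
r2 = -h[0] * np.conj(s[1]) + h[1] * np.conj(s[0]) + n[1]          # second interval

y = np.array([r1, np.conj(r2)])
H = np.array([[h[0], h[1]], [np.conj(h[1]), -np.conj(h[0])]])
s_hat = (H.conj().T @ y) / np.sum(np.abs(h) ** 2)                 # soft estimates of s1, s2
print(np.sign(s_hat.real), s.real)                                # detected vs transmitted
```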
Problem 14.17 :

(a) Noting that $\mu < 1$, the expression (14-6-35) for the binary event error probability can be upper-bounded by :
$$P_2(w_m) < \left(\frac{1-\mu}{2}\right)^{w_m}\sum_{k=0}^{w_m-1}\binom{w_m-1+k}{k} = \left(\frac{1-\mu}{2}\right)^{w_m}\binom{2w_m-1}{w_m}$$
Hence, the union bound for the probability of a code word error would be :
$$P_M < (M-1)\,P_2(d_{min}) < 2^k\binom{2d_{min}-1}{d_{min}}\left(\frac{1-\mu}{2}\right)^{d_{min}}$$
Now, taking the expression for $\mu$ for each of the three modulation schemes, we obtain the desired expression.

Non-coherent FSK : $\mu = \frac{\bar\gamma_c}{2+\bar\gamma_c} \;\Rightarrow\; \frac{1-\mu}{2} = \frac{1}{2+\bar\gamma_c} < \frac{1}{\bar\gamma_c} = \frac{1}{R_c\bar\gamma_b}$

DPSK : $\mu = \frac{\bar\gamma_c}{1+\bar\gamma_c} \;\Rightarrow\; \frac{1-\mu}{2} = \frac{1}{2(1+\bar\gamma_c)} < \frac{1}{2\bar\gamma_c} = \frac{1}{2R_c\bar\gamma_b}$

BPSK : $\mu = \sqrt{\frac{\bar\gamma_c}{1+\bar\gamma_c}} = \left(1+1/\bar\gamma_c\right)^{-1/2}$. Using Taylor's expansion $(1+x)^{-1/2} \approx 1-x/2$ for large $\bar\gamma_c$, we obtain :
$$\frac{1-\mu}{2} \approx \frac{1}{4(1+\bar\gamma_c)} < \frac{1}{4\bar\gamma_c} = \frac{1}{4R_c\bar\gamma_b}$$

(b) Noting that
$$\exp\left(-d_{min}R_c\bar\gamma_b\,f(\bar\gamma_c)\right) = \exp\left(-d_{min}R_c\bar\gamma_b\,\frac{\ln(\beta\bar\gamma_c)}{\bar\gamma_c}\right) = \exp\left(-d_{min}\ln(\beta\bar\gamma_c)\right) = \left(\frac{1}{\beta\bar\gamma_c}\right)^{d_{min}} = \left(\frac{1}{\beta R_c\bar\gamma_b}\right)^{d_{min}}$$
(since $\bar\gamma_c = R_c\bar\gamma_b$), we show the equivalence between the expressions of (a) and (b). The maximum of $f(\bar\gamma_c) = \ln(\beta\bar\gamma_c)/\bar\gamma_c$ is obtained from :
$$\frac{d}{d\bar\gamma_c}f(\bar\gamma_c) = \frac{\beta}{\beta\bar\gamma_c\,\bar\gamma_c} - \frac{\ln(\beta\bar\gamma_c)}{\bar\gamma_c^2} = 0 \;\Rightarrow\; \ln(\beta\bar\gamma_c) = 1 \;\Rightarrow\; \bar\gamma_c = \frac{e}{\beta}$$
By checking the second derivative, we verify that this extreme point is indeed a maximum.
(c) For the value of $\bar\gamma_c$ found in (b), we have $f_{max}(\bar\gamma_c) = \beta/e$. Then :
$$\exp\left(-k\left(\frac{\beta d_{min}\bar\gamma_b}{ne} - \ln 2\right)\right) = \exp(k\ln 2)\exp\left(-\frac{R_c\beta d_{min}\bar\gamma_b}{e}\right) = \exp(k\ln 2)\exp\left(-d_{min}R_c\bar\gamma_b\,f_{max}(\bar\gamma_c)\right) = 2^k\exp\left(-d_{min}R_c\bar\gamma_b\,f_{max}(\bar\gamma_c)\right)$$
which shows the equivalence between the upper bounds given in (b) and (c). In order for the bound to go to zero as $k$ is increased to infinity, we need the remaining part of the argument of the exponent to be negative, that is :
$$\frac{\beta d_{min}\bar\gamma_b}{ne} - \ln 2 > 0 \;\Rightarrow\; \bar\gamma_b > \frac{ne\ln 2}{d_{min}\beta} \;\Rightarrow\; \bar\gamma_{b,min} = \frac{2e}{\beta}\ln 2$$
Replacing the values of $\beta$ found in part (a) ($\beta = 4$ for binary PSK, $\beta = 2$ for DPSK, $\beta = 1$ for non-coherent FSK), we get :
$$\bar\gamma_{b,min,\mathrm{PSK}} = -0.26\ \mathrm{dB},\qquad \bar\gamma_{b,min,\mathrm{DPSK}} = 2.75\ \mathrm{dB},\qquad \bar\gamma_{b,min,\mathrm{noncoh.\,FSK}} = 5.76\ \mathrm{dB}$$
As expected, among the three, binary PSK has the least stringent SNR requirement for asymptotic performance.
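The three thresholds in Problem 14.17(c) come directly from $\bar\gamma_{b,min} = (2e/\beta)\ln 2$. A minimal sketch evaluating them:

```python
# Sketch: asymptotic SNR thresholds of Problem 14.17(c) for the three values of beta.
from math import e, log, log10

betas = {"BPSK": 4.0, "DPSK": 2.0, "noncoherent FSK": 1.0}
for name, beta in betas.items():
    g_min = 2 * e * log(2) / beta
    print(f"{name:16s}  gamma_b_min = {g_min:.3f}  ({10 * log10(g_min):+.2f} dB)")
# BPSK ~ -0.26 dB, DPSK ~ 2.75 dB, noncoherent FSK ~ 5.76 dB
```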