Digital Modulation 2 Lecture Notes Ingmar Land and Bernard H. Fleury Department of Electronic Systems Aalborg University Version: November 15, 2006
Contents

1 Continuous-Phase Modulation .................... 1
1.1 General Description ........................... 1
1.2 Minimum-Shift Keying .......................... 6
1.3 Gaussian Minimum-Shift Keying ................. 11
1.4 State and Trellis of CPM ...................... 19
1.5 Coherent Demodulation of CPM .................. 26
2 Modern Representation of CPM ................... 30
2.1 Initialization ................................ 31
2.2 The Reference Phase Trajectory ................ 33
2.3 The Reference Frequency ....................... 37
2.4 The Tilted Phase and the New Symbols .......... 38
2.5 Block Diagram of the CPM Transmitter .......... 42
3 Laurent Representation of CPM .................. 44
3.1 Minimum-Shift Keying .......................... 44
3.2 Precoded MSK .................................. 51
3.3 Precoded Gaussian MSK ......................... 57
3.4 Modulation in EDGE ............................ 66
4 Trellis-Coded Modulation ....................... 72
4.1 Motivation .................................... 72
4.2 Convolutional Codes ........................... 81
4.3 Goodness of a TCM Scheme ...................... 83
4.4 Set Partitioning .............................. 87
4.5 Trellis-Coded Modulation ...................... 88
4.6 Examples for TCM .............................. 92
A ECB Representation of a Band-Pass Signal ....... 102
A.1 The Block Diagram ............................. 104
A.2 The Signal .................................... 106
A.3 Effect of QA Mod/Demod on Signal .............. 107
A.4 Effect of QA Demodulation on Noise ............ 110
A.5 Noise After LP Filtering and Sampling ......... 113
References

[1] J. G. Proakis and M. Salehi, Communication Systems Engineering, 2nd ed. Prentice-Hall, 2002.
[2] J. G. Proakis, Digital Communications. McGraw-Hill, 1995.
[3] P. Laurent, "Exact and approximate construction of digital phase modulations by superposition of amplitude modulated pulses (AMP)," IEEE Trans. Commun., vol. 34, no. 2, pp. 150-160, Feb. 1986.
[4] B. Rimoldi, "A decomposition approach to CPM," IEEE Trans. Inform. Theory, vol. 34, no. 2, pp. 260-270, Mar. 1988.
[5] C. E. Shannon, "A mathematical theory of communication," Bell System Technical Journal, vol. 27, pp. 379-423, 623-656, July and Oct. 1948.
1 Continuous-Phase Modulation — The Classical Representation

Continuous-phase modulation (CPM) allows for a bandwidth-efficient transmission. In this section it is represented in the classical way; the modern representation is discussed in the following section.

1.1 General Description

The general description of a CPM signal reads

x(t; α) = √(2P) cos(2πf_0 t + ϕ(t; α))    (1)

with the following notation. The (semi-infinite) sequence of information-bearing symbols is α = (α_0, α_1, ...) with the symbols

α_n ∈ A = {±1, ±3, ..., ±(M−1)}  if M is even,
α_n ∈ A = {0, ±2, ..., ±(M−1)}   if M is odd.

The most commonly used case is that where M is even. (Or even a power of 2. Why?)

Example: Binary CPM, M = 2: α_n ∈ {−1, +1}.
The time-variant information-bearing phase is

ϕ(t; α) = ϕ_0 + 2πh Σ_{n=0}^{∞} α_n q(t − nT).

Remark: In some cases (see later), L − 1 known symbols are transmitted at the beginning. Then the summation starts with n = −(L − 1).

The modulation index is h = p_1/p_2, where p_1, p_2 are natural numbers that are relatively prime.

The function q(t) is the phase response. It satisfies

q(t) = 0 for t ≤ 0,   q(t) = 1/2 for t ≥ LT.

(Sketch: q(t) rises from 0 at t = 0 to 1/2 at t = LT.)

The value L is the memory of the CPM system:
L = 1 : full-response CPM,
L > 1 : partial-response CPM.

The phase ϕ_0 is arbitrary. Without loss of generality, we can assume that ϕ_0 is set such that ϕ(0; α) = 0.

The carrier frequency is f_0, the power of the transmitted signal is P, and the symbol rate is 1/T.
Instantaneous Frequency of a CPM Signal

The derivative

f(t; α) = (1/2π) d/dt [2πf_0 t + ϕ(t; α)]
        = f_0 + (1/2π) d/dt ϕ(t; α)
        = f_0 + h Σ_{n=0}^{∞} α_n q'(t − nT)

is the instantaneous frequency; the term Δf(t; α) = h Σ_{n=0}^{∞} α_n q'(t − nT) is the frequency deviation. The derivative q'(t) = dq(t)/dt of q(t) is called the frequency response of the CPM signal.

We can rewrite the CPM signal from (1) as a function of the instantaneous frequency according to

x(t; α) = √(2P) cos(2π ∫_0^t f(t̃; α) dt̃)
        = √(2P) cos(2πf_0 t + 2πh Σ_{n=0}^{∞} α_n ∫_0^t q'(t̃ − nT) dt̃).
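The phase construction above can be sketched numerically. The following is a minimal example assuming a full-response (L = 1) scheme with a linear (1REC) phase response; the symbol values, h, T, and the sampling rate are illustrative choices, not prescribed by CPM itself.

```python
import numpy as np

T = 1.0                      # symbol duration
h = 0.5                      # modulation index h = p1/p2
alpha = [-1, +1, +1, -1]     # binary symbols, M = 2
sps = 100                    # samples per symbol interval
dt = T / sps

# 1REC phase response: q(t) = t/(2T) on [0, T), then held at 1/2.
def q(t):
    return np.clip(t / (2 * T), 0.0, 0.5)

t = np.arange(len(alpha) * sps + 1) * dt
# phi(t; alpha) = 2*pi*h * sum_n alpha_n * q(t - n*T), phi_0 = 0
phi = 2 * np.pi * h * sum(a * q(t - n * T) for n, a in enumerate(alpha))
```

After the last symbol every pulse has reached q = 1/2, so the final phase is pi*h*(sum of the symbols); the small per-sample phase steps confirm the continuity of the phase.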
Properties of the Frequency Response

q(t) = 0 for t ≤ 0      ⇔  q'(t) = 0 for t < 0,
q(t) = 1/2 for t ≥ LT   ⇔  q'(t) = 0 for t > LT,
q(LT) = 1/2             ⇔  ∫_0^{LT} q'(t̃) dt̃ = 1/2.

Example: Sketch a valid frequency response for L = 4.
Block diagram of a CPM transmitter

(a) Using a phase modulator: the symbol impulse train α(t) = Σ_{n=0}^{∞} α_n δ(t − nT) is filtered with 2πh q(t) to produce the phase ϕ(t; α), which drives a phase modulator with carrier frequency f_0 to produce x(t; α). (Figure: for L = 2, each symbol ±1 contributes ±2πh q(t − nT) to the phase.)

(b) Using a frequency modulator: the same impulse train, scaled by h, is filtered with the frequency response q'(t) to produce the frequency deviation Δf(t; α), which drives a VCO with center frequency f_0. (Figure: for L = 2, each symbol ±1 contributes a pulse ±h q'(t − nT) to the instantaneous frequency.)
1.2 Minimum-Shift Keying

MSK has the following parameters:
- binary modulation symbols, i.e., M = 2;
- modulation index h = 1/2;
- phase response

q_MSK(t) = 0 for t < 0,   t/(2T) for 0 ≤ t < T,   1/2 for t ≥ T.

Example: MSK. Assume the transmitted symbol sequence α = [−1, +1, +1, −1, +1]. Sketch the phase and the signal.
Example: MSK, continued

For the transmitted symbols α_n = −1, +1, +1, −1, +1, the phase ϕ(t; α) changes linearly by ±π/2 per symbol interval. The phase slope is

(1/2π) dϕ(t; α)/dt = ±(1/2π)(π/2)(1/T) = ±1/(4T),

so the signal uses the two frequencies f_0 ± 1/(4T). In the sketch, f_1 = 1/T and f_2 = 3/(2T), hence

Δf = f_2 − f_1 = 1/(2T),

which is the minimum frequency separation for orthogonality.
Phase tree of MSK

(Figure: tree of all phase trajectories over t = T, 2T, ..., 6T; each branch changes the phase by ±π/2 per symbol interval, so the tree spans −3π to +3π after six intervals.)
Instantaneous frequency of MSK

The frequency response is

q'_MSK(t) = 1/(2T) for 0 ≤ t < T,   0 elsewhere.

Then

f(t; α) = f_0 + (1/2) Σ_{n=0}^{∞} α_n q'_MSK(t − nT),

where the sum (scaled by 1/2) is the frequency deviation Δf(t; α).

Example: for α = [−1, +1, +1, −1], the frequency deviation is −1/(4T), +1/(4T), +1/(4T), −1/(4T) in the successive intervals.
In the symbol interval [nT, (n + 1)T), we have

f(t; α) = f_0 + (1/2) α_n (1/(2T)) = f_0 + α_n/(4T)

with α_n ∈ {−1, +1}. Therefore the frequency offset is

Δf = (f_0 + 1/(4T)) − (f_0 − 1/(4T)) = 1/(2T).
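The two MSK frequencies and their separation can be checked numerically. The values of f_0 and T below are arbitrary example choices; the orthogonality check integrates the product of the two tones over one symbol interval.

```python
import numpy as np

f0 = 10.0   # example carrier frequency (arbitrary units)
T = 1.0     # symbol duration

f_plus = f0 + 1 / (4 * T)    # frequency for alpha_n = +1
f_minus = f0 - 1 / (4 * T)   # frequency for alpha_n = -1
delta_f = f_plus - f_minus   # should equal 1/(2T)

# Orthogonality over one interval: the correlation of the two tones
# vanishes when their separation is a multiple of 1/(2T).
n = 10000
t = (np.arange(n) + 0.5) * (T / n)   # midpoint grid on [0, T]
corr = np.sum(np.cos(2 * np.pi * f_plus * t)
              * np.cos(2 * np.pi * f_minus * t)) * (T / n)
```

The correlation comes out numerically zero, confirming that 1/(2T) is indeed sufficient separation for orthogonal signaling.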
1.3 Gaussian Minimum-Shift Keying

Problem: How to reduce the side lobes in the power spectrum of MSK while keeping the essential feature of this modulation scheme, namely that coherent demodulation is possible?

Solution:
1. Insert in the (non-causal) VCO-based MSK transmitter a smoothing filter with impulse response g_B(t) in front of the MSK frequency response q'_MSK(t). This reduces the side lobes. (Block diagram: α(t) = Σ_{n=0}^{∞} α_n δ(t − nT) → smoothing filter g_B(t) → MSK frequency response q'_MSK(t), of height 1/(2T) on [0, T) → scaling by h → VCO with center frequency f_0 → x(t; α).)
2. Design g_B(t) such that the overall frequency response q'(t) = g_B(t) * q'_MSK(t) integrates to 1/2, as q'_MSK(t) does. Then coherent demodulation remains possible.
The indexing parameter B is the 3 dB bandwidth of the impulse response of the smoothing filter, i.e., |F{g_B}(f)|² at f = B is 3 dB below its value at f = 0.

Smoothing filter for GMSK

GMSK results by selecting the impulse response of the smoothing filter to be a Gaussian pulse

g_B(t) = (1/(√(2π) σT)) exp(−t²/(2σ²T²))    (2)

with

σ = √(ln 2)/(2πBT).

Hence,

q'_GMSK(t) = g_B(t) * q'_MSK(t).    (3)
We show now that q'_GMSK(t) has the required property:

∫_{−∞}^{+∞} q'_GMSK(t) dt = 1/2.    (4)

Proof: We can view 2 q'_MSK(t) and g_B(t) as the probability density functions (pdfs) of two independent random variables, say U and V:

U ~ 2 q'_MSK(t),   V ~ g_B(t).

Then, from (3), 2 q'_GMSK(t) is the pdf of the sum W = U + V. Equation (4) follows because the integral of a pdf equals unity.

Selected values of B:
GSM:  BT = 0.3, with T = 6/1.625 μs = 3.69 μs;
DECT: BT = 0.5.
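The pdf-convolution argument above can be verified numerically: convolving the Gaussian smoothing pulse with the rectangular MSK frequency pulse must again give area 1/2. BT = 0.3 is the GSM value; the grid parameters are just a discretization choice.

```python
import numpy as np

T = 1.0
BT = 0.3
sigma = np.sqrt(np.log(2)) / (2 * np.pi * BT)
dt = T / 200
t = np.arange(-5 * T, 5 * T, dt)

# Gaussian smoothing filter g_B(t), unit area
g = np.exp(-t**2 / (2 * sigma**2 * T**2)) / (np.sqrt(2 * np.pi) * sigma * T)

# rectangular MSK frequency pulse q'_MSK(t), area 1/2
q_msk_prime = np.where((t >= 0) & (t < T), 1 / (2 * T), 0.0)

# overall GMSK frequency pulse q'_GMSK = g_B * q'_MSK (discrete convolution)
q_gmsk_prime = np.convolve(g, q_msk_prime, mode="same") * dt

area = np.sum(q_gmsk_prime) * dt   # should be ~0.5
```

Since the discrete convolution of an area-1 pulse with an area-1/2 pulse has area 1/2, the check confirms property (4) for any BT.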
Example

(Figure: Gaussian pulse g_B(t) for BT = 0.2, 0.3, 0.5 over t/T ∈ [−4, 4]; smaller BT gives a wider, flatter pulse.)

(Figure: resulting frequency response q'_GMSK(t) for BT = 0.2, 0.3, 0.5 over t/T ∈ [−4, 4]; larger BT gives a pulse closer to the rectangular q'_MSK(t).)
Phase response

q_GMSK(t) = ∫_{−∞}^{t} q'_GMSK(t̃) dt̃

(Figure: q_GMSK(t) for BT = 0.2, 0.3, 0.5 over t/T ∈ [−4, 4], rising from 0 to 1/2.)
Causal implementation of a GMSK transmitter

A causal implementation of a GMSK transmitter is obtained by truncating the non-causal frequency response q'_GMSK(t) outside a centered interval [−LT/2, +LT/2], normalizing to 1/2, and time-delaying by LT/2:

q'_GMSK(t) = (1/2)(1/A) q'_GMSK(t − LT/2) for t ∈ [0, LT),   0 elsewhere,

with

A = ∫_{−LT/2}^{+LT/2} q'_GMSK(t̃) dt̃.

A practical value is L = 3.

(Figure: truncated, causal frequency response for BT = 0.2, 0.3, 0.5 over t/T ∈ [0, 3].)
Causal phase response

q_GMSK(t) = ∫_{−∞}^{t} q'_GMSK(t̃) dt̃

(Figure: causal phase response for BT = 0.2, 0.3, 0.5 over t/T ∈ [0, 3], rising from 0 to 1/2.)
Phase tree of GMSK

(Figure omitted.)

Power spectrum of GMSK

(Figure omitted.)
1.4 State and Trellis of CPM

Let us consider the phase of a GMSK signal within some arbitrary signaling interval, say [nT, (n + 1)T). (Figure: for L = 3, the pulses q_GMSK(t − iT) for i = n − 3, ..., n overlap this interval; pulses with i ≤ n − 3 have already reached their final value 1/2.)

We consider first the special case L = 3 as depicted above. For t ∈ [nT, (n + 1)T), we can write (GMSK has h = 1/2, so 2πh = π):

ϕ(t; α) = ϕ_0 + π Σ_{i=0}^{n} α_i q_GMSK(t − iT)
        = ϕ_0 + π Σ_{i=0}^{n−3} α_i (1/2) + π Σ_{i=n−2}^{n} α_i q_GMSK(t − iT).

In the general case where L is arbitrary,

ϕ(t; α) = ϕ_0 + (π/2) Σ_{i=0}^{n−L} α_i + π Σ_{i=n−(L−1)}^{n−1} α_i q(t − iT) + π α_n q(t − nT)
        = ϕ(t; α_n, θ_n, α_{n−1}, ..., α_{n−(L−1)})
        = ϕ(t; α_n, S_n)   for t ∈ [nT, (n + 1)T),

where θ_n = (π/2) Σ_{i=0}^{n−L} α_i and S_n = (θ_n, α_{n−1}, ..., α_{n−(L−1)}).
Thus, the behaviour of ϕ(t; α) in the signaling interval [nT, (n + 1)T) is entirely determined by the (L + 1)-dimensional vector (α_n, S_n), with

α_n : current transmitted symbol,
S_n = (θ_n, α_{n−1}, ..., α_{n−(L−1)}) : state of ϕ(t; α) in the n-th signaling interval,
S : state space, S_n ∈ S.
State transitions (trellis) for GMSK with L = 3

Notice that

θ_{n+1} = θ_n + (π/2) α_{n−(L−1)}.

(Figure: trellis over the 16 states S_n = (θ_n, α_{n−1}, α_{n−2}) with θ_n ∈ {0, π/2, 2π/2, 3π/2} and α_{n−1}, α_{n−2} ∈ {−1, +1}; transitions for α_n = +1 and α_n = −1 are drawn with different line styles.)
Phase behavior within one signaling interval as a function of the state transitions for GMSK

(Figure: the trajectories ϕ(τ; α_n, S_n), τ ∈ [0, T), for all states S_n and both symbols α_n = ±1; the phase values at the interval boundaries are multiples of π/2.)

Initialization: ϕ_0 = θ_0 = 0; the term

Σ_{i=−(L−1)}^{−1} α_i q(t − iT),

where α_{−1}, ..., α_{−(L−1)} are known symbols, accounts for the initial intervals.
Implementation of a CPM transmitter with look-up tables

A. Using a phase modulator: the current symbol α_n and the computed state S_n address a look-up table that outputs ϕ(τ; α_n, S_n); a phase modulator with carrier frequency f_0 then produces x(t; α).

B. Using an inphase/quadrature modulator: consider the inphase/quadrature representation of x(t; α), where we assume ϕ_0 = 0:

x(t; α) = √(2E_s/T_s) cos(2πf_0 t + ϕ(t; α))
        = √(2E_s/T_s) cos ϕ(t; α) · cos(2πf_0 t) − √(2E_s/T_s) sin ϕ(t; α) · sin(2πf_0 t),

where the first term is the in-phase component and the second term the quadrature component.
(Block diagram: α_n → phase-state computation → the pair (α_n, S_n) addresses two look-up tables producing cos ϕ(τ; α_n, S_n) and sin ϕ(τ; α_n, S_n); these are mixed with √(2E_s/T) cos(2πf_0 t) and −√(2E_s/T) sin(2πf_0 t), respectively, and summed to give x(t; α).)
Phase state computation

The state S_n is produced by a shift register that stores the L − 1 most recent symbols α_{n−1}, ..., α_{n−(L−1)}, followed by an accumulator that forms

(Σ_{i=0}^{n−L} α_i) mod 4,

since θ_n = (π/2) Σ_{i=0}^{n−L} α_i and (for h = 1/2) only the sum modulo 4 matters.

For GMSK (L = 3): the delays provide α_{n−1} and α_{n−2}, and the accumulator output is (Σ_{i=0}^{n−3} α_i) mod 4.
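A minimal sketch of this accumulator, assuming GMSK parameters (h = 1/2, L = 3) and an example symbol sequence; only the running symbol sum modulo 4 has to be stored, since θ_n = (π/2)·s with s ∈ {0, 1, 2, 3}.

```python
# Example symbol sequence (illustrative, not from the notes)
alpha = [+1, -1, -1, +1, +1, +1, -1, +1]
L = 3

def phase_state(n, alpha):
    """Accumulated symbol sum mod 4; theta_n = (pi/2) * phase_state(n)."""
    s = sum(alpha[i] for i in range(0, n - L + 1))  # i = 0 .. n-L
    return s % 4   # Python's % maps negative sums into {0, 1, 2, 3}

states = [phase_state(n, alpha) for n in range(L, len(alpha))]
```

Each state value indexes one of the four possible phase offsets {0, π/2, π, 3π/2}, which is exactly the θ_n component of the trellis state above.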
1.5 Coherent Demodulation of CPM

The initial state and the final state of the CPM transmitter are set to known states using the following procedure:
- Before transmitting the data, L − 1 known symbols are transmitted to drive the CPM transmitter into a known initial state.
- After transmitting the data, L + 1 known symbols are transmitted to drive the CPM transmitter into a known final state.

Block of transmitted symbols: [L − 1 known symbols | α_0, ..., α_{N−1} (N data symbols, starting at t = 0) | L + 1 known symbols].

ML decoding for the AWGN channel

Received signal:

Y(t) = x(t; α) + W(t),

where W(t) is white Gaussian noise with spectral height N_0/2.
ML estimation

α̂ = argmax_{α ∈ A^N} { ∫_0^{(N+L+1)T} y(t) x(t; α) dt }
  = argmax_{α ∈ A^N} { Σ_{n=0}^{N+L} ∫_{nT}^{(n+1)T} y(t) x(t; α) dt }
  = argmax_{α ∈ A^N} { Σ_{n=0}^{N+L} [ ∫_{nT}^{(n+1)T} y(t) cos ϕ(t; α_n, S_n) cos(2πf_0 t) dt − ∫_{nT}^{(n+1)T} y(t) sin ϕ(t; α_n, S_n) sin(2πf_0 t) dt ] }.
ML receiver

(Block diagram: y(t) is mixed with cos(2πf_0 t) and −sin(2πf_0 t); each branch feeds a bank of correlators ∫_{nT}^{(n+1)T} (·) cos ϕ(t; α_n, S_n) dt resp. ∫_{nT}^{(n+1)T} (·) sin ϕ(t; α_n, S_n) dt, one per pair (α_n, S_n) ∈ A × S. The correlator outputs drive a Viterbi algorithm with states S that delivers α̂.)
Performance of MSK and GMSK

(Figure: performance degradation of GMSK with respect to MSK.)
2 Modern Representation of CPM

In the modern representation of CPM [4], the CPM transmitter is decomposed into a vector encoder with memory and a memoryless modulator:

{u_j} → Vector Encoder → {x_j} → Memoryless Modulator → x(t) = Σ_{j=0}^{∞} x_j(t − jT).

This modern representation of CPM has already been used in the description of minimum-shift keying (MSK) in the lecture notes for Digital Modulation 1. (See therein.)
2.1 Initialization

We assume that the CPM transmitter is initialized with admissible symbols; we select

α_{−1} = α_{−2} = ... = α_{−(L−1)} = −(M − 1).

Notice that −(M − 1) is a symbol common to both alphabets, corresponding to M odd and M even. The phase of CPM is then

ϕ(t; α) = ϕ_0 + 2πh Σ_{j=−(L−1)}^{∞} α_j q(t − jT),

where α = (α_0, α_1, ...) is the sequence of information-bearing symbols. The transmitted CPM signal reads

x(t; α) = √(2P) cos(2πf_0 t + ϕ(t; α)).
Without loss of generality, we assume that ϕ_0 is selected in such a way that ϕ(0; α) = 0. Thus we obtain

ϕ_0 = −2πh Σ_{j=−(L−1)}^{0} α_j q(−jT)
    = −2πh Σ_{j=1}^{L−1} α_{−j} q(jT)
    = 2πh(M − 1) Σ_{j=0}^{L−1} q(jT),

where we used q(0) = 0 and α_{−j} = −(M − 1) for j = 1, ..., L − 1.
2.2 The Reference Phase Trajectory

We focus on the phase trajectory corresponding to the sequence

α* = (−(M − 1), −(M − 1), ...).

Inserting yields

ϕ(t; α*) = ϕ_0 − 2πh(M − 1) Σ_{j=−(L−1)}^{∞} q(t − jT).

We investigate the behaviour of ϕ(t; α*) over each signaling interval [nT, (n + 1)T). To this end, it is worth introducing the relative time variable τ ∈ [0, T). The absolute time variable t within this signaling interval is related to τ according to t = nT + τ.

(i) Consider n = 0, t = τ:

ϕ_0(τ) := ϕ(τ; α*) = ϕ_0 − 2πh(M − 1) Σ_{j=−(L−1)}^{0} q(τ − jT)
        = ϕ_0 − 2πh(M − 1) Σ_{j=0}^{L−1} q(τ + jT).

(The index in ϕ_0(τ) is the value of n.) Inserting for ϕ_0, we obtain

ϕ_0(τ) = −2πh(M − 1) Σ_{j=0}^{L−1} (q(τ + jT) − q(jT)).
ϕ_0(τ) = −2πh(M − 1) Σ_{j=0}^{L−1} (q(τ + jT) − q(jT))

(Figure: for L = 3, the differences q(τ + jT) − q(jT), j = 0, 1, 2, marked on the graph of q(t).)

Notice that

ϕ_0(0) = 0,
ϕ_0(T) = −2πh(M − 1) q(LT) = −πh(M − 1),

since for τ = T the sum telescopes to q(LT) − q(0) = 1/2.

(Sketch: ϕ_0(τ) decreases from 0 at τ = 0 to −πh(M − 1) at τ = T.)
(ii) Consider n = 1, t = T + τ:

ϕ_1(τ) := ϕ(T + τ; α*) = ϕ_0 − 2πh(M − 1) Σ_{j=−(L−1)}^{1} q(τ + T − jT)
        = ϕ_0 − 2πh(M − 1) Σ_{j=0}^{L} q(τ + jT)
        = ϕ_0 − 2πh(M − 1) Σ_{j=0}^{L−1} q(τ + jT) − πh(M − 1)
        = ϕ_0(τ) − πh(M − 1),

using q(τ + LT) = 1/2.

(iii) Consider n ≥ 1, t = nT + τ:

ϕ_n(τ) := ϕ(nT + τ; α*) = ϕ_0 − 2πh(M − 1) Σ_{j=−(L−1)}^{n} q(τ + (n − j)T)
        = ϕ_0 − 2πh(M − 1) Σ_{j=0}^{n+L−1} q(τ + jT)
        = ϕ_0 − 2πh(M − 1) Σ_{j=0}^{L−1} q(τ + jT) − nπh(M − 1)
        = ϕ_0(τ) − nπh(M − 1),

since q(τ + jT) = 1/2 for j ≥ L.
(Figure: the reference phase trajectory ϕ(t; α*) decreases piecewise over t = T, 2T, 3T, ..., reaching −nπh(M − 1) at t = nT; within each interval it follows the shape ϕ_0(τ), shifted down by nπh(M − 1).)
2.3 The Reference Frequency

For the sake of simplifying the presentation we assume that

ϕ(t; α*) = −πh(M − 1) t/T,

or equivalently that ϕ_0(τ) = −πh(M − 1) τ/T. In order for ϕ_0(τ) to exhibit this behavior, q(t) must satisfy the condition

2πh(M − 1) Σ_{j=0}^{L−1} (q(τ + jT) − q(jT)) = πh(M − 1) τ/T,

which holds if

Σ_{j=0}^{L−1} (q(τ + jT) − q(jT)) = τ/(2T).

Thus for α = α*, the transmitted CPM signal reads

x(t; α*) = √(2P) cos(2πf_0 t − πh(M − 1) t/T) = √(2P) cos(2πf_1 t)    (5)

with

f_1 = f_0 − h(M − 1)/(2T).

This frequency f_1 is the retained reference frequency with respect to which the phase of CPM is expressed:

x(t; α) = √(2P) cos(2πf_1 t + ϕ(t; α) + πh(M − 1) t/T),

where the tilted phase is φ(t; α) := ϕ(t; α) + πh(M − 1) t/T.
2.4 The Tilted Phase and the New Symbols

The so-called tilted phase φ(t; α) reads

φ(t; α) = ϕ(t; α) + πh(M − 1) t/T
        = ϕ_0 + 2πh Σ_{j=−(L−1)}^{∞} α_j q(t − jT) + πh(M − 1) t/T.

We investigate the behavior of φ(t; α) within each signaling interval.

(i) Consider n = 0, t = τ ∈ [0, T):

φ_0(τ; α) := φ(0·T + τ; α) = ϕ_0 + 2πh Σ_{j=0}^{L−1} α_{−j} q(τ + jT) + πh(M − 1) τ/T.

Due to the assumption regarding ϕ_0(τ) leading to (5), we conclude that

τ/T = 2 Σ_{j=0}^{L−1} q(τ + jT) − 2 Σ_{j=0}^{L−1} q(jT).

Inserting this, together with ϕ_0 = 2πh(M − 1) Σ_{j=0}^{L−1} q(jT), yields

φ_0(τ; α) = 2πh Σ_{j=0}^{L−1} α_{−j} q(τ + jT) + 2πh(M − 1) Σ_{j=0}^{L−1} q(τ + jT)
          = 2πh Σ_{j=0}^{L−1} (α_{−j} + (M − 1)) q(τ + jT).
We introduce the new information-bearing symbols

U_j = (1/2)(α_j + (M − 1)) ∈ {0, 1, ..., M − 1}.

Notice that the new symbol alphabet is the same regardless of whether M is odd or even. Making use of these new symbols, we can recast φ_0(τ; α) according to

φ_0(τ; α) = 4πh Σ_{j=0}^{L−1} U_{−j} q(τ + jT).
(ii) Consider n ≥ 1, t = nT + τ:

φ_n(τ; α) := φ(nT + τ; α)
 = ϕ_0 + 2πh Σ_{j=−(L−1)}^{n} α_j q(τ + (n − j)T) + πh(M − 1)(nT + τ)/T
 = ϕ_0 + 2πh Σ_{j=0}^{n+L−1} α_{n−j} q(τ + jT) + πh(M − 1) n + πh(M − 1) τ/T
 = ϕ_0 + 2πh Σ_{j=0}^{L−1} α_{n−j} q(τ + jT) + πh Σ_{i=−(L−1)}^{n−L} α_i + πh(M − 1) n + πh(M − 1) τ/T

(using q(τ + jT) = 1/2 for j ≥ L)

 = 4πh Σ_{j=0}^{L−1} U_{n−j} q(τ + jT) + πh Σ_{i=−(L−1)}^{n−L} (α_i + (M − 1))
 = 4πh Σ_{j=0}^{L−1} U_{n−j} q(τ + jT) + 2πh Σ_{j=−(L−1)}^{n−L} U_j
 = 4πh Σ_{j=0}^{L−1} U_{n−j} q(τ + jT) + 2πh [( Σ_{j=−(L−1)}^{n−L} U_j ) mod p_2]    (mod 2π),

where the τ-dependent part combines exactly as in case (i), and the reduction modulo p_2 is allowed since 2πh p_2 = 2πp_1 is a multiple of 2π. Remember the definition of the modulation index: h = p_1/p_2.
Let us define

V_n = ( Σ_{j=−(L−1)}^{n−L} U_j ) mod p_2.

Then

φ_n(τ; α) = 4πh Σ_{j=0}^{L−1} U_{n−j} q(τ + jT) + 2πh V_n
          = φ(τ; U_n, U_{n−1}, ..., U_{n−(L−1)}, V_n).

The vector S_n := (U_{n−1}, ..., U_{n−(L−1)}, V_n) defines the state in the new description (modern description). (Cf. the definition of the state in the traditional description.)
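The symbol remapping and the state recursion can be sketched in a few lines. The parameters below (quaternary CPM with h = 1/4, L = 2) and the symbol sequence are example choices, and the known prefix symbols are omitted for brevity (the sum then starts at j = 0 instead of j = −(L − 1)).

```python
M, p1, p2, L = 4, 1, 4, 2              # example: h = p1/p2 = 1/4
alpha = [-3, 1, 3, -1, 1, 3]           # alpha_n in {+-1, +-3}

# New symbols U_j = (alpha_j + (M-1)) / 2 in {0, ..., M-1}
U = [(a + (M - 1)) // 2 for a in alpha]

def V(n, U):
    """Accumulated new-symbol sum mod p2, over j = 0 .. n-L."""
    return sum(U[: n - L + 1]) % p2

# Modern state S_n = (U_{n-1}, ..., U_{n-(L-1)}, V_n); here L = 2,
# so S_n = (U_{n-1}, V_n).
states = [(U[n - 1], V(n, U)) for n in range(1, len(U))]
```

Note that only the accumulator value modulo p_2 is needed, which is exactly why the modern description has a finite state space.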
2.5 Block Diagram of the CPM Transmitter

The previous discussions lead to the following block diagram. (Block diagram: the encoder consists of a shift register storing u_{n−1}, ..., u_{n−(L−1)} and an accumulator mod p_2 producing v_n; the pair (u_n, S_n) addresses two look-up tables delivering cos(φ(τ; u_n, S_n)) and sin(φ(τ; u_n, S_n)); these are mixed with cos(2πf_1 τ) and −sin(2πf_1 τ), scaled by √(2P), and summed to give x(τ; u_n, S_n). The shift register and accumulator form the encoder; the look-up tables and mixers form the memoryless modulator.)
Example: GMSK with L = 3

(Figure: tilted phase tree φ(t; α)/π over t/T ∈ [0, 7]; since U_n ≥ 0, the tilted phase is non-decreasing.)

(Figure: trajectories of φ(τ; U_n, S_n)/π over τ/T ∈ [0, 1] for all states S_n.)
3 Laurent Representation of CPM

CPM signals can be equivalently represented by a superposition of PAM signals [3]. In this section we use only the first term of this representation as a (fairly) good approximation of the CPM signal.

3.1 Minimum-Shift Keying

The MSK Signal

The MSK signal is given by

x(t; α) = √(2E_b/T) cos(2πf_0 t + ϕ(t; α))

with the information-bearing phase

ϕ(t; α) = π Σ_{n=0}^{N−1} α_n q_MSK(t − nT),   t ∈ [0, NT).

The information symbols are α_n ∈ {−1, +1}, n = 0, ..., N − 1. The phase pulse q_MSK(t) rises linearly from 0 at t = 0 to 1/2 at t = T.
In the n-th signaling interval, t ∈ [nT, nT + T), t = nT + τ, τ ∈ [0, T), we have

ϕ_n(τ; α) = (π/2) Σ_{i=0}^{n−1} α_i + (π/2)(τ/T) α_n,

where θ_n := (π/2) Σ_{i=0}^{n−1} α_i is the phase at the beginning of the n-th signaling interval. Notice that (modulo 2π)

θ_n ∈ {0, π} for n even,   θ_n ∈ {π/2, 3π/2} for n odd.

The initialization is θ_0 = 0.
In-phase and quadrature components of MSK

Consider t ∈ [0, NT). The MSK signal can be decomposed into its inphase and its quadrature components:

x(t; α) = √(E_b/T) cos ϕ(t; α) · √2 cos(2πf_0 t) − √(E_b/T) sin ϕ(t; α) · √2 sin(2πf_0 t),

where the first term is the inphase component x_I(t; α) and the second the quadrature component x_Q(t; α).

Consider now the pulse

p_MSK(t) = (1/√T) cos(πt/(2T)) for t ∈ [−T, +T],   0 elsewhere.

It is normalized such that it has energy one, i.e.,

∫_{−T}^{+T} (p_MSK(t))² dt = 1.
Using this pulse, the inphase component x_I(t; α) and the quadrature component x_Q(t; α) can be written as follows:

x_I(t; α) = √(E_b/T) cos ϕ(t; α) = Σ_{n=0, n even}^{N} x_{1,n} p_MSK(t − nT),
x_{1,n} = √E_b cos θ_n ∈ {−√E_b, +√E_b},

x_Q(t; α) = √(E_b/T) sin ϕ(t; α) = Σ_{n=1, n odd}^{N} x_{2,n} p_MSK(t − nT),
x_{2,n} = √E_b sin θ_n ∈ {−√E_b, +√E_b}.

The symbol α_N is a known suffix symbol. The modulation symbols can thus be considered as two-dimensional symbols X_n = (X_{1,n}, X_{2,n})^T ∈ S out of the signal constellation

S = { s_1 = (+√E_b, 0)^T, s_2 = (−√E_b, 0)^T, s_3 = (0, +√E_b)^T, s_4 = (0, −√E_b)^T },

where s_1, s_2 are used for n even and s_3, s_4 for n odd.
Illustration of the signal constellation: s_1 and s_2 lie on the horizontal axis (n even), s_3 and s_4 on the vertical axis (n odd).

Example: Phase Trajectories

Draw the phase and the trajectories of the inphase and quadrature components for the following example.

n           | α_n  | θ_n            | cos θ_n | sin θ_n
0           | +1   | θ_0 = 0 (init.)| 1       | 0
1           | −1   | π/2            | 0       | 1
2           | +1   | 0              | 1       | 0
3           | +1   | π/2            | 0       | 1
N − 1 = 4   | +1   | π              | −1      | 0
N = 5       | (+1) | 3π/2           | 0       | −1
(suffix)
Complex Representation

The vector X_n = (X_{1,n}, X_{2,n})^T can be conceived as the complex number

X_n = X_{1,n} + jX_{2,n} = √E_b exp(jθ_n),   θ_n = (π/2) Σ_{i=0}^{n−1} α_i.

The (complex-valued) signal constellation is thus {s_1 = √E_b, s_2 = −√E_b, s_3 = j√E_b, s_4 = −j√E_b}. The values of the complex-valued modulation symbols can be recursively computed:

X_n = √E_b exp(j(π/2) Σ_{i=0}^{n−1} α_i)
    = exp(j(π/2) α_{n−1}) √E_b exp(j(π/2) Σ_{i=0}^{n−2} α_i)
    = j α_{n−1} X_{n−1}.
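The recursion can be verified against the closed form directly. E_b and the symbol sequence below are example values.

```python
import numpy as np

Eb = 2.0
alpha = [+1, -1, +1, +1, -1]          # example symbols

# Recursion X_n = j * alpha_{n-1} * X_{n-1}, starting from X_0 = sqrt(Eb)
X = [np.sqrt(Eb)]                     # theta_0 = 0
for a in alpha:
    X.append(1j * a * X[-1])

# Closed form X_n = sqrt(Eb) * exp(j * theta_n),
# theta_n = (pi/2) * (alpha_0 + ... + alpha_{n-1})
theta = np.cumsum([0] + alpha) * np.pi / 2
closed = np.sqrt(Eb) * np.exp(1j * theta)
```

The two sequences agree exactly, since exp(j(π/2)α) = jα for α ∈ {−1, +1}.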
Notice that

exp(j(π/2) α_{n−1}) = j^{α_{n−1}} = j α_{n−1},

as α_{n−1} ∈ {−1, +1}. Thus the MSK signal can be written as

x(t; α) = Re{ [ Σ_{n=0}^{N} X_n p_MSK(t − nT) ] √2 exp(j2πf_0 t) }

for t ∈ [0, NT). The initialization is X_0 = j α_{−1} X_{−1} = √E_b. This can be achieved by X_{−1} = −j√E_b, α_{−1} = 1.

Block Diagram: (α_n is delayed by T; the factor j α_{n−1} multiplies the fed-back symbol X_{n−1} to form X_n; the symbols modulate the impulse train Σ_{n=0}^{N} δ(t − nT), are filtered with p_MSK(t), mixed with √2 exp(j2πf_0 t), and the real part is taken to obtain x(t; α).)
3.2 Precoded MSK

Precoding

The information symbols are mapped from the {0, 1} representation to the {+1, −1} representation:

u_n = 0 ↦ α_n = +1,   u_n = 1 ↦ α_n = −1.

The precoding can be done in both representations: in the {0, 1} representation the precoder output is u_n ⊕ u_{n−1}; in the {−1, +1} representation it is α_n · α_{n−1}. Notice that u_n ⊕ u_{n−1} (addition in F_2 = GF(2)) corresponds to α_n · α_{n−1} (multiplication in R). This justifies the particular mapping.
Complex Representation

Transmitted symbols:

X_n = j (α_{n−1} α_{n−2}) X_{n−1},   n = 0, ..., N − 1.

Initialization: α_{−2} = α_{−1} = 1, X_{−1} = −j√E_b.

The symbols are generated as follows:

n = 0: X_0 = j (α_{−1} α_{−2}) X_{−1} = j (−j√E_b) = √E_b = j^0 α_{−1} √E_b
n = 1: X_1 = j (α_0 α_{−1}) X_0 = j α_0 √E_b = j^1 α_0 √E_b
n = 2: X_2 = j (α_1 α_0) X_1 = j^2 α_1 √E_b
n = 3: X_3 = j (α_2 α_1) X_2 = j^3 α_2 √E_b
...
n:     X_n = j^n α_{n−1} √E_b

This gives the block diagram of precoded MSK using the complex representation of the symbols: the delayed symbol α_{n−1} is multiplied by j^n and √E_b to form X_n; the symbols modulate the impulse train Σ_{n=0}^{N} δ(t − nT), are filtered with p_MSK(t), mixed with √2 exp(j2πf_0 t), and the real part is taken to obtain x(t; α).
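The closed form X_n = j^n α_{n−1} √E_b can be checked against the recursion with the stated initialization. The data symbols below are an example choice.

```python
import numpy as np

Eb = 1.0
alpha = [+1, -1, -1, +1]              # alpha_0 .. alpha_3 (example data)
a = [1, 1] + alpha                    # prepend alpha_{-2}, alpha_{-1} = 1
                                      # so a[k] = alpha_{k-2}

# Recursion X_n = j * (alpha_{n-1} * alpha_{n-2}) * X_{n-1}
X = [-1j * np.sqrt(Eb)]               # initialization X_{-1} = -j*sqrt(Eb)
for n in range(len(alpha) + 1):       # n = 0 .. N
    X.append(1j * a[n + 1] * a[n] * X[-1])

# Closed form X_n = j**n * alpha_{n-1} * sqrt(Eb)
closed = [1j**n * a[n + 1] * np.sqrt(Eb) for n in range(len(alpha) + 1)]
```

The induction behind the closed form is visible in the code: each step multiplies by j α_n α_{n−1}, and α_{n−1}² = 1 removes the previous data symbol.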
Vector Representation

n even:
X_n = j^n √E_b α_{n−1} = (j²)^{n/2} √E_b α_{n−1} = (−1)^{n/2} √E_b α_{n−1}   (real-valued),
X_n = ( (−1)^{n/2} √E_b α_{n−1}, 0 )^T.

n odd:
X_n = j^n √E_b α_{n−1} = j (j)^{n−1} √E_b α_{n−1} = j (−1)^{(n−1)/2} √E_b α_{n−1}   (imaginary-valued),
X_n = ( 0, (−1)^{(n−1)/2} √E_b α_{n−1} )^T.

This results in the following block diagram: the delayed symbol α_{n−1}, scaled by (−1)^{⌊n/2⌋} √E_b, is routed to the inphase branch (pulse p_MSK(t), carrier √2 cos(2πf_0 t)) for n even and to the quadrature branch (pulse p_MSK(t), carrier −√2 sin(2πf_0 t)) for n odd; both branches modulate the impulse train Σ_{n=0}^{N} δ(t − nT).
Demodulation in the AWGN channel

(Block diagram: the precoded MSK transmitter as above; the channel adds white Gaussian noise W(t); the receiver mixes Y(t) with √2 cos(2πf_0 t) and −√2 sin(2πf_0 t), filters each branch with the matched filter p_MSK(−t) = p_MSK(t) (non-causal filter), and samples at t = nT. The resulting discrete-time model per branch is: X_{n,i} → convolution with r_n → addition of noise V_{n,i} → Y_{n,i} → vector decoder → α̂_{n−1}, where

r_n = R_pp(nT),   R_pp(τ) = ∫ p_MSK(t) p_MSK(t + τ) dt,
E[V_{n,i} V_{m,j}] = δ_{ij} (N_0/2) r_{m−n}.)
The effective discrete-time impulse response is

r_n = R_pp(nT),   R_pp(τ) = ∫ p_MSK(t) p_MSK(t + τ) dt.

The noise after sampling has the covariance (its mean is zero)

E[V_{n,i} V_{m,j}] = δ_{ij} (N_0/2) r_{m−n}.

The optimum vector decoder for MSK can be simplified. To see this, consider the impulse response:

r_n = 1 for n = 0,   r_1 for n = ±1,   0 elsewhere.

Thus, in the upper (lower) branch, symbols are effectively transmitted at even (odd) time indices.
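The values of r_n can be computed numerically for the half-cosine pulse; since the pulse spans only two symbol intervals, r_n vanishes for |n| ≥ 2, and r_1 evaluates to 1/π for this pulse shape.

```python
import numpy as np

T = 1.0
dt = T / 10000
t = np.arange(-T, T, dt) + dt / 2          # midpoint grid over [-T, T]
p = np.cos(np.pi * t / (2 * T)) / np.sqrt(T)   # unit-energy half-cosine

def R_pp(tau):
    """Deterministic autocorrelation of p_MSK at lag tau."""
    shifted = np.cos(np.pi * (t + tau) / (2 * T)) / np.sqrt(T)
    shifted[np.abs(t + tau) > T] = 0.0     # pulse is zero outside [-T, T]
    return np.sum(p * shifted) * dt
```

Evaluating R_pp(0), R_pp(T), and R_pp(2T) confirms r_0 = 1 (unit energy), r_1 = 1/π ≈ 0.318, and r_2 = 0, which is exactly why the symbols on the even and odd grids do not interfere within their own branch.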
Example for the upper branch: (Figure: the even-indexed symbols x_{0,1}, x_{2,1}, x_{4,1} produce, after convolution with r_n, side samples r_1 x_{n,1} only at odd indices; sampling the upper branch at even indices therefore sees no intersymbol interference.)

The optimum vector decoder for the AWGN channel can thus be described as follows: each branch is sampled at its own time indices, the factor (−1)^{⌊n/2⌋} √E_b is removed, and a sign decision Sign(·) on Y_{n,i} delivers α̂_{n−1}.

Remark: More information about the discrete-time representation of QAM-like signals is provided in the appendix.
3.3 Precoded Gaussian MSK

The length of the phase pulse is set to L = 3, as before.

Precoded GMSK Signal

The signal reads

x(t; α) = √(2E_b/T) cos(2πf_0 t + ϕ(t; α))

with

ϕ(t; α) = π Σ_{n=−2}^{N−1} (α_n α_{n−1}) q_GMSK(t − nT) + ϕ_0,

where α_n ∈ {−1, +1};
prefix symbols (GSM): α_n = 1 for n = −3, −2, −1;
suffix symbols (GSM): α_n = 1 for n = N, N + 1, N + 2.

(Figure: GMSK phase pulse q_GMSK(t) over t/T ∈ [0, 3], rising from 0 to 1/2.)
ϕ_0 is set so that ϕ(0; α) = 0. Hence,

ϕ_0 = −π Σ_{n=1}^{2} q_GMSK(nT).

Within the n-th signaling interval, i.e., for t ∈ [nT, nT + T), n = 0, ..., N − 1,

ϕ_n(τ; α) = ϕ_0 + π Σ_{i=−2}^{n−3} (α_i α_{i−1}) (1/2) + π Σ_{i=n−2}^{n} (α_i α_{i−1}) q_GMSK(τ + (n − i)T).

In-phase and Quadrature Components

The signal can be decomposed as

x(t; α) = √(E_b/T) cos ϕ(t; α) · √2 cos(2πf_0 t) − √(E_b/T) sin ϕ(t; α) · √2 sin(2πf_0 t),

with inphase component x_I(t; α) and quadrature component x_Q(t; α).
The inphase component and the quadrature component can be approximated as follows:

x_I(t; α) ≈ Σ_{n=−2, n even}^{N+2} X_{1,n} p_GMSK(t − nT),   X_{1,n} = (−1)^{n/2} √E_b α_{n−1},

x_Q(t; α) ≈ Σ_{n=−1, n odd}^{N+1} X_{2,n} p_GMSK(t − nT),   X_{2,n} = (−1)^{(n−1)/2} √E_b α_{n−1}.

The pulse p_GMSK(t) is often denoted as the c_0(t) pulse, referring to [3], and it is defined as follows. Let LT, L = 3, be the length of the truncated GMSK pulse q_GMSK(t). Then

c_0(t) = s_0(t) s_1(t) s_2(t)

(for larger L, more pulses have to be multiplied) with

s_l(t) = sin(φ(t + lT)),
φ(t) = π q_GMSK(t) for t < LT,   φ(t) = π/2 − π q_GMSK(t − LT) for t ≥ LT.

The resulting pulse c_0(t) has length (L + 1)T = 4T.
(Figure: the resulting pulse p_GMSK(t) for L = 3 over t ∈ [0, 4T]; it is a smooth, symmetric pulse with peak value close to 1 near t = 2T.)
Block diagram of precoded GMSK

Using the vector representation: as for precoded MSK, the delayed symbol α_{n−1}, scaled by (−1)^{⌊n/2⌋} √E_b, is routed to the inphase branch (n even) or the quadrature branch (n odd), now with the pulse p_GMSK(t) and the impulse train Σ_{n=−2}^{N+2} δ(t − nT).

Using the complex representation: X_n = j^n α_{n−1} √E_b modulates the impulse train, is filtered with p_GMSK(t), mixed with √2 exp(j2πf_0 t), and the real part is taken.

Initialization (prefix symbols): α_{−3} = α_{−2} = α_{−1} = 1, X_{−2} = j^{−2} α_{−3} √E_b = −√E_b.

Termination (suffix symbols): α_N = α_{N+1} = α_{N+2} = +1.
Demodulation of precoded GMSK in the AWGN channel

(Block diagram: as for precoded MSK, with the pulse p_GMSK(t) in the transmitter and in the matched filter; the discrete-time model per branch is X_{n,i} → convolution with r_n → addition of noise V_{n,i} → Y_{n,i} → vector decoder → {α̂_n}_{n=0}^{N−1}, where

r_n = R_pp(nT),   R_pp(τ) = ∫ p_GMSK(t) p_GMSK(t + τ) dt.)
The effective impulse response has the (deterministic) autocorrelation function

R_pp(τ) = ∫ p_GMSK(t) p_GMSK(t + τ) dt.

(Figure: R_pp(τ) over τ/T ∈ [−5, 5].)

We have approximately

r_n ≈ 1 for n = 0,   r_n ≈ 0.05 for |n| = 2,   r_n ≈ 0 for |n| > 2.

Hence, ISI occurs in each branch. For optimum decoding, the Viterbi algorithm (VA) may be applied.
(Close-to-optimum) Vector Decoder

(Block diagram: each branch, sampled at its own time indices (n even for the upper branch, n odd for the lower branch, 0 ≤ n ≤ N + 2), is processed by a two-state VA; the outputs are merged into α̂.)

Metric to be used in the upper-branch VA (even n):

λ(α_e) = Σ_{n=0, n even}^{N−1} (−1)^{n/2} √E_b α_{n−1} [ y_n − r_2 (−1)^{n/2 − 1} √E_b α_{n−3} ],

where r_2 is the ISI tap at lag 2 identified above. Metric to be used in the lower-branch VA (odd n):

λ(α_o) = Σ_{n=1, n odd}^{N−1} (−1)^{(n−1)/2} √E_b α_{n−1} [ y_n − r_2 (−1)^{(n−1)/2 − 1} √E_b α_{n−3} ].
GSM Burst Structure

Time slot, normal burst: 3 tail bits | 57 information bits | 1 control bit | training sequence (26 bits) | 1 control bit | 57 information bits | 3 tail bits | 8.25 guard bits.

Synchronization burst: 39 information bits | training sequence (64 bits) | 39 information bits | 8.25 guard bits.
3.4 Modulation in EDGE

Objective: Modify the GSM format in order to increase the bit rate while keeping the spectrum requirement of this modulation scheme.

Selected Solution:
- Extend the signal space in the Laurent representation of GMSK to 8PSK, i.e., 3 bits/symbol.
- Include an additional rotation of the symbols by 3π/8 in each signaling interval.

Signal Constellation and Mapping

(Figure: 8PSK constellation with Gray mapping of the bit triples (U_1, U_2, U_3) onto the eight phases; neighboring points differ in exactly one bit.)
Phase Rotation

Rotation by 3π/8 in each signaling interval. (Figure: the 8PSK constellation in two successive intervals, rotated by an accumulating multiple of 3π/8. By comparison, for 8PSK without such phase rotation, the constellation is identical in every interval.)
Pulse Shaping

p(t) = p_GMSK(t)

(Figure: the pulse p_GMSK(t) over t ∈ [0, 4T].)

EDGE Transmitter

(Block diagram: bit triples [U_{n,1}, U_{n,2}, U_{n,3}] are mapped to 8PSK symbols X_n, rotated by exp(jn·3π/8), scaled by √E_b, modulated onto the impulse train Σ_{n=0}^{N−1} δ(t − nT), filtered with p_GMSK(t), mixed with √2 exp(j2πf_0 t), and the real part is taken to obtain x(t; u).)
EDGE Receiver

(Block diagram: Y(t) is mixed with √2 exp(−j2πf_0 t), filtered with the matched filter p_GMSK(t), sampled at t = nT, and passed to the vector decoder delivering [Û_{n,1}, Û_{n,2}, Û_{n,3}].)

Discrete-time Representation of the EDGE System

(Block diagram: [U_{n,1}, U_{n,2}, U_{n,3}] → 8PSK symbol X_n → rotation exp(jn·3π/8) → convolution with r_n → addition of noise V_n → vector decoder → [Û_{n,1}, Û_{n,2}, Û_{n,3}].)

The effective impulse response is

r_n = R_pp(nT),   R_pp(τ) = ∫_0^{LT} p_GMSK(t) p_GMSK(t + τ) dt.

The noise has the covariance

E[V_n V*_{n+l}] = N_0 r_l.
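The per-interval rotation can be sketched in a few lines. The bit-to-phase mapping below is an illustrative placeholder, not the exact EDGE Gray code; the check at the end shows one motivation commonly given for the 3π/8 value: no transition between consecutive rotated symbols ever passes through a phase jump of exactly π.

```python
import numpy as np

# Plain 8PSK phases and an example symbol sequence (indices are data)
phases = 2 * np.pi * np.arange(8) / 8
symbols = np.exp(1j * phases[[0, 1, 3, 2, 6, 7, 5, 4]])

# EDGE-style rotation: multiply the n-th symbol by exp(j*n*3*pi/8)
rotated = symbols * np.exp(1j * 3 * np.pi / 8 * np.arange(len(symbols)))

# Phase steps between consecutive rotated symbols: multiples of pi/4
# plus 3*pi/8, hence never exactly pi (a 180-degree transition).
d = np.mod(np.angle(rotated[1:] / rotated[:-1]), 2 * np.pi)
```

The rotation only changes phase transitions; the symbol magnitudes stay on the unit circle.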
70 R pp (τ) 1.0 0.9 0.8 0.7 0.6 0.5 0.4 0.3 0.2 0.1 0 5 4 3 2 1 0 1 2 3 4 5 τ/t Optimum Vector Decoder for EDGE For optimum vector decoding of EDGE in the AWGN channel, the Viterbi algorithm may be applied. The corresponding trellis has J = 8 2 = 64 states and 8 transitions per state. The branch metric is λ(u) = N 1 i=0 { ( R x n exp jn 3π 8 ) [ ( y n r 1 x n 1 exp j(n 1) 3π ( 8 r 2 X n 2 exp j(n 2) 3π )]} 8 )
Discrete-time Model

Using the complex-valued representation, the discrete-time model of EDGE is given by the following block diagram: [U_{n,1}, U_{n,2}, U_{n,3}] → X_n → rotation exp(jn·3π/8) → convolution with r_n → addition of noise V_n → 64-state VA → [Û_{n,1}, Û_{n,2}, Û_{n,3}]. The vector with information bits looks like

U = [ [U_{0,1}, U_{0,2}, U_{0,3}], ..., [U_{N−1,1}, U_{N−1,2}, U_{N−1,3}] ].

Burst format of EDGE

Time slot, normal burst: 3 tail symbols | 57 data symbols | 1 control symbol | training sequence (26 symbols) | 1 control symbol | 57 data symbols | 3 tail symbols | 8.25 guard symbols.
4 Trellis-Coded Modulation

4.1 Motivation

We consider the vector representation of a digital communication system transmitting across the AWGN channel:

[U_0, ..., U_{K−1}] ∈ {0,1}^K → Encoder → [X_0, ..., X_{N−1}] → (+ [W_0, ..., W_{N−1}]) → [Y_0, ..., Y_{N−1}] → Decoder → [Û_0, ..., Û_{K−1}] ∈ {0,1}^K.

We restrict our attention to the case where the dimension of the signal constellation X = {s_0, ..., s_{M−1}} is one or two.

Example:
MPAM: dim X = 1.
MPSK (M > 2) and QPAM: dim X = 2.

(Figure: example constellations — 2-AM, 4-AM, 8-AM, 16-AM; 4-PSK, 8-PSK; 8-AMPM, 32-AMPM; 16-QASK, 64-QASK.)
For these two cases, the vector channel is either of the form

dim X = 1:  Y_j = X_j + W_j,   W_j ~ N(0, N_0/2),   W_0, ..., W_{N−1} independent,

dim X = 2:  Y_j = X_j + W_j,   X_j = (X_{j1}, X_{j2})^T,   Y_j = (Y_{j1}, Y_{j2})^T,

W_j = (W_{j1}, W_{j2})^T ~ N( (0, 0)^T, diag(N_0/2, N_0/2) ).

In both cases, the noise samples are independent.
Rate of the encoder

The encoder maps bit-blocks onto modulation symbols x ∈ X = {s_0, ..., s_{M−1}}. Its rate is

R = K/N bits/symbol.

Two equivalent necessary conditions for the encoder to be one-to-one are

2^K ≤ M^N   ⇔   R = K/N ≤ log_2(M).

In the case of uncoded modulation, we have N = 1, K = log_2(M), and hence R = log_2(M).

Bit Error Probability

The BER is defined as

P_b = (1/K) Σ_{j=0}^{K−1} P[U_j ≠ Û_j].
Capacity of the AWGN Channel

With the constraint that the transmitted vectors belong to a finite signal constellation X = {s_1, ..., s_M}:

C_X = max_{p(s_1), ..., p(s_M)} Σ_{m=1}^{M} ∫ p(y | s_m) p(s_m) log_2 ( p(y | s_m) / Σ_{m'=1}^{M} p(y | s_{m'}) p(s_{m'}) ) dy.

Remark: The capacity C_X depends on the signal constellation X. Dropping the restriction that X is finite, while assuming a given average power E_s = E[|X_j|²],

C := log_2(1 + SNR),   SNR = E_s/N_0.

The dimension of C is bit / transmitted symbol.
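For equiprobable inputs, the quantity inside the maximization is the mutual information I(X; Y), which can be estimated by Monte Carlo. The sketch below does this for 2-AM (BPSK) on the real AWGN channel; the SNR value is an example choice, and the unnormalized Gaussian likelihood is sufficient because the normalization cancels in the log-ratio.

```python
import numpy as np

rng = np.random.default_rng(0)
Es, N0 = 1.0, 1.0                     # SNR = Es/N0 = 0 dB (example)
X = np.array([-1.0, 1.0]) * np.sqrt(Es)

n = 200_000
x = rng.choice(X, size=n)             # equiprobable 2-AM symbols
y = x + rng.normal(0.0, np.sqrt(N0 / 2), size=n)

def lik(y, s):
    # Gaussian likelihood up to a constant factor (cancels below)
    return np.exp(-(y - s) ** 2 / N0)

# I(X;Y) = E[ log2( p(y|x) / mean over x' of p(y|x') ) ]
num = lik(y, x)
den = (lik(y, X[0]) + lik(y, X[1])) / 2
C = np.mean(np.log2(num / den))
# C is roughly 0.7 bit/symbol here, below the Gaussian-input value
# 0.5*log2(1 + 2*Es/N0) ~= 0.79 for the real channel.
```

This is exactly the gap the figure on capacity bounds illustrates: a finite constellation saturates at log_2(M) and stays below the unconstrained Gaussian capacity at every SNR.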
Tight lower bounds for the capacities of the signal constellations given on page 72: (figure omitted).

Channel Coding Theorem: Provided R < C, P_b can be made arbitrarily small by selecting appropriate encoders. In other words, reliable communication is possible at rates R < C.

Converse of the Channel Coding Theorem: If R > C, reliable communication is impossible.
Example: Some comparisons

Uncoded 4PSK: R = 2 bits/T, P_b = 10^{−5} at SNR = 12.9 dB.
Coded 8PSK: C = 2 bits/symbol at 5.9 dB.
Thus, with coded 8PSK it is theoretically possible to transmit reliably 2 bits/symbol at SNR = 5.9 dB.

Coding gain of coded 8PSK transmitting 2 bits/symbol versus uncoded 4PSK at BER = 10^{−5}:

12.9 − 5.9 = 7 dB.

Coding gain of coded digital transmission across the AWGN channel with no restriction on the signal constellation except the average transmitted power, transmitting (on average) 2 bits/symbol, versus uncoded 4PSK at BER = 10^{−5}:

12.9 − 4.7 = 8.2 dB.

Similar figures for the above coding gains hold when other signal constellations are considered.

Remark: By doubling the number of vectors in a signal constellation X containing 2^K vectors, almost all is achieved in terms of coding gain for reliable transmission of K bits/T versus uncoded transmission using the signal constellation X. (The transmission rates are the same!)
79 A way to realize the above coding gains by doubling the number of vectors in the signal constellation is as follows:

(Block diagram: the K information bits U_{j,1}, \dots, U_{j,K} enter an encoder of rate r = K/(K+1), which outputs the K+1 coded bits V_{j,1}, \dots, V_{j,K+1}; a vector-mapping device \{0,1\}^{K+1} \to X maps these onto the symbol X_j, where |X| = 2^{K+1}.)

Open issues:
1. how to select the encoder, and
2. how to design the vector mapping,
to obtain a scheme for which P_b is small.

For the encoder, block coding or convolutional coding may be used. In many schemes, convolutional codes are applied due to their low decoding complexity. (The Viterbi algorithm is applicable.)

A scheme that consists of the combination of a convolutional encoder with a vector-mapping device is called Trellis-Coded Modulation (TCM).
80 Block Diagram of a TCM Scheme

(Block diagram: the K information bits U_{j,1}, \dots, U_{j,K} enter a convolutional encoder of rate r = K/(K+1) and memory length \nu; the K+1 coded bits V_{j,1}, \dots, V_{j,K+1} are mapped by a vector-mapping device \{0,1\}^{K+1} \to X onto the symbol X_j, where |X| = 2^{K+1}.)

The overall rate is given by

R = r \log_2 |X| = \frac{K}{K+1} (K+1) = K bits/T.

Objective: Find TCM schemes that achieve a good bit-error-rate performance.

Example: Coded 8PSK
(Block diagram: U_{j,1} drives a two-stage shift register producing V_{j,1}; together with U_{j,2} this yields the coded bits (V_3, V_2, V_1), which select one of the eight 8PSK points labeled 000, 001, \dots, 111.)
The parameters are r = 2/3, \nu = 2, and R = (2/3) \log_2 8 = 2 bits/T.
81 4.2 Convolutional Codes

Two examples of convolutional encoders. (See e.g. Proakis or Morelos-Zaragoza for a general description.)

Example 1:
(Encoder diagram: input bits U_{j,1}, U_{j,2}; U_{j,1} drives a shift register with two delay elements; output bits V_{j,1}, V_{j,2}, V_{j,3}.)

Code rate:
r = (number of input bits) / (number of output bits) = 2/3

Memory length:
\nu = number of shift registers = 2

Trellis:
(Trellis diagram with the four states 00, 10, 01, 11; the branches are labeled with (v_3, v_2, v_1) \in \{000, 001, \dots, 111\}.)
82 Example 2:
(Encoder diagram: systematic outputs V_{j,1} and V_{j,2} taken from U_{j,1} and U_{j,2}, and a third output V_{j,3} formed from delayed input bits via the two delay elements.)

Code rate: r = 2/3
Memory length: \nu = 2

Trellis:
(Trellis diagram with the four states 00, 10, 01, 11.)
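The shift-register-plus-XOR structure common to both examples can be sketched generically in Python. The tap masks used below are the classic rate-1/2, memory-2 taps (1,1,1) and (1,0,1), chosen purely for illustration; they are not the taps of the encoders in the figures:

```python
def conv_encode(bits, memory, taps):
    """Feed-forward convolutional encoder.

    One input bit per step; each output bit is the mod-2 sum of the tapped
    contents of the window (u_j, u_{j-1}, ..., u_{j-memory}).
    `taps` holds one 0/1 mask per output bit.
    """
    state = [0] * memory
    out = []
    for u in bits:
        window = [u] + state
        for mask in taps:
            out.append(sum(w & m for w, m in zip(window, mask)) % 2)
        state = window[:-1]          # shift the register by one position
    return out

encoded = conv_encode([1, 0, 1, 1], memory=2, taps=[(1, 1, 1), (1, 0, 1)])
assert encoded == [1, 1, 1, 0, 0, 0, 0, 1]
```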
83 4.3 Goodness of a TCM Scheme

Admissible Sequences
Let A denote the set of sequences (x_0, x_1, \dots) generated by a given TCM scheme. These sequences are called admissible for the considered TCM scheme.

Example 1 continued:
\{x_j\} = (s_0, s_0, s_0, s_0, s_0, \dots) \in A
\{x_j\} = (s_0, s_4, s_0, s_0, s_0, \dots) \in A
\{x_j\} = (s_2, s_1, s_2, s_0, s_0, \dots) \in A
\{x_j\} = (s_0, s_5, s_3, s_2, s_0, \dots) \notin A

(Figure: the corresponding paths in the trellis, and the 8PSK constellation with the squared distances (2-\sqrt{2})E_s, 2E_s, (2+\sqrt{2})E_s and 4E_s between signal points.)
84 Euclidean Distance

Let \{x_j\} and \{x'_j\} denote two sequences in A. The Euclidean distance between \{x_j\} and \{x'_j\} is

d(\{x_j\}, \{x'_j\}) = \left( \sum_{j=0}^{\infty} \| x_j - x'_j \|^2 \right)^{1/2}.

Example 1 continued:
d\big( (s_0, s_0, s_0, \dots), (s_0, s_4, s_0, \dots) \big) = \sqrt{4 E_s} = 2 \sqrt{E_s}
d\big( (s_0, s_0, s_0, \dots), (s_2, s_1, s_2, \dots) \big) = \sqrt{(2 + (2 - \sqrt{2}) + 2) E_s} \approx 2.1414 \sqrt{E_s}

Free Euclidean Distance
The minimum Euclidean distance between pairs of admissible sequences of a TCM scheme is called the free Euclidean distance of the TCM scheme:

d_f = \min_{\{x_j\}, \{x'_j\} \in A,\ \{x_j\} \neq \{x'_j\}} d(\{x_j\}, \{x'_j\})

Example 1 continued: It can be shown that

d_f = 2 \sqrt{E_s}.
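The two distances of the example can be reproduced numerically for an 8PSK constellation s_m = \sqrt{E_s} e^{j\pi m/4} (a sketch with our own helper names):

```python
import cmath
import math

Es = 1.0
s = [math.sqrt(Es) * cmath.exp(1j * math.pi * m / 4) for m in range(8)]  # 8PSK

def seq_distance(xs, ys):
    """Euclidean distance between two equal-length symbol sequences."""
    return math.sqrt(sum(abs(x - y) ** 2 for x, y in zip(xs, ys)))

# (s0, s0, s0) vs. (s0, s4, s0): one antipodal symbol -> d = 2 sqrt(Es)
assert abs(seq_distance([s[0]] * 3, [s[0], s[4], s[0]]) - 2 * math.sqrt(Es)) < 1e-12
# (s0, s0, s0) vs. (s2, s1, s2): d^2 = (2 + (2 - sqrt(2)) + 2) Es
d = seq_distance([s[0]] * 3, [s[2], s[1], s[2]])
assert abs(d - math.sqrt((6 - math.sqrt(2)) * Es)) < 1e-12
```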
85 Probability of a Sequence Error

Probability that the Viterbi decoder makes a wrong decision on the path in the trellis:

P_e \gtrsim N(d_f) \, Q\!\left( \frac{d_f}{\sqrt{2 N_0}} \right)

where
(i) Q(z) is the Gaussian tail function

Q(z) = \int_z^{\infty} \frac{1}{\sqrt{2\pi}} \exp\!\left( -\frac{u^2}{2} \right) du

(ii) N(d_f) is the average number of sequence pairs in A that have the Euclidean distance d_f.

Remark: The above lower bound for P_e is tight for high SNR. The reason is that in this case, when ML decoding is applied, an erroneously detected path is likely to have a Euclidean distance to the true transmitted sequence equal to d_f.

Remember that given a finite observed sequence \{y_j\} = (y_0, y_1, \dots, y_{L-1}), the Viterbi algorithm (which realizes the ML decoder) searches for the finite sequence generated by the trellis (with initial state 0) which minimizes

\sum_{j=0}^{L-1} \| y_j - x_j \|^2.
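The Q-function and the high-SNR estimate can be evaluated with the Python standard library (the function names are ours):

```python
import math

def Q(z: float) -> float:
    """Gaussian tail probability Q(z) = 0.5 * erfc(z / sqrt(2))."""
    return 0.5 * math.erfc(z / math.sqrt(2))

def sequence_error_estimate(d_f: float, n_df: float, N0: float) -> float:
    """High-SNR estimate P_e ~ N(d_f) * Q(d_f / sqrt(2 N0))."""
    return n_df * Q(d_f / math.sqrt(2 * N0))

assert abs(Q(0.0) - 0.5) < 1e-12     # half the Gaussian mass lies above 0
assert Q(3.0) < Q(2.0) < Q(1.0)      # the tail shrinks with z
# Increasing d_f (e.g. through set partitioning) lowers the estimate:
assert sequence_error_estimate(2.0, 1.0, 0.5) < sequence_error_estimate(1.0, 1.0, 0.5)
```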
86 Optimality Criterion

Optimality criterion for finding a vector mapping for a given convolutional code: Maximize the free Euclidean distance!

TCM schemes maximizing the free Euclidean distance among all TCM schemes with a specified memory length are called optimal.

The method proposed by Ungerboeck to identify good vector mappings for a given convolutional encoder (or equivalently, for a given trellis) relies on the technique of set partitioning.
87 4.4 Set Partitioning

Example: 8PSK
(Partitioning tree: the 8PSK constellation of radius \sqrt{E_s} is split by the bit v_1 into the subsets B_0 and B_1; each of these is split by v_2 into C_0, C_2 and C_1, C_3; these are in turn split by v_3 into the single points D_0, \dots, D_7, labeled 000, 100, 010, 110, 001, 101, 011, 111.)

The minimum intra-subset distances grow at each level:

d_0 = \sqrt{(2 - \sqrt{2}) E_s}, \qquad d_1 = \sqrt{2 E_s}, \qquad d_2 = 2 \sqrt{E_s}.

Goals of set partitioning:
1. The subsets should be similar.
2. The points inside each subset should be maximally separated.
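The growth of the intra-subset minimum distances d_0 < d_1 < d_2 can be verified numerically for 8PSK (a sketch; the subset indexing follows the partitioning tree above):

```python
import cmath
import math

Es = 1.0
pts = [math.sqrt(Es) * cmath.exp(1j * math.pi * m / 4) for m in range(8)]  # 8PSK

def min_dist(indices):
    """Minimum pairwise Euclidean distance inside a subset of 8PSK points."""
    return min(abs(pts[a] - pts[b])
               for i, a in enumerate(indices) for b in indices[i + 1:])

d0 = min_dist(list(range(8)))   # full 8PSK constellation
d1 = min_dist([0, 2, 4, 6])     # subset B0 (a QPSK)
d2 = min_dist([0, 4])           # subset C0 (an antipodal pair)
assert abs(d0 - math.sqrt((2 - math.sqrt(2)) * Es)) < 1e-12
assert abs(d1 - math.sqrt(2 * Es)) < 1e-12
assert abs(d2 - 2 * math.sqrt(Es)) < 1e-12
assert d0 < d1 < d2             # distances grow down the partition tree
```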
88 4.5 Trellis-Coded Modulation

General form of the encoding process:

(Block diagram: the first K_1 information bits U_{j,1}, \dots, U_{j,K_1} enter a convolutional encoder of rate r = K_1/N_1; its output bits V_{j,1}, \dots, V_{j,N_1} select one of the 2^{N_1} subsets of the partition. The remaining K - K_1 information bits U_{j,K_1+1}, \dots, U_{j,K} select one of the 2^{K-K_1} vectors within the chosen subset. The vector mapping outputs the symbol X_j.)

Remarks:
1. Number of states in the trellis: 2^\nu
2. Number of branches leaving each state: 2^{K_1}
3. Number of parallel branches in each transition: 2^{K-K_1}
89 Example 1 continued

Parameters: K = 2, K_1 = 1, N_1 = 2, N = 3.

(Figure: U_{j,1} drives the two-stage shift register producing V_{j,1} and V_{j,2}, which select the subset; U_{j,2} = V_{j,3} selects the point within the subset. The table maps (V_{j,3}, V_{j,2}, V_{j,1}) onto the 8PSK points 0, \dots, 7, and the trellis transitions are labeled with the subsets C_0 = (0,4), C_2 = (2,6), C_1 = (1,5), C_3 = (3,7).)
90 Rules for the Assignments

Experimentally inferred rules for assigning the signal vectors to the transitions:
1. All vectors in X are used with the same frequency.
2. The vectors assigned to branches originating from or merging into the same state belong to the same subset at level 1.
3. Vectors with maximum Euclidean distances are assigned to parallel branches.

Remarks:
Rule 1 guarantees that the trellis code has a regular structure.
Rules 2 and 3 guarantee that the minimum distance between paths in the trellis which diverge from any state and re-merge later exceeds the free Euclidean distance of the uncoded modulation scheme.

Example 1 continued:

(d_f)^2_{\text{coded 8PSK}} = 4 E_s > 2 E_s = (d_f)^2_{\text{uncoded 4PSK}}
91 Example 2 continued

Consider the following trellis code:
(Figure: the encoder of Example 2 combined with the mapping table from (V_{j,3}, V_{j,2}, V_{j,1}) onto the 8PSK points 0, \dots, 7.)

Despite the fact that this trellis code has no parallel branches, its free Euclidean distance is smaller than that of the trellis code of Example 1. (See Exercise 4.1.)
92 4.6 Examples for TCM In the following, some examples of specific TCM schemes are given and their error probabilities are discussed.
93 TCM with 8PSK and 2 bits/T
94 Implementation
95 Probability of error-event and of bit-error
96
97 TCM with 16QAM and 3 bits/T Set Partitioning
98
99 Error event probability
100 CCITT V.32 Standard CCITT V.32 standard for 9.6 kbit/s and 14.4 kbit/s modems. Convolutional Coder The code is designed to guarantee invariance to phase rotations of ±π/2 and π. These values are the phase ambiguities which result when a phase-locked loop (PLL) is employed for carrier-phase estimation.
101 Mapping
102 A ECB Representation of a Band-Pass Signal Band-pass signals can equivalently be represented as complex-valued base-band signals. This is often called the equivalent complex base-band (ECB) representation. This concept is discussed in more detail in the following.
103
104 A.1 The Block Diagram

(Block diagram of the QAM transmission chain: the symbol components X_{n,1} and X_{n,2} are converted to impulse trains \sum_n X_{n,i} \delta(t - nT), filtered with the transmit pulse p(t), and modulated with \sqrt{2}\cos(2\pi f_0 t) and -\sqrt{2}\sin(2\pi f_0 t), respectively, to form the band-pass signal X(t). After the AWGN channel, which adds W(t), the receiver demodulates Y(t) with \sqrt{2}\cos(2\pi f_0 t) and -\sqrt{2}\sin(2\pi f_0 t), applies the matched filter p(-t) in each branch, and samples at t = nT. The figure also indicates the assumption that P(f) is band-limited, with bandwidth B between 1/(2T) and 1/T, and the equivalent discrete-time channel r_n in each branch.)

The sampled outputs are

Y_{n,1} = r_n * X_{n,1} + V_{n,1}
Y_{n,2} = r_n * X_{n,2} + V_{n,2}

where * denotes discrete-time convolution.
105 Complex discrete-time representation:

X_n = X_{n,1} + j X_{n,2}
V_n = V_{n,1} + j V_{n,2}
Y_n = Y_{n,1} + j Y_{n,2}

Y_n = r_n * X_n + V_n

(Block diagram: X_n enters the discrete-time channel r_n, the noise V_n is added, yielding Y_n.)
106 A.2 The Signal

Consider a real-valued BP signal:

x(t) = x_1(t) \sqrt{2} \cos(2\pi f_0 t) - x_2(t) \sqrt{2} \sin(2\pi f_0 t)
     = \mathrm{Re}\{ [x_1(t) + j x_2(t)] [\sqrt{2} \cos(2\pi f_0 t) + j \sqrt{2} \sin(2\pi f_0 t)] \}
     = \mathrm{Re}\{ \tilde{x}(t) \sqrt{2} \exp(j 2\pi f_0 t) \}

with \tilde{x}(t) = x_1(t) + j x_2(t). The signal \tilde{x}(t) is a complex-valued LP signal, and it is called the ECB representation of x(t).

The spectra

X(f) = F\{x(t)\}, \qquad \tilde{X}(f) = F\{\tilde{x}(t)\}

are related by

X(f) = \frac{1}{\sqrt{2}} \left[ \tilde{X}(f - f_0) + \tilde{X}^*(-f - f_0) \right].

The energies of the two signals are equal: E_x = E_{\tilde{x}}.
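The identity x(t) = Re\{\tilde{x}(t)\sqrt{2}e^{j2\pi f_0 t}\} is easy to confirm numerically at a few time instants (f_0 and the sample values below are arbitrary choices of ours):

```python
import cmath
import math

f0 = 10.0  # carrier frequency, arbitrary for this check

def bandpass(x1: float, x2: float, t: float) -> float:
    """x(t) built from the quadrature components."""
    return (x1 * math.sqrt(2) * math.cos(2 * math.pi * f0 * t)
            - x2 * math.sqrt(2) * math.sin(2 * math.pi * f0 * t))

def from_ecb(x_ecb: complex, t: float) -> float:
    """x(t) built from the ECB signal: Re{x_ecb * sqrt(2) * exp(j 2 pi f0 t)}."""
    return (x_ecb * math.sqrt(2) * cmath.exp(2j * math.pi * f0 * t)).real

x1, x2 = 0.7, -1.3
for t in (0.0, 0.013, 0.27, 1.1):
    assert abs(bandpass(x1, x2, t) - from_ecb(complex(x1, x2), t)) < 1e-9
```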
107 A.3 Effect of QA Mod/Demod on Signal

x(t) = x_1(t) \sqrt{2} \cos(2\pi f_0 t) - x_2(t) \sqrt{2} \sin(2\pi f_0 t)

x_1(t) = \left[ \sum_i x_{i,1} \delta(t - iT) \right] * p(t)
x_2(t) = \left[ \sum_i x_{i,2} \delta(t - iT) \right] * p(t)

Assumption: p(t) is a LP signal, i.e., its Fourier transform P(f) = F\{p(t)\} has the property

P(f) = 0 for |f| > f_0.

Tricks:
2 \cos^2 x = 1 + \cos 2x
2 \sin^2 x = 1 - \cos 2x
2 \sin x \cos x = \sin 2x

In-phase component:

z_1(t) = \left[ x(t) \sqrt{2} \cos(2\pi f_0 t) \right] * p(-t)
       = \left[ x_1(t) \, 2\cos^2(2\pi f_0 t) - x_2(t) \, 2\sin(2\pi f_0 t)\cos(2\pi f_0 t) \right] * p(-t)
       = \left[ x_1(t) (1 + \cos(4\pi f_0 t)) - x_2(t) \sin(4\pi f_0 t) \right] * p(-t)
       = x_1(t) * p(-t) + \left[ x_1(t) \cos(4\pi f_0 t) - x_2(t) \sin(4\pi f_0 t) \right] * p(-t)

The second term vanishes: it is a BP signal centered at 2f_0 passed through the LP filter p(-t).
108 Hence

z_1(t) = x_1(t) * p(-t)
       = \left[ \sum_i x_{i,1} \delta(t - iT) \right] * p(t) * p(-t)
       = \sum_i x_{i,1} R_p(t - iT)

with R_p(t) = p(t) * p(-t). After sampling:

z_{1,n} = z_1(nT) = \sum_i x_{i,1} R_p(nT - iT) = \sum_i x_{i,1} r_{n-i} = x_{1,n} * r_n

with r_n = R_p(nT).

Quadrature component: (similar calculations)

z_{2,n} = z_2(nT) = x_{n,2} * r_n
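For a rectangular transmit pulse on [0, T), the sampled autocorrelation r_n = R_p(nT) has a single nonzero tap, i.e. there is no intersymbol interference. A small numerical sketch (the discretization is our own):

```python
import numpy as np

T, os = 1.0, 64                    # symbol period and samples per symbol
dt = T / os
p = np.ones(os)                    # rectangular pulse of amplitude 1 on [0, T)
Rp = np.convolve(p, p[::-1]) * dt  # samples of R_p(t) = p(t) * p(-t)
lags = (np.arange(len(Rp)) - (os - 1)) * dt  # lag of each sample of R_p

r0 = Rp[os - 1]                    # R_p(0) = pulse energy
assert abs(r0 - T) < 1e-9
# R_p is supported on (-T, T): every sample lies strictly inside that range,
# so r_n = R_p(nT) = 0 for every n != 0 (no intersymbol interference):
assert np.all(np.abs(lags) < T)
```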
109 (Figure: spectra illustrating the QA modulation and demodulation steps: the LP spectra of x_1(t) and x_2(t) of amplitude A; the BP spectra of x_1(t)\sqrt{2}\cos(2\pi f_0 t) and x_2(t)(-\sqrt{2})\sin(2\pi f_0 t), centered at \pm f_0 with amplitude A/2; the spectrum of their sum; the demodulated spectrum with image components at \pm 2f_0; and the spectrum after the LP filter p_{LP}(t), which removes the components at \pm 2f_0.)
110 A.4 Effect of QA Demodulation on Noise

Block Diagram
(Figure: the noise W(t) is multiplied by \sqrt{2}\cos(2\pi f_0 t) and -\sqrt{2}\sin(2\pi f_0 t), filtered with p(-t) to give Y_1(t) and Y_2(t), and sampled at t = nT to give Y_{n,1} = Y_1(nT) and Y_{n,2} = Y_2(nT).)

W(t): WGN with S_W(f) = N_0/2.

Noise before LP filtering:

Z_1(t) = W(t) \sqrt{2} \cos(2\pi f_0 t)

E[Z_1(t) Z_1(t+\tau)] = E[W(t) W(t+\tau)] \, (\sqrt{2})^2 \cos(2\pi f_0 t) \cos(2\pi f_0 (t+\tau))
= \frac{N_0}{2} \delta(\tau) \, 2 \cos^2(2\pi f_0 t)   (since \delta(\tau) is nonzero only for \tau = 0)
= \frac{N_0}{2} \delta(\tau) \left( 1 + \cos(4\pi f_0 t) \right)
= \frac{N_0}{2} \delta(\tau) + \frac{N_0}{2} \delta(\tau) \cos(4\pi f_0 t)
=: R_{Z_1}(\tau; t)

The second term is the time-variant part: the auto-correlation function is time-variant (periodic in t).
111 Noise after LP filtering

Assumption: p(t) is a LP signal, i.e., its Fourier transform P(f) = F\{p(t)\} has the property P(f) = 0 for |f| > f_0.

Autocorrelation function of Y_1(t), where Y_1(t) = [W(t) \sqrt{2} \cos(2\pi f_0 t)] * p(-t):

E[Y_1(t) Y_1(t+\tau)]
= E\left[ \int W(t-s) \sqrt{2} \cos(2\pi f_0 (t-s)) p(s)\, ds \int W(t+\tau-s') \sqrt{2} \cos(2\pi f_0 (t+\tau-s')) p(s')\, ds' \right]
= \iint E[W(t-s) W(t+\tau-s')] \, 2 \cos(2\pi f_0 (t-s)) \cos(2\pi f_0 (t+\tau-s')) p(s) p(s')\, ds\, ds'

With E[W(t-s) W(t+\tau-s')] = \frac{N_0}{2} \delta(\tau - s' + s), the integrand is nonzero only for s' = \tau + s. Integrating w.r.t. s':

= \frac{N_0}{2} \int 2 \cos^2(2\pi f_0 (t-s)) p(s) p(\tau+s)\, ds
= \frac{N_0}{2} \int \left( 1 + \cos(4\pi f_0 (t-s)) \right) p(s) p(\tau+s)\, ds
= \frac{N_0}{2} R_p(\tau) + \frac{N_0}{2} \int \cos(4\pi f_0 (t-s)) p(s) p(\tau+s)\, ds
112 So far:

E[Y_1(t) Y_1(t+\tau)] = \frac{N_0}{2} R_p(\tau) + \frac{N_0}{2} \underbrace{\int \cos(4\pi f_0 (t-s)) p(s) p(\tau+s)\, ds}_{(1)}, \qquad R_p(\tau) = p(\tau) * p(-\tau).

Consider the Fourier transform of (1) w.r.t. \tau. The Fourier transform of

\cos(4\pi f_0 (t-\tau)) = \cos(4\pi f_0 \tau - 4\pi f_0 t)

is

\frac{1}{2} \left[ \delta(f - 2f_0) \exp(-j4\pi f_0 t) + \delta(f + 2f_0) \exp(j4\pi f_0 t) \right].

Hence the Fourier transform of (1) is a sum of terms of the form P(f) P(\pm f - 2f_0), up to the phase factors \exp(\mp j4\pi f_0 t). Each such product is zero: wherever P(f) \neq 0 we have |f| < f_0, so the shifted factor is evaluated at a frequency of magnitude larger than f_0, where P vanishes by assumption. Therefore (1) = 0.
113 It follows that the autocorrelation function of Y_1(t) is time-invariant:

E[Y_1(t) Y_1(t+\tau)] = R_{Y_1}(\tau) = \frac{N_0}{2} R_p(\tau).

A.5 Noise After LP Filtering and Sampling

R_{Y_1}(kT) = \frac{N_0}{2} R_p(kT) = \frac{N_0}{2} r_k.
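With a unit-energy pulse whose sampled autocorrelation satisfies r_k = r_0 \delta_k, the noise samples are white with variance N_0/2. A discrete-time Monte-Carlo sketch (our own model, with a fixed seed):

```python
import numpy as np

rng = np.random.default_rng(0)
os, n_sym, N0 = 16, 20000, 2.0
w = rng.normal(0.0, np.sqrt(N0 / 2), size=os * n_sym)  # white noise, var N0/2
q = np.ones(os) / np.sqrt(os)        # unit-energy rectangular matched filter
y = np.convolve(w, q)                # matched filtering
yk = y[os - 1 :: os]                 # one sample per symbol period T

var = yk.var()                       # should be R_Y(0) = (N0/2) r_0 = N0/2
corr = np.mean(yk[:-1] * yk[1:]) / var
assert abs(var - N0 / 2) / (N0 / 2) < 0.05   # variance close to N0/2
assert abs(corr) < 0.05                      # adjacent samples ~ uncorrelated
```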