Digital Modulation 1
Digital Modulation 1
Lecture Notes

Ingmar Land and Bernard H. Fleury
Navigation and Communications
Department of Electronic Systems
Aalborg University, DK

Version: February 5, 2007
Contents

Part I: Basic Concepts

1 The Basic Constituents of a Digital Communication System
  1.1 Discrete Information Sources
  1.2 Considered System Model
  1.3 The Digital Transmitter
      Waveform Look-up Table; Waveform Synthesis; Canonical Decomposition
  1.4 The Additive White Gaussian-Noise Channel
  1.5 The Digital Receiver
      Bank of Correlators; Canonical Decomposition; Analysis of the Correlator Outputs; Statistical Properties of the Noise Vector
  1.6 The AWGN Vector Channel
  1.7 Vector Representation of a Digital Communication System
  1.8 Signal-to-Noise Ratio for the AWGN Channel

2 Optimum Decoding for the AWGN Channel
  2.1 Optimality Criterion
  2.2 Decision Rules
  2.3 Decision Regions
  2.4 Optimum Decision Rules
  2.5 MAP Symbol Decision Rule
  2.6 ML Symbol Decision Rule
  2.7 ML Decision for the AWGN Channel
  2.8 Symbol-Error Probability
  2.9 Bit-Error Probability
  2.10 Gray Encoding

Part II: Digital Communication Techniques

3 Pulse Amplitude Modulation (PAM)
  3.1 BPAM
      Signaling Waveforms; Decomposition of the Transmitter; ML Decoding; Vector Representation of the Transceiver; Symbol-Error Probability; Bit-Error Probability
  3.2 MPAM
      Signaling Waveforms; Decomposition of the Transmitter; ML Decoding; Vector Representation of the Transceiver; Symbol-Error Probability; Bit-Error Probability

4 Pulse Position Modulation (PPM)
  4.1 BPPM
      Signaling Waveforms; Decomposition of the Transmitter; ML Decoding; Vector Representation of the Transceiver; Symbol-Error Probability; Bit-Error Probability
  4.2 MPPM
      Signaling Waveforms; Decomposition of the Transmitter; ML Decoding; Symbol-Error Probability; Bit-Error Probability

5 Phase-Shift Keying (PSK)
      Signaling Waveforms; Signal Constellation; ML Decoding; Vector Representation of the MPSK Transceiver; Symbol-Error Probability; Bit-Error Probability

6 Offset Quadrature Phase-Shift Keying (OQPSK)
  6.1 Signaling Scheme
      Transceiver; Phase Transitions for QPSK and OQPSK

7 Frequency-Shift Keying (FSK)
  7.1 BFSK with Coherent Demodulation
      Signal Waveforms; Signal Constellation; ML Decoding; Vector Representation of the Transceiver; Error Probabilities
  7.2 MFSK with Coherent Demodulation
      Signal Waveforms; Signal Constellation; Vector Representation of the Transceiver; Error Probabilities
  7.3 BFSK with Noncoherent Demodulation
      Signal Waveforms; Signal Constellation; Noncoherent Demodulator; Conditional Probability of Received Vector; ML Decoding
  7.4 MFSK with Noncoherent Demodulation
      Signal Waveforms; Signal Constellation; Conditional Probability of Received Vector; ML Decision Rule; Optimal Noncoherent Receiver for MFSK; Symbol-Error Probability; Bit-Error Probability

8 Minimum-Shift Keying (MSK)
      Motivation; Transmit Signal; Signal Constellation; A Finite-State Machine Vector Encoder; Demodulator; Conditional Probability Density of Received Sequence; ML Sequence Estimation; Canonical Decomposition of MSK; The Viterbi Algorithm; Simplified ML Decoder; Bit-Error Probability; Differential Encoding and Differential Decoding; Precoded MSK; Gaussian MSK

Quadrature Amplitude Modulation
BPAM with Bandlimited Waveforms

Part III: Appendix

A Geometrical Representation of Signals
  A.1 Sets of Orthonormal Functions
  A.2 Signal Space Spanned by an Orthonormal Set of Functions
  A.3 Vector Representation of Elements in the Vector Space
  A.4 Vector Isomorphism
  A.5 Scalar Product and Isometry
  A.6 Waveform Synthesis
  A.7 Implementation of the Scalar Product
  A.8 Computation of the Vector Representation
  A.9 Gram-Schmidt Orthonormalization

B Orthogonal Transformation of a White Gaussian Noise Vector
  B.1 The Two-Dimensional Case
  B.2 The General Case

C The Shannon Limit
  C.1 Capacity of the Band-limited AWGN Channel
  C.2 Capacity of the Band-unlimited AWGN Channel
Part I: Basic Concepts
1 The Basic Constituents of a Digital Communication System

Block diagram of a digital communication system:

[Figure: Information Source → Source Encoder → U_k (rate 1/T_b) → Vector Encoder → Waveform Modulator → X(t) (rate 1/T) → Waveform Channel → Y(t) → Waveform Demodulator → Vector Decoder → Û_k (rate 1/T_b) → Source Decoder → Information Sink. The digital transmitter comprises the vector encoder and the waveform modulator; the digital receiver comprises the waveform demodulator and the vector decoder.]

Time duration of one binary symbol (bit) U_k: T_b
Time duration of one waveform X(t): T

The waveform channel may introduce
- distortion,
- interference,
- noise.

Goal: The signal of the information source and the signal at the information sink should be as similar as possible according to some measure.
Some properties:
- Information source: may be analog or digital.
- Source encoder: may perform sampling, quantization and compression; generates one binary symbol (bit) U_k ∈ {0, 1} per time interval T_b.
- Waveform modulator: generates one waveform x(t) per time interval T.
- Vector decoder: generates one binary symbol (bit) Û_k ∈ {0, 1} (estimate of U_k) per time interval T_b.
- Source decoder: reconstructs the original source signal.
1.1 Discrete Information Sources

A discrete memoryless source (DMS) is a source that generates a sequence U_1, U_2, ... of independent and identically distributed (i.i.d.) discrete random variables, called an i.i.d. sequence.

Properties of the output sequence:
- discrete: U_1, U_2, ... ∈ {a_1, a_2, ..., a_Q}.
- memoryless: U_1, U_2, ... are statistically independent.
- stationary: U_1, U_2, ... are identically distributed.

Notice: The symbol alphabet {a_1, a_2, ..., a_Q} has cardinality Q. The value of Q may be infinite; the elements of the alphabet only have to be countable.

Example: Discrete sources
(i) Output of the PC keyboard, SMS (usually not memoryless).
(ii) Compressed file (nearly memoryless).
(iii) The numbers drawn on a roulette table in a casino (ought to be memoryless, but may not be...).
A DMS is statistically completely described by the probabilities

  p_U(a_q) = Pr(U = a_q)

of the symbols a_q, q = 1, 2, ..., Q. Notice that
(i) p_U(a_q) ≥ 0 for all q = 1, 2, ..., Q, and
(ii) Σ_{q=1}^{Q} p_U(a_q) = 1.

As the sequence is i.i.d., we have Pr(U_i = a_q) = Pr(U_j = a_q). For convenience, we may write p_U(a) shortly as p(a).

Binary Symmetric Source

A binary memoryless source is a DMS with a binary symbol alphabet.

Remark: Commonly used binary symbol alphabets are F_2 := {0, 1} and B := {-1, +1}. In the following, we will use F_2.

A binary symmetric source (BSS) is a binary memoryless source with equiprobable symbols, p_U(0) = p_U(1) = 1/2.

Example: Binary Symmetric Source
All length-K sequences of a BSS have the same probability

  p_{U_1 U_2 ... U_K}(u_1 u_2 ... u_K) = Π_{i=1}^{K} p_U(u_i) = (1/2)^K.
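The equiprobability of all length-K sequences can be checked with a small sketch (not part of the notes; the function name is ours):

```python
from fractions import Fraction

def bss_sequence_probability(u):
    """Probability of a length-K output sequence of a BSS.

    For a BSS, p_U(0) = p_U(1) = 1/2 and the symbols are i.i.d., so every
    length-K sequence has probability (1/2)**K, regardless of its content.
    """
    p = Fraction(1)
    for bit in u:
        assert bit in (0, 1)     # BSS alphabet is F_2 = {0, 1}
        p *= Fraction(1, 2)      # p_U(u_i) = 1/2 for either symbol
    return p

print(bss_sequence_probability([0, 1, 1, 0]))   # every length-4 sequence: 1/16
```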
1.2 Considered System Model

This course focuses on the transmitter and the receiver. Therefore, we replace the information source and the source encoder by a BSS emitting U_1, U_2, ..., and the information sink and the source decoder by a binary (information) sink accepting Û_1, Û_2, ....
Some Remarks

(i) There is no loss of generality resulting from these substitutions. Indeed, it can be demonstrated within Shannon's information theory that an efficient source encoder converts the output of an information source into a random binary independent and uniformly distributed (i.u.d.) sequence. Thus, the output of a perfect source encoder looks like the output of a BSS.

(ii) Our main concern is the design of communication systems for reliable transmission of the output symbols of a BSS. We will not address the various methods for efficient source encoding.
Digital communication system considered in this course:

[Figure: BSS → U → Digital Transmitter → X(t) (bit rate 1/T_b, waveform rate 1/T) → Waveform Channel → Y(t) → Digital Receiver → Û → Binary Sink]

Source: U = [U_1, ..., U_K], U_i ∈ {0, 1}
Transmitter: X(t) = x(t, U)
Sink: Û = [Û_1, ..., Û_K], Û_i ∈ {0, 1}
Bit rate: 1/T_b
Rate (of waveforms): 1/T = 1/(K T_b)

Bit-error probability:

  P_b = (1/K) Σ_{k=1}^{K} Pr(U_k ≠ Û_k)

Objective: Design of efficient digital communication systems. Efficiency means a small bit-error probability and a high bit rate 1/T_b.
Constraints and limitations:
- limited power
- limited bandwidth
- impairments (distortion, interference, noise) of the channel

Design goals:
- good waveforms
- low-complexity transmitters
- low-complexity receivers
1.3 The Digital Transmitter

Waveform Look-up Table

Example: 4PPM (Pulse-Position Modulation)

Set of four different waveforms, S = {s_1(t), s_2(t), s_3(t), s_4(t)}:

[Figure: s_m(t) is a rectangular pulse of amplitude A occupying the m-th quarter of the signaling interval [0, T], i.e., s_1(t) is nonzero on [0, T/4], s_2(t) on [T/4, T/2], s_3(t) on [T/2, 3T/4], and s_4(t) on [3T/4, T].]

Each waveform X(t) ∈ S is addressed by a vector [U_1, U_2] ∈ {0, 1}²:

  [U_1, U_2] → X(t).

The mapping may be implemented by a waveform look-up table.

4PPM Transmitter:
  [0 0] → s_1(t)
  [0 1] → s_2(t)
  [1 0] → s_3(t)
  [1 1] → s_4(t)

Example: A sequence of four input vectors is transmitted as a sequence of four waveforms x(t) over [0, 4T], one waveform per signaling interval of length T.
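The 4PPM look-up table above can be sketched in code on a sampling grid (a sketch, not part of the notes; amplitude, interval and sample count are assumed values, and the zeros in the bit labels are restored in the natural order of the table):

```python
import numpy as np

A, T, N = 1.0, 1.0, 400            # assumed amplitude, signaling interval, samples
t = np.arange(N) * (T / N)

def ppm_waveform(m):
    """s_m(t): rectangular pulse of amplitude A in the m-th quarter of [0, T)."""
    s = np.zeros(N)
    s[(t >= (m - 1) * T / 4) & (t < m * T / 4)] = A
    return s

# Waveform look-up table: each 2-bit input vector addresses one waveform.
lookup = {(0, 0): ppm_waveform(1), (0, 1): ppm_waveform(2),
          (1, 0): ppm_waveform(3), (1, 1): ppm_waveform(4)}

x = lookup[(0, 1)]                 # transmit s_2(t) for the input vector [0, 1]
```

Each waveform has energy A²T/4 and the four pulses occupy disjoint time slots, which is what makes them mutually orthogonal.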
For an arbitrary digital transmitter, we have the following:

Digital Transmitter: [U_1, U_2, ..., U_K] → X(t), implemented as a waveform look-up table.

Input: Binary vectors of length K from the input set U:

  [U_1, U_2, ..., U_K] ∈ U := {0, 1}^K.

The duration of one binary symbol U_k is T_b.

Output: Waveforms of duration T from the output set S:

  x(t) ∈ S := {s_1(t), s_2(t), ..., s_M(t)}.

Waveform duration T means that for m = 1, 2, ..., M, s_m(t) = 0 for t ∉ [0, T].

The look-up table maps each input vector [u_1, ..., u_K] ∈ U to one waveform x(t) ∈ S. Thus the digital transmitter may be defined by a mapping U → S with [u_1, ..., u_K] → x(t). The mapping is one-to-one and onto, such that M = 2^K.

Relation between the signaling interval (waveform duration) T and the bit interval (bit duration) T_b:

  T = K T_b.
Waveform Synthesis

The set of waveforms, S := {s_1(t), s_2(t), ..., s_M(t)}, spans a vector space. Applying the Gram-Schmidt orthonormalization procedure, we can find a set of orthonormal functions S_ψ := {ψ_1(t), ψ_2(t), ..., ψ_D(t)} with D ≤ M such that the space spanned by S_ψ contains the space spanned by S. Hence, each waveform s_m(t) can be represented by a linear combination of the orthonormal functions:

  s_m(t) = Σ_{i=1}^{D} s_{m,i} ψ_i(t),   m = 1, 2, ..., M.

Each signal s_m(t) can thus be geometrically represented by the D-dimensional vector

  s_m = [s_{m,1}, s_{m,2}, ..., s_{m,D}]^T ∈ R^D

with respect to the set S_ψ. Further details are given in Appendix A.
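The Gram-Schmidt procedure can be sketched numerically on sampled waveforms (a sketch under the assumption that integrals are approximated by Riemann sums; function and variable names are ours):

```python
import numpy as np

def gram_schmidt(signals, dt, tol=1e-9):
    """Gram-Schmidt orthonormalization of sampled waveforms.

    `signals` is a list of equally sampled waveforms and `dt` the sample
    spacing, so the inner product <f, g> = integral f(t) g(t) dt is
    approximated by sum(f * g) * dt.  Returns an orthonormal basis with
    D <= M functions spanning the same space as the input signals.
    """
    basis = []
    for s in signals:
        r = np.asarray(s, dtype=float).copy()
        for psi in basis:
            r -= (np.sum(r * psi) * dt) * psi   # remove projection onto psi
        energy = np.sum(r * r) * dt
        if energy > tol:                        # keep only new directions
            basis.append(r / np.sqrt(energy))
    return basis
```

A linearly dependent waveform (e.g. the sum of two earlier ones) produces a near-zero residual and is discarded, which is why D can be smaller than M.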
Canonical Decomposition

The digital transmitter may be represented by a waveform look-up table:

  u = [u_1, ..., u_K] → x(t),

where u ∈ U and x(t) ∈ S. From the previous section, we know how to synthesize the waveform s_m(t) from s_m (see also Appendix A) with respect to a set of basis functions S_ψ := {ψ_1(t), ψ_2(t), ..., ψ_D(t)}. Making use of this method, we can split the waveform look-up table into a vector look-up table and a waveform synthesizer:

  u = [u_1, ..., u_K] → x = [x_1, x_2, ..., x_D] → x(t),

where

  u ∈ U = {0, 1}^K,
  x ∈ X = {s_1, s_2, ..., s_M} ⊂ R^D,
  x(t) ∈ S = {s_1(t), s_2(t), ..., s_M(t)}.

This splitting procedure leads to the sought canonical decomposition of a digital transmitter:

[Figure: [U_1, ..., U_K] enters the vector encoder (vector look-up table, [0...0] → s_1, ..., [1...1] → s_M), which outputs X_1, ..., X_D; the waveform modulator then forms X(t) = Σ_d X_d ψ_d(t) using the basis functions ψ_1(t), ..., ψ_D(t).]
Example: 4PPM

The four orthonormal basis functions are ψ_1(t), ..., ψ_4(t), where ψ_m(t) is a rectangular pulse of amplitude 2/√T on the m-th quarter of [0, T] (and zero elsewhere).

The canonical decomposition of the transmitter is the following, where √E_s = A√T/2 (i.e., E_s = A²T/4):

Vector look-up table:
  [0 0] → [√E_s, 0, 0, 0]^T
  [0 1] → [0, √E_s, 0, 0]^T
  [1 0] → [0, 0, √E_s, 0]^T
  [1 1] → [0, 0, 0, √E_s]^T

The waveform modulator then forms X(t) = Σ_{d=1}^{4} X_d ψ_d(t).

Remarks:
- The signal energy of each basis function is equal to one: ∫ ψ_d²(t) dt = 1 (due to their construction).
- The basis functions are orthonormal (due to their construction).
- The value √E_s is chosen such that the synthesized signals have the same energy as the original signals, namely E_s (compare Example 3).
1.4 The Additive White Gaussian-Noise Channel

The additive white Gaussian noise (AWGN) channel model is widely used in communications. The transmitted signal X(t) is superimposed by a stochastic noise signal W(t), such that the received signal reads

  Y(t) = X(t) + W(t).

The stochastic process W(t) is a stationary process with the following properties:

(i) W(t) is a Gaussian process, i.e., for each time t, the samples v = w(t) are Gaussian distributed with zero mean and variance σ²:

  p_V(v) = (1/√(2πσ²)) exp(-v²/(2σ²)).

(ii) W(t) has a flat power spectrum with height N_0/2:

  S_W(f) = N_0/2.

(Therefore it is called white.)

(iii) W(t) has the autocorrelation function

  R_W(τ) = (N_0/2) δ(τ).
Remarks

1. Autocorrelation function and power spectrum:

  R_W(τ) = E[W(t) W(t+τ)],   S_W(f) = F{R_W(τ)}.

2. For Gaussian processes, weak stationarity implies strong stationarity.

3. A WGN is an idealized process without physical reality: the process is so wild that its realizations are not ordinary functions of time, and its power is infinite. However, a WGN is a useful approximation of a noise with a flat power spectrum in the bandwidth B used by a communication system.

[Figure: a physical noise spectrum S_noise(f) that is flat at height N_0/2 over the system bandwidth, -B ≤ f ≤ B.]

4. In satellite communications, W(t) is the thermal noise of the receiver front-end. In this case, N_0/2 is proportional to the temperature.
1.5 The Digital Receiver

Consider a transmission system using the waveforms

  S = {s_1(t), s_2(t), ..., s_M(t)}

with s_m(t) = 0 for t ∉ [0, T], m = 1, 2, ..., M, i.e., with duration T. Assume transmission over an AWGN channel, such that

  y(t) = x(t) + w(t),   x(t) ∈ S.

Bank of Correlators

The transmitted waveform may be recovered from the received waveform y(t) by correlating y(t) with all possible waveforms:

  c_m = ⟨y(t), s_m(t)⟩,   m = 1, 2, ..., M.

Based on the correlations [c_1, ..., c_M], the waveform ŝ(t) with the highest correlation is chosen and the corresponding (estimated) source vector û = [û_1, ..., û_K] is output.

[Figure: y(t) feeds M parallel correlators ∫_0^T (·) s_m(t) dt, producing c_1, ..., c_M; estimation of x̂(t) and table look-up yield [û_1, ..., û_K].]

Disadvantage: High complexity.
Canonical Decomposition

Consider a set of orthonormal functions

  S_ψ = {ψ_1(t), ψ_2(t), ..., ψ_D(t)}

obtained by applying the Gram-Schmidt procedure to s_1(t), s_2(t), ..., s_M(t). Then,

  s_m(t) = Σ_{d=1}^{D} s_{m,d} ψ_d(t),   m = 1, 2, ..., M.

The vector s_m = [s_{m,1}, s_{m,2}, ..., s_{m,D}]^T entirely determines s_m(t) with respect to the orthonormal set S_ψ. The set S_ψ spans the vector space

  S_ψ := { s(t) = Σ_{i=1}^{D} s_i ψ_i(t) : [s_1, s_2, ..., s_D] ∈ R^D }.

Basic Idea
- The received waveform y(t) may contain components outside of S_ψ. However, only the components inside S_ψ are relevant, as x(t) ∈ S_ψ.
- Determine the vector representation y of the components of y(t) that are inside S_ψ. This vector y is sufficient for estimating the transmitted waveform.
Canonical decomposition of the optimal receiver for the AWGN channel

Appendix A describes two ways to compute the vector representation of a waveform. Accordingly, the demodulator may be implemented in two ways, leading to the following two receiver structures.

Correlator-based demodulator and vector decoder:
[Figure: Y(t) is multiplied by each ψ_d(t) and integrated, ∫_0^T (·) dt, to obtain Y_1, ..., Y_D, which feed the vector decoder producing [Û_1, ..., Û_K].]

Matched-filter-based demodulator and vector decoder:
[Figure: Y(t) passes through matched filters with impulse responses ψ_d(T - t); sampling the filter outputs at time T yields Y_1, ..., Y_D, which feed the vector decoder producing [Û_1, ..., Û_K].]

The optimality of these receiver structures is shown in the following sections.
Analysis of the Correlator Outputs

Assume that the waveform s_m(t) is transmitted, i.e., x(t) = s_m(t). The received waveform is thus

  y(t) = x(t) + w(t) = s_m(t) + w(t).

The correlator outputs are

  y_d = ∫_0^T y(t) ψ_d(t) dt
      = ∫_0^T [s_m(t) + w(t)] ψ_d(t) dt
      = ∫_0^T s_m(t) ψ_d(t) dt + ∫_0^T w(t) ψ_d(t) dt
      = s_{m,d} + w_d,

for d = 1, 2, ..., D. Using the notation

  y = [y_1, ..., y_D]^T,   w = [w_1, ..., w_D]^T,   w_d = ∫_0^T w(t) ψ_d(t) dt,

we can recast the correlator outputs in the compact form

  y = s_m + w.

The waveform channel between x(t) = s_m(t) and y(t) is thus transformed into a vector channel between x = s_m and y.
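The correlator front end can be sketched numerically (a sketch, not part of the notes; the basis of unit-energy rectangles is an assumed example). In the noiseless case the outputs are exactly the coefficients s_{m,d}:

```python
import numpy as np

def correlator_outputs(y, basis, dt):
    """Demodulator front end: y_d = integral y(t) psi_d(t) dt, d = 1..D,
    approximated on a sampling grid with spacing dt."""
    return np.array([np.sum(y * psi) * dt for psi in basis])

# Noiseless check: for y(t) = sum_d s_d psi_d(t) the outputs equal the s_d.
dt = 0.01
t = np.arange(100) * dt
psi1 = (t < 0.5) / np.sqrt(0.5)      # unit-energy rectangles (assumed basis)
psi2 = (t >= 0.5) / np.sqrt(0.5)
y = 3.0 * psi1 - 1.0 * psi2
```

Adding noise to y simply adds the projected noise components w_d to these outputs, as derived above.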
Remark: Since the vector decoder sees noisy observations of the vectors s_m, these vectors are also called signal points with respect to S_ψ, and the set of signal points is called the signal constellation.

From Appendix A, we know that s_m entirely characterizes s_m(t). The vector y, however, does not represent the complete waveform y(t); it represents only the waveform

  y'(t) = Σ_{d=1}^{D} y_d ψ_d(t),

which is the part of y(t) within the vector space spanned by the set S_ψ of orthonormal functions, i.e., y'(t) ∈ S_ψ. The remaining part of y(t), i.e., the part outside of S_ψ, reads

  y''(t) = y(t) - y'(t)
        = y(t) - Σ_{d=1}^{D} y_d ψ_d(t)
        = s_m(t) + w(t) - Σ_{d=1}^{D} [s_{m,d} + w_d] ψ_d(t)
        = [ s_m(t) - Σ_{d=1}^{D} s_{m,d} ψ_d(t) ] + w(t) - Σ_{d=1}^{D} w_d ψ_d(t)
        = w(t) - Σ_{d=1}^{D} w_d ψ_d(t),

since s_m(t) = Σ_{d=1}^{D} s_{m,d} ψ_d(t).
Observations
- The part of y(t) outside the signal space contains no information about s_m(t); it consists of noise only.
- The part of y(t) inside the signal space is completely represented by y.

Therefore, the receiver structures do not lead to a loss of information, and so they are optimal.
Statistical Properties of the Noise Vector

The components of the noise vector w = [w_1, ..., w_D]^T are given by

  w_d = ∫_0^T w(t) ψ_d(t) dt,   d = 1, 2, ..., D.

The random variables W_d are Gaussian as they result from a linear transformation of the Gaussian process W(t). Hence the probability density function of W_d is of the form

  p_{W_d}(w_d) = (1/√(2πσ_d²)) exp( -(w_d - μ_d)² / (2σ_d²) ),

where the mean value μ_d and the variance σ_d² are given by

  μ_d = E[W_d],   σ_d² = E[(W_d - μ_d)²].

These values are now determined.

Mean value: Using the definition of W_d and taking into account that the process W(t) has zero mean, we obtain

  E[W_d] = E[ ∫_0^T W(t) ψ_d(t) dt ] = ∫_0^T E[W(t)] ψ_d(t) dt = 0.

Thus, all random variables W_d have zero mean.
Variance: For computing the variance, we use μ_d = 0, the definition of W_d, and the autocorrelation function of W(t), R_W(τ) = (N_0/2) δ(τ). We obtain the following chain of equations:

  E[(W_d - μ_d)²] = E[W_d²]
    = E[ ( ∫_0^T W(t) ψ_d(t) dt ) ( ∫_0^T W(t') ψ_d(t') dt' ) ]
    = ∫_0^T ∫_0^T E[W(t) W(t')] ψ_d(t) ψ_d(t') dt dt'        (E[W(t)W(t')] = R_W(t' - t))
    = (N_0/2) ∫_0^T [ ∫_0^T δ(t' - t) ψ_d(t') dt' ] ψ_d(t) dt
    = (N_0/2) ∫_0^T ψ_d²(t) dt
    = N_0/2.

Distribution of the noise vector components: Thus we have μ_d = 0 and σ_d² = N_0/2, and the distributions read

  p_{W_d}(w_d) = (1/√(πN_0)) exp( -w_d² / N_0 )

for d = 1, 2, ..., D. Notice that the components W_d are all identically distributed.
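The result σ_d² = N_0/2 can be checked by simulation (a sketch, not part of the notes: on a sampling grid with spacing dt, white noise of PSD N_0/2 is mimicked by i.i.d. samples of variance N_0/(2 dt), so that the Riemann-sum projection below reproduces the continuous-time variance):

```python
import numpy as np

rng = np.random.default_rng(0)
N0, dt = 2.0, 0.01
psi = np.ones(100)                     # unit energy: sum(psi**2) * dt = 1

# Discrete-time stand-in for W(t): i.i.d. samples of variance N0 / (2 dt).
trials = 20000
w = rng.normal(0.0, np.sqrt(N0 / (2 * dt)), size=(trials, psi.size))
w_d = (w * psi).sum(axis=1) * dt       # w_d = integral w(t) psi(t) dt
# w_d.mean() is close to 0 and w_d.var() is close to N0/2 = 1.0
```

Note that the result does not depend on which unit-energy ψ_d is used, only on its unit energy, exactly as in the derivation above.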
As the Gaussian distribution is very important, the shorthand notation A ~ N(μ, σ²) is often used. This notation means: the random variable A is Gaussian distributed with mean value μ and variance σ². Using this notation, we have for the components of the noise vector

  W_d ~ N(0, N_0/2),   d = 1, 2, ..., D.

Distribution of the noise vector: Using the same reasoning as before, we can conclude that W is a Gaussian noise vector, i.e., W ~ N(μ, Σ), where the expectation μ ∈ R^D and the covariance matrix Σ ∈ R^{D×D} are given by

  μ = E[W],   Σ = E[(W - μ)(W - μ)^T].

From the previous considerations, we immediately have μ = 0. Using a similar approach as for computing the variances, the statistical independence of the components of W can be shown. That is,

  E[W_i W_j] = (N_0/2) δ_{i,j},

where δ_{i,j} denotes the Kronecker symbol defined as δ_{i,j} = 1 for i = j, and δ_{i,j} = 0 otherwise.
As a consequence, the covariance matrix of W is

  Σ = (N_0/2) I_D,

where I_D denotes the D×D identity matrix. To sum up, the noise vector W is distributed as

  W ~ N(0, (N_0/2) I_D).

As the components of W are statistically independent, the probability density function of W can be written as

  p_W(w) = Π_{d=1}^{D} p_{W_d}(w_d)
         = Π_{d=1}^{D} (1/√(πN_0)) exp( -w_d² / N_0 )
         = (πN_0)^{-D/2} exp( -(1/N_0) Σ_{d=1}^{D} w_d² )
         = (πN_0)^{-D/2} exp( -(1/N_0) ||w||² ),

with ||w|| denoting the Euclidean norm

  ||w|| = ( Σ_{d=1}^{D} w_d² )^{1/2}

of the D-dimensional vector w = [w_1, ..., w_D]^T. The independence of the components of W gives rise to the AWGN vector channel.
1.6 The AWGN Vector Channel

The additive white Gaussian noise vector channel is defined as a channel with input vector

  X = [X_1, X_2, ..., X_D]^T,

output vector

  Y = [Y_1, Y_2, ..., Y_D]^T,

and the relation

  Y = X + W,

where W = [W_1, W_2, ..., W_D]^T is a vector of D independent random variables distributed as W_d ~ N(0, N_0/2).

(As nobody would have expected differently...)
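The vector channel is a one-liner in simulation (a sketch, not part of the notes; the function name is ours):

```python
import numpy as np

def awgn_vector_channel(x, N0, rng):
    """Y = X + W with W ~ N(0, (N0/2) I_D): i.i.d. Gaussian noise of
    variance N0/2 added to every dimension of the input vector(s)."""
    x = np.asarray(x, dtype=float)
    return x + rng.normal(0.0, np.sqrt(N0 / 2.0), size=x.shape)
```

Passing a whole batch of transmit vectors at once works because the noise is drawn independently for every dimension of every vector.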
1.7 Vector Representation of a Digital Communication System

Consider a digital communication system transmitting over an AWGN channel. Let s_1(t), s_2(t), ..., s_M(t), with s_m(t) = 0 for t ∉ [0, T], m = 1, 2, ..., M, denote the waveforms. Let further S_ψ = {ψ_1(t), ψ_2(t), ..., ψ_D(t)} denote a set of orthonormal functions for these waveforms.

The canonical decompositions of the digital transmitter and the digital receiver convert the block formed by the waveform modulator, the AWGN channel, and the waveform demodulator into an AWGN vector channel.

[Figure: the waveform modulator maps [X_1, ..., X_D] to X(t) = Σ_d X_d ψ_d(t); the noise W(t) is added to give Y(t); the bank of correlators ∫_0^T (·) ψ_d(t) dt recovers [Y_1, ..., Y_D]. This chain is equivalent to the AWGN vector channel [Y_1, ..., Y_D] = [X_1, ..., X_D] + [W_1, ..., W_D].]
Using this equivalence, a communication system operating over a (waveform) AWGN channel can be represented as a communication system operating over an AWGN vector channel:

[Figure: BSS → U → Vector Encoder → X → AWGN Vector Channel (+W) → Y → Vector Decoder → Û → Binary Sink]

Source vector: U = [U_1, ..., U_K], U_i ∈ {0, 1}
Transmit vector: X = [X_1, ..., X_D]
Receive vector: Y = [Y_1, ..., Y_D]
Estimated source vector: Û = [Û_1, ..., Û_K], Û_i ∈ {0, 1}
AWGN vector: W = [W_1, ..., W_D], W ~ N(0, (N_0/2) I_D)

This is the vector representation of a digital communication system transmitting over an AWGN channel.
For estimating the transmitted vector x, we are interested in the likelihood of x for the given received vector y:

  p_{Y|X}(y|x) = p_W(y - x) = (πN_0)^{-D/2} exp( -(1/N_0) ||y - x||² ).

Symbolically, we may write

  Y | X = s_m  ~  N(s_m, (N_0/2) I_D).

Remark: The likelihood of x is the conditional probability density function of y given x, regarded as a function of x.

Example: Signal buried in AWGN. Consider D = 3 and x = s_m. Then the received vectors form a Gaussian cloud around the signal point s_m in the coordinate system spanned by ψ_1, ψ_2, ψ_3.
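The likelihood formula translates directly into code (a sketch, not part of the notes; the function name is ours):

```python
import numpy as np

def likelihood(y, x, N0):
    """Likelihood p_{Y|X}(y|x) = (pi N0)^(-D/2) exp(-||y - x||^2 / N0)
    of the signal point x for the observation y (AWGN vector channel)."""
    y, x = np.asarray(y, dtype=float), np.asarray(x, dtype=float)
    D = y.size
    return (np.pi * N0) ** (-D / 2) * np.exp(-np.sum((y - x) ** 2) / N0)
```

As expected, the likelihood is largest for the signal point closest to the observation, which is the basis of the ML decision rule derived in Chapter 2.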
1.8 Signal-to-Noise Ratio for the AWGN Channel

Consider a digital communication system using the set of waveforms

  S = {s_1(t), s_2(t), ..., s_M(t)},

s_m(t) = 0 for t ∉ [0, T], m = 1, 2, ..., M, for transmission over an AWGN channel with power spectrum S_W(f) = N_0/2.

Let E_{s_m} denote the energy of waveform s_m(t), and let E_s denote the average waveform energy:

  E_{s_m} = ∫_0^T ( s_m(t) )² dt,   E_s = (1/M) Σ_{m=1}^{M} E_{s_m}.

Thus, E_s is the average energy per waveform x(t) and also the average energy per vector x (due to the one-to-one correspondence). The average signal-to-noise ratio (SNR) per waveform is then defined as

  γ_s = E_s / N_0.
On the other hand, the average energy per source bit u_k is of interest. Since K source bits u_k are transmitted per waveform, the average energy per bit is

  E_b = E_s / K.

Accordingly, the average signal-to-noise ratio (SNR) per bit is defined as

  γ_b = E_b / N_0.

As K bits are transmitted per waveform, we have the relation

  γ_b = γ_s / K.

Remark: The energies and signal-to-noise ratios usually denote received values.
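These energy and SNR definitions can be sketched numerically (a sketch, not part of the notes; the antipodal rectangular waveforms and all parameter values are assumed examples):

```python
import numpy as np

def average_energies(signals, dt, K):
    """Average waveform energy E_s and energy per bit E_b = E_s / K,
    with each E_{s_m} approximated by sum(s_m**2) * dt."""
    Es = float(np.mean([np.sum(np.asarray(s) ** 2) * dt for s in signals]))
    return Es, Es / K

# Example: two antipodal rectangles of amplitude +/-A on [0, T), K = 1 bit.
A, T, N = 2.0, 1.0, 1000
dt = T / N
s1, s2 = A * np.ones(N), -A * np.ones(N)
Es, Eb = average_energies([s1, s2], dt, K=1)   # Es = A**2 * T
N0 = 1.0
gamma_s, gamma_b = Es / N0, Eb / N0            # SNR per waveform and per bit
```

With K = 1 the two SNR definitions coincide; for K > 1 the per-bit SNR is smaller by the factor K.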
2 Optimum Decoding for the AWGN Channel

2.1 Optimality Criterion

In the previous chapter, the vector representation of a digital transceiver transmitting over an AWGN channel was derived.

[Figure: BSS → U → Vector Encoder → X → AWGN Vector Channel (+W) → Y → Vector Decoder → Û → Binary Sink]

Nomenclature:

  u = [u_1, ..., u_K] ∈ U,   U = {0, 1}^K,
  x = [x_1, ..., x_D] ∈ X,   X = {s_1, ..., s_M},
  û = [û_1, ..., û_K] ∈ U,
  y = [y_1, ..., y_D] ∈ R^D.

U: set of all binary sequences of length K.
X: set of vector representations of the waveforms.

The set X is called the signal constellation, and its elements are called signals, signal points, or modulation symbols. For convenience, we may refer to the u_k as bits and to x as a symbol. (But we have to be careful not to cause ambiguity.)
As the vector representation implies no loss of information, recovering the transmitted block u = [u_1, ..., u_K] from the received waveform y(t) is equivalent to recovering u from the vector y = [y_1, ..., y_D].

Functionality of the vector encoder:

  enc: U → X,   u → x = enc(u).

Functionality of the vector decoder:

  dec_u: R^D → U,   y → û = dec_u(y).

Optimality criterion: A vector decoder is called an optimal vector decoder if it minimizes the block error probability Pr(U ≠ Û).

2.2 Decision Rules

Since the encoder mapping is one-to-one, the functionality of the vector decoder can be decomposed into the following two operations:
(i) Modulation-symbol decision: dec: R^D → X, y → x̂ = dec(y).
(ii) Encoder inverse: enc⁻¹: X → U, x̂ → û = enc⁻¹(x̂).
Thus, we have the following chain of mappings:

  y → (dec) → x̂ → (enc⁻¹) → û.

[Block diagram: the vector decoder consists of the modulation-symbol decision dec: R^D → X followed by the encoder inverse enc⁻¹: X → U; the encoder enc: U → X and the channel noise W precede it.]

A decision rule is a function which maps each observation y ∈ R^D to a modulation symbol x ∈ X, i.e., to an element of the signal constellation X.

Example: A decision rule maps each observation y^(1), y^(2), y^(3), ... ∈ R^D to one of the signal points {s_1, s_2, ..., s_M} = X.
2.3 Decision Regions

A decision rule dec: R^D → X defines a partition of R^D into decision regions

  D_m := { y ∈ R^D : dec(y) = s_m },   m = 1, 2, ..., M.

[Figure: the observation space R^D partitioned into regions D_1, D_2, ..., D_M, one per signal point of {s_1, s_2, ..., s_M} = X.]

Since the decision regions D_1, D_2, ..., D_M form a partition, they have the following properties:
(i) They are disjoint: D_i ∩ D_j = ∅ for i ≠ j.
(ii) They cover the whole observation space R^D: D_1 ∪ D_2 ∪ ... ∪ D_M = R^D.

Obviously, any partition of R^D defines a decision rule as well.
2.4 Optimum Decision Rules

Optimality criterion for decision rules: Since the mapping enc: U → X of the vector encoder is one-to-one, we have

  Û ≠ U  ⟺  X̂ ≠ X.

Therefore, the decision rule of an optimal vector decoder minimizes the modulation-symbol error probability Pr(X̂ ≠ X). Such a decision rule is called optimal.

Sufficient condition for an optimal decision rule: Let dec: R^D → X be a decision rule which minimizes the a-posteriori probability of making a decision error,

  Pr( dec(y) ≠ X | Y = y ),

for every observation y. Then dec is optimal.

Remark: The a-posteriori probability of a decoding error is the probability of the event {dec(y) ≠ X} conditioned on the event {Y = y}.
Proof: Let dec: R^D → X denote any decision rule, and let dec_opt: R^D → X denote a decision rule satisfying the condition above. Then we have

  Pr( dec(Y) ≠ X ) = ∫_{R^D} Pr( dec(y) ≠ X | Y = y ) p_Y(y) dy
    ≥ ∫_{R^D} Pr( dec_opt(y) ≠ X | Y = y ) p_Y(y) dy      (*)
    = Pr( dec_opt(Y) ≠ X ).

(*): by definition of dec_opt, we have

  Pr( dec(y) ≠ X | Y = y ) ≥ Pr( dec_opt(y) ≠ X | Y = y )

for every y ∈ R^D.
2.5 MAP Symbol Decision Rule

Starting from the sufficient condition for an optimal decision rule given above, we now derive a method to make an optimal decision. The corresponding decision rule is called the maximum a-posteriori (MAP) decision rule.

Consider an optimal decision rule dec: y → x̂ = dec(y), where the estimated modulation symbol is denoted by x̂. As the decision rule is assumed to be optimal, the probability

  Pr( x̂ ≠ X | Y = y )

is minimal. For any x ∈ X, we have

  Pr( X ≠ x | Y = y ) = 1 - Pr( X = x | Y = y )
                      = 1 - p_{X|Y}(x|y)
                      = 1 - p_{Y|X}(y|x) p_X(x) / p_Y(y),

where Bayes' rule has been used in the last line. Notice that Pr(X = x | Y = y) = p_{X|Y}(x|y) is the a-posteriori probability of {X = x} given {Y = y}.
Using the above equalities, we can conclude that

  Pr( X ≠ x | Y = y ) is minimal
  ⟺ p_{X|Y}(x|y) is maximal
  ⟺ p_{Y|X}(y|x) p_X(x) is maximal,

since p_Y(y) does not depend on x. Thus, the estimated modulation symbol x̂ = dec(y) resulting from the optimal decision rule satisfies the two equivalent conditions

  p_{X|Y}(x̂|y) ≥ p_{X|Y}(x|y) for all x ∈ X;
  p_{Y|X}(y|x̂) p_X(x̂) ≥ p_{Y|X}(y|x) p_X(x) for all x ∈ X.

These two conditions may be used to define the optimal decision rule.

Maximum a-posteriori (MAP) decision rule:

  x̂ = dec_MAP(y) := argmax_{x ∈ X} { p_{X|Y}(x|y) }
                  = argmax_{x ∈ X} { p_{Y|X}(y|x) p_X(x) }.

The decision rule is called the MAP decision rule as the selected modulation symbol x̂ maximizes the a-posteriori probability density function p_{X|Y}(x|y). Notice that the two formulations are equivalent.
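For the AWGN vector channel, the MAP rule can be sketched as follows (a sketch, not part of the notes; maximizing p(y|x)p(x) is equivalent to maximizing its logarithm, log p(x) - ||y - x||²/N_0, which is numerically more convenient):

```python
import numpy as np

def map_decision(y, constellation, priors, N0):
    """MAP rule: argmax over x of p(y|x) p(x).  For the AWGN vector channel
    this equals argmax of log p(x) - ||y - x||^2 / N0 (a monotone transform).
    Returns the index of the decided signal point."""
    y = np.asarray(y, dtype=float)
    metrics = [np.log(p) - np.sum((y - np.asarray(x)) ** 2) / N0
               for x, p in zip(constellation, priors)]
    return int(np.argmax(metrics))
```

With equal priors the log-prior term is constant and the rule reduces to picking the nearest signal point; a strongly skewed prior can override the distance, as the second assertion below illustrates.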
2.6 ML Symbol Decision Rule

If the modulation symbols at the output of the vector encoder are equiprobable, i.e., if

  Pr(X = s_m) = 1/M for m = 1, 2, ..., M,

the MAP decision rule reduces to the following decision rule.

Maximum-likelihood (ML) decision rule:

  x̂ = dec_ML(y) := argmax_{x ∈ X} { p_{Y|X}(y|x) }.

In this case, the selected modulation symbol x̂ maximizes the likelihood function p_{Y|X}(y|x) of x.
2.7 ML Decision for the AWGN Channel

Using the canonical decompositions of the transmitter and of the receiver, we obtain the AWGN vector channel with input X ∈ X, output Y ∈ R^D, and noise W ~ N(0, (N_0/2) I_D). The conditional probability density function of Y given X = x ∈ X is

  p_{Y|X}(y|x) = (πN_0)^{-D/2} exp( -(1/N_0) ||y - x||² ).

Since exp(·) is a monotonically increasing function, we obtain the following formulation of the ML decision rule:

  x̂ = dec_ML(y) = argmax_{x ∈ X} { p_{Y|X}(y|x) }
               = argmax_{x ∈ X} { exp( -(1/N_0) ||y - x||² ) }
               = argmin_{x ∈ X} { ||y - x||² }.

Thus, the ML decision rule for the AWGN vector channel selects the vector in the signal constellation X that is closest to the observation y, where the distance measure is the squared Euclidean distance.
Implementation of the ML decision rule

The squared Euclidean distance can be expanded as

  ||y - x||² = ||y||² - 2⟨y, x⟩ + ||x||².

The term ||y||² is irrelevant for the minimization with respect to x. Thus we get the following equivalent formulations of the ML decision rule:

  x̂ = dec_ML(y) = argmin_{x ∈ X} { -2⟨y, x⟩ + ||x||² }
               = argmax_{x ∈ X} { 2⟨y, x⟩ - ||x||² }
               = argmax_{x ∈ X} { ⟨y, x⟩ - ||x||²/2 }.

[Block diagram of the ML decision rule for the AWGN vector channel: the observation y is correlated with each signal point, ⟨y, s_m⟩, the bias ||s_m||²/2 is subtracted, and the largest result is selected to output x̂. The vector correlator computes ⟨a, b⟩ = Σ_{d=1}^{D} a_d b_d for a, b ∈ R^D.]
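Both formulations can be sketched side by side to confirm their equivalence (a sketch, not part of the notes; the function names are ours):

```python
import numpy as np

def ml_distance(y, constellation):
    """ML decision for AWGN: index of the signal point closest to y
    in squared Euclidean distance."""
    y = np.asarray(y, dtype=float)
    return int(np.argmin([np.sum((y - np.asarray(x)) ** 2) for x in constellation]))

def ml_correlation(y, constellation):
    """Equivalent correlation form: maximize <y, x> - ||x||^2 / 2."""
    y = np.asarray(y, dtype=float)
    return int(np.argmax([np.dot(y, x) - 0.5 * np.dot(x, x) for x in constellation]))
```

The correlation form is the one shown in the block diagram: M correlators plus precomputed bias terms, with no need to square the observation.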
Remark: Some signal constellations X have the property that ||x||² has the same value for all x ∈ X. In these cases (but only then), the ML decision rule simplifies further to

  x̂ = dec_ML(y) = argmax_{x ∈ X} { ⟨y, x⟩ }.

2.8 Symbol-Error Probability

The error probability for modulation symbols X, or for short, the symbol-error probability, is defined as

  P_s = Pr( X̂ ≠ X ) = Pr( dec(Y) ≠ X ).

The symbol-error probability coincides with the block-error probability:

  P_s = Pr( Û ≠ U ) = Pr( [Û_1, ..., Û_K] ≠ [U_1, ..., U_K] ).

We compute P_s in two steps:
(i) Compute the conditional symbol-error probability given X = x for any x ∈ X: Pr( X̂ ≠ X | X = x ).
(ii) Average over the signal constellation:

  Pr( X̂ ≠ X ) = Σ_{x ∈ X} Pr( X̂ ≠ X | X = x ) Pr( X = x ).
Remember the decision regions of the symbol decision rule: D_1, D_2, ..., D_M partition R^D, with dec(y) = s_m for y ∈ D_m.

Assume that x = s_m is transmitted. Then,

  Pr( X̂ ≠ X | X = s_m ) = Pr( X̂ ≠ s_m | X = s_m )
                         = Pr( Y ∉ D_m | X = s_m )
                         = 1 - Pr( Y ∈ D_m | X = s_m ).

For the AWGN vector channel, the conditional probability density function of Y given X = s_m is

  p_{Y|X}(y|s_m) = (πN_0)^{-D/2} exp( -(1/N_0) ||y - s_m||² ).

Hence,

  Pr( Y ∈ D_m | X = s_m ) = ∫_{D_m} p_{Y|X}(y|s_m) dy
    = ∫_{D_m} (πN_0)^{-D/2} exp( -(1/N_0) ||y - s_m||² ) dy.

The symbol-error probability is then obtained by averaging over the signal constellation:

  Pr( X̂ ≠ X ) = 1 - Σ_{m=1}^{M} Pr( Y ∈ D_m | X = s_m ) Pr( X = s_m ).
If the modulation symbols are equiprobable, the formula simplifies to

  Pr( X̂ ≠ X ) = 1 - (1/M) Σ_{m=1}^{M} Pr( Y ∈ D_m | X = s_m ).

2.9 Bit-Error Probability

The error probability for the source bits U_k, or for short, the bit-error probability, is defined as

  P_b = (1/K) Σ_{k=1}^{K} Pr( Û_k ≠ U_k ),

i.e., as the average fraction of erroneous bits in the block Û = [Û_1, ..., Û_K].

Relation between the bit-error probability and the symbol-error probability

Clearly, the relation between P_b and P_s depends on the encoding function enc: u = [u_1, ..., u_K] → x = enc(u). The analytical derivation of the relation between P_b and P_s is usually cumbersome. However, we can derive an upper bound and a lower bound for P_b when P_s is given (and also vice versa). These bounds are valid for any encoding function. The lower bound provides a guideline for the design of a good encoding function.
Lower bound: If a symbol is erroneous, there is at least one erroneous bit in the block of K bits. Therefore

P_b ≥ (1/K) P_s.

Upper bound: If a symbol is erroneous, there are at most K erroneous bits in the block of K bits. Therefore

P_b ≤ P_s.

Combination: Combining the lower bound and the upper bound yields

(1/K) P_s ≤ P_b ≤ P_s.

An efficient encoder should achieve the lower bound for the bit-error probability, i.e., P_b ≈ (1/K) P_s.

Remark: The bounds can also be obtained by considering the union bound of the events {Û_1 ≠ U_1}, {Û_2 ≠ U_2}, ..., {Û_K ≠ U_K}, i.e., by relating the probability

Pr({Û_1 ≠ U_1} ∪ {Û_2 ≠ U_2} ∪ ... ∪ {Û_K ≠ U_K})

to the probabilities Pr({Û_1 ≠ U_1}), Pr({Û_2 ≠ U_2}), ..., Pr({Û_K ≠ U_K}).
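The bounds can be checked numerically. The sketch below (not part of the original notes; the mapping and error probability are illustrative) computes the exact P_b for a toy 4-ary constellation whose decision errors are uniform over the wrong symbols, and verifies (1/K)·P_s ≤ P_b ≤ P_s.

```python
# Toy check of (1/K)*P_s <= P_b <= P_s for a hypothetical 4-ary mapping (K = 2)
K = 2
M = 2 ** K
labels = [(0, 0), (0, 1), (1, 0), (1, 1)]   # enc^-1(s_m) for m = 1..4
eps = 0.05                                   # Pr(decide s_j | sent s_m), j != m, uniform
P_s = (M - 1) * eps

# Exact P_b: average Hamming distance between sent and decided bit blocks,
# over equiprobable symbols and uniform symbol errors
P_b = 0.0
for u in labels:
    for v in labels:
        if u != v:
            hamming = sum(a != b for a, b in zip(u, v))
            P_b += (1 / M) * eps * hamming / K

assert P_s / K <= P_b <= P_s
```

With these numbers P_s = 0.15 and P_b = 0.1, which indeed lies between P_s/2 = 0.075 and P_s.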
2.10 Gray Encoding

In Gray encoding, blocks of bits u = [u_1, ..., u_K] which differ in only one bit u_k are assigned to neighboring vectors (modulation symbols) x in the signal constellation X.

Example: 4PAM (Pulse Amplitude Modulation)

Signal constellation: X = {−3, −1, +1, +3}, i.e., s_1 = −3, s_2 = −1, s_3 = +1, s_4 = +3, with spacing 2 between neighboring symbols.

Motivation: When a decision error occurs, it is very likely that the transmitted symbol and the detected symbol are neighbors. When the bit blocks of neighboring symbols differ in only one bit, there will be only one bit error per symbol error.

Effect: Gray encoding achieves P_b ≈ (1/K) P_s for high signal-to-noise ratios (SNR).
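One standard way to construct such a labeling is the binary-reflected Gray code; the notes do not prescribe a particular construction, so the following is an illustrative sketch. Adjacent labels differ in exactly one bit, which is precisely the property required above.

```python
# Sketch: binary-reflected Gray labeling for an MPAM constellation
# (generic construction; the 4PAM example above is the K = 2 case)
def gray_labels(K):
    """Return the 2^K bit blocks (as integers) in an order where
    neighboring entries differ in exactly one bit."""
    return [(m >> 1) ^ m for m in range(2 ** K)]

K = 2
labels = gray_labels(K)               # symbol s_m gets label labels[m-1]
# Every pair of neighboring constellation points differs in exactly one bit:
for a, b in zip(labels, labels[1:]):
    assert bin(a ^ b).count("1") == 1
print([format(x, "02b") for x in labels])   # ['00', '01', '11', '10']
```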
Sketch of the proof: We consider the case K = 2, i.e., U = [U_1, U_2].

P_s = Pr(X̂ ≠ X)
    = Pr([Û_1, Û_2] ≠ [U_1, U_2])
    = Pr({Û_1 ≠ U_1} ∪ {Û_2 ≠ U_2})
    = Pr(Û_1 ≠ U_1) + Pr(Û_2 ≠ U_2) − Pr({Û_1 ≠ U_1} ∩ {Û_2 ≠ U_2}),

where the first two terms sum to 2 P_b and the last term is denoted (∗). The term (∗) is small compared to the other two terms if the SNR is high. Hence

P_s ≈ 2 P_b.
Part II

Digital Communication Techniques
3 Pulse Amplitude Modulation (PAM)

In pulse amplitude modulation (PAM), the information is conveyed by the amplitude of the transmitted waveforms.

3.1 BPAM

PAM with only two waveforms is called binary PAM (BPAM).

3.1.1 Signaling Waveforms

Set of two rectangular waveforms with amplitudes +A and −A over t ∈ [0, T], S := {s_1(t), s_2(t)}:

s_1(t) = +A for t ∈ [0, T],    s_2(t) = −A for t ∈ [0, T],

and both zero otherwise.

As |S| = 2, one bit is sufficient to address the waveforms (K = 1). Thus, the BPAM transmitter performs the mapping U → x(t) with U ∈ U = {0, 1} and x(t) ∈ S.

Remark: In the general case, the basic waveform may also have another shape.
3.1.2 Decomposition of the Transmitter

Gram-Schmidt procedure

ψ_1(t) = (1/√E_1) s_1(t) with E_1 = ∫_0^T (s_1(t))² dt = ∫_0^T A² dt = A²T.

Thus

ψ_1(t) = 1/√T for t ∈ [0, T], and 0 otherwise.

Both s_1(t) and s_2(t) are a linear combination of ψ_1(t):

s_1(t) = +√E_s ψ_1(t) = +A√T ψ_1(t),
s_2(t) = −√E_s ψ_1(t) = −A√T ψ_1(t),

where √E_s = A√T and E_s = E_1 = E_2. (The two waveforms have the same energy.)

Signal constellation

Vector representation of the two waveforms:

s_1(t) ↔ s_1 = +√E_s,
s_2(t) ↔ s_2 = −√E_s.

Thus the signal constellation is X = {−√E_s, +√E_s}, i.e., two points on the ψ_1-axis at ±√E_s.
Vector Encoder

The vector encoder is usually defined as

x = enc(u) = +√E_s for u = 0,    −√E_s for u = 1.

The signal constellation comprises only two waveforms, and thus per symbol and per bit, the average transmit energies and the SNRs are the same: E_b = E_s, γ_s = γ_b.

3.1.3 ML Decoding

ML symbol decision rule

dec_ML(y) = argmin_{x ∈ {−√E_s, +√E_s}} |y − x|² = +√E_s for y ≥ 0,    −√E_s for y < 0.

Decision regions

D_1 = {y ∈ R : y ≥ 0},    D_2 = {y ∈ R : y < 0}.

Remark: The decision rule

dec_ML(y) = +√E_s for y > 0,    −√E_s for y ≤ 0

is also an ML decision rule.
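The encoder and the ML decision rule above amount to a sign test on y. A minimal sketch (function names are illustrative, not from the notes):

```python
import math

# Minimal BPAM encoder and ML decoder: the nearest-symbol rule reduces
# to a sign test on the received value y
Es = 1.0

def enc(u):
    # 0 -> +sqrt(Es), 1 -> -sqrt(Es)
    return +math.sqrt(Es) if u == 0 else -math.sqrt(Es)

def dec_ml(y):
    # y >= 0 lies in D_1 -> decide 0; y < 0 lies in D_2 -> decide 1
    return 0 if y >= 0 else 1

assert dec_ml(enc(0) + 0.3) == 0    # small noise keeps the decision correct
assert dec_ml(enc(1) - 0.3) == 1
assert dec_ml(enc(1) + 1.5) == 0    # noise larger than sqrt(Es) flips it
```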
The mapping of the encoder inverse reads

û = enc⁻¹(x̂) = 0 for x̂ = +√E_s,    1 for x̂ = −√E_s.

Combining the (modulation) symbol decision rule and the encoder inverse yields the ML decoder for BPAM:

û = 0 for y ≥ 0,    1 for y < 0.

3.1.4 Vector Representation of the Transceiver

BPAM transceiver: [Block diagram: BSS → encoder (0 → +√E_s, 1 → −√E_s) → X, modulated onto ψ_1(t) to give X(t); AWGN W(t) is added; the receiver correlates with (1/√T) ∫_0^T (·) dt to obtain Y; the decoder maps D_1 → 0, D_2 → 1 and delivers Û to the sink.]

Vector representation: [Block diagram: BSS → encoder (0 → +√E_s, 1 → −√E_s) → X; noise W ∼ N(0, N₀/2) is added to give Y = X + W; the decoder maps D_1 → 0, D_2 → 1 and delivers Û to the sink.]
3.1.5 Symbol-Error Probability

Conditional probability densities

p_{Y|X}(y | +√E_s) = (1/√(πN₀)) exp(−(1/N₀) |y − √E_s|²),
p_{Y|X}(y | −√E_s) = (1/√(πN₀)) exp(−(1/N₀) |y + √E_s|²).

Conditional symbol-error probability

Assume that X = x = +√E_s is transmitted.

Pr(X̂ ≠ X | X = +√E_s) = Pr(Y ∈ D_2 | X = +√E_s)
    = Pr(Y < 0 | X = +√E_s)
    = ∫_{−∞}^{0} (1/√(πN₀)) exp(−(1/N₀) |y − √E_s|²) dy
    = ∫_{√(2E_s/N₀)}^{∞} (1/√(2π)) exp(−z²/2) dz,

where the last equality results from substituting z = (y − √E_s)/√(N₀/2) and using the symmetry of the integrand.

The Q-function is defined as

Q(v) := ∫_v^∞ q(z) dz,    q(z) = (1/√(2π)) exp(−z²/2).
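For numerical work, the Q-function is conveniently expressed through the complementary error function as Q(v) = (1/2) erfc(v/√2); a small sketch (the helper name is illustrative):

```python
import math

# Q-function via the complementary error function: Q(v) = 0.5 * erfc(v / sqrt(2))
def Q(v):
    return 0.5 * math.erfc(v / math.sqrt(2.0))

assert abs(Q(0.0) - 0.5) < 1e-12            # half the mass lies above the mean
assert abs(Q(1.0) - 0.158655253931) < 1e-9  # standard-normal tail beyond 1 sigma
assert abs(Q(-1.0) + Q(1.0) - 1.0) < 1e-12  # symmetry of the Gaussian pdf
```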
The function q(z) denotes a Gaussian pdf with zero mean and variance 1; i.e., it may be interpreted as the pdf of a random variable Z ∼ N(0, 1).

Using this definition, the conditional symbol-error probabilities may be written as

Pr(X̂ ≠ X | X = +√E_s) = Q(√(2E_s/N₀)),
Pr(X̂ ≠ X | X = −√E_s) = Q(√(2E_s/N₀)).

The second equation follows from symmetry.

Symbol-error probability

The symbol-error probability results from averaging over the conditional symbol-error probabilities:

P_s = Pr(X̂ ≠ X)
    = Pr(X = +√E_s) Pr(X̂ ≠ X | X = +√E_s) + Pr(X = −√E_s) Pr(X̂ ≠ X | X = −√E_s)
    = Q(√(2E_s/N₀)) = Q(√(2γ_s)),

as Pr(X = +√E_s) = Pr(X = −√E_s) = 1/2.

Alternative method to derive the conditional symbol-error probability

Assume again that X = +√E_s is transmitted. Instead of considering the conditional pdf of Y, we directly use the pdf of the white Gaussian noise W.
Pr(X̂ ≠ X | X = +√E_s) = Pr(Y < 0 | X = +√E_s)
    = Pr(√E_s + W < 0 | X = +√E_s)
    = Pr(W < −√E_s | X = +√E_s)
    = Pr(W < −√E_s)
    = Pr(W′ < −√(2E_s/N₀))
    = Q(√(2E_s/N₀)),

where we substituted W′ = W/√(N₀/2); notice that W′ ∼ N(0, 1).

A plot of the symbol-error probability can be found in [1, p. 412].

The initial event (here {Y < 0}) is transformed into an equivalent event (here {W′ < −√(2E_s/N₀)}), whose probability can easily be calculated. This method is frequently applied to determine symbol-error probabilities.

3.1.6 Bit-Error Probability

In BPAM, each waveform conveys one source bit. Accordingly, each symbol error leads to exactly one bit error, and thus

P_b = Pr(Û ≠ U) = Pr(X̂ ≠ X) = P_s.

For the same reason, the transmit energy per symbol is equal to the transmit energy per bit, and so also the SNR per symbol is equal to the SNR per bit: E_b = E_s, γ_s = γ_b.

Therefore, the bit-error probability (BEP) of BPAM results as

P_b = Q(√(2γ_b)).

Notice: P_b ≈ 10⁻⁵ is achieved for γ_b ≈ 9.6 dB.
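The quoted operating point can be verified numerically by evaluating P_b = Q(√(2γ_b)) at γ_b = 9.6 dB (a quick check, not part of the original notes):

```python
import math

# Evaluate the BPAM bit-error probability P_b = Q(sqrt(2*gamma_b)) at 9.6 dB
def Q(v):
    return 0.5 * math.erfc(v / math.sqrt(2.0))

gamma_b_dB = 9.6
gamma_b = 10 ** (gamma_b_dB / 10)       # linear SNR per bit, ~9.12
P_b = Q(math.sqrt(2 * gamma_b))
assert 0.5e-5 < P_b < 2e-5              # close to the quoted 10^-5
```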
3.2 MPAM

In the general case of M-ary pulse amplitude modulation (MPAM) with M = 2^K, input blocks of K bits modulate the amplitude of a unique pulse.

3.2.1 Signaling Waveforms

Waveform encoder:

u = [u_1, ..., u_K] → x(t),    U = {0, 1}^K → S = {s_1(t), ..., s_M(t)},

where

s_m(t) = (2m − M − 1) A g(t) for t ∈ [0, T], and 0 otherwise,

A ∈ R, and g(t) is a predefined pulse of duration T, g(t) = 0 for t ∉ [0, T].

Values of the factors (2m − M − 1)A that are weighting g(t):

−(M−1)A, −(M−3)A, ..., −A, +A, ..., (M−3)A, (M−1)A.

Example: 4PAM with rectangular pulse. Using the rectangular pulse

g(t) = 1 for t ∈ [0, T], and 0 otherwise,

the resulting waveforms are S = {−3A g(t), −A g(t), +A g(t), +3A g(t)}.
3.2.2 Decomposition of the Transmitter

Gram-Schmidt procedure: Orthonormal function

ψ(t) = (1/√E_g) g(t),    E_g = ∫_0^T (g(t))² dt.

Every waveform in S is a linear combination of (only) ψ(t):

s_m(t) = (2m − M − 1) A√E_g ψ(t) = (2m − M − 1) d ψ(t),

where d := A√E_g.

Signal constellation

s_1(t) ↔ s_1 = −(M−1)d
s_2(t) ↔ s_2 = −(M−3)d
...
s_m(t) ↔ s_m = (2m − M − 1)d
...
s_{M−1}(t) ↔ s_{M−1} = (M−3)d
s_M(t) ↔ s_M = (M−1)d

X = {s_1, s_2, ..., s_M} = {−(M−1)d, −(M−3)d, ..., −d, +d, ..., (M−3)d, (M−1)d}.
Example: M = 2, 4, 8

[Figure: the signal constellations on the ψ-axis for M = 2 (s_1, s_2), M = 4 (s_1, ..., s_4), and M = 8 (s_1, ..., s_8), with uniform spacing 2d between neighboring points.]

Vector Encoder

The vector encoder

enc: u = [u_1, ..., u_K] → x = enc(u),    U = {0, 1}^K → X = {−(M−1)d, ..., (M−1)d},

is any Gray encoding function.

Energy and SNR

The source bits are generated by a BSS, and so the symbols of the signal constellation X are equiprobable:

Pr(X = s_m) = 1/M    for all m = 1, 2, ..., M.
The average symbol energy results as

E_s = E[X²] = (1/M) ∑_{m=1}^{M} (s_m)² = (d²/M) ∑_{m=1}^{M} (2m − M − 1)² = (d²/M) · M(M² − 1)/3.

Thus,

E_s = ((M² − 1)/3) d²,    d = √(3E_s/(M² − 1)).

As M = 2^K, the average energy per transmitted source bit results as

E_b = E_s/K = E_s/log₂M = ((M² − 1)/(3 log₂M)) d²,    d = √((3 log₂M/(M² − 1)) E_b).

The SNR per symbol and the SNR per bit are related as

γ_b = (1/K) γ_s = (1/log₂M) γ_s.
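The relation E_s = ((M² − 1)/3) d² can be checked by averaging directly over the constellation points; a short sketch with illustrative parameter values:

```python
import math

# Check E_s = (M^2 - 1)/3 * d^2 against the direct average over the constellation
M, d = 8, 0.5
symbols = [(2 * m - M - 1) * d for m in range(1, M + 1)]
Es_direct = sum(s * s for s in symbols) / M           # E[X^2] for equiprobable X
Es_formula = (M ** 2 - 1) / 3 * d ** 2
assert abs(Es_direct - Es_formula) < 1e-12
# ...and the inverse relation d = sqrt(3*Es/(M^2 - 1))
assert abs(math.sqrt(3 * Es_formula / (M ** 2 - 1)) - d) < 1e-12
```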
3.2.3 ML Decoding

ML symbol decision rule

dec_ML(y) = argmin_{x ∈ X} |y − x|².

Decision regions

Example: Decision regions for M = 4. [Figure: D_1, D_2, D_3, D_4 on the y-axis around s_1, s_2, s_3, s_4; neighboring symbols are spaced 2d apart.]

Starting from the special cases M = 2, 4, 8, the decision regions can be inferred:

D_1 = {y ∈ R : y ≤ −(M−2)d},
D_2 = {y ∈ R : −(M−2)d < y ≤ −(M−4)d},
...
D_m = {y ∈ R : (2m−M−2)d < y ≤ (2m−M)d},
...
D_{M−1} = {y ∈ R : (M−4)d < y ≤ (M−2)d},
D_M = {y ∈ R : (M−2)d < y}.

Accordingly, the ML symbol decision rule may also be written as

dec_ML(y) = s_m for y ∈ D_m.

ML decoder for MPAM

û = enc⁻¹(dec_ML(y)).
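The argmin rule is simply a nearest-neighbor search over the constellation; the decision regions above fall out of it automatically. A minimal sketch for M = 4 (illustrative names):

```python
# MPAM ML decision: pick the constellation point nearest to the observation y
M, d = 4, 1.0
constellation = [(2 * m - M - 1) * d for m in range(1, M + 1)]   # [-3, -1, 1, 3]

def dec_ml(y):
    return min(constellation, key=lambda s: (y - s) ** 2)

assert dec_ml(0.4) == 1.0     # y = 0.4 lies in the region around s_3 = +d
assert dec_ml(-2.7) == -3.0   # beyond the lower inner boundary -> edge symbol
assert dec_ml(1.9) == 1.0     # still closer to +1 than to +3
```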
3.2.4 Vector Representation of the Transceiver

MPAM transceiver: [Block diagram: BSS → Gray encoder (u_m → s_m) → X, modulated onto ψ(t) to give X(t); AWGN W(t) is added; the receiver correlates with ψ(t) over [0, T] to obtain Y; the decoder maps D_m → u_m and delivers Û to the sink.]

Vector representation: [Block diagram: BSS → Gray encoder (u_m → s_m) → X; noise W ∼ N(0, N₀/2) is added to give Y = X + W; the decoder maps D_m → u_m and delivers Û to the sink.]

Notice: u_m denotes the block of source bits corresponding to s_m, i.e., s_m = enc(u_m), u_m = enc⁻¹(s_m).
3.2.5 Symbol-Error Probability

Conditional symbol-error probability

Two cases must be considered:

1. The transmitted symbol is an edge symbol (a symbol at the edge of the signal constellation): X ∈ {s_1, s_M}.
2. The transmitted symbol is not an edge symbol: X ∈ {s_2, s_3, ..., s_{M−1}}.

Case 1: X ∈ {s_1, s_M}

Assume first X = s_1. Then

Pr(X̂ ≠ X | X = s_1) = Pr(Y ∉ D_1 | X = s_1)
    = Pr(Y > s_1 + d | X = s_1)
    = Pr(W > d)
    = Pr(W/√(N₀/2) > d/√(N₀/2))
    = Q(d/√(N₀/2)).

Notice that W/√(N₀/2) ∼ N(0, 1). By symmetry,

Pr(X̂ ≠ X | X = s_M) = Q(d/√(N₀/2)).
Case 2: X ∈ {s_2, ..., s_{M−1}}

Assume X = s_m. Then

Pr(X̂ ≠ X | X = s_m) = Pr(Y ∉ D_m | X = s_m)
    = Pr(W < −d or W > d)
    = Pr(W < −d) + Pr(W > d)
    = Pr(W/√(N₀/2) < −d/√(N₀/2)) + Pr(W/√(N₀/2) > d/√(N₀/2))
    = Q(d/√(N₀/2)) + Q(d/√(N₀/2))
    = 2 Q(d/√(N₀/2)).

Symbol-error probability

Averaging over the symbols, one obtains

P_s = (1/M) ∑_{m=1}^{M} Pr(X̂ ≠ X | X = s_m)
    = (2/M) Q(d/√(N₀/2)) + (2(M−2)/M) Q(d/√(N₀/2))
    = 2 ((M−1)/M) Q(d/√(N₀/2)).

Expressing the symbol-error probability as a function of the SNR per symbol, γ_s, or the SNR per bit, γ_b, yields

P_s = 2 ((M−1)/M) Q(√(6γ_s/(M² − 1))) = 2 ((M−1)/M) Q(√(6 log₂M γ_b/(M² − 1))).
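As a numerical sketch (function names are illustrative), the closed-form P_s can be evaluated and sanity-checked: for M = 2 it must reduce to the BPAM result Q(√(2γ_s)), and at a fixed SNR per symbol the error probability grows with M.

```python
import math

# P_s = 2(M-1)/M * Q(sqrt(6*gamma_s/(M^2 - 1))) for MPAM
def Q(v):
    return 0.5 * math.erfc(v / math.sqrt(2.0))

def Ps_mpam(M, gamma_s):
    return 2 * (M - 1) / M * Q(math.sqrt(6 * gamma_s / (M ** 2 - 1)))

gs = 5.0
# For M = 2 the formula reduces to the BPAM result Q(sqrt(2*gamma_s)):
assert abs(Ps_mpam(2, gs) - Q(math.sqrt(2 * gs))) < 1e-15
# Larger M at the same SNR per symbol gives a larger symbol-error probability:
assert Ps_mpam(2, gs) < Ps_mpam(4, gs) < Ps_mpam(8, gs)
```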
Plots of the symbol-error probabilities can be found in [1, p. 412].

Remark: When comparing BPAM and 4PAM for the same P_s, the SNRs per bit, γ_b, differ by 10 log₁₀(5/2) ≈ 4 dB. (Proof?)

3.2.6 Bit-Error Probability

When Gray encoding is used, we have the relation

P_b ≈ P_s/log₂M = (2(M−1)/(M log₂M)) Q(√(6 log₂M γ_b/(M² − 1)))

for high SNR γ_b = E_b/N₀.
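The ~4 dB remark can be made plausible by equating the Q-function arguments of the two schemes (an approximation, since the prefactors 1 vs. 3/2 are ignored): BPAM has P_s = Q(√(2γ_b)), while 4PAM has P_s ≈ (3/2) Q(√(0.8 γ_b)), so matching the arguments requires 4PAM to use γ_b larger by the factor 2/0.8 = 5/2.

```python
import math

# Hedged check of the ~4 dB remark: equate the Q arguments of BPAM and 4PAM.
# BPAM argument: 2*gamma_b;  4PAM argument: 6*log2(4)/(4^2 - 1) * gamma_b = 0.8*gamma_b
ratio = 2.0 / (6 * 2 / 15)            # required gamma_b(4PAM) / gamma_b(BPAM) = 5/2
gap_dB = 10 * math.log10(ratio)
assert abs(gap_dB - 3.98) < 0.01      # ~4 dB, as stated in the remark
```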
4 Pulse Position Modulation (PPM)

In pulse position modulation (PPM), the information is conveyed by the position of the transmitted waveforms.

4.1 BPPM

PPM with only two waveforms is called binary PPM (BPPM).

4.1.1 Signaling Waveforms

Set of two rectangular waveforms with amplitude A and duration T/2 over t ∈ [0, T], S := {s_1(t), s_2(t)}:

s_1(t) = A for t ∈ [0, T/2],    s_2(t) = A for t ∈ [T/2, T],

and both zero otherwise.

As the cardinality of the set is |S| = 2, one bit is sufficient to address each waveform, i.e., K = 1. Thus, the BPPM transmitter performs the mapping U → x(t) with U ∈ U = {0, 1} and x(t) ∈ S.

Remark: In the general case, the basic waveform may also have another shape.
4.1.2 Decomposition of the Transmitter

Gram-Schmidt procedure

The waveforms s_1(t) and s_2(t) are orthogonal and have the same energy

E_s = ∫_0^T s_1²(t) dt = ∫_0^T s_2²(t) dt = A²T/2.

Notice that √E_s = A√(T/2).

The two orthonormal functions and the corresponding waveform representations are

ψ_1(t) = (1/√E_s) s_1(t),    ψ_2(t) = (1/√E_s) s_2(t),
s_1(t) = √E_s ψ_1(t) + 0 · ψ_2(t),    s_2(t) = 0 · ψ_1(t) + √E_s ψ_2(t).

Signal constellation

Vector representation of the two waveforms:

s_1(t) ↔ s_1 = (√E_s, 0)^T,    s_2(t) ↔ s_2 = (0, √E_s)^T.

Thus the signal constellation is X = {(√E_s, 0)^T, (0, √E_s)^T}; the two points lie on the orthogonal axes ψ_1, ψ_2 at distance √(2E_s) from each other.
Vector Encoder

The vector encoder is defined as

x = enc(u) = (√E_s, 0)^T for u = 0,    (0, √E_s)^T for u = 1.

The signal constellation comprises only two waveforms, and thus per (modulation) symbol and per (source) bit, the average transmit energies and the SNRs are the same: E_b = E_s, γ_s = γ_b.

4.1.3 ML Decoding

ML symbol decision rule

dec_ML(y) = argmin_{x ∈ {s_1, s_2}} ||y − x||² = (√E_s, 0)^T for y_1 ≥ y_2,    (0, √E_s)^T for y_1 < y_2.

Notice: ||y − s_1||² ≤ ||y − s_2||² ⟺ y_1 ≥ y_2.

Decision regions

D_1 = {y ∈ R² : y_1 ≥ y_2},    D_2 = {y ∈ R² : y_1 < y_2}.

[Figure: the two decision regions in the (y_1, y_2)-plane, separated by the line y_1 = y_2.]
Combining the (modulation) symbol decision rule and the encoder inverse yields the ML decoder for BPPM:

û = enc⁻¹(dec_ML(y)) = 0 for y_1 ≥ y_2,    1 for y_1 < y_2.

The ML decoder may be implemented by forming Z = Y_1 − Y_2 and deciding û = 0 for z ≥ 0 and û = 1 for z < 0. Notice that Z = Y_1 − Y_2 is then the decision variable.
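The decision-variable implementation above can be sketched directly (illustrative names, not from the notes):

```python
import math

# BPPM ML decision via the decision variable Z = Y1 - Y2
def dec_bppm(y1, y2):
    z = y1 - y2
    return 0 if z >= 0 else 1

Es = 1.0
# noiseless receive vectors: s1 = (sqrt(Es), 0) and s2 = (0, sqrt(Es))
assert dec_bppm(math.sqrt(Es), 0.0) == 0
assert dec_bppm(0.0, math.sqrt(Es)) == 1
assert dec_bppm(0.6, 0.5) == 0      # noise that keeps y1 > y2 -> still decide 0
```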
4.1.4 Vector Representation of the Transceiver

BPPM transceiver for the waveform AWGN channel: [Block diagram: BSS → vector encoder (0 → (√E_s, 0)^T, 1 → (0, √E_s)^T) → X_1, X_2, modulated onto ψ_1(t) and ψ_2(t) to give X(t); AWGN W(t) is added; the receiver correlates with √(2/T) ∫_0^{T/2} (·) dt and √(2/T) ∫_{T/2}^T (·) dt to obtain Y_1 and Y_2; the vector decoder maps D_1 → 0, D_2 → 1 and delivers Û to the sink.]

Vector representation including the AWGN vector channel: [Block diagram: BSS → vector encoder → X_1, X_2; noise W_1 ∼ N(0, N₀/2) and W_2 ∼ N(0, N₀/2) is added to give Y_1 = X_1 + W_1 and Y_2 = X_2 + W_2; the vector decoder maps D_1 → 0, D_2 → 1 and delivers Û to the sink.]
4.1.5 Symbol-Error Probability

Conditional symbol-error probability

Assume that X = s_1 = (√E_s, 0)^T is transmitted.

Pr(X̂ ≠ X | X = s_1) = Pr(Y ∈ D_2 | X = s_1) = Pr(s_1 + W ∈ D_2 | X = s_1) = Pr(s_1 + W ∈ D_2).

The noise vector W may be represented in the coordinate system (ξ_1, ξ_2) or in the rotated coordinate system (ξ′_1, ξ′_2). The latter is more suited to computing the above probability, as only the noise component along the ξ′_2-axis is relevant.

[Figure: the constellation points s_1 and s_2 with the decision boundary y_1 = y_2; the rotated coordinate system (ξ′_1, ξ′_2) is obtained by a rotation by α = π/4, and the distance from s_1 to the decision boundary is √(E_s/2).]

The elements of the noise vector in the rotated coordinate system have the same statistical properties as in the non-rotated coordinate system (see Appendix B), i.e.,

W′_1 ∼ N(0, N₀/2),    W′_2 ∼ N(0, N₀/2).
Thus, we obtain

Pr(X̂ ≠ X | X = s_1) = Pr(W′_2 > √(E_s/2))
    = Pr(W′_2/√(N₀/2) > √(E_s/N₀))
    = Q(√(E_s/N₀)) = Q(√γ_s).

By symmetry, Pr(X̂ ≠ X | X = s_2) = Q(√γ_s).

Symbol-error probability

Averaging over the conditional symbol-error probabilities, we obtain

P_s = Pr(X̂ ≠ X) = Q(√γ_s).

4.1.6 Bit-Error Probability

For BPPM, E_b = E_s, γ_s = γ_b, P_b = P_s (see above), and so the bit-error probability (BEP) results as

P_b = Pr(Û ≠ U) = Q(√γ_b).

A plot of the BEP versus γ_b may be found in [1, p. 426].

Remark

Due to their signal constellations, BPPM is called orthogonal signaling and BPAM is called antipodal signaling. Their BEPs are given by

P_b,BPPM = Q(√γ_b),    P_b,BPAM = Q(√(2γ_b)).

When comparing the SNR for a certain BEP, BPPM requires 3 dB more than BPAM. For a plot, see [1, p. 49].
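The 3 dB statement follows directly from the two BEP formulas: Q(√γ) = Q(√(2γ′)) requires γ = 2γ′, and a factor of 2 in SNR is 10 log₁₀(2) ≈ 3 dB. A short numerical sketch:

```python
import math

# BPPM needs twice the SNR per bit of BPAM for the same BEP, i.e. ~3 dB more
def Q(v):
    return 0.5 * math.erfc(v / math.sqrt(2.0))

gamma_bpam = 5.0
gamma_bppm = 2 * gamma_bpam             # doubling the SNR matches the Q arguments
assert abs(Q(math.sqrt(gamma_bppm)) - Q(math.sqrt(2 * gamma_bpam))) < 1e-15
assert abs(10 * math.log10(gamma_bppm / gamma_bpam) - 3.0103) < 1e-3
```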
4.2 MPPM

In the general case of M-ary pulse position modulation (MPPM) with M = 2^K, input blocks of K bits determine the position of a unique pulse.

4.2.1 Signaling Waveforms

Waveform encoder

u = [u_1, ..., u_K] → x(t),    U = {0, 1}^K → S = {s_1(t), ..., s_M(t)},

where

s_m(t) = g(t − (m−1)T/M),    m = 1, 2, ..., M,

and g(t) is a predefined pulse of duration T/M, g(t) = 0 for t ∉ [0, T/M].

The waveforms are orthogonal and have the same energy as g(t),

E_s = E_g = ∫_0^T g²(t) dt.

Example (M = 4): Consider a rectangular pulse g(t) with amplitude A and duration T/4. Then we have the set of four waveforms S = {s_1(t), s_2(t), s_3(t), s_4(t)}, where s_m(t) is the pulse shifted to the m-th quarter of the symbol interval [0, T].
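The orthogonality of the shifted pulses is easy to see in discrete time: pulses occupying different slots never overlap, so their inner products vanish. A toy sampled sketch (sampling grid and names are illustrative assumptions, not from the notes):

```python
# Discrete-time sketch: MPPM waveforms as time-shifted rectangular pulses
# are mutually orthogonal (N samples over one symbol interval)
M, N, A = 4, 100, 1.0
L = N // M                                # samples per pulse slot of length T/M
waveforms = []
for m in range(M):
    s = [0.0] * N
    for n in range(m * L, (m + 1) * L):   # pulse occupies only the m-th slot
        s[n] = A
    waveforms.append(s)

# distinct positions never overlap -> inner products vanish; equal energies
for i in range(M):
    for j in range(M):
        ip = sum(a * b for a, b in zip(waveforms[i], waveforms[j]))
        assert (ip == 0.0) if i != j else (ip == L * A * A)
```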
More informationEE 661: Modulation Theory Solutions to Homework 6
EE 66: Modulation Theory Solutions to Homework 6. Solution to problem. a) Binary PAM: Since 2W = 4 KHz and β = 0.5, the minimum T is the solution to (+β)/(2t ) = W = 2 0 3 Hz. Thus, we have the maximum
More informationSimultaneous SDR Optimality via a Joint Matrix Decomp.
Simultaneous SDR Optimality via a Joint Matrix Decomposition Joint work with: Yuval Kochman, MIT Uri Erez, Tel Aviv Uni. May 26, 2011 Model: Source Multicasting over MIMO Channels z 1 H 1 y 1 Rx1 ŝ 1 s
More informationAnalysis of methods for speech signals quantization
INFOTEH-JAHORINA Vol. 14, March 2015. Analysis of methods for speech signals quantization Stefan Stojkov Mihajlo Pupin Institute, University of Belgrade Belgrade, Serbia e-mail: stefan.stojkov@pupin.rs
More informationChapter [4] "Operations on a Single Random Variable"
Chapter [4] "Operations on a Single Random Variable" 4.1 Introduction In our study of random variables we use the probability density function or the cumulative distribution function to provide a complete
More informationComputation of Bit-Error Rate of Coherent and Non-Coherent Detection M-Ary PSK With Gray Code in BFWA Systems
Computation of Bit-Error Rate of Coherent and Non-Coherent Detection M-Ary PSK With Gray Code in BFWA Systems Department of Electrical Engineering, College of Engineering, Basrah University Basrah Iraq,
More informationNotes 3: Stochastic channels and noisy coding theorem bound. 1 Model of information communication and noisy channel
Introduction to Coding Theory CMU: Spring 2010 Notes 3: Stochastic channels and noisy coding theorem bound January 2010 Lecturer: Venkatesan Guruswami Scribe: Venkatesan Guruswami We now turn to the basic
More informationMemo of J. G. Proakis and M Salehi, Digital Communications, 5th ed. New York: McGraw-Hill, Chenggao HAN
Memo of J. G. Proakis and M Salehi, Digital Communications, 5th ed. New York: McGraw-Hill, 007 Chenggao HAN Contents 1 Introduction 1 1.1 Elements of a Digital Communication System.....................
More informationFlat Rayleigh fading. Assume a single tap model with G 0,m = G m. Assume G m is circ. symmetric Gaussian with E[ G m 2 ]=1.
Flat Rayleigh fading Assume a single tap model with G 0,m = G m. Assume G m is circ. symmetric Gaussian with E[ G m 2 ]=1. The magnitude is Rayleigh with f Gm ( g ) =2 g exp{ g 2 } ; g 0 f( g ) g R(G m
More informationMessage Passing Algorithm and Linear Programming Decoding for LDPC and Linear Block Codes
Message Passing Algorithm and Linear Programming Decoding for LDPC and Linear Block Codes Institute of Electronic Systems Signal and Information Processing in Communications Nana Traore Shashi Kant Tobias
More informationRapport technique #INRS-EMT Exact Expression for the BER of Rectangular QAM with Arbitrary Constellation Mapping
Rapport technique #INRS-EMT-010-0604 Exact Expression for the BER of Rectangular QAM with Arbitrary Constellation Mapping Leszek Szczeciński, Cristian González, Sonia Aïssa Institut National de la Recherche
More informationShannon Information Theory
Chapter 3 Shannon Information Theory The information theory established by Shannon in 948 is the foundation discipline for communication systems, showing the potentialities and fundamental bounds of coding.
More informationPhysical Layer and Coding
Physical Layer and Coding Muriel Médard Professor EECS Overview A variety of physical media: copper, free space, optical fiber Unified way of addressing signals at the input and the output of these media:
More informationEE456 Digital Communications
EE456 Digital Communications Professor Ha Nguyen September 5 EE456 Digital Communications Block Diagram of Binary Communication Systems m ( t { b k } b k = s( t b = s ( t k m ˆ ( t { bˆ } k r( t Bits in
More informationPSK bit mappings with good minimax error probability
PSK bit mappings with good minimax error probability Erik Agrell Department of Signals and Systems Chalmers University of Technology 4196 Göteborg, Sweden Email: agrell@chalmers.se Erik G. Ström Department
More informationSquare Root Raised Cosine Filter
Wireless Information Transmission System Lab. Square Root Raised Cosine Filter Institute of Communications Engineering National Sun Yat-sen University Introduction We consider the problem of signal design
More informationBlock 2: Introduction to Information Theory
Block 2: Introduction to Information Theory Francisco J. Escribano April 26, 2015 Francisco J. Escribano Block 2: Introduction to Information Theory April 26, 2015 1 / 51 Table of contents 1 Motivation
More informationA Family of Nyquist Filters Based on Generalized Raised-Cosine Spectra
Proc. Biennial Symp. Commun. (Kingston, Ont.), pp. 3-35, June 99 A Family of Nyquist Filters Based on Generalized Raised-Cosine Spectra Nader Sheikholeslami Peter Kabal Department of Electrical Engineering
More informationLECTURE 16 AND 17. Digital signaling on frequency selective fading channels. Notes Prepared by: Abhishek Sood
ECE559:WIRELESS COMMUNICATION TECHNOLOGIES LECTURE 16 AND 17 Digital signaling on frequency selective fading channels 1 OUTLINE Notes Prepared by: Abhishek Sood In section 2 we discuss the receiver design
More informationRevision of Lecture 4
Revision of Lecture 4 We have discussed all basic components of MODEM Pulse shaping Tx/Rx filter pair Modulator/demodulator Bits map symbols Discussions assume ideal channel, and for dispersive channel
More informationCHAPTER 14. Based on the info about the scattering function we know that the multipath spread is T m =1ms, and the Doppler spread is B d =0.2 Hz.
CHAPTER 4 Problem 4. : Based on the info about the scattering function we know that the multipath spread is T m =ms, and the Doppler spread is B d =. Hz. (a) (i) T m = 3 sec (ii) B d =. Hz (iii) ( t) c
More informationECE Information theory Final (Fall 2008)
ECE 776 - Information theory Final (Fall 2008) Q.1. (1 point) Consider the following bursty transmission scheme for a Gaussian channel with noise power N and average power constraint P (i.e., 1/n X n i=1
More informationEE401: Advanced Communication Theory
EE401: Advanced Communication Theory Professor A. Manikas Chair of Communications and Array Processing Imperial College London Introductory Concepts Prof. A. Manikas (Imperial College) EE.401: Introductory
More informationImproved Detected Data Processing for Decision-Directed Tracking of MIMO Channels
Improved Detected Data Processing for Decision-Directed Tracking of MIMO Channels Emna Eitel and Joachim Speidel Institute of Telecommunications, University of Stuttgart, Germany Abstract This paper addresses
More informationSecure Degrees of Freedom of the MIMO Multiple Access Wiretap Channel
Secure Degrees of Freedom of the MIMO Multiple Access Wiretap Channel Pritam Mukherjee Sennur Ulukus Department of Electrical and Computer Engineering University of Maryland, College Park, MD 074 pritamm@umd.edu
More information3.9 Diversity Equalization Multiple Received Signals and the RAKE Infinite-length MMSE Equalization Structures
Contents 3 Equalization 57 3. Intersymbol Interference and Receivers for Successive Message ransmission........ 59 3.. ransmission of Successive Messages.......................... 59 3.. Bandlimited Channels..................................
More information5. Pilot Aided Modulations. In flat fading, if we have a good channel estimate of the complex gain gt, ( ) then we can perform coherent detection.
5. Pilot Aided Modulations In flat fading, if we have a good channel estimate of the complex gain gt, ( ) then we can perform coherent detection. Obtaining a good estimate is difficult. As we have seen,
More informationExample: Bipolar NRZ (non-return-to-zero) signaling
Baseand Data Transmission Data are sent without using a carrier signal Example: Bipolar NRZ (non-return-to-zero signaling is represented y is represented y T A -A T : it duration is represented y BT. Passand
More informationLecture 7 MIMO Communica2ons
Wireless Communications Lecture 7 MIMO Communica2ons Prof. Chun-Hung Liu Dept. of Electrical and Computer Engineering National Chiao Tung University Fall 2014 1 Outline MIMO Communications (Chapter 10
More informationDetecting Parametric Signals in Noise Having Exactly Known Pdf/Pmf
Detecting Parametric Signals in Noise Having Exactly Known Pdf/Pmf Reading: Ch. 5 in Kay-II. (Part of) Ch. III.B in Poor. EE 527, Detection and Estimation Theory, # 5c Detecting Parametric Signals in Noise
More informationEE-597 Notes Quantization
EE-597 Notes Quantization Phil Schniter June, 4 Quantization Given a continuous-time and continuous-amplitude signal (t, processing and storage by modern digital hardware requires discretization in both
More informationρ = sin(2π ft) 2π ft To find the minimum value of the correlation, we set the derivative of ρ with respect to f equal to zero.
Problem 5.1 : The correlation of the two signals in binary FSK is: ρ = sin(π ft) π ft To find the minimum value of the correlation, we set the derivative of ρ with respect to f equal to zero. Thus: ϑρ
More informationELEC E7210: Communication Theory. Lecture 10: MIMO systems
ELEC E7210: Communication Theory Lecture 10: MIMO systems Matrix Definitions, Operations, and Properties (1) NxM matrix a rectangular array of elements a A. an 11 1....... a a 1M. NM B D C E ermitian transpose
More informationCapacity and Capacity-Achieving Input Distribution of the Energy Detector
Capacity and Capacity-Achieving Input Distribution of the Energy Detector Erik Leitinger, Bernhard C. Geiger, and Klaus Witrisal Signal Processing and Speech Communication Laboratory, Graz University of
More informationProjects in Wireless Communication Lecture 1
Projects in Wireless Communication Lecture 1 Fredrik Tufvesson/Fredrik Rusek Department of Electrical and Information Technology Lund University, Sweden Lund, Sept 2018 Outline Introduction to the course
More informationC.M. Liu Perceptual Signal Processing Lab College of Computer Science National Chiao-Tung University
Quantization C.M. Liu Perceptual Signal Processing Lab College of Computer Science National Chiao-Tung University http://www.csie.nctu.edu.tw/~cmliu/courses/compression/ Office: EC538 (03)5731877 cmliu@cs.nctu.edu.tw
More information