Advanced Digital Communications


Advanced Digital Communications

Suhas Diggavi
École Polytechnique Fédérale de Lausanne (EPFL)
School of Computer and Communication Sciences
Laboratory of Information and Communication Systems (LICOS)

3rd October 2005


Contents

Part I: Review of Signal Processing and Detection

1 Overview
  1.1 Digital data transmission
  1.2 Communication system blocks
  1.3 Goals of this class
  1.4 Class organization
  1.5 Lessons from class

2 Signals and Detection
  2.1 Data Modulation and Demodulation
    2.1.1 Mapping of vectors to waveforms
    2.1.2 Demodulation
  2.2 Data detection
    2.2.1 Criteria for detection
    2.2.2 Minmax decoding rule
    2.2.3 Decision regions
    2.2.4 Bayes rule for minimizing risk
    2.2.5 Irrelevance and reversibility
    2.2.6 Complex Gaussian Noise
    2.2.7 Continuous additive white Gaussian noise channel
    2.2.8 Binary constellation error probability
  2.3 Error Probability for AWGN Channels
    2.3.1 Discrete detection rules for AWGN
    2.3.2 Rotational and translational invariance
    2.3.3 Bounds for M > 2
  2.4 Signal sets and measures
    2.4.1 Basic terminology
    2.4.2 Signal constellations
    2.4.3 Lattice-based constellations
  2.5 Problems

3 Passband Systems
  3.1 Equivalent representations
  3.2 Frequency analysis
  3.3 Channel Input-Output Relationships
  3.4 Baseband equivalent Gaussian noise
  3.5 Circularly symmetric complex Gaussian processes
    3.5.1 Gaussian hypothesis testing: complex case
  3.6 Problems

Part II: Transmission over Linear Time-Invariant Channels

4 Inter-symbol Interference and optimal detection
  4.1 Successive transmission over an AWGN channel
  4.2 Inter-symbol interference channel
    4.2.1 Matched filter
    4.2.2 Noise whitening
  4.3 Maximum Likelihood Sequence Estimation (MLSE)
    4.3.1 Viterbi Algorithm
    4.3.2 Error Analysis
  4.4 Maximum a-posteriori symbol detection
    4.4.1 BCJR Algorithm
  4.5 Problems

5 Equalization: Low complexity suboptimal receivers
  5.1 Linear estimation
    5.1.1 Orthogonality principle
    5.1.2 Wiener smoothing
    5.1.3 Linear prediction
    5.1.4 Geometry of random processes
  5.2 Suboptimal detection: Equalization
  5.3 Zero-forcing equalizer (ZFE)
    5.3.1 Performance analysis of the ZFE
  5.4 Minimum mean squared error linear equalization (MMSE-LE)
    5.4.1 Performance of the MMSE-LE
  5.5 Decision-feedback equalizer
    5.5.1 Performance analysis of the MMSE-DFE
    5.5.2 Zero-forcing DFE
  5.6 Fractionally spaced equalization
    5.6.1 Zero-forcing equalizer
  5.7 Finite-length equalizers
    5.7.1 FIR MMSE-LE
    5.7.2 FIR MMSE-DFE
  5.8 Problems

6 Transmission structures
  6.1 Pre-coding
    6.1.1 Tomlinson-Harashima precoding
  6.2 Multicarrier Transmission (OFDM)
    6.2.1 Fourier eigenbasis of LTI channels
    6.2.2 Orthogonal Frequency Division Multiplexing (OFDM)
    6.2.3 Frequency Domain Equalizer (FEQ)
    6.2.4 Alternate derivation of OFDM
    6.2.5 Successive Block Transmission
  6.3 Channel Estimation
    6.3.1 Training sequence design
    6.3.2 Relationship between stochastic and deterministic least squares
  6.4 Problems

Part III: Wireless Communications

7 Wireless channel models
  7.1 Radio wave propagation
    7.1.1 Free space propagation
    7.1.2 Ground Reflection
    7.1.3 Log-normal Shadowing
    7.1.4 Mobility and multipath fading
    7.1.5 Summary of radio propagation effects
  7.2 Wireless communication channel
    7.2.1 Linear time-varying channel
    7.2.2 Statistical Models
    7.2.3 Time and frequency variation
    7.2.4 Overall communication model
  7.3 Problems

8 Single-user communication
  8.1 Detection for wireless channels
    8.1.1 Coherent Detection
    8.1.2 Non-coherent Detection
    8.1.3 Error probability behavior
  8.2 Diversity
    8.2.1 Time Diversity (repetition coding, time diversity codes)
    8.2.2 Frequency Diversity (OFDM frequency diversity, frequency diversity through equalization)
    8.2.3 Spatial Diversity (receive diversity, transmit diversity)
    8.2.4 Tools for reliable wireless communication
  8.3 Problems
  8.A Exact Calculations of Coherent Error Probability
  8.B Non-coherent detection: fast time variation
  8.C Error probability for non-coherent detector

9 Multi-user communication
  9.1 Communication topologies
    9.1.1 Hierarchical networks
    9.1.2 Ad hoc wireless networks
  9.2 Access techniques
    9.2.1 Time Division Multiple Access (TDMA)
    9.2.2 Frequency Division Multiple Access (FDMA)
    9.2.3 Code Division Multiple Access (CDMA)
  9.3 Direct-sequence CDMA multiple access channels
    9.3.1 DS-CDMA model
    9.3.2 Multiuser matched filter
  9.4 Linear Multiuser Detection
    9.4.1 Decorrelating receiver
    9.4.2 MMSE linear multiuser detector
  9.5 Epilogue for multiuser wireless communications
  9.6 Problems

Part IV: Connections to Information Theory

10 Reliable transmission for ISI channels
  10.1 Capacity of ISI channels
  10.2 Coded OFDM
    10.2.1 Achievable rate for coded OFDM
    10.2.2 Waterfilling algorithm
  10.3 An information-theoretic approach to the MMSE-DFE
    10.3.1 Relationship of mutual information to the MMSE-DFE
    10.3.2 Consequences of the CDEF result
  10.4 Problems

Part V: Appendix

A Mathematical Preliminaries
  A.1 The Q function
  A.2 Fourier Transform
    A.2.1 Definition
    A.2.2 Properties of the Fourier Transform
    A.2.3 Basic Properties of the sinc Function
  A.3 Z-Transform
    A.3.1 Definition
    A.3.2 Basic Properties
  A.4 Energy and power constraints
  A.5 Random Processes
  A.6 Wide sense stationary processes
  A.7 Gram-Schmidt orthonormalisation
  A.8 The Sampling Theorem
  A.9 Nyquist Criterion
  A.10 Choleski Decomposition
  A.11 Problems

Part I: Review of Signal Processing and Detection


Chapter 1

Overview

1.1 Digital data transmission

Most of us have used communication devices, whether by talking on a telephone or browsing the internet on a computer. This course is about the mechanisms that allow such communications to occur. The focus of this class is on how bits are transmitted through a communication channel. The overall communication system is illustrated in Figure 1.1.

Figure 1.1: Communication block diagram.

1.2 Communication system blocks

Communication channel: A communication channel provides a way to communicate over large distances. But there are external signals, or noise, that affect transmission, and the channel may respond differently to different input signals. A main focus of the course is to understand signal processing techniques that enable digital transmission over such channels. Examples of communication channels include telephone lines, cable TV lines, cell phones, satellite networks, etc. In order to study these problems precisely, communication channels are often modelled mathematically, as illustrated in Figure 1.2.

Source, source coder, applications: The main reason to communicate is to be able to talk, listen to music, watch a video, look at content over the internet, etc. For each of these cases the signal

Figure 1.2: Models for communication channels.

(voice, music, video, or graphics, respectively) has to be converted into a stream of bits. Such a device is called a quantizer, and a simple scalar quantizer is illustrated in Figure 1.3. There exist many quantization methods which convert and compress the original signal into bits. You might have come across methods like PCM, vector quantization, etc.

Channel coder: A channel coding scheme adds redundancy to protect against errors introduced by the noisy channel. For example, a binary symmetric channel (illustrated in Figure 1.4) flips bits randomly, and an error-correcting code attempts to communicate reliably despite them.

Figure 1.3: Source coder or quantizer.

Signal transmission: Converts bits into signals suitable for the communication channel, which is typically analog. Thus message sets are converted into waveforms to be sent over the communication channel.
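A scalar quantizer like the one in Figure 1.3 can be sketched in a few lines. This is a minimal illustrative uniform quantizer, not the exact one in the figure; the bit width, input range, and sample values are assumed for the example.

```python
import numpy as np

def uniform_quantizer(samples, n_bits, lo=-1.0, hi=1.0):
    """Map real samples to indices of 2**n_bits uniform levels and back."""
    levels = 2 ** n_bits
    step = (hi - lo) / levels
    idx = np.clip(np.floor((samples - lo) / step), 0, levels - 1).astype(int)
    return idx, lo + (idx + 0.5) * step   # bit indices and reconstruction levels

x = np.array([-0.8, -0.1, 0.3, 0.95])
idx, xq = uniform_quantizer(x, n_bits=3)  # 8 levels -> 3 bits per sample
```

The reconstruction error of any in-range sample is at most half a quantization step, here 0.125.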

Figure 1.4: Binary symmetric channel.

This is called modulation or signal transmission, and it is one of the main focuses of the class.

Signal detection: Based on the noisy received signal, the receiver decides which message was sent. This procedure, called signal detection, depends on the signal transmission method as well as on the communication channel. The optimum detector minimizes the probability of an erroneous receiver decision. Many signal detection techniques are discussed as part of the main theme of the class.

Figure 1.5: Multiuser wireless environment.

Multiuser networks: Multiuser networks arise when many users share the same communication channel. This naturally occurs in wireless networks, as shown in Figure 1.5. There are many different forms of multiuser networks, as shown in Figures 1.6, 1.7 and 1.8.
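The binary symmetric channel of Figure 1.4 and the role of the channel coder can be checked with a quick simulation. The crossover probability, block length, and the rate-1/3 repetition code are illustrative assumptions, not the text's running example.

```python
import numpy as np

rng = np.random.default_rng(0)
p = 0.1                                    # assumed BSC crossover probability
bits = rng.integers(0, 2, 10_000)
coded = np.repeat(bits, 3)                 # rate-1/3 repetition code
flips = rng.random(coded.size) < p         # BSC flips each bit with prob. p
received = coded ^ flips
decoded = received.reshape(-1, 3).sum(axis=1) >= 2   # majority-vote decoding
ber_coded = float(np.mean(decoded != bits))
# theory: a repetition-3 codeword errs with prob. 3 p^2 (1-p) + p^3 = 0.028
```

Even this trivial code pushes the bit error rate well below the raw crossover probability p, at the price of a three-fold rate loss.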

Figure 1.6: Multiple Access Channel (MAC).

Figure 1.7: Broadcast Channel (BC).

Figure 1.8: Ad hoc network.

1.3 Goals of this class

- Understand basic techniques of signal transmission and detection.
- Communication over frequency-selective or inter-symbol interference (ISI) channels.
- Reduced-complexity (sub-optimal) detection for ISI channels and its performance.
- Multiuser networks.
- Wireless communication: a rudimentary exposition.
- Connection to information theory.

Complementary classes:

- Source coding/quantization (ref.: Gersho & Gray; Jayant & Noll)
- Channel coding (Modern Coding Theory, Richardson & Urbanke; Error-Correcting Codes, Blahut)
- Information theory (Cover & Thomas)

1.4 Class organization

These are the topics covered in the class:

- Digital communication & transmission
- Signal transmission and modulation
- Hypothesis testing & signal detection
- Inter-symbol interference channel: transmission & detection
- Wireless channel models: fading channels
- Detection for fading channels and the tool of diversity
- Multiuser communication: TDMA, CDMA
- Multiuser detection
- Connection to information theory

1.5 Lessons from class

These are the skills that you should have at the end of the class:

- A basic understanding of optimal detection
- The ability to design transmission & detection schemes for inter-symbol interference channels
- A rudimentary understanding of wireless channels
- An understanding of wireless receivers and the notion of diversity
- The ability to design multiuser detectors
- Connecting the communication blocks together with information theory


Chapter 2

Signals and Detection

2.1 Data Modulation and Demodulation

Figure 2.1: Block model for the modulation and demodulation procedures.

In data modulation we convert information bits into waveforms or signals that are suitable for transmission over a communication channel. The detection problem is reversing the modulation, i.e., finding which bits were transmitted over the noisy channel.

Example 2.1.1 (see Figure 2.2). Binary phase shift keying. Since DC does not go through the channel, a mapping of the binary bits to the levels 0 V and 1 V will not work. Instead, use antipodal waveforms at a carrier frequency f_0 inside the channel passband:

x_0(t) = \cos(2\pi f_0 t), \qquad x_1(t) = -\cos(2\pi f_0 t).

Detection: detect + or − at the output. Caveat: this is for a single transmission. For successive transmissions, stay tuned!
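The point of Example 2.1.1 can be verified numerically with a toy channel that simply removes the DC component of a transmitted symbol. The carrier frequency (in cycles per symbol) and sample count below are assumptions for the sketch, not values from the example.

```python
import numpy as np

n, f0 = 1000, 5.0            # samples per symbol; assumed carrier, cycles/symbol
t = np.arange(n) / n

def dc_block(x):             # toy channel model: removes the DC component
    return x - x.mean()

# on-off (0 V / 1 V) signalling: both symbols collapse after DC removal
onoff_gap = np.linalg.norm(dc_block(np.ones(n)) - dc_block(np.zeros(n)))

# antipodal carrier signalling survives the DC block
x0 = np.cos(2 * np.pi * f0 * t)
bpsk_gap = np.linalg.norm(dc_block(-x0) - dc_block(x0))
```

After the DC block, the two on-off symbols are indistinguishable (zero distance), whereas the antipodal carrier waveforms remain far apart, so a detector can still separate them.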

Figure 2.2: The channel in Example 2.1.1.

2.1.1 Mapping of vectors to waveforms

Consider the set of real-valued functions \{f(t)\}, t \in [0, T], such that

\int_0^T f^2(t)\, dt < \infty.

This is a Hilbert space of continuous functions, L^2[0, T], with inner product

\langle f, g \rangle = \int_0^T f(t) g(t)\, dt.

Basis functions: a class of functions can be expressed in terms of orthonormal basis functions \{\phi_n(t)\} as

x(t) = \sum_{n=1}^{N} x_n \phi_n(t),   (2.1)

where \langle \phi_n, \phi_m \rangle = \delta_{n-m}. The waveform carries the information through the communication channel. The relationship in (2.1) maps the vector x = [x_1, \ldots, x_N]^T to the waveform x(t).

Definition 2.1.1 (Signal constellation). The set of M vectors \{x_i\}, i = 0, \ldots, M-1, is called the signal constellation.

Figure 2.3: Examples of signal constellations: binary antipodal and quadrature phase shift keying (QPSK).

The mapping in (2.1) identifies points in L^2[0, T] with points in R^N and preserves inner products: if x_1(t) and x_2(t) are waveforms with basis representations x_1 and x_2 respectively, then \langle x_1, x_2 \rangle = \langle \mathbf{x}_1, \mathbf{x}_2 \rangle,

where the left-hand side is the waveform inner product \langle x_1, x_2 \rangle = \int_0^T x_1(t) x_2(t)\, dt and the right-hand side is the vector inner product \langle \mathbf{x}_1, \mathbf{x}_2 \rangle = \sum_{i=1}^{N} x_1(i)\, x_2(i).

Examples of signal constellations: binary antipodal and QPSK (quadrature phase shift keying).

Vector mapper: maps a binary vector to one of the signal points. The mapping is not arbitrary; clever choices lead to better performance over noisy channels. In some channels it is desirable for points that are close in Euclidean distance to have labels that are close in Hamming distance. Two alternate labelling schemes are illustrated in Figure 2.4.

Figure 2.4: A vector mapper.

Modulator: implements the basis expansion of (2.1).

Figure 2.5: Modulator implementing the basis expansion.

Signal set: the set of modulated waveforms \{x_i(t)\}, i = 0, \ldots, M-1, corresponding to the signal constellation x_i = [x_{i,1}, \ldots, x_{i,N}]^T \in R^N.

Definition 2.1.2 (Average energy).

E_x = E[\|x\|^2] = \sum_{i=0}^{M-1} \|x_i\|^2 p_X(i),

where p_X(i) is the probability of choosing x_i.
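The average-energy definition above can be illustrated directly; note that E_x depends on the prior as well as on the constellation points. The 4-point PAM constellation and the skewed prior below are assumed for the example.

```python
import numpy as np

const = np.array([-3.0, -1.0, 1.0, 3.0])   # illustrative 4-point PAM constellation

def avg_energy(p):
    # E_x = sum_i ||x_i||^2 p_X(i)
    return float(np.sum(p * const ** 2))

Ex_uniform = avg_energy(np.full(4, 0.25))                # (9+1+1+9)/4 = 5.0
Ex_skewed = avg_energy(np.array([0.1, 0.4, 0.4, 0.1]))   # inner points favoured: 2.6
```

A prior (or vector mapper) that favours low-energy points lowers the average transmitted energy, which is exactly why the text notes that p_X(i) depends on both the message source and the mapper.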

The probability p_X(i) depends on:

- the underlying probability distribution of bits in the message source, and
- the vector mapper.

Definition 2.1.3 (Average power). P_x = E_x / T (energy per unit time).

Example 2.1.2. Consider a 16-QAM constellation (Figure 2.6) with basis functions

\phi_1(t) = \sqrt{2/T} \cos(2\pi t / T), \qquad \phi_2(t) = \sqrt{2/T} \sin(2\pi t / T).

Figure 2.6: 16-QAM constellation.

For a symbol rate of 1/T = 2400 Hz, we get a bit rate of \log_2(16) \times 2400 = 9.6 kb/s.

The Gram-Schmidt procedure allows the choice of a minimal basis to represent the signal set \{x_i(t)\}. More on this during the review/exercise sessions.

2.1.2 Demodulation

Demodulation takes the continuous-time waveform and extracts its discrete version. Given the basis expansion of (2.1), the demodulator extracts the coefficients of the expansion by projecting the signal onto its basis:

\langle x(t), \phi_n(t) \rangle = \int_0^T x(t) \phi_n(t)\, dt = \int_0^T \sum_{k=1}^{N} x_k \phi_k(t) \phi_n(t)\, dt = \sum_{k=1}^{N} x_k \int_0^T \phi_k(t) \phi_n(t)\, dt = \sum_{k=1}^{N} x_k \delta_{k-n} = x_n.   (2.2)

Therefore, in the noiseless case, demodulation just recovers the coefficients of the basis functions.

Definition 2.1.4 (Matched filter). The matched filter operation is equivalent to the recovery of the coefficients of the basis expansion, since we can write

\int_0^T x(t) \phi_n(t)\, dt = [x(t) * \phi_n(T - t)]_{t=T} = [x(t) * \phi_n(-t)]_{t=0}.

Therefore, the recovery of the basis coefficients can be interpreted as a filtering operation.
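The projection in (2.2) can be sketched numerically: modulate a coefficient vector onto the cosine/sine basis of Example 2.1.2 and recover it by correlating against the basis (equivalently, sampling the matched filter output at t = T). The grid size and coefficient values are assumptions for the sketch.

```python
import numpy as np

T, n = 1.0, 50_000
dt = T / n
t = (np.arange(n) + 0.5) * dt
# orthonormal pair on [0, T]: sqrt(2/T) cos(2*pi*t/T), sqrt(2/T) sin(2*pi*t/T)
basis = np.stack([np.sqrt(2 / T) * np.cos(2 * np.pi * t / T),
                  np.sqrt(2 / T) * np.sin(2 * np.pi * t / T)])

x = np.array([1.0, -2.0])        # transmitted coefficient vector
xt = x @ basis                   # modulator: x(t) = sum_n x_n phi_n(t)

# demodulator / matched filter: x_n = integral of x(t) phi_n(t) dt over [0, T]
x_hat = basis @ xt * dt
```

In the noiseless case the recovered coefficients match the transmitted ones up to the numerical-integration error, exactly as (2.2) predicts.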

Figure 2.7: Matched filter demodulator.

Figure 2.8: Modulation and demodulation set-up as discussed up to now.

2.2 Data detection

We assume that the demodulator captures the essential information about x from y(t). This notion of essential information will be explored in more depth later. In the discrete domain,

p_Y(y) = \sum_{i=0}^{M-1} p_{Y|X}(y|i)\, p_X(i).

This is illustrated in Figure 2.9, showing the equivalent discrete channel.

Example 2.2.1. Consider the additive white Gaussian noise (AWGN) channel. Here y = x + z, and

Figure 2.9: Equivalent discrete channel.

hence

p_{Y|X}(y|x) = p_Z(y - x) = \frac{1}{\sqrt{2\pi\sigma^2}} e^{-\frac{(y-x)^2}{2\sigma^2}}.

2.2.1 Criteria for detection

Detection is guessing the input x given the noisy output y. This is expressed as a function \hat{m} = H(y). If M = m was the message sent, then the probability of error is

P_e \overset{def}{=} Prob(\hat{m} \neq m).

Definition 2.2.1 (Optimum detector). Minimizes the error probability over all detectors.

The probability of observing Y = y if the message m_i was sent is p(Y = y | M = m_i) = p_{Y|X}(y|i).

Decision rule: H : Y \to M is a function which takes the input y and outputs a guess of the transmitted message. Now,

P(H(Y) \text{ is correct}) = \int_y P[H(y) \text{ is correct} | Y = y]\, p_Y(y)\, dy.   (2.3)

H(y) is a deterministic function of y: for the given y, H(\cdot) chooses a particular hypothesis, say m_i. The probability that this decision is correct is the probability that y resulted from m_i being transmitted. Therefore we can write

P[H(Y) \text{ is correct} | Y = y] = P[\hat{m} = H(y) = m_i | Y = y] = P[x = x(m_i) = x_i | Y = y] = P_{X|Y}[x = x_i | y].

Inserting this into (2.3), we get

P(H(Y) \text{ is correct}) = \int_y P_{X|Y}[X = x_i | y]\, p_Y(y)\, dy \le \int_y \left\{ \max_i P_{X|Y}[X = x_i | y] \right\} p_Y(y)\, dy = P(H_{MAP}(Y) \text{ is correct}).

Implication: the decision rule

H_{MAP}(y) = \arg\max_i P_{X|Y}[X = x_i | y]

maximizes the probability of being correct, i.e., minimizes the error probability. Therefore this is the optimal decision rule, called the maximum a posteriori (MAP) decision rule.

Notes: The MAP detector needs knowledge of the prior p_X(x). It can be simplified as follows:

p_{X|Y}(x_i|y) = \frac{p_{Y|X}[y|x_i]\, p_X(x_i)}{p_Y(y)} \propto p_{Y|X}[y|x_i]\, p_X(x_i),

since p_Y(y) is common to all hypotheses. Therefore the MAP decision rule is equivalently written as

H_{MAP}(y) = \arg\max_i p_{Y|X}[y|x_i]\, p_X(x_i).
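The MAP rule just derived can be sketched on a small discrete channel. The channel matrix and prior below are hypothetical numbers chosen to make the prior matter: the MAP and ML decisions differ on one output symbol.

```python
import numpy as np

# hypothetical discrete channel: P[i, y] = p(y | x_i); binary input, ternary output
P = np.array([[0.7, 0.2, 0.1],
              [0.1, 0.3, 0.6]])
prior = np.array([0.8, 0.2])          # p_X, heavily favouring x_0

joint = P * prior[:, None]            # p(y | x_i) p_X(i)
map_dec = np.argmax(joint, axis=0)    # MAP decision for each output y
ml_dec = np.argmax(P, axis=0)         # ML ignores the prior

pe_map = 1.0 - joint.max(axis=0).sum()
pe_ml = 1.0 - joint[ml_dec, np.arange(3)].sum()
```

For the middle output symbol the likelihoods favour x_1 but the strong prior on x_0 flips the MAP decision, and the resulting average error probability of MAP is no larger than that of ML, as the optimality argument above guarantees.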

An alternate proof of the MAP decoding rule (binary hypothesis)

Let \Gamma_0 be the region such that for y \in \Gamma_0, H(y) = x_0, and similarly let \Gamma_1 be the region associated with x_1. For \pi_0 = P_X(x_0) and \pi_1 = P_X(x_1):

P[\text{error}] = P[H(y) \text{ is wrong}] = \pi_0 P[y \in \Gamma_1 | H_0] + \pi_1 P[y \in \Gamma_0 | H_1]   (2.4)
= \pi_0 \int_{\Gamma_1} P_{Y|X}(y|x_0)\, dy + \pi_1 \int_{\Gamma_0} P_{Y|X}(y|x_1)\, dy
= \pi_0 \int_{\Gamma_1} P_{Y|X}(y|x_0)\, dy + \pi_1 \left[ 1 - \int_{\Gamma_1} P_{Y|X}(y|x_1)\, dy \right]
= \pi_1 + \int_{\Gamma_1} \left[ \pi_0 P_{Y|X}(y|x_0) - \pi_1 P_{Y|X}(y|x_1) \right] dy
= \pi_1 + \int_{R^N} 1\{y \in \Gamma_1\} \left[ \pi_0 P_{Y|X}(y|x_0) - \pi_1 P_{Y|X}(y|x_1) \right] dy.

To make the last term as small as possible, \Gamma_1 should collect exactly the region where the integrand is negative. Therefore, in order to make the error probability smallest, we choose y \in \Gamma_1 if

\pi_0 P_{Y|X}(y|x_0) < \pi_1 P_{Y|X}(y|x_1).

Figure 2.10: Functional dependence of the integrand in (2.4).

That is, \Gamma_1 is defined as the set of y for which

\frac{P_X(x_0) P_{Y|X}(y|x_0)}{P_Y(y)} < \frac{P_X(x_1) P_{Y|X}(y|x_1)}{P_Y(y)},

or equivalently P_{X|Y}(x_0|y) < P_{X|Y}(x_1|y), i.e., the MAP rule!

Maximum likelihood detector: If the priors are assumed uniform, i.e., p_X(x_i) = 1/M, then the MAP rule becomes

H_{ML}(y) = \arg\max_i p_{Y|X}[y|x_i],

which is called the maximum likelihood (ML) rule. It is so called because it chooses the message that most likely caused the observation (ignoring how likely the message itself was). This decision rule is clearly inferior to MAP for non-uniform priors.

Question: Suppose the prior probabilities were unknown; is there a robust detection scheme? One can think of this as a game where nature chooses the prior distribution and the detection rule is under our control.

Theorem 2.2.1. The ML detector minimizes the maximum possible average error probability when the input distribution is unknown, if the conditional probability of error P[H_{ML}(y) \text{ is incorrect} | M = m_i] is independent of i.

Proof: Assume that P_{e,ML|m=m_i} is independent of i, and let

P_{e,ML|m=m_i} = P_e^{ML}(i) \overset{def}{=} P^{ML}.

Hence

P_{e,ML}(P_X) = \sum_{i=0}^{M-1} P_X(i) P_{e,ML|m=m_i} = P^{ML},   (2.5)

and therefore

\max_{P_X} P_{e,ML} = \max_{P_X} \sum_{i=0}^{M-1} P_X(i) P_{e,ML|m=m_i} = P^{ML}.

For any hypothesis test H,

\max_{P_X} P_{e,H} = \max_{P_X} \sum_{i=0}^{M-1} P_X(i) P_{e,H|m=m_i} \overset{(a)}{\ge} \frac{1}{M} \sum_{i=0}^{M-1} P_{e,H|m=m_i} \overset{(b)}{\ge} \frac{1}{M} \sum_{i=0}^{M-1} P_{e,ML|m=m_i} = P_{e,ML},

where (a) holds because a particular choice of P_X (the uniform prior) can only give a value smaller than the maximum, and (b) holds because the ML decoder is optimal for the uniform prior. Thus \max_{P_X} P_{e,H} \ge P_{e,ML} = P^{ML}, since by (2.5) P_{e,ML} = P^{ML} for every P_X.

Interpretation: ML decoding is not just a simplification of the MAP rule; it also has a canonical robustness property for detection under uncertainty about the priors, if the regularity condition of Theorem 2.2.1 is satisfied. We will explore this further in Section 2.2.2.

Example 2.2.2. The AWGN channel. Let us assume the following:

y = x_i + z,

where z \sim N(0, \sigma^2 I) and x, y, z \in R^N. Hence

p_Z(z) = \frac{1}{(2\pi\sigma^2)^{N/2}} e^{-\frac{\|z\|^2}{2\sigma^2}},

giving

p_{Y|X}(y|x) = p_Z(y - x) = \frac{1}{(2\pi\sigma^2)^{N/2}} e^{-\frac{\|y - x\|^2}{2\sigma^2}}.

MAP decision rule for the AWGN channel: since

p_{X|Y}[X = x_i | y] = \frac{p_{Y|X}[y|x_i]\, p_X(x_i)}{p_Y(y)},

the MAP decision rule is

H_{MAP}(y) = \arg\max_i p_{X|Y}[X = x_i | y]
= \arg\max_i \left\{ p_X(x_i) \frac{1}{(2\pi\sigma^2)^{N/2}} e^{-\frac{\|y - x_i\|^2}{2\sigma^2}} \right\}
= \arg\max_i \left\{ \log[p_X(x_i)] - \frac{\|y - x_i\|^2}{2\sigma^2} \right\}
= \arg\min_i \left\{ \frac{\|y - x_i\|^2}{2\sigma^2} - \log[p_X(x_i)] \right\}.

ML decision rule for the AWGN channel:

H_{ML}(y) = \arg\max_i p_{Y|X}[y | X = x_i]
= \arg\max_i \frac{1}{(2\pi\sigma^2)^{N/2}} e^{-\frac{\|y - x_i\|^2}{2\sigma^2}}
= \arg\min_i \frac{\|y - x_i\|^2}{2\sigma^2}.

Interpretation: the maximum likelihood decision rule selects the message that is closest in Euclidean distance to the received signal.

Observation: In both the MAP and ML decision rules, one does not need y itself, but only the functions \|y - x_i\|, i \in \{0, \ldots, M-1\}, in order to evaluate the decision rule. Therefore there is no loss of information if we retain the scalars \{\|y - x_i\|\} instead of y. In this case the reduction is moot, but in continuous detection this reduction is important. Such a function, which retains the essential information about the parameter of interest, is called a sufficient statistic.

2.2.2 Minmax decoding rule

The MAP decoding rule needs knowledge of the prior distribution \{P_X(x = x_i)\}. If the prior is unknown, we develop a criterion which is robust to the prior distribution. Consider the criterion

\max_{P_X} \min_H P_{e,H}(p_X),

where P_{e,H}(p_X) is the error probability of decision rule H, i.e., P[H(y) \text{ is incorrect}]; P_{e,H}(p_X) explicitly depends on P_X(x). For the binary case,

P_{e,H}(p_X) = \pi_0 \underbrace{P[y \in \Gamma_1 | x_0]}_{\text{does not depend on } \pi_0, \pi_1} + \pi_1 \underbrace{P[y \in \Gamma_0 | x_1]}_{\text{does not depend on } \pi_0, \pi_1}
= \pi_0 \int_{\Gamma_1} P_{Y|X}(y|x_0)\, dy + (1 - \pi_0) \int_{\Gamma_0} P_{Y|X}(y|x_1)\, dy.

Thus, for a given decision rule H which does not depend on p_X, P_{e,H}(p_X) is a linear function of \pi_0:

P_{e,H}(\pi_0) = \pi_0 P[y \in \Gamma_1 | H_0] + (1 - \pi_0) P[y \in \Gamma_0 | H_1].

Figure 2.11: P_{e,H}(\pi_0) as a function of the prior \pi_0.

A robust detection criterion is obtained when we seek

\min_H \max_{\pi_0} P_{e,H}(\pi_0).

Clearly, for a given decision rule H,

\max_{\pi_0} P_{e,H}(\pi_0) = \max\{ P[y \in \Gamma_1 | H_0],\; P[y \in \Gamma_0 | H_1] \},

since the maximum of a linear function of \pi_0 \in [0, 1] is attained at an endpoint.

Now let us look at the MAP rule for every choice of \pi_0. Let V(\pi_0) = P_e^{MAP}(\pi_0), i.e., the error probability of the MAP decoding rule as a function of the prior \pi_0. Since the MAP decoding rule does depend on P_X(x), its error probability is no longer a linear function; it is in fact concave (see Figure 2.12, and a HW problem). Such a concave function has a unique maximum value, and if it is strictly concave it has a unique maximizer \pi_0^*. The value V(\pi_0^*) is the largest average error probability of the MAP detector, and \pi_0^* is the worst prior for the MAP detector.

Now, for any decision rule that does not depend on P_X(x), P_{e,H}(p_X) is a linear function of \pi_0 (in the binary case), as illustrated in Figure 2.11. Since P_{e,H}(p_X) \ge P_{e,MAP}(p_X) for each p_X, the line

Figure 2.12: The average error probability V(\pi_0) of the MAP rule as a function of the prior \pi_0, maximized at the worst prior \pi_0^*.

Figure 2.13: P_{e,H}(\pi_0) and P_{e,MAP}(\pi_0) as functions of the prior \pi_0.

always lies above the curve V(\pi_0). The best we can do is to make the line tangential to V(\pi_0) at some prior \pi_0', as shown in Figure 2.13; such a decision rule is the MAP decoding rule designed for the prior \pi_0'. If we want \max_{P_X} P_{e,H}(p_X) to be as small as possible, it is clear that we should take \pi_0' = \pi_0^*, i.e., design the robust detection rule as the MAP rule for the worst prior \pi_0^*. Since \pi_0^* is the worst prior for the MAP rule, this is the best one could hope for. Since the tangent to V(\pi_0) at \pi_0^* has slope 0, this detection rule has the property that P_{e,H}(\pi_0) is independent of \pi_0. Therefore, for the minmax rule H we have

P_H[y \in \Gamma_1 | H_0] = P_H[y \in \Gamma_0 | H_1],

and hence

P_{e,H}(\pi_0) = \pi_0 P_H[y \in \Gamma_1 | H_0] + (1 - \pi_0) P_H[y \in \Gamma_0 | H_1] = P_H[y \in \Gamma_1 | H_0] = P_H[y \in \Gamma_0 | H_1]

is independent of \pi_0. Hence P_{e,H|x=x_0} = P_{e,H|x=x_1}, i.e., the error probabilities conditioned on the message are the same. Note that this was the regularity condition we used in Theorem 2.2.1. Hence, regardless of the choice of \pi_0, the (average) error probability is the same! If \pi_0^* = 1/2 (i.e., p_X is uniform), then the maximum likelihood rule

is the robust detection rule, as stated in Theorem 2.2.1. Note that this is not so if \pi_0^* \ne 1/2; then the MAP rule for \pi_0^* becomes the robust detection rule. Also note that the minmax rule makes the performance for all priors as bad as for the worst prior.

Note: If \pi_0^* = 1/2, i.e., P_X(x) is uniform, then minmax is the same as ML, and this occurs in several cases.

Figure 2.14: The minmax detection rule: V(\pi_0) is the error probability of the Bayes (MAP) rule as a function of \pi_0.

Since the minmax rule is the Bayes rule for the worst prior, if the worst prior is uniform then clearly the minmax rule is the ML rule. Equally, if the ML rule satisfies P_{ML}[\text{error} | H_j] independent of j, then the ML rule is the robust detection rule.

2.2.3 Decision regions

Given the MAP and ML decision rules, we can divide R^N into regions which correspond to the different decisions. For example, in the AWGN case the ML decoding rule decides m_i if

\|y - x_i\| < \|y - x_j\|, \quad \forall j \ne i.

Therefore we can think of \Gamma_i as the decision region for m_i, where

\Gamma_i^{ML} = \{ y \in R^N : \|y - x_i\| < \|y - x_j\|, \; \forall j \ne i \}.

The MAP rule for the AWGN channel gives shifted regions:

\Gamma_i^{MAP} = \{ y \in R^N : \|y - x_i\|^2 - 2\sigma^2 \log[p_X(x_i)] < \|y - x_j\|^2 - 2\sigma^2 \log[p_X(x_j)], \; \forall j \ne i \}.

The ML decision regions have a nice geometric interpretation: they are the Voronoi regions of the set of points \{x_i\}. That is, the decision region associated with m_i is the set of all points in R^N which are closer to x_i than to all the rest. Moreover, since they are defined by Euclidean norms \|y - x_i\|, the regions are separated by hyperplanes. To see this, observe that the decision regions are given by

\|y - x_i\|^2 \le \|y - x_j\|^2, \quad \forall j \ne i
\Leftrightarrow -2\langle y, x_i \rangle + \|x_i\|^2 \le -2\langle y, x_j \rangle + \|x_j\|^2
\Leftrightarrow 2\langle y, x_j - x_i \rangle \le \|x_j\|^2 - \|x_i\|^2
\Leftrightarrow \langle y - \tfrac{1}{2}(x_j + x_i),\; x_i - x_j \rangle \ge 0, \quad \forall j \ne i.

Hence the decision regions are bounded by hyperplanes, since they are determined by a set of linear inequalities. The MAP decoding rule also produces decision regions bounded by hyperplanes.
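The hyperplane characterization just derived can be sanity-checked numerically: for two points, "nearest neighbour" and "which side of the perpendicular bisector" must always agree. The constellation points and random test points below are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
x0, x1 = np.array([1.0, 0.0]), np.array([-1.0, 2.0])   # two constellation points

ys = rng.normal(size=(1000, 2)) * 3.0                  # random test points
nearest = np.linalg.norm(ys - x0, axis=1) < np.linalg.norm(ys - x1, axis=1)
# hyperplane test: <y - (x0 + x1)/2, x0 - x1> > 0  iff  y is closer to x0
halfplane = (ys - (x0 + x1) / 2) @ (x0 - x1) > 0
all_agree = bool(np.all(nearest == halfplane))
```

The boundary is the perpendicular bisector of the segment joining x_0 and x_1, which is exactly the hyperplane in the last line of the derivation.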

Figure 2.15: Voronoi regions of \{x_i\} for a uniform prior; here the ML and MAP decision regions coincide.

2.2.4 Bayes rule for minimizing risk

Error probability is just one possible criterion for choosing a detector. More generally, a detector can minimize other cost functions. For example, let C_{i,j} denote the cost of choosing hypothesis i when hypothesis j is actually true. Then the expected cost incurred by a decision rule H(y) when m_j is sent is

R_j(H) = \sum_i C_{i,j} P[H(Y) = m_i | M = m_j],

and the overall average cost, taking the prior probabilities into account, is

R(H) = \sum_j P_X(j) R_j(H).

Armed with this criterion, we can ask the same question as before: what is the optimal decision rule minimizing R(H)?

Note: The error probability criterion corresponds to the cost assignment C_{i,j} = 1 for i \ne j and C_{i,j} = 0 for i = j.

Consider the case M = 2, i.e., distinguishing between two hypotheses. Rewriting the criterion for this case:

R(H) = P_X(0) R_0(H) + P_X(1) R_1(H),

where, for j = 0, 1,

R_j(H) = C_{0,j} P[H(Y) = m_0 | M = m_j] + C_{1,j} P[H(Y) = m_1 | M = m_j]
= C_{0,j} \{ 1 - P[H(Y) = m_1 | M = m_j] \} + C_{1,j} P[H(Y) = m_1 | M = m_j].

Let P_X(0) = \pi_0 and P_X(1) = 1 - \pi_0.

Substituting,

R(H) = \pi_0 C_{0,0} P[y \in \Gamma_0 | x = x_0] + \pi_0 C_{1,0} P[y \in \Gamma_1 | x = x_0] + \pi_1 C_{0,1} P[y \in \Gamma_0 | x = x_1] + \pi_1 C_{1,1} P[y \in \Gamma_1 | x = x_1]
= \pi_0 C_{0,0} + \pi_1 C_{0,1} + \pi_0 (C_{1,0} - C_{0,0}) \int_{y \in \Gamma_1} P_{Y|X}(y|x_0)\, dy + \pi_1 (C_{1,1} - C_{0,1}) \int_{y \in \Gamma_1} P_{Y|X}(y|x_1)\, dy
= \sum_{j=0}^{1} \pi_j C_{0,j} + \int_{y \in \Gamma_1} \sum_{j=0}^{1} \pi_j (C_{1,j} - C_{0,j}) P_{Y|X}(y|x_j)\, dy.

Now, just as in the alternate proof of the MAP decoding rule (see (2.4)), we want to minimize the last term. As in Figure 2.10, this is done by collecting into \Gamma_1 exactly the region where the integrand \sum_j \pi_j (C_{1,j} - C_{0,j}) P_{Y|X}(y|x_j) is negative. Therefore we get the decision rule

\Gamma_1 = \left\{ y \in R^N : \sum_{j=0}^{1} P_X(j) (C_{1,j} - C_{0,j}) P_{Y|X}(y|x_j) < 0 \right\}.

Likelihood ratio: In all the detection criteria we have seen, the likelihood ratio

\frac{P_{Y|X}(y|x_1)}{P_{Y|X}(y|x_0)}

appears as part of the decision rule. For example, if C_{1,1} < C_{0,1}, then

\Gamma_1 = \{ y \in R^N : P_{Y|X}(y|x_1) > \tau P_{Y|X}(y|x_0) \}, \quad \text{where} \quad \tau = \frac{P_X(0)(C_{1,0} - C_{0,0})}{P_X(1)(C_{0,1} - C_{1,1})}.

For C_{0,0} = C_{1,1} = 0 and C_{0,1} = C_{1,0} = 1 we get \tau = P_X(0)/P_X(1), i.e., the MAP rule, which minimizes the average error probability.

2.2.5 Irrelevance and reversibility

An output may contain components that do not help to determine the message. Such irrelevant components can be discarded without loss of performance, as illustrated in the following example.

Example 2.2.3. As shown in Figure 2.16, if z_1 and z_2 are independent, then clearly y_2 is irrelevant.

Theorem 2.2.2. If y = [y_1; y_2], and we have either of the following equivalent conditions:

P_{X|Y_1,Y_2} = P_{X|Y_1} \quad \text{or} \quad P_{Y_2|Y_1,X} = P_{Y_2|Y_1},

then y_2 is irrelevant for the detection of X.

Figure 2.16: The channel of Example 2.2.3.

Proof: If P_{X|Y_1,Y_2} = P_{X|Y_1}, then the MAP decoding rule ignores Y_2, and therefore Y_2 is irrelevant almost by definition. The question is whether the second condition is equivalent. Suppose P_{Y_2|Y_1,X} = P_{Y_2|Y_1}. Since

P_{Y_2|Y_1} = \frac{P_{Y_1,Y_2}}{P_{Y_1}}   (2.6)
P_{Y_2|Y_1,X} = \frac{P_{Y_1,Y_2|X}}{P_{Y_1|X}},   (2.7)

we have

\frac{P_{Y_1,Y_2}}{P_{Y_1}} = \frac{P_{Y_1,Y_2|X}}{P_{Y_1|X}} \;\Rightarrow\; \frac{P_{Y_1,Y_2|X}}{P_{Y_1,Y_2}} = \frac{P_{Y_1|X}}{P_{Y_1}}   (2.8)
\Rightarrow\; \frac{P_{Y_1,Y_2|X} P_X}{P_{Y_1,Y_2}} = \frac{P_{Y_1|X} P_X}{P_{Y_1}} \;\Rightarrow\; P_{X|Y_1,Y_2} = P_{X|Y_1}.

Note: The irrelevance theorem is summarized by the Markov chain relationship X \to Y_1 \to Y_2, which means that conditioned on Y_1, Y_2 is independent of X.

Application of the irrelevance theorem

Theorem 2.2.3 (Reversibility theorem). The application of an invertible mapping to the channel output vector y does not affect the performance of the MAP detector.

Proof: Let y_2 be the channel output and y_1 = G(y_2), where G(\cdot) is an invertible map. Then y_2 = G^{-1}(y_1). Clearly

y = [y_1; y_2] = [y_1; G^{-1}(y_1)],

and therefore P_{X|Y_1,Y_2} = P_{X|Y_1}; hence, by applying the irrelevance theorem, we can drop y_2.

2.2.6 Complex Gaussian Noise

Let z be real Gaussian noise, i.e., z = (z_1, \ldots, z_n) with

p_Z(z) = \frac{1}{(2\pi\sigma^2)^{n/2}} e^{-\frac{\|z\|^2}{2\sigma^2}}.

Let a complex Gaussian random variable be Z_c = R + jI, where the real and imaginary components (R, I) are jointly Gaussian with covariance

K = \begin{bmatrix} E[R^2] & E[RI] \\ E[IR] & E[I^2] \end{bmatrix}.

For the complex random variable,

E[Z_c Z_c^*] = E[|Z_c|^2] = E[R^2] + E[I^2],
E[Z_c Z_c] = E[R^2] - E[I^2] + 2j E[RI].

Circularly symmetric Gaussian random variable: Z_c is circularly symmetric if

E[Z_c Z_c] = 0 \;\Leftrightarrow\; E[R^2] = E[I^2] \text{ and } E[RI] = 0.

For complex Gaussian random vectors, the pairwise pseudo-covariances are

E[Z_i^{(c)} Z_j^{(c)}] = E[R_i R_j] - E[I_i I_j] + j\left( E[R_i I_j] + E[R_j I_i] \right),

and the vector is circularly symmetric when E[Z_i^{(c)} Z_j^{(c)}] = 0 for all i, j. Complex noise processes arise from passband systems; we will learn more about them shortly.

2.2.7 Continuous additive white Gaussian noise channel

Let us go through the entire chain for a continuous (waveform) channel.

Channel: y(t) = x(t) + z(t), t \in [0, T].

Additive white Gaussian noise: the noise process z(t) is Gaussian and white, i.e.,

E[z(t) z(t - \tau)] = \frac{N_0}{2} \delta(\tau).

Vector channel representation: with the basis expansion and vector encoder x(t) = \sum_{n=1}^{N} x_n \phi_n(t), one can write

y(t) = \sum_{n=1}^{N} x_n \phi_n(t) + z(t).

Let

y_n = \langle y(t), \phi_n(t) \rangle, \quad z_n = \langle z(t), \phi_n(t) \rangle, \quad n = 1, \ldots, N,

and define

\hat{z}(t) \overset{def}{=} \sum_{n=1}^{N} z_n \phi_n(t) \ne z(t), \qquad \hat{y}(t) \overset{def}{=} \sum_{n=1}^{N} y_n \phi_n(t) \ne y(t).

Consider the vector model

y = [y_1, \ldots, y_N]^T = x + z.

Lemma 2.2.1 (Uncorrelated noise samples). Given any orthonormal basis functions \{\phi_n(t)\} and white Gaussian noise z(t), the coefficients \{z_n\} = \langle z, \phi_n \rangle of the basis expansion are Gaussian, independent and identically distributed, with variance N_0/2, i.e., E[z_n z_k] = \frac{N_0}{2} \delta_{n-k}.

Therefore, if we extend the orthonormal basis \{\phi_n(t)\}_{n=1}^{N} to span \{z(t)\}, the coefficients \{z_n\}_{n=N+1}^{\infty} of the extension would be independent of the rest of the coefficients. Let us examine

y(t) = \hat{y}(t) + \tilde{y}(t) = \underbrace{\sum_{n=1}^{N} (x_n + z_n) \phi_n(t)}_{\hat{y}(t)} + \underbrace{z(t) - \hat{z}(t)}_{\tilde{y}(t)}.

In the vector expansion, \tilde{y} is the vector containing the basis coefficients for \phi_n(t), n > N. These coefficients can be shown to be irrelevant to the detection of x, and can therefore be dropped. Hence, for the detection process, the following vector model is sufficient:

y = x + z.

Now we are back in familiar territory, and we can write the MAP and ML decoding rules as before. The MAP decision rule is

H_{MAP}(y) = \arg\min_i \left[ \frac{\|y - x_i\|^2}{2\sigma^2} - \log p_X(x_i) \right],

and the ML decision rule is

H_{ML}(y) = \arg\min_i \frac{\|y - x_i\|^2}{2\sigma^2}.

Let p_X(x_i) = 1/M, i.e., a uniform prior. Then ML \equiv MAP \equiv the optimal detector, and

P_e = \sum_{i=0}^{M-1} P_{e|x=x_i} P_X(x_i) \overset{\text{uniform prior}}{=} \frac{1}{M} \sum_{i=0}^{M-1} P_{e,ML|x=x_i} = 1 - \frac{1}{M} \sum_{i=0}^{M-1} P_{c|x=x_i}.

The error probabilities depend on the chosen signal constellation. More soon...
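With a uniform prior, the optimal detector for the AWGN vector model is the nearest-neighbour (ML) rule, which can be simulated directly. The QPSK-like constellation, noise level, and sample size below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
const = np.array([[1.0, 1.0], [1.0, -1.0], [-1.0, 1.0], [-1.0, -1.0]])  # QPSK-like
sigma, n = 0.5, 20_000

msgs = rng.integers(0, 4, n)                         # uniform prior on messages
y = const[msgs] + rng.normal(scale=sigma, size=(n, 2))

# ML = nearest neighbour: argmin_i ||y - x_i||^2
dists = ((y[:, None, :] - const[None, :, :]) ** 2).sum(axis=2)
dec = np.argmin(dists, axis=1)
p_err = float(np.mean(dec != msgs))
# for this constellation, symbol error = 1 - (1 - Q(1/sigma))^2, about 0.045 here
```

Since the two coordinates are detected independently for this constellation, the simulated symbol error rate should sit near the per-axis Q-function prediction.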

2.2.8 Binary constellation error probability

Let Y = x_i + Z, i = 0, 1, with Z \sim N(0, \sigma^2 I_N). The conditional error probability of the ML detector is

P_{e,ML|x=x_0} = P[\|y - x_0\| \ge \|y - x_1\|], \quad \text{since } y = x_0 + z   (2.9)
= P[\|z\| \ge \|(x_0 - x_1) + z\|]
= P[\|z\|^2 \ge \|x_1 - x_0\|^2 - 2\langle (x_1 - x_0), z \rangle + \|z\|^2]   (2.10)
= P\left[ \langle (x_1 - x_0), z \rangle \ge \tfrac{1}{2} \|x_1 - x_0\|^2 \right].

But U = \frac{(x_1 - x_0)^T}{\|x_1 - x_0\|} z is Gaussian, with E[U] = 0 and E[U^2] = \sigma^2. Hence

P_{e,ML|x=x_0} = P\left[ U \ge \tfrac{1}{2} \|x_1 - x_0\| \right]
= \int_{\frac{\|x_1 - x_0\|}{2}}^{\infty} \frac{1}{\sqrt{2\pi\sigma^2}} e^{-\frac{u^2}{2\sigma^2}}\, du   (2.11)
= \int_{\frac{\|x_1 - x_0\|}{2\sigma}}^{\infty} \frac{1}{\sqrt{2\pi}} e^{-\frac{\tilde{u}^2}{2}}\, d\tilde{u}   (2.12)
\overset{def}{=} Q\left( \frac{\|x_1 - x_0\|}{2\sigma} \right).   (2.13)

By symmetry the same holds conditioned on x = x_1, so

P_{e,ML} = P_X\{x_0\} P\{e, ML | x = x_0\} + P_X\{x_1\} P\{e, ML | x = x_1\} = Q\left( \frac{\|x_1 - x_0\|}{2\sigma} \right).

2.3 Error Probability for AWGN Channels

2.3.1 Discrete detection rules for AWGN

AWGN channel: Y = X + Z, with Y, X, Z \in C^N.

Let p_X(x_i) = 1/M, i.e., a uniform prior; hence the ML detector is equivalent to the MAP detector.

Detection rule: \Gamma_i = \{ y \in C^N : \|y - x_i\| \le \|y - x_j\|, \; \forall j \ne i \}, and

P_e = \frac{1}{M} \sum_{i=0}^{M-1} P_{e|x=x_i} = 1 - \frac{1}{M} \sum_{i=0}^{M-1} P_{c|x=x_i}.

Hence, for M > 2, the error probability calculation can be difficult. We will develop properties and bounds that help in this problem.
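The binary result Q(||x_1 − x_0|| / 2σ) from Section 2.2.8 can be checked against a Monte Carlo run of the ML detector. The constellation points, noise level, and sample size are assumptions chosen for the sketch.

```python
import numpy as np
from math import erfc, sqrt

def Q(u):                      # Gaussian tail function Q(u) = P[N(0,1) > u]
    return 0.5 * erfc(u / sqrt(2))

rng = np.random.default_rng(11)
x0, x1 = np.array([1.0, 0.0]), np.array([-1.0, 0.0])
sigma, n = 0.8, 100_000

z = rng.normal(scale=sigma, size=(n, 2))
# send x0; the ML detector errs when x0 + z is closer to x1 than to x0
err = np.linalg.norm(x0 + z - x1, axis=1) < np.linalg.norm(z, axis=1)
pe_sim = float(np.mean(err))
pe_theory = Q(np.linalg.norm(x1 - x0) / (2 * sigma))   # Q(d / 2 sigma)
```

The empirical error rate should agree with Q(d/2σ) to within Monte Carlo fluctuation, confirming that only the distance between the two points (relative to σ) matters.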

2.3. ERROR PROBABILITY FOR AWGN CHANNELS

2.3.2 Rotational and translational invariance

Rotational invariance

Theorem 2.3.1. If all the data symbols are rotated by an orthogonal transformation, i.e., X̃_i = Q x_i, i ∈ {0, ..., M-1}, where Q ∈ C^{N×N}, Q*Q = I, then the probability of error of the MAP/ML receiver remains unchanged over an AWGN channel.

Proof: Let

    Ỹ = X̃ + Z.                                                   (2.14)

Multiplying by Q*,

    Q*Ỹ = Q*X̃ + Q*Z,   i.e.,   Y = X + Z̃,                        (2.15)

where Y = Q*Ỹ, X = Q*X̃ and Z̃ = Q*Z. But Z̃ is Gaussian (a linear transformation of the Gaussian Z) and E[Z̃ Z̃*] = Q* σ^2 I Q = σ^2 I, so Z̃ is probabilistically equivalent to Z ~ N(0, σ^2 I). Hence (2.14) is the same as Y = X + Z̃, and since Q is an invertible transform, the probability of error is unchanged.

Translational invariance

If all data symbols in a signal constellation are translated by a constant vector amount, i.e., X̃_i = X_i + a for all i, then the probability of error of the ML decoder remains the same on an AWGN channel.

Minimum energy translate: subtract E[X] from every signal point. In other words, among equivalent signal constellations, a zero-mean signal constellation has minimum energy.

2.3.3 Bounds for M > 2

As mentioned earlier, the error probability calculations for M > 2 can be difficult. Hence in this section we develop upper bounds on the error probability which are applicable for any constellation size M.

Theorem 2.3.2 (Union bound).

    P_{e,ML|x=x_i} ≤ Σ_{j≠i} P_2(x_i, x_j) = Σ_{j≠i} Q( ||x_i - x_j|| / (2σ) ),

    P_{e,ML} ≤ (M - 1) Q( d_min / (2σ) ),   where d_min := min_{i≠j} ||x_i - x_j||.
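Rotational invariance is ultimately a statement that an orthogonal transformation preserves all pairwise distances (and hence all decision-region geometry). A small numerical check of this, with an arbitrary 2-D rotation and a hypothetical constellation:

```python
import math

# A 2-D rotation by an arbitrary angle theta is an orthogonal transformation.
theta = 0.7
Qrot = [[math.cos(theta), -math.sin(theta)],
        [math.sin(theta),  math.cos(theta)]]

def rotate(x):
    return (Qrot[0][0] * x[0] + Qrot[0][1] * x[1],
            Qrot[1][0] * x[0] + Qrot[1][1] * x[1])

points = [(1.0, 1.0), (1.0, -1.0), (-1.0, 1.0), (-1.0, -1.0)]  # illustrative set
rotated = [rotate(x) for x in points]

# Every pairwise distance is preserved, so d_min and the error probability
# of the ML receiver on AWGN are unchanged.
for i in range(len(points)):
    for j in range(i + 1, len(points)):
        assert abs(math.dist(points[i], points[j])
                   - math.dist(rotated[i], rotated[j])) < 1e-12
print("pairwise distances preserved")
```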

Proof: For x = x_i, i.e., y = x_i + z,

    P_{e,ML|x=x_i} = P[ ∪_{j≠i} { ||y - x_j|| < ||y - x_i|| } ]
                   ≤ Σ_{j≠i} P[ ||y - x_j|| < ||y - x_i|| ]        (union bound)
                   = Σ_{j≠i} P_2(x_i, x_j)
                   = Σ_{j≠i} Q( ||x_i - x_j|| / (2σ) )
                   ≤ (M - 1) Q( d_min / (2σ) ),

since Q(.) is a monotonically decreasing function. Therefore

    P_{e,ML} = Σ_{i=0}^{M-1} P_X(x_i) P(e, ML | x = x_i)
             ≤ Σ_{i=0}^{M-1} P_X(x_i) (M - 1) Q( d_min / (2σ) )
             = (M - 1) Q( d_min / (2σ) ).

Tighter bound (Nearest Neighbor Union Bound)

Figure 2.7: Decision regions for AWGN channel and error probability. If y ∉ Γ_i, an error occurs: P[ ∪_{j≠i} { ||y - x_j|| < ||y - x_i|| } ] = P[ y ∉ Γ_i ].

Let N_i be the number of points sharing a decision boundary with x_i, and let D_i denote the set of such points. Suppose x_k does not share a decision boundary with x_i, but ||y - x_i|| > ||y - x_k||. Then there exists x_j ∈ D_i such that ||y - x_i|| > ||y - x_j||. Hence
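The two forms of the union bound in the proof above can be computed directly. The following sketch evaluates both the pairwise sum and the looser (M-1) Q(d_min/(2σ)) form for an illustrative square constellation (the points and σ are examples, not from the notes):

```python
import math

def Q(x):
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def union_bound(points, sigma):
    """Pairwise union bound for a uniform prior:
    P_e <= (1/M) sum_i sum_{j != i} Q(||x_i - x_j|| / (2 sigma))."""
    M = len(points)
    total = 0.0
    for i in range(M):
        for j in range(M):
            if j != i:
                total += Q(math.dist(points[i], points[j]) / (2 * sigma))
    return total / M

def loose_bound(points, sigma):
    """Looser form: P_e <= (M - 1) Q(d_min / (2 sigma))."""
    M = len(points)
    dmin = min(math.dist(p, q) for i, p in enumerate(points) for q in points[i + 1:])
    return (M - 1) * Q(dmin / (2 * sigma))

pts = [(1.0, 1.0), (1.0, -1.0), (-1.0, 1.0), (-1.0, -1.0)]  # example square set
print(union_bound(pts, 0.5), loose_bound(pts, 0.5))
```

The pairwise sum is never larger than the d_min form, since each term Q(||x_i - x_j||/(2σ)) ≤ Q(d_min/(2σ)).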

Figure 2.8: Geometry when x_k ∉ D_i: if y is closer to x_k than to x_i, then there is an x_j ∈ D_i such that y is closer to x_j than to x_i.

    ||y - x_k|| < ||y - x_i||   ⟹   ∃ j ∈ D_i such that ||y - x_j|| < ||y - x_i||.

Therefore

    P[ ∪_{j≠i} { ||y - x_j|| < ||y - x_i|| } ] = P[ ∪_{j∈D_i} { ||y - x_j|| < ||y - x_i|| } ]
                                              ≤ N_i Q( d_min / (2σ) ).

Hence

    P_{e,ML} = Σ_{i=0}^{M-1} P_X(x_i) P(e, ML | x = x_i)
             ≤ Σ_{i=0}^{M-1} P_X(x_i) N_i Q( d_min / (2σ) )
             = N_e Q( d_min / (2σ) ),   where N_e = Σ_i N_i P_X(x_i).

Hence we have proved the following result.

Theorem 2.3.3 (Nearest Neighbor Union Bound (NNUB)).

    P_{e,ML} ≤ N_e Q( d_min / (2σ) ),

where N_e = Σ_i N_i P_X(x_i) and N_i is the number of constellation points sharing a decision boundary with x_i.
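A minimal NNUB computation, again for an illustrative square constellation where each point shares a decision boundary with exactly its two adjacent points (the neighbor counts are supplied by hand here; determining them automatically would require computing the Voronoi regions):

```python
import math

def Q(x):
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def nnub(points, sigma, neighbors):
    """Nearest Neighbor Union Bound, P_e <= N_e Q(d_min / (2 sigma)),
    with N_e the average neighbor count under a uniform prior.
    neighbors[i] = number of points sharing a decision boundary with x_i."""
    M = len(points)
    dmin = min(math.dist(p, q) for i, p in enumerate(points) for q in points[i + 1:])
    Ne = sum(neighbors) / M
    return Ne * Q(dmin / (2 * sigma))

# Square set: each point has 2 decision-boundary neighbors (the diagonal
# point does not share a boundary), so N_e = 2 rather than M - 1 = 3.
pts = [(1.0, 1.0), (1.0, -1.0), (-1.0, -1.0), (-1.0, 1.0)]
print(nnub(pts, 0.5, [2, 2, 2, 2]))
```

Compare with the union bound sketch: the NNUB replaces the factor M - 1 = 3 by N_e = 2, giving a tighter bound.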

2.4 Signal sets and measures

2.4.1 Basic terminology

In this section we discuss the terminology used, i.e., the rate, number of dimensions etc., and discuss what would be fair comparisons between constellations.

If the signal bandwidth is approximately W and the signal is approximately time-limited to T, then a deep theorem from signal analysis states that the signal space has dimension N, where

    N ≈ 2WT.

If b bits are carried by a constellation in dimension N:

    b̄ = b / N = # of bits/dimension
    R = rate = b / T = # of bits/unit time
    R / W = 2b / N = # of bits/sec/Hz
    Ē_x = average energy per dimension = E_x / N
    P_x = average power = E_x / T.

Ē_x is useful in comparing compound signal sets with different numbers of dimensions.

Signal-to-noise ratio (SNR):

    SNR = Ē_x / σ^2 = (energy/dim) / (noise energy/dim).

Constellation figure of merit (CFM):

    ζ_x := (d_min / 2)^2 / Ē_x.

As ζ_x increases we get better performance (for the same number of bits per dimension only).

Fair comparison: in order to make a fair comparison between constellations, we need to make a multi-parameter comparison across the following measures:

    Data rate (R) or bits/dim (b̄)
    Power (P_x) or energy/dim (Ē_x)
    Total bandwidth (W) or symbol period (T)
    Error probability (P_e) or normalized probability of error (P̄_e)
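The per-dimension measures defined above are straightforward to compute for any finite constellation. A sketch, using illustrative values of T, W and σ^2 (all hypothetical, for a 4-point 2-D example):

```python
import math

def constellation_measures(points, T, W, sigma2):
    """Per-dimension measures for a constellation under a uniform prior.
    T, W, sigma2 are the symbol period, bandwidth and per-dimension
    noise variance (illustrative parameters)."""
    M = len(points)
    N = len(points[0])                                   # number of dimensions
    b = math.log2(M)                                     # bits per symbol
    Ex = sum(sum(c * c for c in p) for p in points) / M  # average symbol energy
    dmin = min(math.dist(p, q) for i, p in enumerate(points) for q in points[i + 1:])
    return {
        "b_bar": b / N,                 # bits / dimension
        "R": b / T,                     # bits / unit time
        "spectral_eff": b / (W * T),    # bits / sec / Hz
        "Ex_bar": Ex / N,               # energy / dimension
        "Px": Ex / T,                   # average power
        "SNR": (Ex / N) / sigma2,       # energy/dim over noise energy/dim
        "CFM": (dmin / 2) ** 2 / (Ex / N),
    }

pts = [(1.0, 1.0), (1.0, -1.0), (-1.0, 1.0), (-1.0, -1.0)]
m = constellation_measures(pts, T=1.0, W=1.0, sigma2=0.25)
print(m)
```

For this example b̄ = 1 bit/dim, Ē_x = 1, SNR = 4 and ζ_x = 1.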

2.4. SIGNAL SETS AND MEASURES

2.4.2 Signal constellations

Cubic constellations:

    x = Σ_{i=0}^{N-1} U_i e_i,

where N is the number of dimensions and e_i ∈ R^N is given by

    e_i(k) = 1 if k = i,  0 else,

with U_i ∈ {0, 1} depending on the bit sequence. Hence the number of constellation points is M = 2^N.

Orthogonal constellations: M = αN. Example: the bi-orthogonal signal set has M = 2N, with x_i = ±√E_x e_i, giving 2N signal points.

Circular constellations: the signal points are placed at the M-th roots of unity.

Example 2.4.1 (Quadrature Phase-Shift Keying (QPSK)).

    φ_1(t) = √(2/T) cos(2πt/T),  0 ≤ t ≤ T,
    φ_2(t) = √(2/T) sin(2πt/T),  0 ≤ t ≤ T.
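The orthonormality of the two QPSK basis functions just defined can be verified numerically with a simple Riemann-sum approximation of the inner product (a quick sanity check, not a proof):

```python
import math

# QPSK basis: phi1(t) = sqrt(2/T) cos(2 pi t / T), phi2(t) = sqrt(2/T) sin(2 pi t / T).
T = 1.0

def phi1(t):
    return math.sqrt(2 / T) * math.cos(2 * math.pi * t / T)

def phi2(t):
    return math.sqrt(2 / T) * math.sin(2 * math.pi * t / T)

def inner(f, g, n=20_000):
    # midpoint-rule approximation of <f, g> = integral_0^T f(t) g(t) dt
    dt = T / n
    return sum(f((k + 0.5) * dt) * g((k + 0.5) * dt) for k in range(n)) * dt

# <phi1, phi1> and <phi2, phi2> should be ~1; <phi1, phi2> should be ~0.
print(inner(phi1, phi1), inner(phi2, phi2), inner(phi1, phi2))
```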

The constellation consists of x = (x_1, x_2), where x_i ∈ { -√(E_x/2), +√(E_x/2) }. Note that d_min^2 = 4E_x for BPSK, whereas for QPSK

    d_min^2 = 2E_x,   ζ_x = (d_min/2)^2 / Ē_x = 1.

Error probability:

    P_correct = Σ_{i=0}^{3} P_{c|i} P_X(i) = P_{c|0} = [ 1 - Q( d_min/(2σ) ) ]^2,

    P_error = 2 Q( d_min/(2σ) ) - [ Q( d_min/(2σ) ) ]^2 < 2 Q( d_min/(2σ) ),

where 2 Q(d_min/(2σ)) is the NNUB. Hence for d_min/σ reasonably large the NNUB is tight.

Example 2.4.2 (M-ary Phase-Shift Keying (MPSK)).

Figure 2.9: M-ary Phase-Shift Keying: adjacent points subtend an angle 2π/M at the origin.

    d_min = 2 √E_x sin(π/M),   ζ_x = (d_min/2)^2 / Ē_x = 2 sin^2(π/M).

Error probability:

    P_e < 2 Q( √E_x sin(π/M) / σ ).

2.4.3 Lattice-based constellations

A lattice is a regular arrangement of points in an N-dimensional space:

    x = G a,   a ∈ Z^N,

where G ∈ R^{N×N} is called the generator matrix.
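The MPSK quantities above follow directly from the circle geometry; the sketch below evaluates d_min and the error-probability bound for a hypothetical 8-PSK example with unit symbol energy:

```python
import math

def Q(x):
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def mpsk_dmin(Ex, M):
    """Minimum distance of M-PSK on a circle of radius sqrt(Ex):
    d_min = 2 sqrt(Ex) sin(pi / M)."""
    return 2 * math.sqrt(Ex) * math.sin(math.pi / M)

def mpsk_pe_bound(Ex, M, sigma):
    """Bound from the notes: P_e < 2 Q(sqrt(Ex) sin(pi/M) / sigma),
    i.e., the NNUB with N_e = 2 neighbors and the d_min above."""
    return 2 * Q(math.sqrt(Ex) * math.sin(math.pi / M) / sigma)

# Example: 8-PSK, E_x = 1, sigma = 0.1 (illustrative numbers).
print(mpsk_dmin(1.0, 8), mpsk_pe_bound(1.0, 8, 0.1))
```

As a cross-check, the distance between two adjacent points on the unit circle, (1, 0) and (cos 2π/8, sin 2π/8), equals `mpsk_dmin(1.0, 8)`.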

Example 2.4.3 (Integer lattice). G = I, so x ∈ Z^N. If N = 1 we get the Pulse Amplitude Modulation (PAM) constellation. For this,

    E_x = d^2 (M^2 - 1) / 12,

and thus

    d_min = √( 12 E_x / (M^2 - 1) ),   ζ_x = 3 / (M^2 - 1).

Figure 2.10: PAM constellation, with points at ±d/2, ±3d/2, ....

Error probability:

    P_correct = ((M-2)/M) [ 1 - 2 Q( d_min/(2σ) ) ] + (2/M) [ 1 - Q( d_min/(2σ) ) ],

    P_e = 2 (1 - 1/M) Q( d_min / (2σ) ).

Number of nearest neighbors: N_j = 2 for interior points and N_j = 1 for end points. Note:

    N_e = [ 2(M - 2) + 2 ] / M = 2 (1 - 1/M).

Hence the NNUB is exact.

Curious fact: for a given minimum distance d,

    M = √( 1 + 12 E_x / d^2 ),   b = log_2 M = (1/2) log_2( 1 + 12 E_x / d^2 ).

Is this familiar? If so, is this a coincidence? More about this later...

Other lattice-based constellations

Quadrature Amplitude Modulation (QAM): a cookie-slice of the 2-dimensional integer lattice. Other constellations are carved out of other lattices (e.g., the hexagonal lattice).

Other performance measures of interest:

    Coding gain: γ = ζ_1 / ζ_2.
    Shaping gain of the lattice.
    Peak-to-average ratio.

2.5 Problems

Problem 2.1. Consider a Gaussian hypothesis testing problem with m = 2. Under hypothesis H = 0 the transmitted point is equally likely to be a_00 = (·, ·) or a_01 = (·, ·), whereas under hypothesis H = 1 the transmitted point is equally likely to be a_10 = (·, ·) or a_11 = (·, ·). Under the assumption of uniform priors, write down the formula for the MAP decision rule and determine geometrically the decision regions.
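Before turning to the problems, note that the PAM formulas above are easy to check in code. The sketch below evaluates both the exact error probability and the NNUB for an illustrative 4-PAM example, confirming that the two coincide:

```python
import math

def Q(x):
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def pam_pe(M, Ex, sigma):
    """Exact ML error probability of M-PAM under a uniform prior:
    P_e = 2 (1 - 1/M) Q(d_min / (2 sigma)), with
    E_x = d^2 (M^2 - 1) / 12  =>  d_min = sqrt(12 E_x / (M^2 - 1))."""
    dmin = math.sqrt(12 * Ex / (M * M - 1))
    return 2 * (1 - 1 / M) * Q(dmin / (2 * sigma))

def pam_nnub(M, Ex, sigma):
    """NNUB: interior points have 2 neighbors, the 2 end points have 1,
    so N_e = (2 (M - 2) + 2) / M = 2 (1 - 1/M), matching the exact P_e."""
    dmin = math.sqrt(12 * Ex / (M * M - 1))
    Ne = (2 * (M - 2) + 2 * 1) / M
    return Ne * Q(dmin / (2 * sigma))

# Illustrative 4-PAM example: E_x = 5 gives d = 2.
print(pam_pe(4, 5.0, 0.5), pam_nnub(4, 5.0, 0.5))   # identical values
```

For M = 2 the formula reduces to the binary antipodal result Q(d_min/(2σ)), as expected.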

Problem 2.2 (Minimax). Consider a scalar channel

    Y = X + Z,                                                    (2.16)

where X = ±1 (i.e., X ∈ {+1, -1}) and Z ~ N(0, 1) (a real Gaussian random variable).

1. Let P[X = 1] = 1/2 = P[X = -1]; find the MAP decoding rule. Note that this is also the ML decoding rule. Now let P[X = 1] = Π_0 and P[X = -1] = 1 - Π_0, and compute the error probability associated with the ML decoding rule as a function of Π_0. Given this calculation, can you guess the worst prior for the MAP decoding rule? (Hint: you do not need to calculate P_{e,MAP}(Π_0) for this.)

2. Now consider another receiver D_R, which implements the following decoding rule (for the same channel as in (2.16)):

    D_{R,1} = [·, ∞),   D_{R,-1} = (-∞, ·).

That is, the receiver decides that +1 was transmitted if it receives Y in the first interval and decides that -1 was transmitted if Y is in the second. Find P_{e,R}(Π_0), the error probability of this receiver as a function of Π_0 = P[X = 1]. Plot P_{e,R}(Π_0) as a function of Π_0. Does it behave as you might have expected?

3. Find max_{Π_0} P_{e,R}(Π_0), i.e., what is the worst prior for this receiver?

4. Find the value of Π_0 for which the receiver D_R specified in parts (2) and (3) corresponds to the MAP decision rule. In other words, find for which value of Π_0 D_R is optimal in terms of error probability.

Problem 2.3. Consider the binary hypothesis testing problem with MAP decoding. Assume that the priors are given by (π_0, 1 - π_0).

1. Let V(π_0) be the average probability of error. Write the expression for V(π_0).

2. Show that V(π_0) is a concave function of π_0, i.e., for priors (π_0, 1 - π_0) and (π_0', 1 - π_0'),

    V( λπ_0 + (1 - λ)π_0' ) ≥ λ V(π_0) + (1 - λ) V(π_0').

3. What is the implication of concavity for the maximum of V(π_0) over π_0 ∈ [0, 1]?

Problem 2.4. Consider the Gaussian hypothesis testing case with non-uniform priors. Prove that in this case, if y_1 and y_2 are elements of the decision region associated with hypothesis i, then so is αy_1 + (1 - α)y_2, where α ∈ [0, 1].

Problem 2.5. Suppose Y is a random variable that under hypothesis H_j has density

    p_j(y) = ((j + 1)/2) e^{-(j+1)|y|},   y ∈ R,  j = 0, 1.

Assume that the costs are given by C_ij = 0 if i = j, 1 if i = 1 and j = 0, and 3/4 if i = 0 and j = 1.

1. Find the MAP decision region assuming equal priors.

2. Recall that the average risk function is given by

    R_H(π_0) = Σ_{j=0}^{1} π_j C_{0,j} + Σ_{j=0}^{1} π_j (C_{1,j} - C_{0,j}) P[ H(Y) = m_1 | M = m_j ].

Assume that the costs are given as above. Show that R_MAP(π_0) is a concave function of π_0. Find the minimum and maximum values of R_MAP(π_0) and the corresponding priors.

Problem 2.6. Consider the simple hypothesis testing problem for the real-valued observation Y:

    H_0: p_0(y) = exp(-y^2/2)/√(2π),       y ∈ R,
    H_1: p_1(y) = exp(-(y-1)^2/2)/√(2π),   y ∈ R.

Suppose the cost assignment is given by C_00 = C_11 = 0, C_01 = 1, and C_10 = N. Find the minmax rule and risk. Investigate the behavior when N is very large.

Problem 2.7. Suppose we have a real observation Y and binary hypotheses described by the following pair of PDFs:

    p_0(y) = 1 - |y|         if |y| ≤ 1,  0 if |y| > 1,
    p_1(y) = (1/4)(2 - |y|)  if |y| ≤ 2,  0 if |y| > 2.

Assume that the costs are given by C_10 = C_01 > 0 and C_00 = C_11 = 0. Find the minimax test of H_0 versus H_1 and the corresponding minimax risk.

Problem 2.8. In the following, a complex-valued random vector is defined as U = U_R + jU_I, and we define the covariance matrix of a zero-mean complex-valued random vector as

    K_U = E[UU*].

We recall that a complex random vector is proper iff K_{U_R} = K_{U_I} and K_{U_I U_R} = -K^T_{U_I U_R}. We want to prove that if U is a proper complex n-dimensional Gaussian zero-mean random vector with covariance Λ = E[UU*], then the pdf of U is given by

    p_U(u) = (1 / (π^n det(Λ))) exp{ -u* Λ^{-1} u }.

1. Compute Φ = Cov[ (U_R, U_I)^T ].

2. A complex Gaussian random vector is defined as a vector with jointly Gaussian real and imaginary parts. Write p_{U_R U_I}(u_R, u_I).

3. Show the following lemma. Define the Hermitian n×n matrix M = (M_R + M_I)/2 + j(M_IR - M_IR^T)/2 and the symmetric 2n×2n matrix Ψ = [ M_R, M_RI ; M_IR, M_I ]. Then the quadratic forms E_1 = u* M u and E_2 = [u_R^T, u_I^T] Ψ [u_R ; u_I] are equal for all u = u_R + ju_I iff M_I = M_R and M_IR = -M_IR^T.

4. Suppose that Λ^{-1} = Δ^{-1}(I - jΛ_IR Λ_R^{-1}), where Δ = Λ_R + Λ_IR Λ_R^{-1} Λ_IR. Apply the lemma given above to Ψ = Φ^{-1} and M = Δ^{-1}(I - jΛ_IR Λ_R^{-1}) in order to show that p_U(u) and p_{U_R U_I}(u_R, u_I) have the same exponents. Use the matrix inversion formulae.

5. Show that det [ A, B ; C, D ] = det(AD - BD^{-1}CD).

6. Using the result above, show that 2^{2n} det Φ = (det Λ)^2.

Problem 2.9. Consider the following signals:

    x_0(t) = √(2/T) cos( 2πt/T + π/6 )    if t ∈ [0, T],  0 otherwise,
    x_1(t) = √(2/T) cos( 2πt/T + 5π/6 )   if t ∈ [0, T],  0 otherwise,
    x_2(t) = √(2/T) cos( 2πt/T + 3π/2 )   if t ∈ [0, T],  0 otherwise.

(a) Find a set of orthonormal basis functions for this signal set. Show that they are orthonormal. Hint: use the identity cos(a + b) = cos(a) cos(b) - sin(a) sin(b).

(b) Find the data symbols corresponding to the signals above using the basis functions you found in (a).

(c) Find the following inner products:
    (i) <x_0(t), x_0(t)>
    (ii) <x_0(t), x_1(t)>
    (iii) <x_0(t), x_2(t)>

Problem 2.10. Consider an additive-noise channel y = x + n, where x takes on the values ±3 with P(x = 3) = 1/3, and where n is a Cauchy random variable with PDF

    p_n(z) = 1 / ( π(1 + z^2) ).

Determine the decision regions of the MAP detector. Compare the decision regions found with those of the MAP detector for n ~ N(0, 1). Compute the error probability in the two cases (Cauchy and Gaussian noise).

Problem 2.11. Consider the following constellation, to be used on an AWGN channel with variance σ^2:

    x_0 = (·, ·),  x_1 = (·, ·),  x_2 = (·, ·),  x_3 = (·, ·),  x_4 = (0, 3).

1. Find the decision regions for the ML detector.

2. Find the union bound and nearest neighbor union bound on P_e for the ML detector on this signal constellation.

Problem 2.12. A set of 4 orthogonal basis functions {φ_1(t), φ_2(t), φ_3(t), φ_4(t)} is used in the following constellation. In both the first two dimensions and again in the second two dimensions, the points {1, 1}, {-1, -1} are labeled as E points and {1, -1}, {-1, 1} are labeled as O points. The constellation points are restricted such that a point E can only follow a point E, and a point O can only follow a point O. For instance, the 4-dimensional point [1, 1, 1, 1] is permitted to occur, but the point [1, 1, 1, -1] cannot occur.

1. Enumerate all M points as ordered 4-tuples.

2. Find b and b̄.

3. Find E_x and Ē_x (energy per dimension) for this constellation.

4. Find d_min for this constellation.

5. Find P_e and P̄_e for this constellation using the NNUB, if used on an AWGN channel with σ^2 = 0.1.


More information

INFORMATION PROCESSING ABILITY OF BINARY DETECTORS AND BLOCK DECODERS. Michael A. Lexa and Don H. Johnson

INFORMATION PROCESSING ABILITY OF BINARY DETECTORS AND BLOCK DECODERS. Michael A. Lexa and Don H. Johnson INFORMATION PROCESSING ABILITY OF BINARY DETECTORS AND BLOCK DECODERS Michael A. Lexa and Don H. Johnson Rice University Department of Electrical and Computer Engineering Houston, TX 775-892 amlexa@rice.edu,

More information

Performance Analysis of Spread Spectrum CDMA systems

Performance Analysis of Spread Spectrum CDMA systems 1 Performance Analysis of Spread Spectrum CDMA systems 16:33:546 Wireless Communication Technologies Spring 5 Instructor: Dr. Narayan Mandayam Summary by Liang Xiao lxiao@winlab.rutgers.edu WINLAB, Department

More information

Information Theory. Lecture 10. Network Information Theory (CT15); a focus on channel capacity results

Information Theory. Lecture 10. Network Information Theory (CT15); a focus on channel capacity results Information Theory Lecture 10 Network Information Theory (CT15); a focus on channel capacity results The (two-user) multiple access channel (15.3) The (two-user) broadcast channel (15.6) The relay channel

More information

2016 Spring: The Final Exam of Digital Communications

2016 Spring: The Final Exam of Digital Communications 2016 Spring: The Final Exam of Digital Communications The total number of points is 131. 1. Image of Transmitter Transmitter L 1 θ v 1 As shown in the figure above, a car is receiving a signal from a remote

More information

Lecture 15: Thu Feb 28, 2019

Lecture 15: Thu Feb 28, 2019 Lecture 15: Thu Feb 28, 2019 Announce: HW5 posted Lecture: The AWGN waveform channel Projecting temporally AWGN leads to spatially AWGN sufficiency of projection: irrelevancy theorem in waveform AWGN:

More information

ELECTRONICS & COMMUNICATIONS DIGITAL COMMUNICATIONS

ELECTRONICS & COMMUNICATIONS DIGITAL COMMUNICATIONS EC 32 (CR) Total No. of Questions :09] [Total No. of Pages : 02 III/IV B.Tech. DEGREE EXAMINATIONS, APRIL/MAY- 207 Second Semester ELECTRONICS & COMMUNICATIONS DIGITAL COMMUNICATIONS Time: Three Hours

More information

Problem Set 3 Due Oct, 5

Problem Set 3 Due Oct, 5 EE6: Random Processes in Systems Lecturer: Jean C. Walrand Problem Set 3 Due Oct, 5 Fall 6 GSI: Assane Gueye This problem set essentially reviews detection theory. Not all eercises are to be turned in.

More information

MMSE DECODING FOR ANALOG JOINT SOURCE CHANNEL CODING USING MONTE CARLO IMPORTANCE SAMPLING

MMSE DECODING FOR ANALOG JOINT SOURCE CHANNEL CODING USING MONTE CARLO IMPORTANCE SAMPLING MMSE DECODING FOR ANALOG JOINT SOURCE CHANNEL CODING USING MONTE CARLO IMPORTANCE SAMPLING Yichuan Hu (), Javier Garcia-Frias () () Dept. of Elec. and Comp. Engineering University of Delaware Newark, DE

More information

Chapter 4. Data Transmission and Channel Capacity. Po-Ning Chen, Professor. Department of Communications Engineering. National Chiao Tung University

Chapter 4. Data Transmission and Channel Capacity. Po-Ning Chen, Professor. Department of Communications Engineering. National Chiao Tung University Chapter 4 Data Transmission and Channel Capacity Po-Ning Chen, Professor Department of Communications Engineering National Chiao Tung University Hsin Chu, Taiwan 30050, R.O.C. Principle of Data Transmission

More information

Square Root Raised Cosine Filter

Square Root Raised Cosine Filter Wireless Information Transmission System Lab. Square Root Raised Cosine Filter Institute of Communications Engineering National Sun Yat-sen University Introduction We consider the problem of signal design

More information

Power Spectral Density of Digital Modulation Schemes

Power Spectral Density of Digital Modulation Schemes Digital Communication, Continuation Course Power Spectral Density of Digital Modulation Schemes Mikael Olofsson Emil Björnson Department of Electrical Engineering ISY) Linköping University, SE-581 83 Linköping,

More information

Optimal Sequences and Sum Capacity of Synchronous CDMA Systems

Optimal Sequences and Sum Capacity of Synchronous CDMA Systems Optimal Sequences and Sum Capacity of Synchronous CDMA Systems Pramod Viswanath and Venkat Anantharam {pvi, ananth}@eecs.berkeley.edu EECS Department, U C Berkeley CA 9470 Abstract The sum capacity of

More information

Lecture 6: Gaussian Channels. Copyright G. Caire (Sample Lectures) 157

Lecture 6: Gaussian Channels. Copyright G. Caire (Sample Lectures) 157 Lecture 6: Gaussian Channels Copyright G. Caire (Sample Lectures) 157 Differential entropy (1) Definition 18. The (joint) differential entropy of a continuous random vector X n p X n(x) over R is: Z h(x

More information

Lecture 7 MIMO Communica2ons

Lecture 7 MIMO Communica2ons Wireless Communications Lecture 7 MIMO Communica2ons Prof. Chun-Hung Liu Dept. of Electrical and Computer Engineering National Chiao Tung University Fall 2014 1 Outline MIMO Communications (Chapter 10

More information

Diversity Multiplexing Tradeoff in ISI Channels Leonard H. Grokop, Member, IEEE, and David N. C. Tse, Senior Member, IEEE

Diversity Multiplexing Tradeoff in ISI Channels Leonard H. Grokop, Member, IEEE, and David N. C. Tse, Senior Member, IEEE IEEE TRANSACTIONS ON INFORMATION THEORY, VOL 55, NO 1, JANUARY 2009 109 Diversity Multiplexing Tradeoff in ISI Channels Leonard H Grokop, Member, IEEE, and David N C Tse, Senior Member, IEEE Abstract The

More information

Constellation Shaping for Communication Channels with Quantized Outputs

Constellation Shaping for Communication Channels with Quantized Outputs Constellation Shaping for Communication Channels with Quantized Outputs Chandana Nannapaneni, Matthew C. Valenti, and Xingyu Xiang Lane Department of Computer Science and Electrical Engineering West Virginia

More information

Solutions to Homework Set #4 Differential Entropy and Gaussian Channel

Solutions to Homework Set #4 Differential Entropy and Gaussian Channel Solutions to Homework Set #4 Differential Entropy and Gaussian Channel 1. Differential entropy. Evaluate the differential entropy h(x = f lnf for the following: (a Find the entropy of the exponential density

More information

Lecture 7. Union bound for reducing M-ary to binary hypothesis testing

Lecture 7. Union bound for reducing M-ary to binary hypothesis testing Lecture 7 Agenda for the lecture M-ary hypothesis testing and the MAP rule Union bound for reducing M-ary to binary hypothesis testing Introduction of the channel coding problem 7.1 M-ary hypothesis testing

More information

ECE 564/645 - Digital Communications, Spring 2018 Homework #2 Due: March 19 (In Lecture)

ECE 564/645 - Digital Communications, Spring 2018 Homework #2 Due: March 19 (In Lecture) ECE 564/645 - Digital Communications, Spring 018 Homework # Due: March 19 (In Lecture) 1. Consider a binary communication system over a 1-dimensional vector channel where message m 1 is sent by signaling

More information

Computation of Bit-Error Rate of Coherent and Non-Coherent Detection M-Ary PSK With Gray Code in BFWA Systems

Computation of Bit-Error Rate of Coherent and Non-Coherent Detection M-Ary PSK With Gray Code in BFWA Systems Computation of Bit-Error Rate of Coherent and Non-Coherent Detection M-Ary PSK With Gray Code in BFWA Systems Department of Electrical Engineering, College of Engineering, Basrah University Basrah Iraq,

More information

Projects in Wireless Communication Lecture 1

Projects in Wireless Communication Lecture 1 Projects in Wireless Communication Lecture 1 Fredrik Tufvesson/Fredrik Rusek Department of Electrical and Information Technology Lund University, Sweden Lund, Sept 2018 Outline Introduction to the course

More information

(Classical) Information Theory III: Noisy channel coding

(Classical) Information Theory III: Noisy channel coding (Classical) Information Theory III: Noisy channel coding Sibasish Ghosh The Institute of Mathematical Sciences CIT Campus, Taramani, Chennai 600 113, India. p. 1 Abstract What is the best possible way

More information

Parameter Estimation

Parameter Estimation 1 / 44 Parameter Estimation Saravanan Vijayakumaran sarva@ee.iitb.ac.in Department of Electrical Engineering Indian Institute of Technology Bombay October 25, 2012 Motivation System Model used to Derive

More information

Chapter 7: Channel coding:convolutional codes

Chapter 7: Channel coding:convolutional codes Chapter 7: : Convolutional codes University of Limoges meghdadi@ensil.unilim.fr Reference : Digital communications by John Proakis; Wireless communication by Andreas Goldsmith Encoder representation Communication

More information

Applications of Lattices in Telecommunications

Applications of Lattices in Telecommunications Applications of Lattices in Telecommunications Dept of Electrical and Computer Systems Engineering Monash University amin.sakzad@monash.edu Oct. 2013 1 Sphere Decoder Algorithm Rotated Signal Constellations

More information

Inner product spaces. Layers of structure:

Inner product spaces. Layers of structure: Inner product spaces Layers of structure: vector space normed linear space inner product space The abstract definition of an inner product, which we will see very shortly, is simple (and by itself is pretty

More information

Estimation of the Capacity of Multipath Infrared Channels

Estimation of the Capacity of Multipath Infrared Channels Estimation of the Capacity of Multipath Infrared Channels Jeffrey B. Carruthers Department of Electrical and Computer Engineering Boston University jbc@bu.edu Sachin Padma Department of Electrical and

More information

MODULATION AND CODING FOR QUANTIZED CHANNELS. Xiaoying Shao and Harm S. Cronie

MODULATION AND CODING FOR QUANTIZED CHANNELS. Xiaoying Shao and Harm S. Cronie MODULATION AND CODING FOR QUANTIZED CHANNELS Xiaoying Shao and Harm S. Cronie x.shao@ewi.utwente.nl, h.s.cronie@ewi.utwente.nl University of Twente, Faculty of EEMCS, Signals and Systems Group P.O. box

More information

Lecture 6 Channel Coding over Continuous Channels

Lecture 6 Channel Coding over Continuous Channels Lecture 6 Channel Coding over Continuous Channels I-Hsiang Wang Department of Electrical Engineering National Taiwan University ihwang@ntu.edu.tw November 9, 015 1 / 59 I-Hsiang Wang IT Lecture 6 We have

More information

CHAPTER 14. Based on the info about the scattering function we know that the multipath spread is T m =1ms, and the Doppler spread is B d =0.2 Hz.

CHAPTER 14. Based on the info about the scattering function we know that the multipath spread is T m =1ms, and the Doppler spread is B d =0.2 Hz. CHAPTER 4 Problem 4. : Based on the info about the scattering function we know that the multipath spread is T m =ms, and the Doppler spread is B d =. Hz. (a) (i) T m = 3 sec (ii) B d =. Hz (iii) ( t) c

More information

Limited Feedback in Wireless Communication Systems

Limited Feedback in Wireless Communication Systems Limited Feedback in Wireless Communication Systems - Summary of An Overview of Limited Feedback in Wireless Communication Systems Gwanmo Ku May 14, 17, and 21, 2013 Outline Transmitter Ant. 1 Channel N

More information

Error Exponent Region for Gaussian Broadcast Channels

Error Exponent Region for Gaussian Broadcast Channels Error Exponent Region for Gaussian Broadcast Channels Lihua Weng, S. Sandeep Pradhan, and Achilleas Anastasopoulos Electrical Engineering and Computer Science Dept. University of Michigan, Ann Arbor, MI

More information

PSK bit mappings with good minimax error probability

PSK bit mappings with good minimax error probability PSK bit mappings with good minimax error probability Erik Agrell Department of Signals and Systems Chalmers University of Technology 4196 Göteborg, Sweden Email: agrell@chalmers.se Erik G. Ström Department

More information

IEEE C80216m-09/0079r1

IEEE C80216m-09/0079r1 Project IEEE 802.16 Broadband Wireless Access Working Group Title Efficient Demodulators for the DSTTD Scheme Date 2009-01-05 Submitted M. A. Khojastepour Ron Porat Source(s) NEC

More information

Optimization of Modulation Constrained Digital Transmission Systems

Optimization of Modulation Constrained Digital Transmission Systems University of Ottawa Optimization of Modulation Constrained Digital Transmission Systems by Yu Han A thesis submitted in fulfillment for the degree of Master of Applied Science in the Faculty of Engineering

More information

Principles of Coded Modulation. Georg Böcherer

Principles of Coded Modulation. Georg Böcherer Principles of Coded Modulation Georg Böcherer Contents. Introduction 9 2. Digital Communication System 2.. Transmission System............................. 2.2. Figures of Merit................................

More information

L interférence dans les réseaux non filaires

L interférence dans les réseaux non filaires L interférence dans les réseaux non filaires Du contrôle de puissance au codage et alignement Jean-Claude Belfiore Télécom ParisTech 7 mars 2013 Séminaire Comelec Parts Part 1 Part 2 Part 3 Part 4 Part

More information

Es e j4φ +4N n. 16 KE s /N 0. σ 2ˆφ4 1 γ s. p(φ e )= exp 1 ( 2πσ φ b cos N 2 φ e 0

Es e j4φ +4N n. 16 KE s /N 0. σ 2ˆφ4 1 γ s. p(φ e )= exp 1 ( 2πσ φ b cos N 2 φ e 0 Problem 6.15 : he received signal-plus-noise vector at the output of the matched filter may be represented as (see (5-2-63) for example) : r n = E s e j(θn φ) + N n where θ n =0,π/2,π,3π/2 for QPSK, and

More information

Kevin Buckley a i. communication. information source. modulator. encoder. channel. encoder. information. demodulator decoder. C k.

Kevin Buckley a i. communication. information source. modulator. encoder. channel. encoder. information. demodulator decoder. C k. Kevin Buckley - -4 ECE877 Information Theory & Coding for Digital Communications Villanova University ECE Department Prof. Kevin M. Buckley Lecture Set Review of Digital Communications, Introduction to

More information

Chapter I: Fundamental Information Theory

Chapter I: Fundamental Information Theory ECE-S622/T62 Notes Chapter I: Fundamental Information Theory Ruifeng Zhang Dept. of Electrical & Computer Eng. Drexel University. Information Source Information is the outcome of some physical processes.

More information