
Digital Modulation 1
Lecture Notes
Ingmar Land and Bernard H. Fleury
Navigation and Communications, Department of Electronic Systems
Aalborg University, DK
Version: February 5, 2007

Contents

I Basic Concepts 1

1 The Basic Constituents of a Digital Communication System 2
  1.1 Discrete Information Sources 4
  1.2 Considered System Model 6
  1.3 The Digital Transmitter 10
    1.3.1 Waveform Look-up Table 10
    1.3.2 Waveform Synthesis 12
    1.3.3 Canonical Decomposition 13
  1.4 The Additive White Gaussian-Noise Channel 15
  1.5 The Digital Receiver 17
    1.5.1 Bank of Correlators 17
    1.5.2 Canonical Decomposition 18
    1.5.3 Analysis of the Correlator Outputs 20
    1.5.4 Statistical Properties of the Noise Vector 23
  1.6 The AWGN Vector Channel 27
  1.7 Vector Representation of a Digital Communication System 28
  1.8 Signal-to-Noise Ratio for the AWGN Channel 31

2 Optimum Decoding for the AWGN Channel 33
  2.1 Optimality Criterion 33
  2.2 Decision Rules 34
  2.3 Decision Regions 36

  2.4 Optimum Decision Rules 37
  2.5 MAP Symbol Decision Rule 39
  2.6 ML Symbol Decision Rule 41
  2.7 ML Decision for the AWGN Channel 42
  2.8 Symbol-Error Probability 44
  2.9 Bit-Error Probability 46
  2.10 Gray Encoding 48

II Digital Communication Techniques 50

3 Pulse Amplitude Modulation (PAM) 51
  3.1 BPAM 51
    3.1.1 Signaling Waveforms 51
    3.1.2 Decomposition of the Transmitter 52
    3.1.3 ML Decoding 53
    3.1.4 Vector Representation of the Transceiver 54
    3.1.5 Symbol-Error Probability 55
    3.1.6 Bit-Error Probability 57
  3.2 MPAM 58
    3.2.1 Signaling Waveforms 58
    3.2.2 Decomposition of the Transmitter 59
    3.2.3 ML Decoding 62
    3.2.4 Vector Representation of the Transceiver 63
    3.2.5 Symbol-Error Probability 64
    3.2.6 Bit-Error Probability 66

4 Pulse Position Modulation (PPM) 67
  4.1 BPPM 67
    4.1.1 Signaling Waveforms 67
    4.1.2 Decomposition of the Transmitter 68
    4.1.3 ML Decoding 69
    4.1.4 Vector Representation of the Transceiver 71
    4.1.5 Symbol-Error Probability 72
    4.1.6 Bit-Error Probability 73
  4.2 MPPM 74
    4.2.1 Signaling Waveforms 74
    4.2.2 Decomposition of the Transmitter 75
    4.2.3 ML Decoding 77
    4.2.4 Symbol-Error Probability 78
    4.2.5 Bit-Error Probability 80

5 Phase-Shift Keying (PSK) 85
  5.1 Signaling Waveforms 85
  5.2 Signal Constellation 87
  5.3 ML Decoding 90
  5.4 Vector Representation of the MPSK Transceiver 93
  5.5 Symbol-Error Probability 94
  5.6 Bit-Error Probability 97

6 Offset Quadrature Phase-Shift Keying (OQPSK) 100
  6.1 Signaling Scheme 100
  6.2 Transceiver 101
  6.3 Phase Transitions for QPSK and OQPSK 102

7 Frequency-Shift Keying (FSK) 104
  7.1 BFSK with Coherent Demodulation 104
    7.1.1 Signal Waveforms 104
    7.1.2 Signal Constellation 106
    7.1.3 ML Decoding 107
    7.1.4 Vector Representation of the Transceiver 108
    7.1.5 Error Probabilities 109
  7.2 MFSK with Coherent Demodulation 110
    7.2.1 Signal Waveforms 110
    7.2.2 Signal Constellation 111
    7.2.3 Vector Representation of the Transceiver 112
    7.2.4 Error Probabilities 113
  7.3 BFSK with Noncoherent Demodulation 114
    7.3.1 Signal Waveforms 114
    7.3.2 Signal Constellation 115
    7.3.3 Noncoherent Demodulator 116
    7.3.4 Conditional Probability of Received Vector 117
    7.3.5 ML Decoding 119
  7.4 MFSK with Noncoherent Demodulation 121
    7.4.1 Signal Waveforms 121
    7.4.2 Signal Constellation 122
    7.4.3 Conditional Probability of Received Vector 123
    7.4.4 ML Decision Rule 123
    7.4.5 Optimal Noncoherent Receiver for MFSK 124
    7.4.6 Symbol-Error Probability 125
    7.4.7 Bit-Error Probability 129

8 Minimum-Shift Keying (MSK) 130
  8.1 Motivation 130
  8.2 Transmit Signal 132
  8.3 Signal Constellation 135
  8.4 A Finite-State Machine 138
  8.5 Vector Encoder 140
  8.6 Demodulator 144
  8.7 Conditional Probability Density of Received Sequence 145
  8.8 ML Sequence Estimation 146
  8.9 Canonical Decomposition of MSK 149
  8.10 The Viterbi Algorithm 150
  8.11 Simplified ML Decoder 155
  8.12 Bit-Error Probability 160
  8.13 Differential Encoding and Differential Decoding 161
  8.14 Precoded MSK 162
  8.15 Gaussian MSK 166

9 Quadrature Amplitude Modulation 169

10 BPAM with Bandlimited Waveforms 170

III Appendix 171

A Geometrical Representation of Signals 172
  A.1 Sets of Orthonormal Functions 172
  A.2 Signal Space Spanned by an Orthonormal Set of Functions 173
  A.3 Vector Representation of Elements in the Vector Space 174
  A.4 Vector Isomorphism 175
  A.5 Scalar Product and Isometry 176
  A.6 Waveform Synthesis 178
  A.7 Implementation of the Scalar Product 179
  A.8 Computation of the Vector Representation 181
  A.9 Gram-Schmidt Orthonormalization 182

B Orthogonal Transformation of a White Gaussian Noise Vector
  B.1 The Two-Dimensional Case 185
  B.2 The General Case 186

C The Shannon Limit 188
  C.1 Capacity of the Band-limited AWGN Channel 188
  C.2 Capacity of the Band-Unlimited AWGN Channel 191

References

[1] J. G. Proakis and M. Salehi, Communication Systems Engineering, 2nd ed. Prentice-Hall, 2002.

[2] B. Rimoldi, "A decomposition approach to CPM," IEEE Trans. Inform. Theory, vol. 34, no. 2, pp. 260-270, Mar. 1988.

Part I: Basic Concepts

1 The Basic Constituents of a Digital Communication System

Block diagram of a digital communication system:

[Figure: information source -> source encoder -> digital transmitter (vector encoder, bit rate 1/T_b -> waveform modulator, rate 1/T) -> X(t) -> waveform channel -> Y(t) -> digital receiver (waveform demodulator -> vector decoder, rate 1/T_b -> source decoder) -> information sink]

Time duration of one binary symbol (bit) U_k: T_b
Time duration of one waveform X(t): T

The waveform channel may introduce
- distortion,
- interference,
- noise.

Goal: The signal of the information source and the signal at the information sink should be as similar as possible according to some measure.

Some properties:

Information source: may be analog or digital.
Source encoder: may perform sampling, quantization, and compression; generates one binary symbol (bit) U_k ∈ {0, 1} per time interval T_b.
Waveform modulator: generates one waveform x(t) per time interval T.
Vector decoder: generates one binary symbol (bit) Û_k ∈ {0, 1} (an estimate of U_k) per time interval T_b.
Source decoder: reconstructs the original source signal.

1.1 Discrete Information Sources

A discrete memoryless source (DMS) is a source that generates a sequence U_1, U_2, ... of independent and identically distributed (i.i.d.) discrete random variables, called an i.i.d. sequence.

Properties of the output sequence:
- discrete: U_1, U_2, ... ∈ {a_1, a_2, ..., a_Q};
- memoryless: U_1, U_2, ... are statistically independent;
- stationary: U_1, U_2, ... are identically distributed.

Notice: The symbol alphabet {a_1, a_2, ..., a_Q} has cardinality Q. The value of Q may be infinite; the elements of the alphabet only have to be countable.

Example: Discrete sources. (i) Output of a PC keyboard, SMS (usually not memoryless). (ii) Compressed file (nearly memoryless). (iii) The numbers drawn at a roulette table in a casino (ought to be memoryless, but may not be...).

A DMS is statistically completely described by the probabilities p_U(a_q) = Pr(U = a_q) of the symbols a_q, q = 1, 2, ..., Q. Notice that

(i) p_U(a_q) ≥ 0 for all q = 1, 2, ..., Q, and
(ii) Σ_{q=1}^{Q} p_U(a_q) = 1.

As the sequence is i.i.d., we have Pr(U_i = a_q) = Pr(U_j = a_q). For convenience, we may write p_U(a) shortly as p(a).

Binary Symmetric Source

A binary memoryless source is a DMS with a binary symbol alphabet.

Remark: Commonly used binary symbol alphabets are F_2 := {0, 1} and B := {-1, +1}. In the following, we will use F_2.

A binary symmetric source (BSS) is a binary memoryless source with equiprobable symbols, p_U(0) = p_U(1) = 1/2.

Example: Binary Symmetric Source. All length-K sequences of a BSS have the same probability

p_{U_1 U_2 ... U_K}(u_1 u_2 ... u_K) = Π_{i=1}^{K} p_U(u_i) = (1/2)^K.
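As a quick numerical illustration (not part of the original notes), the following Python sketch draws length-K blocks from a simulated BSS and checks empirically that every block occurs with probability close to (1/2)^K. All names and parameter values are ours.

```python
import numpy as np

rng = np.random.default_rng(0)
K, n_blocks = 3, 200_000
blocks = rng.integers(0, 2, size=(n_blocks, K))    # i.i.d. bits with p(0) = p(1) = 1/2

idx = blocks @ (1 << np.arange(K)[::-1])           # map each block to an integer label
emp = np.bincount(idx, minlength=2**K) / n_blocks  # empirical block probabilities
print(emp)                                         # each entry close to (1/2)**K = 0.125
```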

1.2 Considered System Model

This course focuses on the transmitter and the receiver. Therefore, we replace the information source and the source encoder by a BSS emitting U_1, U_2, ..., and the source decoder and the information sink by a binary (information) sink accepting Û_1, Û_2, ....

[Figures: (information source + source encoder) replaced by a BSS; (source decoder + information sink) replaced by a binary sink]

Some Remarks

(i) There is no loss of generality resulting from these substitutions. Indeed, it can be demonstrated within Shannon's Information Theory that an efficient source encoder converts the output of an information source into a random binary independent and uniformly distributed (i.u.d.) sequence. Thus, the output of a perfect source encoder looks like the output of a BSS.

(ii) Our main concern is the design of communication systems for reliable transmission of the output symbols of a BSS. We will not address the various methods for efficient source encoding.

Digital communication system considered in this course:

[Figure: BSS -> U -> digital transmitter -> X(t) -> waveform channel -> Y(t) -> digital receiver -> Û -> binary sink; bit rate 1/T_b, waveform rate 1/T]

Source: U = [U_1, ..., U_K], U_i ∈ {0, 1}
Transmitter: X(t) = x(t, u)
Sink: Û = [Û_1, ..., Û_K], Û_i ∈ {0, 1}
Bit rate: 1/T_b
Rate (of waveforms): 1/T = 1/(K T_b)
Bit-error probability: P_b = (1/K) Σ_{k=1}^{K} Pr(U_k ≠ Û_k)

Objective: Design of efficient digital communication systems. Efficiency means a small bit-error probability and a high bit rate 1/T_b.

Constraints and limitations:
- limited power
- limited bandwidth
- impairments (distortion, interference, noise) of the channel

Design goals:
- good waveforms
- low-complexity transmitters
- low-complexity receivers

1.3 The Digital Transmitter

1.3.1 Waveform Look-up Table

Example: 4PPM (Pulse-Position Modulation). Set of four different waveforms, S = {s_1(t), s_2(t), s_3(t), s_4(t)}:

[Figure: s_m(t) is a rectangular pulse of amplitude A and duration T/4 occupying the m-th quarter of the interval [0, T]]

Each waveform X(t) ∈ S is addressed by a vector [U_1, U_2] ∈ {0, 1}²:

[U_1, U_2] -> X(t).

The mapping may be implemented by a waveform look-up table.

4PPM transmitter look-up table: [00] -> s_1(t), [01] -> s_2(t), [10] -> s_3(t), [11] -> s_4(t).

Example: A block of four bit pairs is transmitted as the concatenation of the four corresponding waveforms x(t) over the interval [0, 4T].

For an arbitrary digital transmitter, we have the following:

Digital Transmitter (waveform look-up table): [U_1, U_2, ..., U_K] -> X(t)

Input: Binary vectors of length K from the input set U: [U_1, U_2, ..., U_K] ∈ U := {0, 1}^K. The duration of one binary symbol U_k is T_b.

Output: Waveforms of duration T from the output set S: x(t) ∈ S := {s_1(t), s_2(t), ..., s_M(t)}. Waveform duration T means that for m = 1, 2, ..., M, s_m(t) = 0 for t ∉ [0, T].

The look-up table maps each input vector [u_1, ..., u_K] ∈ U to one waveform x(t) ∈ S. Thus the digital transmitter may be defined by a mapping U -> S with [u_1, ..., u_K] -> x(t). The mapping is one-to-one and onto, such that M = 2^K.

Relation between the signaling interval (waveform duration) T and the bit interval (bit duration) T_b: T = K T_b.

1.3.2 Waveform Synthesis

The set of waveforms, S := {s_1(t), s_2(t), ..., s_M(t)}, spans a vector space. Applying the Gram-Schmidt orthonormalization procedure, we can find a set of orthonormal functions S_ψ := {ψ_1(t), ψ_2(t), ..., ψ_D(t)} with D ≤ M such that the space spanned by S_ψ contains the space spanned by S. Hence, each waveform s_m(t) can be represented by a linear combination of the orthonormal functions:

s_m(t) = Σ_{i=1}^{D} s_{m,i} ψ_i(t),   m = 1, 2, ..., M.

Each signal s_m(t) can thus be geometrically represented, with respect to the set S_ψ, by the D-dimensional vector

s_m = [s_{m,1}, s_{m,2}, ..., s_{m,D}]^T ∈ R^D.

Further details are given in Appendix A.
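The Gram-Schmidt procedure referenced here (and detailed in Appendix A) is easy to try numerically on sampled waveforms. The following minimal Python sketch is our illustration, assuming waveforms are stored as sample vectors and the scalar product ⟨a, b⟩ = ∫ a(t)b(t) dt is approximated by a Riemann sum; all names are ours.

```python
import numpy as np

def gram_schmidt(S, dt):
    """Orthonormalize sampled waveforms (rows of S) w.r.t. <a,b> = sum(a*b)*dt."""
    basis = []
    for s in S:
        v = s.astype(float)
        for psi in basis:
            v = v - np.sum(v * psi) * dt * psi     # remove projection onto psi
        e = np.sum(v * v) * dt                     # residual energy
        if e > 1e-12:
            basis.append(v / np.sqrt(e))           # normalize to unit energy
    return np.array(basis)

# Example: two BPAM-like rectangular waveforms +A and -A on [0, T]
T, n = 1.0, 1000
dt = T / n
A = 2.0
S = np.array([A * np.ones(n), -A * np.ones(n)])
Psi = gram_schmidt(S, dt)                          # only D = 1 basis function here
coeffs = S @ Psi.T * dt                            # s_{m,i} = <s_m, psi_i>
print(Psi.shape, coeffs)                           # coeffs ≈ [+A*sqrt(T), -A*sqrt(T)]
```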

1.3.3 Canonical Decomposition

The digital transmitter may be represented by a waveform look-up table: u = [u_1, ..., u_K] -> x(t), where u ∈ U and x(t) ∈ S.

From the previous section, we know how to synthesize the waveform s_m(t) from s_m (see also Appendix A) with respect to a set of basis functions S_ψ := {ψ_1(t), ψ_2(t), ..., ψ_D(t)}. Making use of this method, we can split the waveform look-up table into a vector look-up table and a waveform synthesizer:

u = [u_1, ..., u_K] -> x = [x_1, x_2, ..., x_D] -> x(t),

where u ∈ U = {0, 1}^K, x ∈ X = {s_1, s_2, ..., s_M} ⊂ R^D, and x(t) ∈ S = {s_1(t), s_2(t), ..., s_M(t)}.

This splitting procedure leads to the sought canonical decomposition of a digital transmitter:

[Figure: [U_1, ..., U_K] -> vector encoder (vector look-up table: [0...0] -> s_1, ..., [1...1] -> s_M) -> [X_1, ..., X_D] -> waveform modulator with basis functions ψ_1(t), ..., ψ_D(t) -> X(t)]

Example: 4PPM. The four orthonormal basis functions are ψ_1(t), ..., ψ_4(t), where ψ_m(t) is a rectangular pulse of amplitude 2/√T and duration T/4 occupying the m-th quarter of [0, T].

The canonical decomposition of the transmitter is the following, where √E_s = A√T / 2:

[Figure: [U_1, U_2] -> vector encoder with look-up table
  [00] -> [√E_s, 0, 0, 0]^T
  [01] -> [0, √E_s, 0, 0]^T
  [10] -> [0, 0, √E_s, 0]^T
  [11] -> [0, 0, 0, √E_s]^T
-> [X_1, X_2, X_3, X_4] -> waveform modulator with ψ_1(t), ..., ψ_4(t) -> X(t)]

Remarks:
- The signal energy of each basis function is equal to one: ∫ ψ_d²(t) dt = 1 (due to their construction).
- The basis functions are orthonormal (due to their construction).
- The value E_s is chosen such that the synthesized signals have the same energy as the original signals, namely E_s (compare the 4PPM example in Section 1.3.1).
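A minimal numerical sketch of this canonical decomposition for 4PPM (our illustration, with assumed values for A, T and the sampling grid): the vector look-up table selects the coefficient vector x, and the waveform modulator synthesizes x(t) = Σ_i x_i ψ_i(t).

```python
import numpy as np

T, n = 1.0, 1000                      # symbol duration and samples per symbol
dt = T / n
Es = 1.0                              # waveform energy (assumed value)

# orthonormal basis: psi_m(t) = sqrt(4/T) = 2/sqrt(T) on the m-th quarter of [0, T]
Psi = np.zeros((4, n))
for m in range(4):
    Psi[m, m * n // 4:(m + 1) * n // 4] = np.sqrt(4 / T)

lut = {(0, 0): 0, (0, 1): 1, (1, 0): 2, (1, 1): 3}   # vector look-up table

def transmit(u1, u2):
    x = np.zeros(4)
    x[lut[(u1, u2)]] = np.sqrt(Es)    # vector encoder: x = sqrt(Es) * e_m
    return x @ Psi                    # waveform modulator: x(t) = sum_i x_i psi_i(t)

xt = transmit(1, 0)                   # pulse in the third quarter of [0, T]
print(np.sum(xt**2) * dt)             # energy of the synthesized waveform ≈ Es
```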

1.4 The Additive White Gaussian-Noise Channel

[Figure: X(t) -> + <- W(t) -> Y(t)]

The additive white Gaussian noise (AWGN) channel model is widely used in communications. The transmitted signal X(t) is superimposed by a stochastic noise signal W(t), such that the received signal reads

Y(t) = X(t) + W(t).

The stochastic process W(t) is a stationary process with the following properties:

(i) W(t) is a Gaussian process, i.e., for each time t, the samples v = w(t) are Gaussian distributed with zero mean and variance σ²:

p_V(v) = (1/√(2πσ²)) exp( -v²/(2σ²) ).

(ii) W(t) has a flat power spectrum with height N_0/2: S_W(f) = N_0/2. (Therefore it is called white.)

(iii) W(t) has the autocorrelation function R_W(τ) = (N_0/2) δ(τ).

Remarks

1. Autocorrelation function and power spectrum: R_W(τ) = E[W(t) W(t+τ)], S_W(f) = F{R_W(τ)}.

2. For Gaussian processes, weak stationarity implies strong stationarity.

3. A WGN is an idealized process without physical reality: the process is so wild that its realizations are not ordinary functions of time, and its power is infinite. However, a WGN is a useful approximation of a noise with a flat power spectrum within the bandwidth B used by a communication system:

[Figure: noise power spectrum S_noise(f), flat at height N_0/2 over the system bandwidth B]

4. In satellite communications, W(t) is the thermal noise of the receiver front-end. In this case, N_0/2 is proportional to the temperature.

1.5 The Digital Receiver

Consider a transmission system using the waveforms S = {s_1(t), s_2(t), ..., s_M(t)} with s_m(t) = 0 for t ∉ [0, T], m = 1, 2, ..., M, i.e., with duration T. Assume transmission over an AWGN channel, such that

y(t) = x(t) + w(t),   x(t) ∈ S.

1.5.1 Bank of Correlators

The transmitted waveform may be recovered from the received waveform y(t) by correlating y(t) with all possible waveforms:

c_m = ⟨y(t), s_m(t)⟩,   m = 1, 2, ..., M.

Based on the correlations [c_1, ..., c_M], the waveform ŝ(t) with the highest correlation is chosen, and the corresponding (estimated) source vector û = [û_1, ..., û_K] is output.

[Figure: y(t) fed into M correlators ∫_0^T (.) s_m(t) dt producing c_1, ..., c_M; estimation of x̂(t) and table look-up yield [û_1, ..., û_K]]

Disadvantage: High complexity.

1.5.2 Canonical Decomposition

Consider a set of orthonormal functions S_ψ = {ψ_1(t), ψ_2(t), ..., ψ_D(t)} obtained by applying the Gram-Schmidt procedure to s_1(t), s_2(t), ..., s_M(t). Then,

s_m(t) = Σ_{d=1}^{D} s_{m,d} ψ_d(t),   m = 1, 2, ..., M.

The vector s_m = [s_{m,1}, s_{m,2}, ..., s_{m,D}]^T entirely determines s_m(t) with respect to the orthonormal set S_ψ. The set S_ψ spans the vector space

S_ψ := { s(t) = Σ_{i=1}^{D} s_i ψ_i(t) : [s_1, s_2, ..., s_D] ∈ R^D }.

Basic Idea

The received waveform y(t) may contain components outside of S_ψ. However, only the components inside S_ψ are relevant, as x(t) ∈ S_ψ. Determine the vector representation y of the components of y(t) that are inside S_ψ. This vector y is sufficient for estimating the transmitted waveform.

Canonical decomposition of the optimal receiver for the AWGN channel

Appendix A describes two ways to compute the vector representation of a waveform. Accordingly, the demodulator may be implemented in two ways, leading to the following two receiver structures.

Correlator-based demodulator and vector decoder:
[Figure: Y(t) fed into D correlators ∫_0^T (.) ψ_d(t) dt producing Y_1, ..., Y_D; the vector decoder outputs [Û_1, ..., Û_K]]

Matched-filter-based demodulator and vector decoder:
[Figure: Y(t) fed into D matched filters ψ_d(T - t), sampled at t = T, producing Y_1, ..., Y_D; the vector decoder outputs [Û_1, ..., Û_K]]

The optimality of these receiver structures is shown in the following sections.

1.5.3 Analysis of the Correlator Outputs

Assume that the waveform s_m(t) is transmitted, i.e., x(t) = s_m(t). The received waveform is thus y(t) = x(t) + w(t) = s_m(t) + w(t). The correlator outputs are

y_d = ∫_0^T y(t) ψ_d(t) dt
    = ∫_0^T [s_m(t) + w(t)] ψ_d(t) dt
    = ∫_0^T s_m(t) ψ_d(t) dt + ∫_0^T w(t) ψ_d(t) dt
    = s_{m,d} + w_d,   d = 1, 2, ..., D.

Using the notation y = [y_1, ..., y_D]^T and w = [w_1, ..., w_D]^T, with w_d = ∫_0^T w(t) ψ_d(t) dt, we can recast the correlator outputs in the compact form

y = s_m + w.

The waveform channel between x(t) = s_m(t) and y(t) is thus transformed into a vector channel between x = s_m and y.
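The correlator analysis can be reproduced numerically. The sketch below is our illustration: it uses a sampled stand-in for white noise with per-sample variance N_0/(2Δt), so that correlation against a unit-energy function yields a noise component of variance N_0/2 (as derived in Section 1.5.4); parameter values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)
T, n = 1.0, 1000
dt = T / n
N0 = 0.1

psi1 = np.ones(n) / np.sqrt(T)                # one orthonormal function on [0, T]
s_m1 = 3.0                                    # coefficient s_{m,1}
x = s_m1 * psi1                               # transmitted waveform samples

# sampled stand-in for white noise: variance N0/(2*dt) per sample
w = rng.normal(0.0, np.sqrt(N0 / (2 * dt)), size=n)
y = x + w

y1 = np.sum(y * psi1) * dt                    # correlator output y_1
print(y1)                                     # ≈ s_m1 + w_1 with w_1 ~ N(0, N0/2)
```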

Remark: Since the vector decoder sees noisy observations of the vectors s_m, these vectors are also called signal points with respect to S_ψ, and the set of signal points is called the signal constellation.

From Appendix A, we know that s_m entirely characterizes s_m(t). The vector y, however, does not represent the complete waveform y(t); it represents only the waveform

y'(t) = Σ_{d=1}^{D} y_d ψ_d(t),

which is the part of y(t) within the vector space spanned by the set S_ψ of orthonormal functions, i.e., y'(t) ∈ S_ψ. The remaining part of y(t), i.e., the part outside of S_ψ, reads

y''(t) = y(t) - y'(t)
       = y(t) - Σ_{d=1}^{D} y_d ψ_d(t)
       = s_m(t) + w(t) - Σ_{d=1}^{D} [s_{m,d} + w_d] ψ_d(t)
       = ( s_m(t) - Σ_{d=1}^{D} s_{m,d} ψ_d(t) ) + w(t) - Σ_{d=1}^{D} w_d ψ_d(t)
       = w(t) - Σ_{d=1}^{D} w_d ψ_d(t),

since s_m(t) = Σ_{d=1}^{D} s_{m,d} ψ_d(t).

Observations

- The waveform y''(t) contains no information about s_m(t): it consists of noise only.
- The waveform y'(t) is completely represented by y.

Therefore, the receiver structures do not lead to a loss of information, and so they are optimal.

1.5.4 Statistical Properties of the Noise Vector

The components of the noise vector w = [w_1, ..., w_D]^T are given by

w_d = ∫_0^T w(t) ψ_d(t) dt,   d = 1, 2, ..., D.

The random variables W_d are Gaussian, as they result from a linear transformation of the Gaussian process W(t). Hence the probability density function of W_d is of the form

p_{W_d}(w_d) = (1/√(2πσ_d²)) exp( -(w_d - µ_d)² / (2σ_d²) ),

where the mean value µ_d and the variance σ_d² are given by µ_d = E[W_d] and σ_d² = E[(W_d - µ_d)²]. These values are now determined.

Mean value

Using the definition of W_d and taking into account that the process W(t) has zero mean, we obtain

E[W_d] = E[ ∫_0^T W(t) ψ_d(t) dt ] = ∫_0^T E[W(t)] ψ_d(t) dt = 0.

Thus, all random variables W_d have zero mean.

Variance

For computing the variance, we use µ_d = 0, the definition of W_d, and the autocorrelation function of W(t), R_W(τ) = (N_0/2) δ(τ). We obtain the following chain of equations:

E[(W_d - µ_d)²] = E[W_d²]
  = E[ ( ∫_0^T W(t) ψ_d(t) dt ) ( ∫_0^T W(t') ψ_d(t') dt' ) ]
  = ∫_0^T ∫_0^T E[W(t) W(t')] ψ_d(t) ψ_d(t') dt dt'          (E[W(t)W(t')] = R_W(t' - t))
  = (N_0/2) ∫_0^T [ ∫_0^T δ(t' - t) ψ_d(t') dt' ] ψ_d(t) dt   (inner integral equals ψ_d(t))
  = (N_0/2) ∫_0^T ψ_d²(t) dt = N_0/2.

Distribution of the noise vector components

Thus we have µ_d = 0 and σ_d² = N_0/2, and the distributions read

p_{W_d}(w_d) = (1/√(πN_0)) exp( -w_d² / N_0 )

for d = 1, 2, ..., D. Notice that the components W_d are all identically distributed.
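The statistics just derived (zero mean, variance N_0/2, uncorrelated components for distinct orthonormal functions) can be checked empirically with the same sampled-noise stand-in as before; a small sketch under our assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
T, n, N0 = 1.0, 500, 2.0
dt = T / n
t = np.arange(n) * dt

psi1 = np.ones(n) / np.sqrt(T)                      # two orthonormal functions on [0, T]
psi2 = np.sqrt(2 / T) * np.cos(2 * np.pi * t / T)

# sampled white-noise stand-in: per-sample variance N0/(2*dt)
w = rng.normal(0.0, np.sqrt(N0 / (2 * dt)), size=(20_000, n))
w1 = w @ psi1 * dt                                  # w_1 = integral of w(t) psi_1(t) dt
w2 = w @ psi2 * dt
print(w1.mean(), w1.var(), w2.var())                # ≈ 0, N0/2, N0/2
print(np.mean(w1 * w2))                             # ≈ 0: components uncorrelated
```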

As the Gaussian distribution is very important, the shorthand notation A ~ N(µ, σ²) is often used. This notation means: the random variable A is Gaussian distributed with mean value µ and variance σ². Using this notation, we have for the components of the noise vector

W_d ~ N(0, N_0/2),   d = 1, 2, ..., D.

Distribution of the noise vector

Using the same reasoning as before, we can conclude that W is a Gaussian noise vector, i.e., W ~ N(µ, Σ), where the expectation µ ∈ R^{D×1} and the covariance matrix Σ ∈ R^{D×D} are given by

µ = E[W],   Σ = E[(W - µ)(W - µ)^T].

From the previous considerations, we immediately have µ = 0. Using a similar approach as for computing the variances, the statistical independence of the components of W can be shown. That is,

E[W_i W_j] = (N_0/2) δ_{i,j},

where δ_{i,j} denotes the Kronecker symbol defined as δ_{i,j} = 1 for i = j, and δ_{i,j} = 0 otherwise.

As a consequence, the covariance matrix of W is Σ = (N_0/2) I_D, where I_D denotes the D×D identity matrix. To sum up, the noise vector W is distributed as

W ~ N(0, (N_0/2) I_D).

As the components of W are statistically independent, the probability density function of W can be written as

p_W(w) = Π_{d=1}^{D} p_{W_d}(w_d)
       = Π_{d=1}^{D} (1/√(πN_0)) exp( -w_d² / N_0 )
       = (1/√(πN_0))^D exp( -(1/N_0) Σ_{d=1}^{D} w_d² )
       = (πN_0)^{-D/2} exp( -||w||² / N_0 ),

with ||w|| denoting the Euclidean norm ||w|| = ( Σ_{d=1}^{D} w_d² )^{1/2} of the D-dimensional vector w = [w_1, ..., w_D]^T.

The independence of the components of W gives rise to the AWGN vector channel.

1.6 The AWGN Vector Channel

The additive white Gaussian noise vector channel is defined as a channel with input vector X = [X_1, X_2, ..., X_D]^T, output vector Y = [Y_1, Y_2, ..., Y_D]^T, and the relation

Y = X + W,

where W = [W_1, W_2, ..., W_D]^T is a vector of D independent random variables distributed as W_d ~ N(0, N_0/2).

[Figure: X -> + <- W -> Y]

(As nobody would have expected differently...)

1.7 Vector Representation of a Digital Communication System

Consider a digital communication system transmitting over an AWGN channel. Let s_1(t), s_2(t), ..., s_M(t), with s_m(t) = 0 for t ∉ [0, T], m = 1, 2, ..., M, denote the waveforms. Let further S_ψ = {ψ_1(t), ψ_2(t), ..., ψ_D(t)} denote a set of orthonormal functions for these waveforms.

The canonical decompositions of the digital transmitter and the digital receiver convert the block formed by the waveform modulator, the AWGN channel, and the waveform demodulator into an AWGN vector channel:

[Figure: modulator with ψ_1(t), ..., ψ_D(t); AWGN channel adding W(t); correlator demodulator ∫_0^T (.) ψ_d(t) dt — jointly equivalent to the AWGN vector channel [X_1, ..., X_D] -> + <- [W_1, ..., W_D] -> [Y_1, ..., Y_D]]

Using this equivalence, the communication system operating over a (waveform) AWGN channel can be represented as a communication system operating over an AWGN vector channel:

[Figure: BSS -> U -> vector encoder -> X -> AWGN vector channel (+ W) -> Y -> vector decoder -> Û -> binary sink]

Source vector: U = [U_1, ..., U_K], U_i ∈ {0, 1}
Transmit vector: X = [X_1, ..., X_D]
Receive vector: Y = [Y_1, ..., Y_D]
Estimated source vector: Û = [Û_1, ..., Û_K], Û_i ∈ {0, 1}
AWGN vector: W = [W_1, ..., W_D], W ~ N(0, (N_0/2) I_D)

This is the vector representation of a digital communication system transmitting over an AWGN channel.

For estimating the transmitted vector x, we are interested in the likelihood of x for the given received vector y:

p_{Y|X}(y|x) = p_W(y - x) = (πN_0)^{-D/2} exp( -||y - x||² / N_0 ).

Symbolically, we may write

(Y | X = s_m) ~ N(s_m, (N_0/2) I_D).

Remark: The likelihood of x is the conditional probability density function of y given x, and it is regarded as a function of x.

Example: Signal buried in AWGN. Consider D = 3 and x = s_m. Then the Gaussian cloud around s_m may look like this:

[Figure: point s_m in the (ψ_1, ψ_2, ψ_3) space surrounded by a spherical Gaussian cloud of received vectors]

1.8 Signal-to-Noise Ratio for the AWGN Channel

Consider a digital communication system using the set of waveforms S = {s_1(t), s_2(t), ..., s_M(t)}, s_m(t) = 0 for t ∉ [0, T], m = 1, 2, ..., M, for transmission over an AWGN channel with power spectrum S_W(f) = N_0/2.

Let E_{s_m} denote the energy of waveform s_m(t), and let E_s denote the average waveform energy:

E_{s_m} = ∫_0^T ( s_m(t) )² dt,   E_s = (1/M) Σ_{m=1}^{M} E_{s_m}.

Thus, E_s is the average energy per waveform x(t) and also the average energy per vector x (due to the one-to-one correspondence). The average signal-to-noise ratio (SNR) per waveform is then defined as

γ_s = E_s / N_0.

On the other hand, the average energy per source bit u_k is of interest. Since K source bits u_k are transmitted per waveform, the average energy per bit is

E_b = E_s / K.

Accordingly, the average signal-to-noise ratio (SNR) per bit is defined as

γ_b = E_b / N_0.

As K bits are transmitted per waveform, we have the relation γ_b = γ_s / K.

Remark: The energies and signal-to-noise ratios usually denote received values.

2 Optimum Decoding for the AWGN Channel

2.1 Optimality Criterion

In the previous chapter, the vector representation of a digital transceiver transmitting over an AWGN channel was derived:

[Figure: BSS -> U -> vector encoder -> X -> AWGN vector channel (+ W) -> Y -> vector decoder -> Û -> binary sink]

Nomenclature:

u = [u_1, ..., u_K] ∈ U,   x = [x_1, ..., x_D] ∈ X,
û = [û_1, ..., û_K] ∈ U,   y = [y_1, ..., y_D] ∈ R^D,
U = {0, 1}^K,   X = {s_1, ..., s_M}.

U: set of all binary sequences of length K. X: set of vector representations of the waveforms.

The set X is called the signal constellation, and its elements are called signals, signal points, or modulation symbols. For convenience, we may refer to the u_k as bits and to x as symbols. (But we have to be careful not to cause ambiguity.)

As the vector representation implies no loss of information, recovering the transmitted block u = [u_1, ..., u_K] from the received waveform y(t) is equivalent to recovering u from the vector y = [y_1, ..., y_D].

Functionality of the vector encoder:

enc: U -> X,   u -> x = enc(u).

Functionality of the vector decoder:

dec_u: R^D -> U,   y -> û = dec_u(y).

Optimality criterion

A vector decoder is called an optimal vector decoder if it minimizes the block-error probability Pr(U ≠ Û).

2.2 Decision Rules

Since the encoder mapping is one-to-one, the functionality of the vector decoder can be decomposed into the following two operations:

(i) Modulation-symbol decision: dec: R^D -> X, y -> x̂ = dec(y).
(ii) Encoder inverse: enc⁻¹: X -> U, x̂ -> û = enc⁻¹(x̂).

Thus, we have the following chain of mappings:

y --dec--> x̂ --enc⁻¹--> û.

[Figure: vector encoder enc: U -> X, AWGN vector channel, then the vector decoder decomposed into the modulation-symbol decision dec: R^D -> X and the encoder inverse enc⁻¹: X -> U]

A decision rule is a function which maps each observation y ∈ R^D to a modulation symbol x ∈ X, i.e., to an element of the signal constellation X.

Example: Decision rule.

[Figure: observations y^(1), y^(2), y^(3) ∈ R^D mapped to elements of {s_1, s_2, ..., s_M} = X]

2.3 Decision Regions

A decision rule dec: R^D -> X defines a partition of R^D into decision regions

D_m := { y ∈ R^D : dec(y) = s_m },   m = 1, 2, ..., M.

Example: Decision regions.

[Figure: R^D partitioned into regions D_1, D_2, ..., D_M, one per signal point of X]

Since the decision regions D_1, D_2, ..., D_M form a partition, they have the following properties:

(i) They are disjoint: D_i ∩ D_j = ∅ for i ≠ j.
(ii) They cover the whole observation space R^D: D_1 ∪ D_2 ∪ ... ∪ D_M = R^D.

Obviously, any partition of R^D defines a decision rule as well.

2.4 Optimum Decision Rules

Optimality criterion for decision rules

Since the mapping enc: U -> X of the vector encoder is one-to-one, we have {Û ≠ U} ⇔ {X̂ ≠ X}. Therefore, the decision rule of an optimal vector decoder minimizes the modulation-symbol error probability Pr(X̂ ≠ X). Such a decision rule is called optimal.

Sufficient condition for an optimal decision rule

Let dec: R^D -> X be a decision rule which minimizes the a-posteriori probability of making a decision error,

Pr( dec(y) ≠ X | Y = y ),

for any observation y. Then dec is optimal.

Remark: The a-posteriori probability of a decoding error is the probability of the event {dec(y) ≠ X} conditioned on the event {Y = y}.

Proof: Let dec: R^D -> X denote any decision rule, and let dec_opt: R^D -> X denote an optimal decision rule. Then we have

Pr( dec(Y) ≠ X ) = ∫_{R^D} Pr( dec(y) ≠ X | Y = y ) p_Y(y) dy
  ≥(*) ∫_{R^D} Pr( dec_opt(y) ≠ X | Y = y ) p_Y(y) dy
  = Pr( dec_opt(Y) ≠ X ).

(*): By definition of dec_opt, we have Pr( dec(y) ≠ X | Y = y ) ≥ Pr( dec_opt(y) ≠ X | Y = y ) for all y ∈ R^D.

2.5 MAP Symbol Decision Rule

Starting with the sufficient condition for an optimal decision rule given above, we now derive a method to make an optimal decision. The corresponding decision rule is called the maximum a-posteriori (MAP) decision rule.

Consider an optimal decision rule dec: y -> x̂ = dec(y), where the estimated modulation symbol is denoted by x̂. As the decision rule is assumed to be optimal, the probability Pr(x̂ ≠ X | Y = y) is minimal.

For any x ∈ X, we have

Pr(X ≠ x | Y = y) = 1 - Pr(X = x | Y = y)
  = 1 - p_{X|Y}(x|y)
  = 1 - p_{Y|X}(y|x) p_X(x) / p_Y(y),

where Bayes' rule has been used in the last line. Notice that Pr(X = x | Y = y) = p_{X|Y}(x|y) is the a-posteriori probability of {X = x} given {Y = y}.

Using the above equalities, we can conclude that

Pr(X ≠ x | Y = y) is minimal
⇔ p_{X|Y}(x|y) is maximal
⇔ p_{Y|X}(y|x) p_X(x) is maximal,

since p_Y(y) does not depend on x. Thus, the estimated modulation symbol x̂ = dec(y) resulting from the optimal decision rule satisfies the two equivalent conditions

p_{X|Y}(x̂|y) ≥ p_{X|Y}(x|y) for all x ∈ X;
p_{Y|X}(y|x̂) p_X(x̂) ≥ p_{Y|X}(y|x) p_X(x) for all x ∈ X.

These two conditions may be used to define the optimal decision rule.

Maximum a-posteriori (MAP) decision rule:

x̂ = dec_MAP(y) := argmax_{x ∈ X} p_{X|Y}(x|y) = argmax_{x ∈ X} p_{Y|X}(y|x) p_X(x).

The decision rule is called the MAP decision rule, as the selected modulation symbol x̂ maximizes the a-posteriori probability density function p_{X|Y}(x|y). Notice that the two formulations are equivalent.

2.6 ML Symbol Decision Rule

If the modulation symbols at the output of the vector encoder are equiprobable, i.e., if Pr(X = s_m) = 1/M for m = 1, 2, ..., M, the MAP decision rule reduces to the following decision rule.

Maximum-likelihood (ML) decision rule:

x̂ = dec_ML(y) := argmax_{x ∈ X} p_{Y|X}(y|x).

In this case, the selected modulation symbol x̂ maximizes the likelihood function p_{Y|X}(y|x) of x.

2.7 ML Decision for the AWGN Channel

Using the canonical decomposition of the transmitter and of the receiver, we obtain the AWGN vector channel with noise W ~ N(0, (N_0/2) I_D), input X ∈ X, and output Y ∈ R^D. The conditional probability density function of Y given X = x ∈ X is

p_{Y|X}(y|x) = (πN_0)^{-D/2} exp( -||y - x||² / N_0 ).

Since exp(.) is a monotonically increasing function, we obtain the following formulation of the ML decision rule:

x̂ = dec_ML(y) = argmax_{x ∈ X} p_{Y|X}(y|x)
             = argmax_{x ∈ X} exp( -||y - x||² / N_0 )
             = argmin_{x ∈ X} ||y - x||².

Thus, the ML decision rule for the AWGN vector channel selects the vector in the signal constellation X that is closest to the observation y, where the distance measure is the squared Euclidean distance.

Implementation of the ML decision rule

The squared Euclidean distance can be expanded as

||y - x||² = ||y||² - 2⟨y, x⟩ + ||x||².

The term ||y||² is irrelevant for the minimization with respect to x. Thus we get the following equivalent formulations of the ML decision rule:

x̂ = dec_ML(y) = argmin_{x ∈ X} { -2⟨y, x⟩ + ||x||² }
             = argmax_{x ∈ X} { 2⟨y, x⟩ - ||x||² }
             = argmax_{x ∈ X} { ⟨y, x⟩ - ||x||²/2 }.

[Figure: block diagram of the ML decision rule for the AWGN vector channel: y is correlated with each s_m, the bias ||s_m||²/2 is subtracted, and the largest metric is selected to give x̂. Vector correlator: ⟨a, b⟩ = Σ_{d=1}^{D} a_d b_d for a, b ∈ R^D]
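Both formulations of the ML rule are easy to state in code. The following sketch (our illustration; the constellation is an arbitrary example) implements the minimum-distance rule and the equivalent correlation metric ⟨y, x⟩ - ||x||²/2 and confirms that they agree:

```python
import numpy as np

def ml_decide(y, X):
    """ML decision on the AWGN vector channel: nearest signal point."""
    d2 = np.sum((X - y) ** 2, axis=1)            # squared Euclidean distances
    return np.argmin(d2)

def ml_decide_corr(y, X):
    """Equivalent correlation form: argmax <y, x> - ||x||^2 / 2."""
    metric = X @ y - 0.5 * np.sum(X ** 2, axis=1)
    return np.argmax(metric)

X = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])  # example constellation
y = np.array([0.7, 0.4])
print(ml_decide(y, X), ml_decide_corr(y, X))     # both pick index 0
```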

Remark: Some signal constellations X have the property that ||x||² has the same value for all x ∈ X. In these cases (but only then), the ML decision rule simplifies further to

x̂ = dec_ML(y) = argmax_{x ∈ X} ⟨y, x⟩.

2.8 Symbol-Error Probability

The error probability for modulation symbols X, or for short, the symbol-error probability, is defined as

P_s = Pr(X̂ ≠ X) = Pr(dec(Y) ≠ X).

The symbol-error probability coincides with the block-error probability:

P_s = Pr(Û ≠ U) = Pr( [Û_1, ..., Û_K] ≠ [U_1, ..., U_K] ).

We compute P_s in two steps:

(i) Compute the conditional symbol-error probability given X = x for any x ∈ X: Pr(X̂ ≠ X | X = x).
(ii) Average over the signal constellation: Pr(X̂ ≠ X) = Σ_{x ∈ X} Pr(X̂ ≠ X | X = x) Pr(X = x).

Remember: the decision regions of the symbol decision rule partition R^D into D_1, ..., D_M, one region per signal point of X.

Assume that x = s_m is transmitted. Then,

Pr(X̂ ≠ X | X = s_m) = Pr(X̂ ≠ s_m | X = s_m)
  = Pr(Y ∉ D_m | X = s_m)
  = 1 - Pr(Y ∈ D_m | X = s_m).

For the AWGN vector channel, the conditional probability density function of Y given X = s_m is

p_{Y|X}(y|s_m) = (πN_0)^{-D/2} exp( -||y - s_m||² / N_0 ).

Hence,

Pr(Y ∈ D_m | X = s_m) = ∫_{D_m} p_{Y|X}(y|s_m) dy = ∫_{D_m} (πN_0)^{-D/2} exp( -||y - s_m||² / N_0 ) dy.

The symbol-error probability is then obtained by averaging over the signal constellation:

Pr(X̂ ≠ X) = 1 - Σ_{m=1}^{M} Pr(Y ∈ D_m | X = s_m) Pr(X = s_m).

If the modulation symbols are equiprobable, the formula simplifies to

Pr(X̂ ≠ X) = 1 - (1/M) Σ_{m=1}^{M} Pr(Y ∈ D_m | X = s_m).

2.9 Bit-Error Probability

The error probability for the source bits U_k, or for short, the bit-error probability, is defined as

P_b = (1/K) Σ_{k=1}^{K} Pr(Û_k ≠ U_k),

i.e., as the average fraction of erroneous bits in the block Û = [Û_1, ..., Û_K].

Relation between the bit-error probability and the symbol-error probability

Clearly, the relation between P_b and P_s depends on the encoding function enc: u = [u_1, ..., u_K] -> x = enc(u). The analytical derivation of the relation between P_b and P_s is usually cumbersome. However, we can derive an upper bound and a lower bound for P_b when P_s is given (and also vice versa). These bounds are valid for any encoding function. The lower bound provides a guideline for the design of a good encoding function.

Lower bound: If a symbol is erroneous, there is at least one erroneous bit in the block of K bits. Therefore,

P_b ≥ (1/K) P_s.

Upper bound: If a symbol is erroneous, there are at most K erroneous bits in the block of K bits. Therefore,

P_b ≤ P_s.

Combination: Combining the lower bound and the upper bound yields

(1/K) P_s ≤ P_b ≤ P_s.

An efficient encoder should achieve the lower bound for the bit-error probability, i.e., P_b ≈ (1/K) P_s.

Remark: The bounds can also be obtained by considering the union bound of the events {Û_1 ≠ U_1}, {Û_2 ≠ U_2}, ..., {Û_K ≠ U_K}, i.e., by relating the probability

Pr( {Û_1 ≠ U_1} ∪ {Û_2 ≠ U_2} ∪ ... ∪ {Û_K ≠ U_K} )

to the probabilities Pr({Û_1 ≠ U_1}), Pr({Û_2 ≠ U_2}), ..., Pr({Û_K ≠ U_K}).

2.10 Gray Encoding

In Gray encoding, blocks of bits u = [u_1, ..., u_K] which differ in only one bit u_k are assigned to neighboring vectors (modulation symbols) x in the signal constellation X.

Example: 4PAM (Pulse Amplitude Modulation). Signal constellation: X = {[-3], [-1], [+1], [+3]}.

[Figure: s_1 = -3, s_2 = -1, s_3 = +1, s_4 = +3, labeled 00, 01, 11, 10; neighboring points are at distance 2]

Motivation: When a decision error occurs, it is very likely that the transmitted symbol and the detected symbol are neighbors. When the bit blocks of neighboring symbols differ in only one bit, there will be only one bit error per symbol error.

Effect: Gray encoding achieves P_b ≈ (1/K) P_s for high signal-to-noise ratios (SNR).
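One standard way to realize such a labeling is the binary-reflected Gray code, in which the m-th label is m XOR (m >> 1). Whether the notes use exactly this construction is not stated, but it reproduces the 4PAM labeling above; a short Python sketch:

```python
def gray_labels(M):
    """Binary-reflected Gray labels for M points: neighbors differ in one bit."""
    K = M.bit_length() - 1              # number of bits per label, M = 2**K
    return [format(m ^ (m >> 1), f'0{K}b') for m in range(M)]

print(gray_labels(4))   # ['00', '01', '11', '10']
print(gray_labels(8))   # ['000', '001', '011', '010', '110', '111', '101', '100']
```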

Sketch of the proof: We consider the case K = 2, i.e., U = [U_1, U_2].

P_s = Pr(X̂ ≠ X)
    = Pr( [Û_1, Û_2] ≠ [U_1, U_2] )
    = Pr( {Û_1 ≠ U_1} ∪ {Û_2 ≠ U_2} )
    = Pr(Û_1 ≠ U_1) + Pr(Û_2 ≠ U_2) - Pr( {Û_1 ≠ U_1} ∩ {Û_2 ≠ U_2} ).

The first two terms sum to approximately 2 P_b, and the intersection term is small compared to the other two terms if the SNR is high. Hence P_s ≈ 2 P_b.

Part II: Digital Communication Techniques

3 Pulse Amplitude Modulation (PAM)

In pulse amplitude modulation (PAM), the information is conveyed by the amplitude of the transmitted waveforms.

3.1 BPAM

PAM with only two waveforms is called binary PAM (BPAM).

3.1.1 Signaling Waveforms

Set of two rectangular waveforms with amplitudes +A and -A over t ∈ [0, T], S := {s_1(t), s_2(t)}:

[Figure: s_1(t) = +A and s_2(t) = -A, both constant on [0, T]]

As |S| = 2, one bit is sufficient to address the waveforms (K = 1). Thus, the BPAM transmitter performs the mapping U -> x(t) with U ∈ U = {0, 1} and x(t) ∈ S.

Remark: In the general case, the basic waveform may also have another shape.

3.1.2 Decomposition of the Transmitter

Gram-Schmidt procedure

ψ_1(t) = (1/√E_1) s_1(t),   with E_1 = ∫_0^T ( s_1(t) )² dt = ∫_0^T A² dt = A²T.

Thus

ψ_1(t) = 1/√T for t ∈ [0, T], and 0 otherwise.

Both s_1(t) and s_2(t) are linear combinations of ψ_1(t):

s_1(t) = +√E_s ψ_1(t) = +A√T ψ_1(t),
s_2(t) = -√E_s ψ_1(t) = -A√T ψ_1(t),

where √E_s = A√T and E_s = E_1 = E_2. (The two waveforms have the same energy.)

Signal constellation

Vector representation of the two waveforms:

s_1(t) -> s_1 = +√E_s,   s_2(t) -> s_2 = -√E_s.

Thus the signal constellation is X = {-√E_s, +√E_s}.

[Figure: points s_2 = -√E_s and s_1 = +√E_s on the ψ_1-axis]

Vector Encoder

The vector encoder is usually defined as

x = enc(u) = +√E_s for u = 0, and -√E_s for u = 1.

The signal constellation comprises only two waveforms, and thus per symbol and per bit, the average transmit energies and the SNRs are the same: E_b = E_s, γ_s = γ_b.

3.1.3 ML Decoding

ML symbol decision rule:

dec_ML(y) = argmin_{x ∈ {-√E_s, +√E_s}} ||y - x||² = +√E_s for y ≥ 0, and -√E_s for y < 0.

Decision regions: D_1 = {y ∈ R : y ≥ 0}, D_2 = {y ∈ R : y < 0}.

[Figure: the real line with s_2 = -√E_s in D_2 (y < 0) and s_1 = +√E_s in D_1 (y ≥ 0)]

Remark: The decision rule dec_ML(y) = +√E_s for y > 0, -√E_s for y ≤ 0, is also an ML decision rule.

The mapping of the encoder inverse reads

û = enc⁻¹(x̂) = 0 for x̂ = +√E_s, and 1 for x̂ = -√E_s.

Combining the (modulation) symbol decision rule and the encoder inverse yields the ML decoder for BPAM:

û = 0 for y ≥ 0, and 1 for y < 0.

3.1.4 Vector Representation of the Transceiver

BPAM transceiver:
[Figure: BSS -> U -> encoder (0 -> +√E_s, 1 -> -√E_s) -> X -> modulator ψ_1(t) -> X(t) -> AWGN channel (+ W(t)) -> Y(t) -> correlator (1/√T) ∫_0^T (.) dt -> Y -> decoder (D_1 -> 0, D_2 -> 1) -> Û -> sink]

Vector representation:
[Figure: BSS -> U -> encoder -> X -> + W, W ~ N(0, N_0/2) -> Y -> decoder -> Û -> sink]

3.1.5 Symbol-Error Probability

Conditional probability densities:

p_{Y|X}(y | +√E_s) = (1/√(πN_0)) exp( -(y - √E_s)² / N_0 ),
p_{Y|X}(y | -√E_s) = (1/√(πN_0)) exp( -(y + √E_s)² / N_0 ).

Conditional symbol-error probability

Assume that X = x = +√E_s is transmitted.

Pr(X̂ ≠ X | X = +√E_s) = Pr(Y ∈ D_2 | X = +√E_s)
  = Pr(Y < 0 | X = +√E_s)
  = ∫_{-∞}^{0} (1/√(πN_0)) exp( -(y - √E_s)² / N_0 ) dy
  = ∫_{√(2E_s/N_0)}^{∞} (1/√(2π)) exp( -z²/2 ) dz,

where the last equality results from substituting z = -(y - √E_s)/√(N_0/2).

The Q-function is defined as

Q(v) := ∫_v^∞ q(z) dz,   q(z) = (1/√(2π)) exp( -z²/2 ).

The function q(z) denotes a Gaussian pdf with zero mean and variance 1; i.e., it may be interpreted as the pdf of a random variable Z ~ N(0, 1). Using this definition, the conditional symbol-error probabilities may be written as

Pr(X̂ ≠ X | X = +√E_s) = Q( √(2E_s/N_0) ),
Pr(X̂ ≠ X | X = -√E_s) = Q( √(2E_s/N_0) ).

The second equation follows from symmetry.

Symbol-error probability

The symbol-error probability results from averaging over the conditional symbol-error probabilities:

P_s = Pr(X̂ ≠ X)
    = Pr(X = +√E_s) Pr(X̂ ≠ X | X = +√E_s) + Pr(X = -√E_s) Pr(X̂ ≠ X | X = -√E_s)
    = Q( √(2E_s/N_0) ) = Q( √(2γ_s) ),

as Pr(X = +√E_s) = Pr(X = -√E_s) = 1/2.

Alternative method to derive the conditional symbol-error probability

Assume again that X = +√E_s is transmitted. Instead of considering the conditional pdf of Y, we directly use the pdf of the white Gaussian noise W.

Pr(X̂ ≠ X | X = +√E_s) = Pr(Y < 0 | X = +√E_s)
  = Pr(√E_s + W < 0 | X = +√E_s)
  = Pr(W < -√E_s | X = +√E_s)
  = Pr(W < -√E_s)
  = Pr(W' < -√(2E_s/N_0))
  = Q( √(2E_s/N_0) ),

where we substituted W' = W/√(N_0/2); notice that W' ~ N(0, 1). A plot of the symbol-error probability can be found in [1, p. 412].

The initial event (here {Y < 0}) is transformed into an equivalent event (here {W' < -√(2E_s/N_0)}), whose probability can easily be calculated. This method is frequently applied to determine symbol-error probabilities.

3.1.6 Bit-Error Probability

In BPAM, each waveform conveys one source bit. Accordingly, each symbol error leads to exactly one bit error, and thus

P_b = Pr(Û ≠ U) = Pr(X̂ ≠ X) = P_s.

For the same reason, the transmit energy per symbol is equal to the transmit energy per bit, and so the SNR per symbol is also equal to the SNR per bit: E_b = E_s, γ_s = γ_b. Therefore, the bit-error probability (BEP) of BPAM results as

P_b = Q( √(2γ_b) ).

Notice: P_b ≈ 10⁻⁵ is achieved for γ_b ≈ 9.6 dB.
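The Q-function has no closed form, but it can be evaluated via the complementary error function as Q(v) = erfc(v/√2)/2. The following sketch (our illustration) verifies the quoted operating point P_b ≈ 10⁻⁵ at γ_b ≈ 9.6 dB:

```python
import math

def Q(v):
    """Gaussian tail probability: Q(v) = 0.5 * erfc(v / sqrt(2))."""
    return 0.5 * math.erfc(v / math.sqrt(2))

gamma_b_dB = 9.6
gamma_b = 10 ** (gamma_b_dB / 10)
print(Q(math.sqrt(2 * gamma_b)))     # ≈ 1e-5, the BPAM bit-error probability
```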

3.2 MPAM

In the general case of M-ary pulse amplitude modulation (MPAM) with M = 2^K, input blocks of K bits modulate the amplitude of a unique pulse.

3.2.1 Signaling Waveforms

Waveform encoder:

u = [u_1, ..., u_K] -> x(t),   U = {0, 1}^K -> S = {s_1(t), ..., s_M(t)},

where

s_m(t) = (2m - M - 1) A g(t) for t ∈ [0, T], and 0 otherwise,

A ∈ R, and g(t) is a predefined pulse of duration T, g(t) = 0 for t ∉ [0, T].

Values of the factors (2m - M - 1)A that weight g(t):

-(M-1)A, -(M-3)A, ..., -A, A, ..., (M-3)A, (M-1)A.

Example: 4PAM with rectangular pulse. Using the rectangular pulse g(t) = 1 for t ∈ [0, T], 0 otherwise, the resulting waveforms are

S = { -3A g(t), -A g(t), A g(t), 3A g(t) }.

3.2.2 Decomposition of the Transmitter

Gram-Schmidt procedure: Orthonormal function

ψ(t) = (1/√E_g) g(t),   E_g = ∫_0^T ( g(t) )² dt.

Every waveform in S is a linear combination of (only) ψ(t):

s_m(t) = (2m - M - 1) A√E_g ψ(t) = (2m - M - 1) d ψ(t),   with d := A√E_g.

Signal constellation

s_1(t) -> s_1 = -(M-1)d
s_2(t) -> s_2 = -(M-3)d
...
s_m(t) -> s_m = (2m - M - 1)d
...
s_{M-1}(t) -> s_{M-1} = (M-3)d
s_M(t) -> s_M = (M-1)d

X = {s_1, s_2, ..., s_M} = { -(M-1)d, -(M-3)d, ..., -d, d, ..., (M-3)d, (M-1)d }.

Example: M = 2, 4, 8.

[Figure: Gray-labeled constellations.
M = 2: s_1, s_2 labeled 0, 1.
M = 4: s_1, ..., s_4 labeled 00, 01, 11, 10.
M = 8: s_1, ..., s_8 labeled 000, 001, 011, 010, 110, 111, 101, 100.]

Vector Encoder

The vector encoder

enc: u = [u_1, ..., u_K] -> x = enc(u),   U = {0, 1}^K -> X = { -(M-1)d, ..., (M-1)d },

is any Gray encoding function.

Energy and SNR

The source bits are generated by a BSS, and so the symbols of the signal constellation X are equiprobable:

Pr(X = s_m) = 1/M for all m = 1, 2, ..., M.

The average symbol energy results as

E_s = E[X²] = (1/M) Σ_{m=1}^{M} (s_m)² = (d²/M) Σ_{m=1}^{M} (2m - M - 1)² = (d²/M) · (1/3) M (M² - 1) = ((M² - 1)/3) d².

Thus,

E_s = ((M² - 1)/3) d²   ⇔   d = √( 3 E_s / (M² - 1) ).

As M = 2^K, the average energy per transmitted source bit results as

E_b = E_s / K = ((M² - 1)/(3 log₂M)) d²   ⇔   d = √( 3 log₂M · E_b / (M² - 1) ).

The SNR per symbol and the SNR per bit are related as

γ_b = (1/K) γ_s = (1/log₂M) γ_s.

3.2.3 ML Decoding

ML symbol decision rule:

dec_ML(y) = argmin_{x ∈ X} ||y - x||².

Decision regions

Example: Decision regions for M = 4.

[Figure: the real line divided into D_1, D_2, D_3, D_4 around s_1, s_2, s_3, s_4; neighboring signal points are at distance 2d, and the region boundaries lie midway between them]

Starting from the special cases M = 2, 4, 8, the decision regions can be inferred:

D_1 = { y ∈ R : y ≤ -(M-2)d },
D_2 = { y ∈ R : -(M-2)d < y ≤ -(M-4)d },
...
D_m = { y ∈ R : (2m - M - 2)d < y ≤ (2m - M)d },
...
D_{M-1} = { y ∈ R : (M-4)d < y ≤ (M-2)d },
D_M = { y ∈ R : (M-2)d < y }.

Accordingly, the ML symbol decision rule may also be written as dec_ML(y) = s_m for y ∈ D_m.

ML decoder for MPAM: û = enc⁻¹(dec_ML(y)).
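Because the interior decision regions are intervals of width 2d centered on the signal points, the ML decision reduces to rounding. A minimal slicer sketch (our illustration; names and values are ours):

```python
import numpy as np

def mpam_slice(y, M, d):
    """ML decision for M-PAM: nearest point s_m = (2m - M - 1) d, m = 1..M."""
    m = np.round((y / d + M + 1) / 2).astype(int)   # invert s_m = (2m - M - 1) d
    return np.clip(m, 1, M)                         # edge regions extend to +-infinity

d = 1.0
print(mpam_slice(np.array([-3.7, -0.2, 0.9, 5.0]), 4, d))   # -> [1 2 3 4]
```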

3.2.4 Vector Representation of the Transceiver

MPAM transceiver:
[Figure: BSS -> U -> Gray encoder (u_m -> s_m) -> X -> modulator ψ(t) -> X(t) -> AWGN channel (+ W(t)) -> Y(t) -> correlator ∫_0^T (.) ψ(t) dt -> Y -> decoder (D_m -> u_m) -> Û -> sink]

Vector representation:
[Figure: BSS -> U -> Gray encoder (u_m -> s_m) -> X -> + W, W ~ N(0, N_0/2) -> Y -> decoder (D_m -> u_m) -> Û -> sink]

Notice: u_m denotes the block of source bits corresponding to s_m, i.e., s_m = enc(u_m), u_m = enc⁻¹(s_m).

3.2.5 Symbol-Error Probability

Conditional symbol-error probability

Two cases must be considered:

1. The transmitted symbol is an edge symbol (a symbol at the edge of the signal constellation): X ∈ {s_1, s_M}.
2. The transmitted symbol is not an edge symbol: X ∈ {s_2, s_3, ..., s_{M-1}}.

Case 1: X ∈ {s_1, s_M}

Assume first X = s_1. Then

Pr(X̂ ≠ X | X = s_1) = Pr(Y ∉ D_1 | X = s_1)
  = Pr(Y > s_1 + d | X = s_1)
  = Pr(W > d)
  = Pr( W/√(N_0/2) > d/√(N_0/2) )
  = Q( d/√(N_0/2) ).

Notice that W/√(N_0/2) ~ N(0, 1). By symmetry,

Pr(X̂ ≠ X | X = s_M) = Q( d/√(N_0/2) ).

Case 2: X ∈ {s_2, ..., s_{M-1}}

Assume X = s_m. Then

Pr(X̂ ≠ X | X = s_m) = Pr(Y ∉ D_m | X = s_m)
  = Pr(W < -d or W > d)
  = Pr(W < -d) + Pr(W > d)
  = Pr( W/√(N_0/2) < -d/√(N_0/2) ) + Pr( W/√(N_0/2) > d/√(N_0/2) )
  = Q( d/√(N_0/2) ) + Q( d/√(N_0/2) )
  = 2 Q( d/√(N_0/2) ).

Symbol-error probability

Averaging over the symbols, one obtains

P_s = (1/M) Σ_{m=1}^{M} Pr(X̂ ≠ X | X = s_m)
    = (2/M) Q( d/√(N_0/2) ) + (2(M-2)/M) Q( d/√(N_0/2) )
    = (2(M-1)/M) Q( d/√(N_0/2) ).

Expressing the symbol-error probability as a function of the SNR per symbol, γ_s, or the SNR per bit, γ_b, yields

P_s = (2(M-1)/M) Q( √( 6γ_s / (M² - 1) ) ) = (2(M-1)/M) Q( √( 6 log₂M · γ_b / (M² - 1) ) ).
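The derived formula can be validated by Monte Carlo simulation of the MPAM vector channel. The sketch below (our illustration, with arbitrary parameter values) compares the empirical symbol-error rate for 4PAM with the analytic expression:

```python
import numpy as np
from math import erfc, sqrt

def Q(v):
    return 0.5 * erfc(v / sqrt(2))

rng = np.random.default_rng(4)
M, d, N0, n_sym = 4, 1.0, 0.4, 200_000
X = (2 * np.arange(1, M + 1) - M - 1) * d          # constellation points s_1..s_M
m = rng.integers(1, M + 1, size=n_sym)             # equiprobable symbol indices
y = X[m - 1] + rng.normal(0.0, sqrt(N0 / 2), n_sym)

# ML slicer (decision regions D_m): round to the nearest signal point
m_hat = np.clip(np.round((y / d + M + 1) / 2).astype(int), 1, M)

Es = d**2 * (M**2 - 1) / 3
gamma_s = Es / N0
ps_analytic = 2 * (M - 1) / M * Q(sqrt(6 * gamma_s / (M**2 - 1)))
print(np.mean(m_hat != m), ps_analytic)            # the two values agree closely
```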

Plots of the symbol-error probabilities can be found in [1, p. 412].

Remark: When comparing BPAM and 4PAM for the same P_s, the SNRs per bit, γ_b, differ by 10 log₁₀(5/2) ≈ 4 dB. (Proof?)

3.2.6 Bit-Error Probability

When Gray encoding is used, we have the relation

P_b ≈ P_s / log₂M = (2(M-1)/(M log₂M)) Q( √( 6 log₂M · γ_b / (M² - 1) ) )

for high SNR γ_b = E_b/N_0.
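For the remark above, a quick sanity check of the ≈ 4 dB figure (our sketch; it neglects the differing leading factors 1 vs. 3/2, as is usual at high SNR):

```python
from math import log10
# BPAM: P_s = Q(sqrt(2*gamma_b));  4PAM: P_s = (3/2) Q(sqrt((4/5)*gamma_b)).
# Equal Q-arguments require 2*gamma_b(BPAM) = (4/5)*gamma_b(4PAM),
# i.e. gamma_b(4PAM)/gamma_b(BPAM) = 5/2:
print(10 * log10(5 / 2))   # ≈ 3.98 dB, the quoted ~4 dB gap
```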

4 Pulse Position Modulation (PPM)

In pulse position modulation (PPM), the information is conveyed by the position of the transmitted waveforms.

4.1 BPPM

PPM with only two waveforms is called binary PPM (BPPM).

4.1.1 Signaling Waveforms

Set of two rectangular waveforms with amplitude A and duration T/2 over t ∈ [0, T], S := {s_1(t), s_2(t)}:

[Figure: s_1(t) is a pulse of amplitude A on [0, T/2]; s_2(t) is a pulse of amplitude A on [T/2, T]]

As the cardinality of the set is |S| = 2, one bit is sufficient to address each waveform, i.e., K = 1. Thus, the BPPM transmitter performs the mapping U -> x(t) with U ∈ U = {0, 1} and x(t) ∈ S.

Remark: In the general case, the basic waveform may also have another shape.

4.1.2 Decomposition of the Transmitter

Gram-Schmidt procedure

The waveforms s_1(t) and s_2(t) are orthogonal and have the same energy

E_s = ∫_0^T s_1²(t) dt = ∫_0^T s_2²(t) dt = A²T/2.

Notice that √E_s = A√(T/2). The two orthonormal functions and the corresponding waveform representations are

ψ_1(t) = (1/√E_s) s_1(t),   ψ_2(t) = (1/√E_s) s_2(t),
s_1(t) = √E_s ψ_1(t) + 0·ψ_2(t),   s_2(t) = 0·ψ_1(t) + √E_s ψ_2(t).

Signal constellation

Vector representation of the two waveforms:

s_1(t) -> s_1 = (√E_s, 0)^T,   s_2(t) -> s_2 = (0, √E_s)^T.

Thus the signal constellation is X = { (√E_s, 0)^T, (0, √E_s)^T }.

[Figure: the two signal points on the ψ_1- and ψ_2-axes, each at distance √E_s from the origin and at distance √(2E_s) from each other]

Vector Encoder

The vector encoder is defined as

x = enc(u) = (√E_s, 0)^T for u = 0, and (0, √E_s)^T for u = 1.

The signal constellation comprises only two waveforms, and thus per (modulation) symbol and per (source) bit, the average transmit energies and the SNRs are the same: E_b = E_s, γ_s = γ_b.

4.1.3 ML Decoding

ML symbol decision rule:

dec_ML(y) = argmin_{x ∈ {s_1, s_2}} ||y - x||² = (√E_s, 0)^T for y_1 ≥ y_2, and (0, √E_s)^T for y_1 < y_2.

Notice: ||y - s_1||² ≤ ||y - s_2||² ⇔ y_1 ≥ y_2.

Decision regions:

D_1 = { y ∈ R² : y_1 ≥ y_2 },   D_2 = { y ∈ R² : y_1 < y_2 }.

[Figure: the (y_1, y_2)-plane split by the line y_1 = y_2; s_1 lies in D_1, s_2 in D_2]

Combining the (modulation) symbol decision rule and the encoder inverse yields the ML decoder for BPPM:

û = enc⁻¹(dec_ML(y)) = 0 for y_1 ≥ y_2, and 1 for y_1 < y_2.

The ML decoder may be implemented by forming Z = Y_1 - Y_2 and deciding û = 0 for z ≥ 0 and û = 1 for z < 0.

[Figure: Y_1 and Y_2 enter a subtractor producing Z; a threshold at zero outputs Û]

Notice that Z = Y_1 - Y_2 is then the decision variable.

4.1.4 Vector Representation of the Transceiver

BPPM transceiver for the waveform AWGN channel:
[Figure: BSS -> U -> vector encoder (0 -> (√E_s, 0)^T, 1 -> (0, √E_s)^T) -> (X_1, X_2) -> modulator with ψ_1(t), ψ_2(t) -> X(t) -> AWGN channel (+ W(t)) -> Y(t) -> correlators √(2/T) ∫_0^{T/2} (.) dt and √(2/T) ∫_{T/2}^{T} (.) dt -> (Y_1, Y_2) -> vector decoder (D_1 -> 0, D_2 -> 1) -> Û -> sink]

Vector representation including the AWGN vector channel:
[Figure: BSS -> U -> vector encoder -> (X_1, X_2) -> + (W_1, W_2), with W_1 ~ N(0, N_0/2) and W_2 ~ N(0, N_0/2) -> (Y_1, Y_2) -> vector decoder -> Û -> sink]

4.1.5 Symbol-Error Probability

Conditional symbol-error probability

Assume that X = s_1 = (√E_s, 0)^T is transmitted.

Pr(X̂ ≠ X | X = s_1) = Pr(Y ∈ D_2 | X = s_1) = Pr(s_1 + W ∈ D_2 | X = s_1) = Pr(s_1 + W ∈ D_2).

The noise vector W may be represented in the coordinate system (ξ_1, ξ_2) or in the rotated coordinate system (ξ_1', ξ_2'). The latter is more suited to computing the above probability, as only the noise component along the ξ_2'-axis is relevant.

[Figure: signal points s_1 and s_2 in the (ξ_1, ξ_2)-plane; the coordinate system rotated by α = π/4 has its ξ_1'-axis along the decision boundary y_1 = y_2; the distance from s_1 to the boundary is √(E_s/2)]

The elements of the noise vector in the rotated coordinate system have the same statistical properties as in the non-rotated coordinate system (see Appendix B), i.e.,

W_1' ~ N(0, N_0/2),   W_2' ~ N(0, N_0/2).

Thus, we obtain

Pr(X̂ ≠ X | X = s_1) = Pr( W_2' > √(E_s/2) )
  = Pr( W_2'/√(N_0/2) > √(E_s/N_0) )
  = Q( √(E_s/N_0) ) = Q( √γ_s ).

By symmetry, Pr(X̂ ≠ X | X = s_2) = Q( √γ_s ).

Symbol-error probability

Averaging over the conditional symbol-error probabilities, we obtain

P_s = Pr(X̂ ≠ X) = Q( √γ_s ).

4.1.6 Bit-Error Probability

For BPPM, E_b = E_s, γ_s = γ_b, and P_b = P_s (see above), and so the bit-error probability (BEP) results as

P_b = Pr(Û ≠ U) = Q( √γ_b ).

A plot of the BEP versus γ_b may be found in [1, p. 426].

Remark: Due to their signal constellations, BPPM is called orthogonal signaling and BPAM is called antipodal signaling. Their BEPs are given by

P_b,BPPM = Q( √γ_b ),   P_b,BPAM = Q( √(2γ_b) ).

When comparing the SNR for a certain BEP, BPPM requires 3 dB more than BPAM. For a plot, see [1, p. 409].
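The 3 dB gap between orthogonal and antipodal signaling can be read directly off the two BEP formulas; a short sketch (our illustration):

```python
from math import erfc, sqrt

def Q(v):
    return 0.5 * erfc(v / sqrt(2))

for gb_dB in (6, 8, 10, 12):
    gb = 10 ** (gb_dB / 10)
    print(gb_dB, Q(sqrt(gb)), Q(sqrt(2 * gb)))   # BPPM BEP vs. BPAM BEP
# BPPM at gamma_b performs like BPAM at gamma_b - 3 dB: orthogonal signaling
# needs twice the energy of antipodal signaling for the same BEP.
```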

4.2 MPPM

In the general case of M-ary pulse position modulation (MPPM) with M = 2^K, input blocks of K bits determine the position of a unique pulse.

4.2.1 Signaling Waveforms

Waveform encoder:

u = [u_1, ..., u_K] -> x(t),   U = {0, 1}^K -> S = {s_1(t), ..., s_M(t)},

where

s_m(t) = g( t - (m-1)T/M ),   m = 1, 2, ..., M,

and g(t) is a predefined pulse of duration T/M, g(t) = 0 for t ∉ [0, T/M]. The waveforms are orthogonal and have the same energy as g(t),

E_s = E_g = ∫_0^T g²(t) dt.

Example (M = 4): Consider a rectangular pulse g(t) with amplitude A and duration T/4. Then we have the set of four waveforms, S = {s_1(t), s_2(t), s_3(t), s_4(t)}:

[Figure: s_m(t) is a rectangular pulse of amplitude A and duration T/4 occupying the m-th quarter of the interval [0, T]]