
Chapter 3. Error Probability for M Signals

In this chapter we discuss the error probability in deciding which of $M$ signals was transmitted over an arbitrary channel. We assume the signals are represented by a set of $N$ orthonormal functions. Writing down an expression for the error probability in terms of an $N$-dimensional integral is straightforward. However, evaluating the integrals involved in the expression is, in all but a few special cases, very difficult or impossible if $N$ is fairly large (e.g. $N \ge 4$). For the special case of orthogonal signals, in the last chapter we derived the error probability as a single integral. Because of the difficulty of evaluating the error probability in general, bounds are needed to determine the performance. Different bounds have different complexity of evaluation. The first bound we derive is known as the Gallager bound. We apply this bound to the case of orthogonal signals (for which the true answer is already known). The Gallager bound has the property that when the number of signals becomes large the bound becomes tight. However, the bound is fairly difficult to evaluate for many signal sets. A special case of the Gallager bound is the Union-Bhattacharyya bound. This is simpler than the Gallager bound to evaluate but is also looser. The last bound considered is the union bound. This bound is tighter than the Union-Bhattacharyya bound and the Gallager bound for sufficiently high signal-to-noise ratios. Finally we consider a simple random coding bound on the ensemble of all signal sets using the Union-Bhattacharyya bound.

The general signal set we consider has the form
\[
s_i(t) = \sum_{j=1}^{N} s_{i,j}\,\varphi_j(t), \qquad 0 \le i \le M-1 .
\]
The optimum receiver does a correlation with the orthonormal waveforms to form the decision variables
\[
r_j = \int_0^T r(t)\,\varphi_j(t)\,dt, \qquad 1 \le j \le N .
\]
The decision regions for equally likely signals are given by
\[
R_i = \{\mathbf{r} : p_i(\mathbf{r}) \ge p_j(\mathbf{r}),\ \forall j \ne i\}.
\]
The error probability is then determined by
\[
P_e = \frac{1}{M}\sum_{i=0}^{M-1} P\{\mathbf{r} \notin R_i \mid H_i\}.
\]
For all but a few small-dimensional signal sets or signals with special structure (such as orthogonal signal sets) the exact error probability is very difficult to calculate.
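Because the exact calculation is rarely tractable, one direct way to get a feel for the numbers is to simulate the optimum receiver. The following is a minimal Monte Carlo sketch, assuming NumPy is available; the helper names and the example signal set are only illustrative.

# Monte Carlo sketch: estimate the symbol error probability of the optimum
# (minimum-distance) receiver for an arbitrary signal set in AWGN.  Signals are
# given by their coefficient vectors in the orthonormal basis; the noise in each
# coordinate is N(0, N0/2).
import numpy as np

def simulate_pe(S, N0, num_trials=50_000, seed=None):
    """S: (M, N) array of signal coefficient vectors; returns estimated symbol error rate."""
    rng = np.random.default_rng(seed)
    M, N = S.shape
    errors = 0
    for _ in range(num_trials):
        i = rng.integers(M)                                   # equally likely signals
        r = S[i] + rng.normal(0.0, np.sqrt(N0 / 2.0), size=N)  # received vector
        # Minimum-distance decision (the ML rule for equally likely signals in AWGN)
        decision = np.argmin(np.sum((S - r) ** 2, axis=1))
        errors += int(decision != i)
    return errors / num_trials

# Example: M = 4 orthogonal signals with energy E = 1 and N0 = 0.25
E, N0 = 1.0, 0.25
S = np.sqrt(E) * np.eye(4)
print(simulate_pe(S, N0))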

Figure 3.1: Optimum Receiver in Additive White Gaussian Noise. (A bank of correlators computes $r_j = \int_0^T r(t)\varphi_j(t)\,dt$, $j = 1,\dots,N$; the receiver then chooses the signal $s_i$ closest to $\mathbf{r}$.)

1. Error Probability for Orthogonal Signals

Represent the $M$ signals in terms of $M$ orthogonal functions $\varphi_i(t)$ as follows:
\[
s_1(t) = \sqrt{E}\,\varphi_1(t),\quad s_2(t) = \sqrt{E}\,\varphi_2(t),\quad \dots,\quad s_M(t) = \sqrt{E}\,\varphi_M(t).
\]
As shown in the previous chapter we need to find the largest value of $\int r(t)s_j(t)\,dt$ for $j = 1,\dots,M$. Instead we will normalize this and determine the largest value of
\[
r_j = \frac{1}{\sqrt{E}}\int r(t)\,s_j(t)\,dt = \int_0^T r(t)\,\varphi_j(t)\,dt .
\]
To determine the error probability we need to determine the statistics of $r_j$. Assume signal $s_1$ is transmitted. Then
\[
\mathbb{E}[r_j \mid H_1]
= \mathbb{E}\Big[\int_0^T r(t)\varphi_j(t)\,dt \,\Big|\, H_1\Big]
= \int_0^T \mathbb{E}[r(t)\mid H_1]\,\varphi_j(t)\,dt
= \int_0^T \big(s_1(t) + \mathbb{E}[n(t)]\big)\varphi_j(t)\,dt
= \sqrt{E}\int_0^T \varphi_1(t)\varphi_j(t)\,dt
= \sqrt{E}\,\delta_{1j}.
\]

The variance of $r_j$ is determined as follows. Given $H_1$,
\[
\mathrm{Var}[r_j \mid H_1]
= \mathbb{E}\Big[\Big(\int_0^T n(t)\varphi_j(t)\,dt\Big)^2\Big]
= \int_0^T\!\!\int_0^T K_n(t,s)\,\varphi_j(t)\varphi_j(s)\,dt\,ds
= \frac{N_0}{2}\int_0^T\!\!\int_0^T \delta(t-s)\,\varphi_j(t)\varphi_j(s)\,dt\,ds
= \frac{N_0}{2}\int_0^T \varphi_j^2(t)\,dt
= \frac{N_0}{2}.
\]
Furthermore, each of these random variables is Gaussian (and independent). Let
\[
\Phi(x) = \int_{-\infty}^{x}\frac{1}{\sqrt{2\pi}}\,e^{-u^2/2}\,du .
\]
Then $P(\text{error}) = 1 - P(\text{correct})$ and
\[
P_c = P\{r_1 \ge r_j,\ \forall j\ne 1 \mid H_1\}
= \int_{-\infty}^{\infty}\Phi^{M-1}\!\Big(r\sqrt{\tfrac{2}{N_0}}\Big)\,\frac{1}{\sqrt{\pi N_0}}\exp\Big(-\frac{(r-\sqrt{E})^2}{N_0}\Big)\,dr .
\]
Now let $u = (r-\sqrt{E})/\sqrt{N_0/2}$. Then
\[
P_c = \int_{-\infty}^{\infty}\frac{1}{\sqrt{2\pi}}\,e^{-u^2/2}\,\Phi^{M-1}\!\Big(u + \sqrt{\tfrac{2E}{N_0}}\Big)\,du,
\qquad
P_e = 1 - P_c = (M-1)\int_{-\infty}^{\infty}\Phi\Big(u - \sqrt{\tfrac{2E}{N_0}}\Big)\,\Phi^{M-2}(u)\,\frac{1}{\sqrt{2\pi}}\,e^{-u^2/2}\,du,
\]
where the last step follows from an integration-by-parts approach. Later on we will find an upper bound on the above that is more insightful. It is also possible to determine (using L'Hospital's rule) the limiting behavior of the error probability as $M \to \infty$.

In general, if we have $M$ decision variables for an $M$-ary hypothesis testing problem that are conditionally independent given the true hypothesis, with density (distribution) $f_1(x)$ ($F_1(x)$) for the decision variable corresponding to the true hypothesis and density and distribution function $f_0(x)$, $F_0(x)$ for the other decision variables, then the probability of a correct decision is
\[
P_c = \int f_1(x)\,[F_0(x)]^{M-1}\,dx .
\]
The probability of error is
\[
P_e = 1 - \int f_1(x)\,[F_0(x)]^{M-1}\,dx
= (M-1)\int F_1(x)\,[F_0(x)]^{M-2}\,f_0(x)\,dx .
\]
The last formula is many times easier to compute numerically than the first because the former is the difference between two numbers that are very close (for small error probabilities).
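Both single-integral forms are easy to evaluate numerically. The sketch below (assuming NumPy and SciPy are available; function names and example numbers are illustrative) evaluates them for $M$ orthogonal signals; the second form is the better-conditioned one, as noted above.

# Numerical sketch: exact symbol error probability for M orthogonal signals with
# energy E, using the two single-integral expressions above.  f1/F1 belong to the
# decision variable of the transmitted signal, f0/F0 to the other M-1 variables.
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

def pe_orthogonal(M, E_over_N0):
    mu = np.sqrt(2.0 * E_over_N0)       # mean of the normalized "correct" variable
    # Form 1: P_e = 1 - \int f1(x) F0(x)^{M-1} dx   (difference of nearly equal numbers)
    form1 = 1.0 - quad(lambda x: norm.pdf(x - mu) * norm.cdf(x) ** (M - 1),
                       -10.0, mu + 10.0)[0]
    # Form 2: P_e = (M-1) \int F1(x) F0(x)^{M-2} f0(x) dx   (numerically better behaved)
    form2 = (M - 1) * quad(lambda x: norm.cdf(x - mu) * norm.cdf(x) ** (M - 2) * norm.pdf(x),
                           -10.0, mu + 10.0)[0]
    return form1, form2

print(pe_orthogonal(M=8, E_over_N0=10 ** (6 / 10)))   # E/N0 = 6 dB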

2. Gallager Bound

In this section we derive an upper bound on the error probability for $M$ signals received in some form of noise. Let
\[
R_i = \{\mathbf{r} : p_i(\mathbf{r}) \ge p_j(\mathbf{r}),\ \forall j\ne i\},
\qquad
R_i^c = \{\mathbf{r} : p_i(\mathbf{r}) < p_j(\mathbf{r}) \text{ for some } j\ne i\},
\]
so that
\[
P_{e,i} = P\{\text{error}\mid H_i\} = P\{\mathbf{r}\in R_i^c \mid H_i\}.
\]
Now for $\lambda \ge 0$ let
\[
B_i = \Big\{\mathbf{r} : \sum_{j\ne i}\Big[\frac{p_j(\mathbf{r})}{p_i(\mathbf{r})}\Big]^{\lambda} \ge 1\Big\}.
\]
Claim: $R_i^c \subseteq B_i$.

Proof: If $\mathbf{r}\in R_i^c$ then $p_j(\mathbf{r}) \ge p_i(\mathbf{r})$ for some $j\ne i$, which implies $[p_j(\mathbf{r})/p_i(\mathbf{r})]^{\lambda} \ge 1$ for some $j\ne i$, and thus $\sum_{j\ne i}[p_j(\mathbf{r})/p_i(\mathbf{r})]^{\lambda} \ge 1$. Thus $\mathbf{r}\in B_i$, and we have shown $R_i^c \subseteq B_i$.

Now we use this to upper bound the error probability:
\[
P_{e,i} = P\{R_i^c \mid H_i\} \le P\{B_i \mid H_i\} = \int I_{B_i}(\mathbf{r})\,p_i(\mathbf{r})\,d\mathbf{r},
\]
where $I_{B_i}$ is the indicator function of $B_i$. For $\mathbf{r}\in B_i$ and any $\rho \ge 0$ we have
\[
I_{B_i}(\mathbf{r}) = 1 \le \Big[\sum_{j\ne i}\Big(\frac{p_j(\mathbf{r})}{p_i(\mathbf{r})}\Big)^{\lambda}\Big]^{\rho},
\]
while for $\mathbf{r}\notin B_i$
\[
I_{B_i}(\mathbf{r}) = 0 \le \Big[\sum_{j\ne i}\Big(\frac{p_j(\mathbf{r})}{p_i(\mathbf{r})}\Big)^{\lambda}\Big]^{\rho}.
\]

Thus for all $\mathbf{r}$,
\[
I_{B_i}(\mathbf{r}) \le \Big[\sum_{j\ne i}\Big(\frac{p_j(\mathbf{r})}{p_i(\mathbf{r})}\Big)^{\lambda}\Big]^{\rho}.
\]
Applying this bound to the expression for the error probability we obtain
\[
P_{e,i} \le \int p_i(\mathbf{r})\Big[\sum_{j\ne i}\Big(\frac{p_j(\mathbf{r})}{p_i(\mathbf{r})}\Big)^{\lambda}\Big]^{\rho} d\mathbf{r}
= \int p_i(\mathbf{r})^{1-\lambda\rho}\Big[\sum_{j\ne i} p_j(\mathbf{r})^{\lambda}\Big]^{\rho} d\mathbf{r}
\]
for $\rho \ge 0$ and $\lambda \ge 0$. If we let $\lambda = 1/(1+\rho)$ (this is the value that minimizes the bound; see Gallager, Problem 5.6) the resulting bound is known as the Gallager bound:
\[
P_{e,i} \le \int p_i(\mathbf{r})^{1/(1+\rho)}\Big[\sum_{j\ne i} p_j(\mathbf{r})^{1/(1+\rho)}\Big]^{\rho} d\mathbf{r}, \qquad 0 \le \rho \le 1 .
\]
If we let $\rho = 1$ we obtain what is known as the Bhattacharyya bound:
\[
P_{e,i} \le \sum_{j\ne i}\int \sqrt{p_i(\mathbf{r})\,p_j(\mathbf{r})}\;d\mathbf{r}.
\]
The average error probability is then written as
\[
P_e = \frac{1}{M}\sum_{i} P_{e,i} \le \frac{1}{M}\sum_{i}\sum_{j\ne i}\int \sqrt{p_i(\mathbf{r})\,p_j(\mathbf{r})}\;d\mathbf{r}.
\]

Example of Gallager bound for M-ary orthogonal signals in AWGN.

For $M$ orthogonal signals, each with energy $E$, the conditional density of the decision vector is
\[
p_i(\mathbf{r}) = \prod_{k=1}^{M}\frac{1}{\sqrt{\pi N_0}}\exp\Big(-\frac{(r_k - \sqrt{E}\,\delta_{ik})^2}{N_0}\Big),
\]
so the Gallager bound (for $i = 1$ say) reads
\[
P_{e,1} \le \int \Big[\prod_{k=1}^{M}\tfrac{1}{\sqrt{\pi N_0}}\,e^{-(r_k-\sqrt{E}\,\delta_{1k})^2/N_0}\Big]^{\frac{1}{1+\rho}}
\Big[\sum_{j=2}^{M}\Big(\prod_{k=1}^{M}\tfrac{1}{\sqrt{\pi N_0}}\,e^{-(r_k-\sqrt{E}\,\delta_{jk})^2/N_0}\Big)^{\frac{1}{1+\rho}}\Big]^{\rho} d\mathbf{r}.
\]

Carrying out the integral is easiest after normalizing the decision variables. Let $u_k = r_k\sqrt{2/N_0}$, so that, given $H_1$, $u_1$ is Gaussian with mean $\mu = \sqrt{2E/N_0}$ and unit variance while $u_k$, $k\ge 2$, are zero-mean unit-variance Gaussian, all independent. Then
\[
\Big(\frac{p_j(\mathbf{u})}{p_1(\mathbf{u})}\Big)^{\!1/(1+\rho)} = \exp\Big(\frac{\mu\,(u_j - u_1)}{1+\rho}\Big),
\qquad
P_{e,1} \le \mathbb{E}\Big[\Big(\sum_{j\ge 2} e^{\mu(u_j-u_1)/(1+\rho)}\Big)^{\rho}\Big].
\]
Let $f(x) = x^{\rho}$ where $0\le\rho\le1$. Then $f(x)$ is a concave function and thus by Jensen's inequality $\mathbb{E}[f(X)] \le f(\mathbb{E}[X])$. Applying this conditioned on $u_1$, and using (by completing the square) the Gaussian moment generating function $\mathbb{E}[e^{t u_j}] = e^{t^2/2}$, we obtain
\[
\mathbb{E}\Big[\Big(\sum_{j\ge 2} e^{\mu(u_j-u_1)/(1+\rho)}\Big)^{\rho}\,\Big|\,u_1\Big]
\le \Big[(M-1)\,e^{-\mu u_1/(1+\rho)}\,e^{\mu^2/(2(1+\rho)^2)}\Big]^{\rho}.
\]
Averaging over $u_1$ (again completing the square, $\mathbb{E}[e^{t u_1}] = e^{t\mu + t^2/2}$) gives
\[
P_{e,1} \le (M-1)^{\rho}\exp\Big(-\frac{\rho}{1+\rho}\,\frac{E}{N_0}\Big)
\le \exp\Big(\rho\ln M - \frac{\rho}{1+\rho}\,\frac{E}{N_0}\Big), \qquad 0\le\rho\le1 .
\]
Now we would like to minimize the bound over the parameter $\rho$, keeping in mind that the bound is only valid for $0\le\rho\le1$. Let $a = E/N_0$ and $b = \ln M$, and write the exponent as $f(\rho) = \rho b - \frac{\rho}{1+\rho}a$. Then $f'(\rho) = b - a/(1+\rho)^2$, which vanishes at $\rho = \sqrt{a/b}-1$. This minimum occurs at an interior point of the interval $[0,1]$ when $b \le a \le 4b$, i.e. $\ln M \le E/N_0 \le 4\ln M$, in which case the bound becomes
\[
P_e \le \exp\Big(-\big(\sqrt{E/N_0} - \sqrt{\ln M}\,\big)^2\Big)
= \exp\Big(-\frac{E}{N_0} + 2\sqrt{\frac{E}{N_0}\ln M} - \ln M\Big).
\]

If $\ln M \le \frac{E}{4N_0}$ then $\rho_{\min} = 1$, in which case the upper bound becomes $P_e \le \exp\big(\ln M - \frac{E}{2N_0}\big)$. If $\frac{E}{4N_0} \le \ln M \le \frac{E}{N_0}$ then $\rho_{\min}$ is the interior point above. In summary, the Gallager bound for $M$ orthogonal signals in white Gaussian noise is
\[
P_e \le
\begin{cases}
\exp\Big(-\Big(\dfrac{E}{2N_0} - \ln M\Big)\Big), & \ln M \le \dfrac{E}{4N_0},\\[8pt]
\exp\Big(-\Big(\sqrt{E/N_0} - \sqrt{\ln M}\Big)^2\Big), & \dfrac{E}{4N_0} \le \ln M \le \dfrac{E}{N_0}.
\end{cases}
\]
Normally a communication engineer is more concerned with the energy transmitted per bit rather than the energy transmitted per signal, $E$. If we let $E_b$ be the energy transmitted per bit then these are related as $E = E_b \log_2 M$. Thus the bound on the error probability can be expressed in terms of the energy transmitted per bit as
\[
P_e \le
\begin{cases}
\exp\Big(-\log_2 M\,\Big(\dfrac{E_b}{2N_0} - \ln 2\Big)\Big), & \ln 2 \le \dfrac{E_b}{4N_0},\\[8pt]
\exp\Big(-\log_2 M\,\Big(\sqrt{E_b/N_0} - \sqrt{\ln 2}\Big)^2\Big), & \dfrac{E_b}{4N_0} \le \ln 2 \le \dfrac{E_b}{N_0},
\end{cases}
\]
where $\exp(x)$ denotes $e^{x}$. Note that as $M\to\infty$, $P_e \to 0$ if $E_b/N_0 > \ln 2$ = $-1.59$ dB. Below we plot the exact error probability and the Gallager bound for $M$ orthogonal signals for several values of $M$.

Figure 3.2: Comparison of Gallager Bound and exact error probability for orthogonal signals ($P_e$ versus $E_b/N_0$ in dB; curves shown for $M = 8$ and $M = 64$). In each group the upper curve is the bound and the lower curve is the exact error probability.
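A small sketch, assuming NumPy (the values of $M$ are those labeled in Figure 3.2, the $E_b/N_0$ points are arbitrary), that evaluates the two-regime bound above; it can be compared directly against the exact single-integral expression of Section 1.

# Two-regime Gallager bound for M orthogonal signals, in terms of Eb/N0.
import numpy as np

def gallager_bound_orthogonal(M, EbN0_dB):
    EbN0 = 10.0 ** (np.asarray(EbN0_dB, dtype=float) / 10.0)
    E_over_N0 = np.log2(M) * EbN0          # E = Eb * log2(M)
    lnM = np.log(M)
    bound = np.ones_like(E_over_N0)        # for E/N0 < ln M the bound is trivial (rho = 0)
    mid = (E_over_N0 >= lnM) & (E_over_N0 < 4.0 * lnM)
    high = E_over_N0 >= 4.0 * lnM
    bound[mid] = np.exp(-(np.sqrt(E_over_N0[mid]) - np.sqrt(lnM)) ** 2)
    bound[high] = np.exp(lnM - E_over_N0[high] / 2.0)
    return bound

for M in (8, 64):
    print(M, gallager_bound_orthogonal(M, [0.0, 3.0, 6.0, 9.0]))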

3. Bit Error Probability

So far we have examined the symbol error probability for orthogonal signals. Usually the number of such signals is a power of 2, e.g. 2, 4, 8, 16, 32, ... If so, then each transmission of a signal carries $\log_2 M$ bits of information. In this case a communication engineer is usually interested in the bit error probability as opposed to the symbol error probability. Let $d(s_i,s_j)$ be the (Euclidean) distance between $s_i$ and $s_j$, i.e.
\[
d^2(s_i,s_j) = \int\big(s_i(t)-s_j(t)\big)^2\,dt = \sum_{l=1}^{N}\big(s_{i,l}-s_{j,l}\big)^2 .
\]
Now consider any signal set for which the distance between every pair of signals is the same. Orthogonal signal sets with equal energy satisfy this condition. Let $k = \log_2 M$. If $s_i$ is transmitted there are $M-1$ other signals to which an error can be made. The number of signals which cause an error of $i$ bits out of the $k$ is $\binom{k}{i}$. Since all signals are the same distance from $s_i$, the conditional probability of a symbol error causing $i$ bits to be in error is $\binom{k}{i}/(M-1)$. So the average number of bit errors given a symbol error is
\[
\sum_{i=1}^{k} i\,\frac{\binom{k}{i}}{M-1} = \frac{k\,2^{k-1}}{M-1}.
\]
So the probability of bit error given a symbol error is $2^{k-1}/(2^{k}-1)$, and thus
\[
P_b = \frac{2^{k-1}}{2^{k}-1}\,P_e ,
\]
and this is true for any equidistant, equienergy signal set.
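As a quick numerical check of the counting argument above (an illustrative sketch in plain Python; the value of $k$ is arbitrary):

# The binomial count reproduces the factor 2^{k-1}/(2^k - 1) relating P_b to P_e.
from math import comb

k = 5
M = 2 ** k
avg_bit_errors = sum(i * comb(k, i) for i in range(1, k + 1)) / (M - 1)   # = k*2^(k-1)/(M-1)
print(avg_bit_errors, k * M / (2 * (M - 1)))          # the two expressions agree
print("P_b / P_e =", avg_bit_errors / k)              # = 2^(k-1) / (2^k - 1)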

4. Union Bound

Assume the signals are equally likely, $\pi_i = 1/M$. Let
\[
R_i = \{\mathbf{r} : p_i(\mathbf{r}) \ge p_j(\mathbf{r}) \text{ for all } j\ne i\},
\qquad
R_i^c = \{\mathbf{r} : p_i(\mathbf{r}) < p_j(\mathbf{r}) \text{ for some } j\ne i\} = \bigcup_{j\ne i} R_{ij},
\]
where $R_{ij} = \{\mathbf{r} : p_i(\mathbf{r}) < p_j(\mathbf{r})\}$. Then
\[
P_{e,i} = P\{\mathbf{r}\in R_i^c \mid H_i\} \le \sum_{j\ne i} P\{R_{ij}\mid H_i\},
\qquad
P\{R_{ij}\mid H_i\} = P\{p_i(\mathbf{r}) < p_j(\mathbf{r}) \mid H_i\}.
\]
This is the union bound. We now consider the bound for an arbitrary signal set in additive white Gaussian noise, for which
\[
p_i(\mathbf{r}) = \prod_{l=1}^{N}\frac{1}{\sqrt{\pi N_0}}\exp\Big(-\frac{(r_l - s_{i,l})^2}{N_0}\Big).
\]
The event $p_i(\mathbf{r}) < p_j(\mathbf{r})$ is equivalent to
\[
\langle\mathbf{r},\mathbf{s}_i-\mathbf{s}_j\rangle < \frac{E_i - E_j}{2},
\qquad\text{where }
\langle\mathbf{r},\mathbf{s}_i-\mathbf{s}_j\rangle = \sum_{l=1}^{N} r_l\,(s_{i,l}-s_{j,l})
\ \text{ and }\ E_k = \sum_{l=1}^{N} s_{k,l}^2,\ 1\le k\le M .
\]
Thus
\[
P\{R_{ij}\mid H_i\} = P\Big\{\langle\mathbf{r},\mathbf{s}_i-\mathbf{s}_j\rangle < \tfrac{E_i-E_j}{2}\,\Big|\,H_i\Big\}.
\]
To do this calculation we need the statistics of the random variable $\langle\mathbf{r},\mathbf{s}_i-\mathbf{s}_j\rangle$, which is Gaussian. The mean and variance given $H_i$ are
\[
\mathbb{E}[\langle\mathbf{r},\mathbf{s}_i-\mathbf{s}_j\rangle\mid H_i] = \langle\mathbf{s}_i,\mathbf{s}_i-\mathbf{s}_j\rangle,
\qquad
\mathrm{Var}[\langle\mathbf{r},\mathbf{s}_i-\mathbf{s}_j\rangle\mid H_i]
= \mathrm{Var}[\langle\mathbf{n},\mathbf{s}_i-\mathbf{s}_j\rangle]
= \frac{N_0}{2}\,\|\mathbf{s}_i-\mathbf{s}_j\|^2 .
\]
Since $\langle\mathbf{s}_i,\mathbf{s}_i-\mathbf{s}_j\rangle - \tfrac{E_i-E_j}{2} = \tfrac12\|\mathbf{s}_i-\mathbf{s}_j\|^2$, we obtain
\[
P\{R_{ij}\mid H_i\} = Q\Big(\frac{\|\mathbf{s}_i-\mathbf{s}_j\|}{\sqrt{2N_0}}\Big).
\]
Note that $\|\mathbf{s}_i-\mathbf{s}_j\|^2 = d^2(s_i,s_j)$, i.e. the square of the Euclidean distance. Thus the union bound on the error probability is given as
\[
P_{e,i} \le \sum_{j\ne i} Q\Big(\sqrt{\frac{d^2(s_i,s_j)}{2N_0}}\Big).
\]
We now use the following fact to derive the Union-Bhattacharyya bound. This is an alternate way of obtaining it; we could have started with the Union-Bhattacharyya bound derived from the Gallager bound, but we would get the same answer.

Fact: $Q(x) \le e^{-x^2/2}$ for $x\ge 0$. (To prove this let $X_1$ and $X_2$ be independent Gaussian random variables with mean 0 and variance 1. Then show
\[
Q^2(x) = P\{X_1 > x,\ X_2 > x\} \le P\Big\{\sqrt{X_1^2+X_2^2} > \sqrt{2}\,x\Big\} = e^{-x^2},
\]
using the fact that $\sqrt{X_1^2+X_2^2}$ has a Rayleigh density; see Proakis.) Using this fact leads to the bound
\[
P_{e,i} \le \sum_{j\ne i}\exp\Big(-\frac{\|\mathbf{s}_i-\mathbf{s}_j\|^2}{4N_0}\Big).
\]
This is the Union-Bhattacharyya bound for an additive white Gaussian noise channel.
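The union bound and the Union-Bhattacharyya bound are straightforward to compute for any signal set specified by its coefficient vectors. A short sketch, assuming NumPy and SciPy; the biorthogonal example set at the end is only an illustration.

# Union and Union-Bhattacharyya bounds for an arbitrary signal set in AWGN,
# computed from the pairwise Euclidean distances as derived above.
import numpy as np
from scipy.stats import norm

def union_bounds(S, N0):
    """S: (M, N) array of signal coefficient vectors.  Returns (union bound,
    Union-Bhattacharyya bound) on the average error probability, equally likely signals."""
    M = S.shape[0]
    d2 = np.sum((S[:, None, :] - S[None, :, :]) ** 2, axis=-1)   # squared distances d_ij^2
    union = norm.sf(np.sqrt(d2 / (2.0 * N0)))                    # Q(d_ij / sqrt(2 N0))
    ubb = np.exp(-d2 / (4.0 * N0))                               # exp(-d_ij^2 / (4 N0))
    np.fill_diagonal(union, 0.0)
    np.fill_diagonal(ubb, 0.0)
    return union.sum() / M, ubb.sum() / M

# Example: 4 biorthogonal signals (+/- the two unit basis vectors), E = 1, N0 = 0.2
S = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]])
print(union_bounds(S, N0=0.2))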

5. Random Coding

Now consider the $2^{MN}$ communication systems corresponding to all possible signal sets in which each coefficient $s_{j,l}$ takes one of the two values $\pm\sqrt{E_s}$, and consider the average error probability, averaged over all possible selections of signal sets. For example, let $N = 3$ and $M = 2$. There are $2^{3\cdot2} = 2^6 = 64$ possible sets of signals, each signal a linear combination of three orthogonal functions with the coefficients required to be one of two values:

Set number 1: $s_1(t) = \sqrt{E_s}\,[\varphi_1(t)+\varphi_2(t)+\varphi_3(t)]$, $\ s_2(t) = \sqrt{E_s}\,[\varphi_1(t)+\varphi_2(t)+\varphi_3(t)]$.
Set number 2: $s_1(t) = \sqrt{E_s}\,[\varphi_1(t)+\varphi_2(t)+\varphi_3(t)]$, $\ s_2(t) = \sqrt{E_s}\,[\varphi_1(t)+\varphi_2(t)-\varphi_3(t)]$.
$\ \vdots$
Set number 64: $s_1(t) = -\sqrt{E_s}\,[\varphi_1(t)+\varphi_2(t)+\varphi_3(t)]$, $\ s_2(t) = -\sqrt{E_s}\,[\varphi_1(t)+\varphi_2(t)+\varphi_3(t)]$,

i.e. the 64 sets run through all sign patterns of the six coefficients. Let $P_{e,i}(k)$ be the error probability of signal set $k$ given $H_i$. Then the ensemble average is
\[
\overline{P}_{e,i} = \frac{1}{64}\sum_{k=1}^{64} P_{e,i}(k)
\]
(and in general the average is over all $2^{MN}$ sets). If $\overline{P}_{e,i} \le \alpha$ then at least one of the signal sets must have $P_{e,i}(k) \le \alpha$ (otherwise $P_{e,i}(k) > \alpha$ for all $k$ would give $\overline{P}_{e,i} > \alpha$; contradiction). In other words there exists a signal set with $P_{e,i} \le \alpha$. This is known as the random coding argument.

Let $s_{j,l}$, $1\le j\le M$, $1\le l\le N$, be independent identically distributed random variables with $P\{s_{j,l} = +\sqrt{E_s}\} = P\{s_{j,l} = -\sqrt{E_s}\} = \tfrac12$. Using the Union-Bhattacharyya bound,
\[
\overline{P}_{e,i} = \mathbb{E}[P_{e,i}(\mathbf{s})]
\le \mathbb{E}\Big[\sum_{j\ne i}\exp\Big(-\frac{\|\mathbf{s}_i-\mathbf{s}_j\|^2}{4N_0}\Big)\Big],
\]
where the expectation is with respect to the random variables $\mathbf{s}$. Let $X_{ij} = \|\mathbf{s}_i-\mathbf{s}_j\|^2 = \sum_l (s_{i,l}-s_{j,l})^2$. Then, since $P\{s_{i,l}=s_{j,l}\} = P\{s_{i,l}\ne s_{j,l}\} = \tfrac12$,
\[
P\{X_{ij} = 4E_s m\} = \binom{N}{m}2^{-N}
\qquad (\mathbf{s}_i \text{ and } \mathbf{s}_j \text{ differ in } m \text{ places out of } N).
\]
So
\[
\mathbb{E}\Big[\exp\Big(-\frac{X_{ij}}{4N_0}\Big)\Big]
= \sum_{m=0}^{N}\binom{N}{m}2^{-N}\,e^{-mE_s/N_0}
= \Big(\frac{1+e^{-E_s/N_0}}{2}\Big)^{N}
= 2^{-N\left[1-\log_2\left(1+e^{-E_s/N_0}\right)\right]}
= 2^{-NR_0},
\]
where $R_0 = 1 - \log_2\big(1+e^{-E_s/N_0}\big)$. Let $R = \frac{\log_2 M}{N}$ be the number of bits transmitted per dimension and $E_s$ the signal energy per dimension. Then
\[
\overline{P}_{e,i} \le (M-1)\,2^{-NR_0} < M\,2^{-NR_0} = 2^{-N(R_0-R)} .
\]
We have shown that there exists a signal set for which the error probability for the $i$-th signal is small; thus, as $N\to\infty$, the error probability given $s_i$ was transmitted goes to zero if the rate is less than the cutoff rate $R_0$. This, however, does not imply that there exists a code $\mathbf{s}_1,\dots,\mathbf{s}_M$ such that $P_{e,1},\dots,P_{e,M}$ are simultaneously small: it is possible that $P_{e,i}$ is small for some code for which $P_{e,j}$ is large. We now show that we can make each of the error probabilities small simultaneously. First choose a code with $M = 2^{NR}$ codewords for which the average (over the codewords) error probability is less than, say, $\varepsilon$ for large $N$. If more than half of these codewords had $P_{e,i} \ge 2\varepsilon$ then the average error probability would be greater than $\varepsilon$, a contradiction. Thus at least half of the codewords must have $P_{e,i} < 2\varepsilon$. So delete the codewords that have $P_{e,i} \ge 2\varepsilon$ (less than half). We obtain a code with (at least) $2^{NR-1}$ codewords, each with $P_{e,i} < 2\varepsilon \to 0$ as $N\to\infty$, at rate $R' = R - 1/N < R_0$. Thus we have proved the following.

Theorem: There exists a signal set with $M = 2^{NR}$ signals in $N$ dimensions with
\[
P_e \le 2^{-N(R_0-R)} \to 0 \quad\text{as } N\to\infty,
\]
provided $R < R_0$.

Note: $E_s$ is the energy per dimension. Each signal then has energy $E = NE_s$ and is transmitting $\log_2 M = NR$ bits of information, so that
\[
E_b = \frac{E}{\log_2 M} = \frac{NE_s}{NR} = \frac{E_s}{R}
\]
is the energy per bit of information. From the theorem, reliable communication ($P_e\to 0$) is possible provided $R < R_0$, i.e.
\[
R < 1 - \log_2\big(1 + e^{-E_s/N_0}\big) = 1 - \log_2\big(1 + e^{-R\,E_b/N_0}\big)
\quad\Longleftrightarrow\quad
\frac{E_b}{N_0} > \frac{-\ln\!\big(2^{\,1-R}-1\big)}{R}.
\]
For $R\to 0$ the right-hand side tends to $2\ln 2$, so $P_e\to 0$ if $E_b/N_0 > 2\ln 2$. Note: $M$ orthogonal signals have $P_e\to 0$ if $E_b/N_0 > \ln 2$. The rate of orthogonal signals is
\[
R = \frac{\log_2 M}{N} = \frac{\log_2 M}{M} \to 0 \quad\text{as } M\to\infty,
\]
whereas the theorem guarantees the existence of signal sets with $\frac{\log_2 M}{N} = R$ fixed and $P_e\to 0$ as $M\to\infty$.
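The threshold $E_b/N_0 > -\ln(2^{1-R}-1)/R$ is what the achievable region of Figure 3.3 (below) depicts. A small sketch, assuming NumPy, that evaluates this threshold and its $R\to 0$ limit of $2\ln 2 \approx 1.42$ dB:

# Cutoff-rate achievability threshold: smallest Eb/N0 (in dB) for which R < R0,
# using R0 = 1 - log2(1 + exp(-Es/N0)) and Es = R*Eb.
import numpy as np

def min_EbN0_dB(R):
    """Valid for 0 < R < 1 (bits per dimension)."""
    ebn0 = -np.log(2.0 ** (1.0 - R) - 1.0) / R
    return 10.0 * np.log10(ebn0)

for R in (0.1, 0.25, 0.5, 0.75, 0.9):
    print(R, round(min_EbN0_dB(R), 2))
# As R -> 0 the threshold approaches 2*ln(2), about 1.42 dB, as noted above.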

Figure 3.3: Cutoff Rate for Binary Input-Continuous Output Channel (achievable region of $E_b/N_0$ in dB versus code rate).

Figure 3.4: Error Probabilities Based on Cutoff Rate for Binary Input-Continuous Output Channel for Rate 1/2 codes ($P_e$ versus $E_b/N_0$ in dB for several block lengths $n$).

Figure 3.5: Error Probabilities Based on Cutoff Rate for Binary Input-Continuous Output Channel for Rate 1/8 codes ($P_e$ versus $E_b/N_0$ in dB for several block lengths $n$).

Figure 3.6: Error Probabilities Based on Cutoff Rate for Binary Input-Continuous Output Channel for Rate 3/4 codes ($P_e$ versus $E_b/N_0$ in dB for several block lengths $n$).
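The curves in Figures 3.4-3.6 follow from the cutoff-rate theorem, $P_e \le 2^{-N(R_0-R)}$ with $E_s = R\,E_b$. A sketch assuming NumPy; the code rate and block lengths below are merely illustrative choices.

# Cutoff-rate error bound P_e <= 2^{-N (R0 - R)} versus Eb/N0.
import numpy as np

def pe_cutoff_bound(N, R, EbN0_dB):
    EbN0 = 10.0 ** (np.asarray(EbN0_dB, dtype=float) / 10.0)
    EsN0 = R * EbN0                                    # energy per dimension over N0
    R0 = 1.0 - np.log2(1.0 + np.exp(-EsN0))
    return np.minimum(2.0 ** (-N * (R0 - R)), 1.0)     # the bound is vacuous above 1

EbN0_dB = np.arange(0.0, 10.5, 0.5)
for N in (50, 100, 200, 500):
    print(N, pe_cutoff_bound(N, 0.5, EbN0_dB)[:5])     # first few values, rate 1/2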

Example of Gallager bound for M-ary signals in AWGN.

In this section we evaluate the Gallager bound for an arbitrary signal set in an additive white Gaussian noise channel. As usual, assume the transmitted signal set has the form
\[
s_i(t) = \sum_{j=1}^{N}\mu_{i,j}\,\varphi_j(t), \qquad 0 \le i \le M-1 .
\]
The optimal receiver does a correlation with each of the orthonormal functions to produce the decision statistic $\mathbf{r} = (r_1,\dots,r_N)$. The conditional density function of $\mathbf{r}$ given that signal $s_i(t)$ was transmitted is
\[
p_i(\mathbf{r}) = \prod_{k=1}^{N}\frac{1}{\sqrt{\pi N_0}}\exp\Big(-\frac{(r_k-\mu_{i,k})^2}{N_0}\Big).
\]
If we substitute this into the general form of the Gallager bound,
\[
P_{e,i} \le \int p_i(\mathbf{r})^{1/(1+\rho)}\Big[\sum_{j\ne i} p_j(\mathbf{r})^{1/(1+\rho)}\Big]^{\rho} d\mathbf{r},
\]
the $N$-dimensional integral can be carried out coordinate-by-coordinate by completing the square in each $r_k$, with the sum over $j\ne i$ handled by Jensen's inequality exactly as in the orthogonal-signal example. The resulting bound depends on the signal set only through the energies $\|\boldsymbol{\mu}_i\|^2$ and inner products $\langle\boldsymbol{\mu}_i,\boldsymbol{\mu}_j\rangle$, i.e. only through the squared Euclidean distances $d_{ij}^2 = \|\boldsymbol{\mu}_i-\boldsymbol{\mu}_j\|^2$. When the signals are all orthogonal to each other with equal energy $E$, then $d_{ij}^2 = 2E$ for $j\ne i$ and $\langle\boldsymbol{\mu}_i,\boldsymbol{\mu}_j\rangle = 0$, and the bound becomes

\[
P_{e,i} \le (M-1)^{\rho}\exp\Big(-\frac{\rho}{1+\rho}\,\frac{E}{N_0}\Big),
\]
which is identical to the previous expression.

Now we consider a couple of different signal sets. The first signal set has 16 signals in seven dimensions. The energy in each dimension is $E_s$, so the total energy transmitted is $7E_s$. The energy transmitted per information bit is $E_b = 7E_s/4$. The geometry of the signal set is such that for any signal there are seven other signals at squared Euclidean distance $12E_s$, seven other signals at squared Euclidean distance $16E_s$, and one other signal at squared distance $28E_s$. All signals have energy $7E_s$. (This is called the Hamming code.) The fact that the signal set is geometrically uniform is due to the linearity of the code. We plot the Gallager bound optimized over $0\le\rho\le1$; the Union-Bhattacharyya bound is the Gallager bound with $\rho = 1$.

The second signal set has 256 signals in 16 dimensions, with 112 signals at squared distance $24E_s$, 30 signals at squared distance $32E_s$, 112 signals at squared distance $40E_s$, and one signal at squared distance $64E_s$ from any given signal. In this case $E_b = 2E_s$. (This is the Nordstrom-Robinson code.)

As can be seen from the figures, the union bound is the tightest bound except at very low signal-to-noise ratios, where the Gallager bound stays below the union bound. At reasonable signal-to-noise ratios the optimum $\rho$ in the Gallager bound is $\rho = 1$ and thus it reduces to the Union-Bhattacharyya bound.

Figure 3.7: Comparison of Gallager Bound, Union Bound and Union-Bhattacharyya Bound for the Hamming Code with BPSK Modulation ($P_e$ versus $E_b/N_0$ in dB).

Figure 3.8: Comparison of Gallager Bound, Union Bound and Union-Bhattacharyya Bound for the Nordstrom-Robinson code with BPSK Modulation ($P_e$ versus $E_b/N_0$ in dB).
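A sketch, assuming NumPy and SciPy, of how the union and Union-Bhattacharyya curves in Figures 3.7 and 3.8 can be computed from the distance spectra quoted above; the Gallager curve would additionally require optimizing the bound over $\rho$, which is omitted here.

# Union and Union-Bhattacharyya bounds from a per-signal distance spectrum.
# Squared distances are given in units of Es (the energy per dimension).
import numpy as np
from scipy.stats import norm

def bounds_from_spectrum(spectrum, Es_over_N0):
    """spectrum: list of (multiplicity, squared distance / Es).
    Returns (union bound, Union-Bhattacharyya bound) on P_e given any transmitted signal."""
    union = sum(a * norm.sf(np.sqrt(d2 * Es_over_N0 / 2.0)) for a, d2 in spectrum)
    ubb = sum(a * np.exp(-d2 * Es_over_N0 / 4.0) for a, d2 in spectrum)
    return union, ubb

hamming = dict(spectrum=[(7, 12), (7, 16), (1, 28)], Eb_over_Es=7.0 / 4.0)   # (7,4) Hamming
nordstrom = dict(spectrum=[(112, 24), (30, 32), (112, 40), (1, 64)], Eb_over_Es=2.0)
for name, code in (("Hamming", hamming), ("Nordstrom-Robinson", nordstrom)):
    EbN0 = 10.0 ** (6.0 / 10.0)                         # Eb/N0 = 6 dB
    EsN0 = EbN0 / code["Eb_over_Es"]
    print(name, bounds_from_spectrum(code["spectrum"], EsN0))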

6. Problems

1. Using L'Hospital's rule on the logarithm of $\Phi^{M-1}(\cdot)$, show that if $E = E_b\log_2 M$ then
\[
\lim_{M\to\infty}\Phi^{M-1}\Big(x + \sqrt{\tfrac{2E}{N_0}}\Big)
= \begin{cases} 1, & E_b/N_0 > \ln 2,\\ 0, & E_b/N_0 < \ln 2,\end{cases}
\tag{3.1}
\]
and consequently
\[
\lim_{M\to\infty} P_{e,s} = \begin{cases} 0, & E_b/N_0 > \ln 2,\\ 1, & E_b/N_0 < \ln 2,\end{cases}
\]
where $P_{e,s}$ is the error probability for $M$ orthogonal signals.

2. (a) Show that for any equienergy, equidistant (real) signal set, $\langle\mathbf{s}_i,\mathbf{s}_j\rangle$ is a constant for $i\ne j$. (Note: equienergy implies $\langle\mathbf{s}_i,\mathbf{s}_i\rangle = E$ is a constant and equidistant implies $\|\mathbf{s}_i-\mathbf{s}_j\|^2$ is a constant.)
(b) For any equienergy signal set show that
\[
\langle\mathbf{s}_i,\mathbf{s}_j\rangle_{\mathrm{ave}}
:= \frac{1}{M(M-1)}\sum_{i=1}^{M}\sum_{j\ne i}\langle\mathbf{s}_i,\mathbf{s}_j\rangle \ \ge\ \frac{-E}{M-1}.
\]
(Note: $\big\|\sum_{i=1}^{M}\mathbf{s}_i\big\|^2 \ge 0$.)

3. ($R_0$ coding theorem for discrete memoryless channels.) Consider the discrete memoryless channel (DMC) which has input alphabet $A = \{a_1,\dots,a_{|A|}\}$ (with $|A|$, the number of letters in $A$, finite) and output alphabet $B = \{b_1,\dots,b_{|B|}\}$ (with $|B|$, the number of letters in $B$, finite). As in the Gaussian noise channel, the channel is characterized by a set of transition probabilities $p(b\mid a)$, $a\in A$, $b\in B$, such that if we transmit a signal $s_i(t)$ where
\[
s_i(t) = \sum_{l=1}^{N} s_{i,l}\,\varphi_l(t)
\]
with $s_{i,l}\in A$, then the received signal has the form
\[
r(t) = \sum_{l=1}^{N} r_l\,\varphi_l(t)
\]
with $r_l\in B$ and
\[
p(\mathbf{r}\mid\mathbf{s}_i) = \prod_{l=1}^{N} p(r_l\mid s_{i,l}). \tag{1}
\]
Now we come to the $R_0$ coding theorem for a discrete (finite alphabet) memoryless (equation (1) is satisfied) channel (DMC). Prove that there exist $M = 2^{NR}$ signals in $N$ dimensions such that
\[
P_{e,i} \le 2^{-N(R_0 - R)}, \qquad R = \frac{\log_2 M}{N},
\]
where
\[
R_0 = -\log_2 J, \qquad J = \min_{p(x)} J(X_1,X_2), \qquad
J(X_1,X_2) = \mathbb{E}\Big[\sum_{b\in B}\sqrt{p(b\mid X_1)\,p(b\mid X_2)}\Big], \tag{2}
\]
and in (2) $X_1$ and $X_2$ are independent, identically distributed random variables with common distribution $p(x)$ on $A$.

Step 1: Let $M = 2$ and let $\mathbf{s}_1 = (s_{1,1},\dots,s_{1,N})$ and $\mathbf{s}_2 = (s_{2,1},\dots,s_{2,N})$. The decoder will not decide $\mathbf{s}_1$ if $p(\mathbf{r}\mid\mathbf{s}_2) \ge p(\mathbf{r}\mid\mathbf{s}_1)$. Let
\[
R_1^c = \{\mathbf{r} : p(\mathbf{r}\mid\mathbf{s}_2) \ge p(\mathbf{r}\mid\mathbf{s}_1)\},
\qquad
R_2^c = \{\mathbf{r} : p(\mathbf{r}\mid\mathbf{s}_1) \ge p(\mathbf{r}\mid\mathbf{s}_2)\}.
\]
(Note that $R_1^c$ and $R_2^c$ may not be disjoint.) Show that
\[
P_{e,1} = \sum_{\mathbf{r}\in R_1^c} p(\mathbf{r}\mid\mathbf{s}_1)
\le \sum_{\text{all }\mathbf{r}}\sqrt{p(\mathbf{r}\mid\mathbf{s}_1)\,p(\mathbf{r}\mid\mathbf{s}_2)}
= \prod_{l=1}^{N}\sum_{r_l\in B}\sqrt{p(r_l\mid s_{1,l})\,p(r_l\mid s_{2,l})}
= \prod_{l=1}^{N} J(s_{1,l},s_{2,l}),
\]
where $J(a,a') = \sum_{b\in B}\sqrt{p(b\mid a)\,p(b\mid a')}$.

Step 2: Apply the union bound to obtain, for $M$ signals (codewords),
\[
P_{e,i} \le \sum_{j\ne i}\prod_{l=1}^{N} J(s_{i,l},s_{j,l}).
\]

Step 3: Average over all possible signal sets where the signals are chosen independently according to the distribution that achieves $J$ (i.e. the distribution $p(x)$ on $A$ such that $J = J(X_1,X_2)$); treating $s_{j,l}$, $1\le j\le M$, $1\le l\le N$, as i.i.d. random variables with distribution $p(x)$, show that
\[
\overline{P}_{e,i} = \mathbb{E}[P_{e,i}] \le (M-1)\,J^{N} \le 2^{-N(R_0-R)} .
\]

Step 4: Complete the proof.

4. Show for a binary symmetric channel, defined as $A = B = \{0,1\}$ and
\[
p(b\mid a) = \begin{cases} 1-p, & b = a,\\ p, & b\ne a,\end{cases}
\]
that
\[
R_0 = 1 - \log_2\!\big(1 + 2\sqrt{p(1-p)}\big).
\]

5. A set of 16 signals is constructed in 7 dimensions using only two possible coefficients, i.e. $s_{i,j}\in\{+\sqrt{E_s},-\sqrt{E_s}\}$. Let $A_k = |\{(i,j) : \|\mathbf{s}_i-\mathbf{s}_j\|^2 = 4kE_s\}|$, i.e. $A_k$ is the number of signal pairs with squared distance $4kE_s$. The signals are chosen so that
\[
A_k = \begin{cases} 16, & k = 0,\\ 112, & k = 3, 4,\\ 0, & k = 1, 2, 5, 6,\\ 16, & k = 7.\end{cases}
\]
Find the union bound and the Union-Bhattacharyya bound on the error probability of the optimum receiver in additive white Gaussian noise with two-sided power spectral density $N_0/2$.
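Referring back to Problems 3 and 4: it can be helpful to evaluate $J(X_1,X_2)$ numerically for a candidate input distribution. A short sketch, assuming NumPy, checked against the binary symmetric channel closed form stated in Problem 4 (using a uniform input distribution):

# Evaluate J(X1, X2) for a DMC given an input distribution px and transition matrix P.
import numpy as np

def J_value(px, P):
    """px: distribution over A; P: |A| x |B| matrix of p(b|a).
    J = E[ sum_b sqrt(p(b|X1) p(b|X2)) ] with X1, X2 i.i.d. ~ px,
    which equals sum_b ( sum_a px(a) sqrt(p(b|a)) )^2."""
    s = np.sqrt(P).T @ np.asarray(px)      # s[b] = sum_a px[a] * sqrt(P[a, b])
    return float(np.sum(s ** 2))

p = 0.05                                   # BSC crossover probability
P_bsc = np.array([[1 - p, p], [p, 1 - p]])
J = J_value([0.5, 0.5], P_bsc)
print(-np.log2(J), 1.0 - np.log2(1.0 + 2.0 * np.sqrt(p * (1 - p))))   # these agree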

6. A modulator uses two orthonormal signals ($\varphi_1(t)$ and $\varphi_2(t)$) to transmit 3 bits of information (8 possible equally likely signals) over an additive white Gaussian noise channel with power spectral density $N_0/2$. The eight signals $s_1(t),\dots,s_8(t)$ are each of the form
\[
s_i(t) = a_i\,\varphi_1(t) + b_i\,\varphi_2(t),
\]
where the coefficients $(a_i,b_i)$ depend on a parameter $y$; the constellation, with coordinates involving the values $\pm 3$ and the parameter $y$, is shown in the figure below.

(a) Determine the optimum value of the parameter $y$ to minimize the average signal power transmitted. Draw the optimum decision regions for the signal set (in two dimensions).
(b) Determine the union bound on the average error probability in terms of the energy per information bit $E_b$ to noise density $N_0$ ratio.
(c) Can you tighten this bound?

[Constellation diagram: the eight signal points in the ($\varphi_1$, $\varphi_2$) plane.]

7. Consider an additive (nonwhite) Gaussian noise channel with one of two signals transmitted. Assume the noise has covariance function $K_n(t,s)$. Using the Bhattacharyya bound, show that the error probability when transmitting one of two signals ($s_1(t)$ or $s_2(t)$) can be bounded by
\[
P_e \le \exp\Big(-\frac{1}{8}\int\!\!\int\big(s_1(t)-s_2(t)\big)\,K_n^{-1}(t,u)\,\big(s_1(u)-s_2(u)\big)\,dt\,du\Big),
\]
where $K_n^{-1}$ denotes the inverse kernel of $K_n$.

If the noise is white, what does the bound become?

8. Consider a Poisson channel with one of 4 signals transmitted. Let the signals be as shown below. Assume that when the signal is present the intensity of the photon process is $\lambda_1$ and when the signal is not present the intensity is $\lambda_0$. That is, the received signal during an interval of length $T$ is Poisson with parameter $\lambda_1 T$ if the laser is on and $\lambda_0 T$ if the laser is off. Find the optimal receiver for minimizing the probability of error for a signal (as opposed to a bit). Find an upper bound on the error probability.

[Signal timing diagram: $s_1(t)$ has intensity $\lambda_1$ on $[0,4T]$, $s_2(t)$ on $[T,5T]$, $s_3(t)$ on $[2T,6T]$, $s_4(t)$ on $[3T,7T]$, and intensity $\lambda_0$ elsewhere.]

9. A signal set consists of 256 signals in 16 dimensions with the coefficients being either $+\sqrt{E_s}$ or $-\sqrt{E_s}$. The distance structure is given as
\[
A_k = |\{(i,j) : \|\mathbf{s}_i-\mathbf{s}_j\|^2 = 4kE_s\}|
= \begin{cases} 256, & k = 0,\\ 28672, & k = 6,\\ 7680, & k = 8,\\ 28672, & k = 10,\\ 256, & k = 16,\\ 0, & \text{otherwise}.\end{cases}
\]
These signals are transmitted with equal probability over an additive white Gaussian noise channel.

Determine the union bound on the error probability. Determine the Union-Bhattacharyya bound on the error probability. Express your answer in terms of the energy transmitted per bit. What is the rate of the code in bits/dimension?

10. A communication system uses $N$ dimensions and a code rate of $R$ bits/dimension. The goal is not low error probability but high throughput (expected number of successfully received information bits per coded bit in a block of length $N$). If we use a low code rate then we have a high success probability for a packet but few information bits. If we use a high code rate then we have a low success probability but a larger number of bits transmitted. Assume the channel is an additive white Gaussian noise channel and the input is restricted to binary modulation (each coefficient in the orthonormal expansion is either $+\sqrt{E_s}$ or $-\sqrt{E_s}$). Assume as well that the error probability is related to the block length, energy per dimension and code rate via the cutoff rate theorem (soft decisions). Find (and plot) the throughput for code rates varying from 0.1 to 0.9 in steps of 0.1 as a function of the energy per information bit. (Use Matlab to plot the throughput.) Assume $N = 512$. Be sure to normalize the energy per coded bit to the energy per information bit. Compare the throughput of hard decision decoding (BPSK and AWGN) and soft decision decoding.