
Baseband Transmission of Binary Signals

Let $g_i(t)$, $i = 0, 1$, be a signal transmitted over an AWGN channel, and consider the following receiver:

[Figure: $g_i(t)$ plus noise $W(t)$ gives $x(t)$, which passes through an LTI filter $h(t)$; the output $y(t)$ is sampled at $t = nT$ and applied to a threshold comparator that produces the decision $\hat b_n$.]

Assuming that the transmission delay is negligible, we have
$$x(t) = g_i(t) + W(t), \quad (n-1)T \le t \le nT, \quad n = 1, 2, \dots$$
where $W(t)$ is a zero-mean white Gaussian noise process with psd $N_0/2$ for all $f$.

At the output of the LTI filter,
$$y(t) = \int x(\tau)h(t-\tau)\,d\tau = \int [g_i(\tau)+W(\tau)]\,h(t-\tau)\,d\tau = \int g_i(\tau)h(t-\tau)\,d\tau + \int W(\tau)h(t-\tau)\,d\tau.$$
At the sampling instant $t = nT$,
$$y(nT) = \int g_i(\tau)h(nT-\tau)\,d\tau + \int W(\tau)h(nT-\tau)\,d\tau.$$
Let $g_1(t)$ be the transmitted pulse when a logical 1 is sent and let $g_0(t)$ be the transmitted pulse when a logical 0 is sent. Then a possible decision strategy is:

If $y(nT) > A$, decide that $g_1(t)$ was transmitted (1 sent).
If $y(nT) \le A$, decide that $g_0(t)$ was transmitted (0 sent).

Since $h(t)$ is LTI, the observation $y(nT)$, given the knowledge of $g_i(t)$, is a Gaussian random variable. Hence, to establish the decision criterion we only need to compute the conditional mean and variance of $y(nT)$ given $g_i(t)$.

The conditional expected value (mean) of the filter output given that $g_i(t)$ was sent is
$$E\{y(nT)\mid g_i(t)\} = E\left\{\int g_i(\tau)h(nT-\tau)\,d\tau \,\Big|\, g_i(t)\right\} + E\left\{\int W(\tau)h(nT-\tau)\,d\tau \,\Big|\, g_i(t)\right\}$$
$$= \int g_i(\tau)h(nT-\tau)\,d\tau + \int E\{W(\tau)\}\,h(nT-\tau)\,d\tau = \int g_i(\tau)h(nT-\tau)\,d\tau \triangleq G_i, \quad i = 0, 1.$$

Moreover, the conditional variance is
$$\mathrm{Var}\{y(nT)\mid g_i(t)\} = E\big\{[y(nT) - E\{y(nT)\mid g_i(t)\}]^2 \,\big|\, g_i(t)\big\} = E\left\{\left|\int W(\tau)h(nT-\tau)\,d\tau\right|^2\right\}$$
$$= \int\!\!\int E\{W(\tau)W^*(\lambda)\}\,h(nT-\tau)\,h^*(nT-\lambda)\,d\tau\,d\lambda = \frac{N_0}{2}\int\!\!\int \delta(\tau-\lambda)\,h(nT-\tau)\,h^*(nT-\lambda)\,d\tau\,d\lambda$$
$$= \frac{N_0}{2}\int |h(nT-\lambda)|^2\,d\lambda = \frac{N_0}{2}\int |h(\lambda)|^2\,d\lambda = \frac{N_0}{2}\int |H(f)|^2\,df \triangleq \sigma^2.$$

Therefore,
$$f(y(nT)\mid \text{"0" sent}) = \frac{1}{\sqrt{2\pi\sigma^2}}\,e^{-(y-G_0)^2/2\sigma^2}, \qquad f(y(nT)\mid \text{"1" sent}) = \frac{1}{\sqrt{2\pi\sigma^2}}\,e^{-(y-G_1)^2/2\sigma^2}.$$
Therefore, an error will occur if we either (a) choose 1 when 0 was sent, or (b) choose 0 when 1 was sent. Mathematically,
$$P\{\text{choose "0"}\mid\text{"1" was sent}\} = P\{y(nT)\le A\mid\text{"1" was sent}\} = \int_{-\infty}^{A} f(y(nT)\mid\text{"1" was sent})\,dy,$$
$$P\{\text{choose "1"}\mid\text{"0" was sent}\} = P\{y(nT) > A\mid\text{"0" was sent}\} = \int_{A}^{\infty} f(y(nT)\mid\text{"0" was sent})\,dy.$$

So, the average probability of a bit error (BER) is given by
$$P\{\text{bit error}\} = P\{\text{bit error and "0" was sent}\} + P\{\text{bit error and "1" was sent}\}$$
$$= P\{\text{choose "1" and "0" was sent}\} + P\{\text{choose "0" and "1" was sent}\}$$
$$= P\{\text{choose "1"}\mid\text{"0" was sent}\}\,P\{\text{"0" was sent}\} + P\{\text{choose "0"}\mid\text{"1" was sent}\}\,P\{\text{"1" was sent}\}$$
$$= p\int_A^\infty f(y(nT)\mid\text{"0" was sent})\,dy + (1-p)\int_{-\infty}^A f(y(nT)\mid\text{"1" was sent})\,dy,$$
where $p = P\{\text{"0" was sent}\}$ and $1-p = P\{\text{"1" was sent}\}$. The decision regions are separated by the threshold $A$. See the next figure, where the threshold $A$ is taken midway between $G_0$ and $G_1$ because the bits occur with equal probability, i.e., $p = 0.5$ (we will discuss this issue next).

[Figure: the conditional densities $f(y\mid\text{"0" was sent})$ and $f(y\mid\text{"1" was sent})$, two equal-variance Gaussian curves centered at $G_0$ and $G_1$, with the threshold between them.]

The question now is: how do we choose $A$ optimally? To find the optimal value of $A$, we minimize the probability of error with respect to it, i.e.,
$$\frac{d}{dA}P\{\text{bit error}\} = \frac{d}{dA}\left[p\int_A^\infty f(y(nT)\mid\text{"0" sent})\,dy + (1-p)\int_{-\infty}^A f(y(nT)\mid\text{"1" sent})\,dy\right]$$
$$= -p\,f(A\mid\text{"0" sent}) + (1-p)\,f(A\mid\text{"1" sent}) = 0,$$
or
$$\frac{1}{\sqrt{2\pi\sigma^2}}\left[-p\,e^{-(A-G_0)^2/2\sigma^2} + (1-p)\,e^{-(A-G_1)^2/2\sigma^2}\right] = 0
\;\Rightarrow\; \frac{e^{-(A-G_1)^2/2\sigma^2}}{e^{-(A-G_0)^2/2\sigma^2}} = \frac{p}{1-p},$$
or
$$\frac{(A-G_0)^2 - (A-G_1)^2}{2\sigma^2} = \ln\frac{p}{1-p}
\;\Rightarrow\; A^2 - 2AG_0 + G_0^2 - A^2 + 2AG_1 - G_1^2 = 2A(G_1-G_0) - (G_1^2-G_0^2) = 2\sigma^2\ln\frac{p}{1-p}.$$

The optimal value of $A$ can now be found by solving the following equation for $A$:
$$2A(G_1-G_0) - (G_1-G_0)(G_1+G_0) = 2\sigma^2\ln\frac{p}{1-p}.$$
Namely,
$$A_{opt} = \frac{G_0+G_1}{2} + \frac{\sigma^2}{G_1-G_0}\ln\frac{p}{1-p}.$$
If both symbols are transmitted with equal probability, i.e., $p = 1/2$, then $A_{opt} = (G_0+G_1)/2$, namely, the arithmetic average of the means at the output of $h(t)$ at $t = nT$. When the two symbols are not transmitted with equal probability, the optimal threshold $A_{opt}$ will shift to the right or to the left, depending on which symbol occurs with higher probability.
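The closed-form threshold above is easy to sanity-check numerically. The sketch below (with illustrative values for $G_0$, $G_1$, $\sigma$, not taken from the notes) confirms that equal priors give the midpoint and that a more probable "0" pushes the threshold toward $G_1$:

```python
import math

def optimal_threshold(G0, G1, sigma, p):
    """A_opt = (G0+G1)/2 + sigma^2/(G1-G0) * ln(p/(1-p)),
    where p = P{"0" sent} and G1 > G0."""
    return (G0 + G1) / 2 + (sigma**2 / (G1 - G0)) * math.log(p / (1 - p))

# Equal priors: the threshold is the midpoint of the two conditional means.
A_eq = optimal_threshold(G0=-2.0, G1=2.0, sigma=1.0, p=0.5)   # 0.0

# "0" more likely (p = 0.7): the threshold moves toward G1, enlarging
# the decision region {y <= A} associated with "0".
A_shift = optimal_threshold(G0=-2.0, G1=2.0, sigma=1.0, p=0.7)
assert abs(A_eq) < 1e-12 and A_shift > 0.0
```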

In the case of equal probability of occurrence,
$$P\{\text{bit error}\} = \frac12\int_{A_{opt}}^\infty f(y(nT)\mid\text{"0" sent})\,dy + \frac12\int_{-\infty}^{A_{opt}} f(y(nT)\mid\text{"1" sent})\,dy, \qquad A_{opt} = \frac{G_0+G_1}{2},$$
$$= \frac12\int_{(G_0+G_1)/2}^{\infty}\frac{e^{-(y-G_0)^2/2\sigma^2}}{\sqrt{2\pi\sigma^2}}\,dy + \frac12\int_{-\infty}^{(G_0+G_1)/2}\frac{e^{-(y-G_1)^2/2\sigma^2}}{\sqrt{2\pi\sigma^2}}\,dy.$$
With the substitutions $u = (y-G_0)/\sigma$ and $v = -(y-G_1)/\sigma$, both integrals reduce to the same Gaussian tail:
$$P\{\text{bit error}\} = \int_{(G_1-G_0)/2\sigma}^{\infty}\frac{1}{\sqrt{2\pi}}\,e^{-u^2/2}\,du = Q\!\left(\frac{G_1-G_0}{2\sigma}\right),$$
where
$$Q(x) \triangleq \frac{1}{\sqrt{2\pi}}\int_x^\infty e^{-u^2/2}\,du$$
is the area under the tail of the Gaussian pdf with zero mean and unit variance, i.e., the area to the right of $x$ under
$$f_X(x) = \frac{1}{\sqrt{2\pi}}\,e^{-x^2/2}.$$
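As a check on this result, the $Q$ function can be evaluated through the complementary error function, and the BER expression above can be compared against a brute-force numerical evaluation of the two error integrals (the particular values $G_0 = -2$, $G_1 = 2$, $\sigma = 1$ are illustrative, not from the notes):

```python
import math

def Q(x):
    """Gaussian tail Q(x) = (1/sqrt(2*pi)) * integral_x^inf exp(-u^2/2) du,
    computed as 0.5*erfc(x/sqrt(2))."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def ber_numeric(G0, G1, sigma, A, n=80000, lo=-20.0, hi=20.0):
    """Midpoint-rule evaluation of the two error integrals for equal priors p = 1/2."""
    dy = (hi - lo) / n
    err = 0.0
    for k in range(n):
        y = lo + (k + 0.5) * dy
        f0 = math.exp(-(y - G0)**2 / (2*sigma**2)) / math.sqrt(2*math.pi*sigma**2)
        f1 = math.exp(-(y - G1)**2 / (2*sigma**2)) / math.sqrt(2*math.pi*sigma**2)
        if y > A:
            err += 0.5 * f0 * dy   # chose "1" when "0" was sent
        else:
            err += 0.5 * f1 * dy   # chose "0" when "1" was sent
    return err

G0, G1, sigma = -2.0, 2.0, 1.0
closed = Q((G1 - G0) / (2*sigma))                 # Q(2)
numeric = ber_numeric(G0, G1, sigma, A=(G0+G1)/2)
assert abs(closed - numeric) < 1e-4
```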

It should be clear from the previous derivation that both $G_0$ and $G_1$ depend on $h(t)$, the impulse response of the receiver filter. Also, $Q\!\left(\frac{G_1-G_0}{2\sigma}\right)$ decreases as $\frac{G_1-G_0}{\sigma}$ increases, i.e., the average probability of error, $P\{\text{bit error}\}$, decreases as the separation between $G_0$ and $G_1$ increases.

Let us now find the $h(t)$ that will result in the minimum probability of bit error. To do this, consider the following optimization problem: maximize over all possible $h(t)$ the square of the argument of the $Q$ function, i.e.,
$$\max_{h(t)}\; \frac{(G_1-G_0)^2}{\sigma^2}.$$

$$\max_{h(t)}\frac{(G_1-G_0)^2}{\sigma^2} = \max_{h(t)}\frac{\left[\int h(\tau)\,g_1(nT-\tau)\,d\tau - \int h(\tau)\,g_0(nT-\tau)\,d\tau\right]^2}{\frac{N_0}{2}\int |H(f)|^2\,df}$$
$$= \max_{h(t)}\frac{\left[\int h(\tau)\,[g_1(nT-\tau)-g_0(nT-\tau)]\,d\tau\right]^2}{\frac{N_0}{2}\int |H(f)|^2\,df} = \max_{h(t)}\frac{\big(h(t)*[g_1(t)-g_0(t)]\big)^2\Big|_{t=nT}}{\frac{N_0}{2}\int |H(f)|^2\,df}.$$

But
$$\big(h(t)*[g_1(t)-g_0(t)]\big)\Big|_{t=nT} = \int H(f)\,[G_1(f)-G_0(f)]\,e^{j2\pi fnT}\,df.$$
Using the Schwarz inequality, we get
$$\left|\int H(f)\,[G_1(f)-G_0(f)]\,e^{j2\pi fnT}\,df\right|^2 \le \int |H(f)|^2\,df \int |G_1(f)-G_0(f)|^2\,df,$$
with equality whenever
$$H(f) = K\big([G_1(f)-G_0(f)]\,e^{j2\pi fnT}\big)^* = K\,[G_1^*(f)-G_0^*(f)]\,e^{-j2\pi fnT}.$$
For arbitrary $K$, we get
$$\max_{h(t)}\frac{(G_1-G_0)^2}{\sigma^2} = \frac{2}{N_0}\int |G_1(f)-G_0(f)|^2\,df.$$

For $K = 1$,
$$h_{opt}(t) = \int [G_1(f)-G_0(f)]^*\,e^{-j2\pi fnT}\,e^{j2\pi ft}\,df = \int [G_1(f)-G_0(f)]^*\,e^{-j2\pi f(nT-t)}\,df$$
$$= \big(g_1(nT-t)-g_0(nT-t)\big)^* = g_1(nT-t)-g_0(nT-t), \quad\text{for real } g_1(t),\,g_0(t).$$
Assuming $p = 1/2$, $P\{\text{error}\}$ is minimum when $(G_1-G_0)^2/\sigma^2$ is maximum, i.e., when $h(t) = g_1(nT-t) - g_0(nT-t)$; that is, $h(t)$ matches the input pulses $g_1(t)$ and $g_0(t)$.
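The Schwarz-inequality argument has a simple discrete-time counterpart: with sampled pulses, $(G_1-G_0)^2/\sigma^2$ is proportional to $(\mathbf{h}\cdot\mathbf{d})^2/\|\mathbf{h}\|^2$ with $\mathbf{d}$ the sampled difference pulse, and no filter can beat $\mathbf{h} = \mathbf{d}$. A minimal sketch with hypothetical sampled pulses (a ramp for $g_1$, zero for $g_0$, chosen only for illustration):

```python
import random

def snr_metric(h, g_diff):
    """(G1 - G0)^2 / sigma^2 up to a constant factor: inner products replace
    the integrals, since sigma^2 is proportional to sum(h^2) for white noise."""
    num = sum(hj * gj for hj, gj in zip(h, g_diff)) ** 2
    den = sum(hj * hj for hj in h)
    return num / den

random.seed(1)
g1 = [k / 10.0 for k in range(10)]       # hypothetical sampled g1(t)
g0 = [0.0] * 10                          # on-off style g0(t) = 0
g_diff = [a - b for a, b in zip(g1, g0)]

matched = snr_metric(g_diff, g_diff)     # h matched to g1 - g0
for _ in range(100):
    h = [random.gauss(0, 1) for _ in range(10)]
    # Cauchy-Schwarz: no filter does better than the matched one.
    assert snr_metric(h, g_diff) <= matched + 1e-12
```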

By Parseval's theorem,
$$\max_{h(t)}\frac{(G_1-G_0)^2}{\sigma^2} = \frac{2}{N_0}\int |G_1(f)-G_0(f)|^2\,df = \frac{2}{N_0}\int [g_1(t)-g_0(t)]^2\,dt,$$
which implies that
$$\left.\frac{G_1-G_0}{2\sigma}\right|_{\max} = \frac12\left(\frac{2}{N_0}\int [g_1(t)-g_0(t)]^2\,dt\right)^{1/2} = \left(\frac{1}{2N_0}\int [g_1(t)-g_0(t)]^2\,dt\right)^{1/2}$$
and
$$P\{\text{error}\}_{\min} = Q\left(\sqrt{\frac{1}{2N_0}\int [g_1(t)-g_0(t)]^2\,dt}\right).$$

Example: Compute the minimum error probability (BER) for the following on-off keying (transmit a pulse when a logical 1 occurs and transmit nothing when a logical 0 occurs):

[Figure: a train of triangular pulses of height $A$ and width $T$, shown over $0 \le t \le 4T$.]

Here $g_1(t)$ is a triangular pulse of height $A$ on $[0,T]$,
$$g_1(t) = A\,\mathrm{tri}\!\left(\frac{t-T/2}{T/2}\right), \quad 0\le t\le T, \qquad\text{and}\qquad g_0(t) = 0, \quad 0\le t\le T.$$

Now,
$$\int [g_1(t)-g_0(t)]^2\,dt = \int_0^T A^2\,\mathrm{tri}^2\!\left(\frac{t-T/2}{T/2}\right)dt = 2\int_0^{T/2}\left(\frac{2At}{T}\right)^2 dt = \frac{8A^2}{T^2}\left.\frac{t^3}{3}\right|_0^{T/2} = \frac{A^2T}{3},$$
and the minimum bit error rate (BER) is equal to
$$P\{\text{error occurs}\}_{\min} = Q\left(\sqrt{\frac{A^2T}{6N_0}}\right).$$
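The pulse-energy integral and the resulting BER can be confirmed numerically. A minimal sketch, assuming the triangular pulse $g_1(t) = A\,(1 - |2t/T - 1|)$ on $[0,T]$ as above:

```python
import math

def Q(x):
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def ook_min_ber(A, T, N0, n=100000):
    """Numerically integrate E_d = int [g1(t) - g0(t)]^2 dt for the triangular
    pulse g1(t) = A*(1 - |2t/T - 1|), g0(t) = 0, then plug into Q(sqrt(E_d/(2*N0)))."""
    dt = T / n
    Ed = sum((A * (1 - abs(2*(k + 0.5)*dt/T - 1)))**2 * dt for k in range(n))
    return Ed, Q(math.sqrt(Ed / (2*N0)))

A, T, N0 = 1.0, 1.0, 0.25
Ed, ber = ook_min_ber(A, T, N0)
assert abs(Ed - A**2 * T / 3) < 1e-6                       # closed form A^2*T/3
assert abs(ber - Q(math.sqrt(A**2 * T / (6*N0)))) < 1e-6   # Q(sqrt(A^2*T/(6*N0)))
```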

Assuming the bit sequence is random, the average bit energy is given by
$$E_{b,av} = \frac12 E_{"1"} + \frac12 E_{"0"} = \frac12\cdot\frac{A^2T}{3} + \frac12\cdot 0 = \frac{A^2T}{6},$$
and the BER can be rewritten as
$$BER_{\min} = Q\left(\sqrt{\frac{E_{b,av}}{N_0}}\right).$$
The following plot shows the performance of the previous communication system in AWGN (this is the same as on-off keying or OOK).

[Figure: BER versus SNR (dB) for on-off keying in AWGN.]

Baseband Signal-Space Analysis

Our goal now is to formulate the different detection strategies in a more intuitive fashion. We do this by giving the modulated signals a geometric interpretation.

Let $S$ be an $N$-dimensional signal space and let $\{\phi_i(t)\}_{i=1}^N$ be a basis for this space. Suppose further that the basis functions are orthonormal, i.e.,
$$\int_{t_1}^{t_1+T_s}\phi_i(t)\,\phi_j(t)\,dt = \delta_{ij} = \begin{cases}1, & i = j\\ 0, & i \neq j\end{cases}$$
where $T_s$ is a time interval yet to be determined; then $\{\phi_i(t)\}_{i=1}^N$ is an orthonormal set. Let $s(t) = \sum_{j=1}^N s_j\phi_j(t)$, i.e., $s(t)\in S$.

Now, for $i = 1,\dots,N$,
$$\int_{t_1}^{t_1+T_s}s(t)\,\phi_i(t)\,dt = \sum_{j=1}^N s_j\int_{t_1}^{t_1+T_s}\phi_j(t)\,\phi_i(t)\,dt = s_i, \quad i = 1,\dots,N.$$
Furthermore, the energy of $s(t)$ is given by
$$E = \int_{t_1}^{t_1+T_s}|s(t)|^2\,dt = \int_{t_1}^{t_1+T_s}s(t)\,s^*(t)\,dt = \int_{t_1}^{t_1+T_s}\sum_{i=1}^N s_i\phi_i(t)\sum_{j=1}^N s_j^*\phi_j^*(t)\,dt$$
$$= \sum_{i=1}^N\sum_{j=1}^N s_i s_j^*\int_{t_1}^{t_1+T_s}\phi_i(t)\,\phi_j^*(t)\,dt = \sum_{i=1}^N s_i s_i^* = \sum_{i=1}^N |s_i|^2.$$

Let the coefficients $s_i$, $i = 1,\dots,N$, be expressed as a vector $\mathbf{s} = [s_1\;s_2\;\cdots\;s_N]^T$; then
$$E = \mathbf{s}^T\mathbf{s} = \|\mathbf{s}\|^2,$$
i.e., $E$ is the inner (dot) product of $\mathbf{s}$ with itself.

Let $\{s_1(t),\dots,s_M(t)\}$ be a set of signals we want to use in a communication system. If this set is defined on the interval $(t_1,\,t_1+T_s)$, where $T_s$ is the maximum signal duration, then an orthonormal basis can be constructed as follows:

1. Let $g_1(t) = s_1(t)$ and
$$\phi_1(t) = \frac{g_1(t)}{\|g_1(t)\|} = \frac{g_1(t)}{\sqrt{E_1}} \;\Rightarrow\; s_1(t) = \sqrt{E_1}\,\phi_1(t), \qquad\text{where}\quad \|g_1(t)\|^2 = \int_{t_1}^{t_1+T_s}|g_1(t)|^2\,dt = E_1.$$

2. Let $g_2(t) = s_2(t) - \langle s_2(t),\phi_1(t)\rangle\,\phi_1(t)$ and $\phi_2(t) = g_2(t)/\|g_2(t)\|$, where
$$\langle u(t),v(t)\rangle \triangleq \int_{t_1}^{t_1+T_s} u(t)\,v^*(t)\,dt.$$
3. Let $g_3(t) = s_3(t) - \langle s_3(t),\phi_1(t)\rangle\,\phi_1(t) - \langle s_3(t),\phi_2(t)\rangle\,\phi_2(t)$ and $\phi_3(t) = \dfrac{g_3(t)}{\|g_3(t)\|}$.

k. Let $g_k(t) = s_k(t) - \sum_{i=1}^{k-1}\langle s_k(t),\phi_i(t)\rangle\,\phi_i(t)$ and $\phi_k(t) = \dfrac{g_k(t)}{\|g_k(t)\|}$.

Note that $\langle s_k(t),\phi_i(t)\rangle$ can be interpreted as the projection of $s_k(t)$ onto $\phi_i(t)$. The set of basis functions $\{\phi_i(t)\}_{i=1}^N$ forms an orthonormal set. The procedure outlined above is known as the Gram-Schmidt orthogonalization procedure.

Remark 1: The set of signals $\{s_i(t)\}_{i=1}^M$ is a linearly independent set iff $N = M$.
Remark 2: The signals $s_1(t),\dots,s_M(t)$ are not linearly independent if $N < M$; in that case $g_k(t) = 0$ for some $k \le M$.

Example: Consider the signals $s_i(t)$, $i = 1, 2, 3, 4$ ($M = 4$), described by

[Figure: four unit-amplitude rectangular waveforms on $[0,3]$: $s_1(t)$ is 1 on $[0,1]$, $s_2(t)$ is 1 on $[0,2]$, $s_3(t)$ is 1 on $[1,3]$, and $s_4(t)$ is 1 on $[0,3]$.]

In this case $T_s = 3$ seconds. Let us now construct an orthonormal basis for this set of signals.

1. $g_1(t) = s_1(t)$,
$$\|g_1(t)\|^2 = \int_0^3 g_1^2(t)\,dt = \int_0^1 dt = 1 = E_1,$$
$$\phi_1(t) = \frac{g_1(t)}{\sqrt{E_1}} = g_1(t) = s_1(t) \;\Rightarrow\; s_1(t) = \phi_1(t).$$

2.
$$g_2(t) = s_2(t) - \langle s_2(t),\phi_1(t)\rangle\,\phi_1(t), \qquad \langle s_2(t),\phi_1(t)\rangle = \int_0^3 s_2(t)\,\phi_1(t)\,dt = \int_0^1 dt = 1,$$
$$g_2(t) = s_2(t) - \phi_1(t) = s_2(t) - s_1(t).$$
Now,
$$\|g_2(t)\|^2 = \int_0^3 [s_2(t)-s_1(t)]^2\,dt = \int_1^2 dt = 1 = E_2,$$
so
$$\phi_2(t) = \frac{g_2(t)}{\|g_2(t)\|} = s_2(t)-\phi_1(t) \;\Rightarrow\; s_2(t) = \phi_1(t) + \phi_2(t).$$
3.
$$g_3(t) = s_3(t) - \langle s_3(t),\phi_1(t)\rangle\,\phi_1(t) - \langle s_3(t),\phi_2(t)\rangle\,\phi_2(t).$$

$$\langle s_3(t),\phi_1(t)\rangle = \int_0^3 s_3(t)\,\phi_1(t)\,dt = \int_0^3 s_3(t)\,s_1(t)\,dt = 0,$$
$$\langle s_3(t),\phi_2(t)\rangle = \int_0^3 s_3(t)\,\phi_2(t)\,dt = \int_0^3 s_3(t)\,[s_2(t)-s_1(t)]\,dt = \int_1^2 dt = 1,$$
$$g_3(t) = s_3(t) - \phi_2(t) = s_3(t) - s_2(t) + s_1(t),$$
$$\|g_3(t)\|^2 = \int_0^3 [s_3(t)-s_2(t)+s_1(t)]^2\,dt = \int_2^3 dt = 1 = E_3,$$
$$\phi_3(t) = \frac{g_3(t)}{\|g_3(t)\|} = s_3(t)-\phi_2(t) \;\Rightarrow\; s_3(t) = \phi_2(t) + \phi_3(t).$$
4.
$$g_4(t) = s_4(t) - \langle s_4(t),\phi_1(t)\rangle\,\phi_1(t) - \langle s_4(t),\phi_2(t)\rangle\,\phi_2(t) - \langle s_4(t),\phi_3(t)\rangle\,\phi_3(t),$$
$$\langle s_4(t),\phi_1(t)\rangle = \int_0^3 s_4(t)\,\phi_1(t)\,dt = \int_0^1 dt = 1, \qquad \langle s_4(t),\phi_2(t)\rangle = \int_0^3 s_4(t)\,\phi_2(t)\,dt = \int_1^2 dt = 1.$$

$$\langle s_4(t),\phi_3(t)\rangle = \int_0^3 s_4(t)\,\phi_3(t)\,dt = \int_2^3 dt = 1,$$
$$g_4(t) = s_4(t) - [\phi_1(t)+\phi_2(t)+\phi_3(t)] = 0.$$
But $\phi_i(t)$, $i = 1, 2, 3$, are described by

[Figure: $\phi_1(t)$, $\phi_2(t)$, $\phi_3(t)$ are unit-amplitude pulses on $[0,1]$, $[1,2]$, and $[2,3]$, respectively.]

$$g_4(t) = 0 \;\Rightarrow\; s_4(t) = \phi_1(t)+\phi_2(t)+\phi_3(t).$$
Hence,
$$s_1(t) = \phi_1(t), \qquad s_2(t) = \phi_1(t)+\phi_2(t), \qquad s_3(t) = \phi_2(t)+\phi_3(t).$$
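The whole construction can be replayed on sampled versions of the four pulses. A minimal sketch of Gram-Schmidt on sampled signals, assuming the unit pulses of the example discretized on $[0,3]$; it recovers $N = 3$ basis functions (so the set is linearly dependent) and confirms their orthonormality:

```python
import math

def gram_schmidt(signals, dt):
    """Gram-Schmidt on sampled signals (lists of equal length); returns the
    orthonormal basis, dropping any g_k that comes out (numerically) zero."""
    basis = []
    for s in signals:
        g = list(s)
        for phi in basis:
            c = sum(a * b for a, b in zip(s, phi)) * dt      # <s, phi>
            g = [gj - c * pj for gj, pj in zip(g, phi)]
        norm = math.sqrt(sum(gj * gj for gj in g) * dt)
        if norm > 1e-9:
            basis.append([gj / norm for gj in g])
    return basis

# The four unit pulses of the example, sampled on [0, 3] with step dt:
dt, n = 0.001, 3000
t = [(k + 0.5) * dt for k in range(n)]
s1 = [1.0 if x < 1 else 0.0 for x in t]
s2 = [1.0 if x < 2 else 0.0 for x in t]
s3 = [1.0 if 1 <= x < 3 else 0.0 for x in t]
s4 = [1.0] * n

basis = gram_schmidt([s1, s2, s3, s4], dt)
assert len(basis) == 3                    # N = 3 < M = 4: linearly dependent set
for i in range(3):                        # orthonormality check
    for j in range(3):
        ip = sum(a * b for a, b in zip(basis[i], basis[j])) * dt
        assert abs(ip - (1.0 if i == j else 0.0)) < 1e-6
```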

[Figure: the signal constellation showing $s_1$, $s_2$, $s_3$, $s_4$ in the 3-dimensional space with coordinates $\phi_1$, $\phi_2$, $\phi_3$.]

Clearly, $\{s_i(t)\}_{i=1}^4$ is defined on the 3-dimensional Euclidean space represented by the coordinates $\phi_1$, $\phi_2$ and $\phi_3$.

Let the signal arriving at the receiver be described by
$$x(t) = s_i(t) + W(t), \quad i = 1,\dots,M,$$
where $s_i(t)$ is the transmitted signal and $W(t)$ is WGN with zero mean and power spectral density $S_W(f) = N_0/2$ Watts/Hz for all $f$. Let $\{\phi_j(t)\}_{j=1}^N$ be an orthonormal basis for the signal space $S$, i.e.,
$$s_i(t) = \sum_{j=1}^N s_{ij}\,\phi_j(t), \quad i = 1,\dots,M.$$

Consider a coherent correlator receiver and the observed output at the $k$th correlator:

[Figure: coherent correlator receiver. $x(t)$ is multiplied by each of $\phi_1(t),\dots,\phi_N(t)$; each product is integrated over $(0,T_s)$ and sampled at $t = T_s$, producing $X_1,\dots,X_N$.]

Let $s_i(t)$, or symbol $m_i$, be transmitted through the channel; then the output of the $k$th correlator is given by
$$X_k\big|_{m_i} = \int_0^{T_s}x(t)\,\phi_k(t)\,dt = \int_0^{T_s}[s_i(t)+W(t)]\,\phi_k(t)\,dt = \int_0^{T_s}\Big[\sum_{j=1}^N s_{ij}\phi_j(t)+W(t)\Big]\phi_k(t)\,dt$$
$$= \sum_{j=1}^N s_{ij}\int_0^{T_s}\phi_j(t)\,\phi_k(t)\,dt + \int_0^{T_s}W(t)\,\phi_k(t)\,dt = s_{ik} + W_k, \quad k = 1,\dots,N,$$
where
$$s_{ik} = \int_0^{T_s}s_i(t)\,\phi_k(t)\,dt \qquad\text{and}\qquad W_k = \int_0^{T_s}W(t)\,\phi_k(t)\,dt.$$
Define a new random process $x'(t)$ by
$$x'(t) \triangleq x(t) - \sum_{k=1}^N X_k\phi_k(t) = x(t) - \mathbf{X}^T\boldsymbol{\Phi}(t),$$
where $\mathbf{X} = [X_1\;\cdots\;X_N]^T$ is the projection of $x(t)$ onto the signal space $S$ and $\boldsymbol{\Phi}(t) = [\phi_1(t)\;\cdots\;\phi_N(t)]^T$.

Hence,
$$x'(t) = s_i(t) + W(t) - \sum_{k=1}^N [s_{ik}+W_k]\,\phi_k(t) = \sum_{k=1}^N s_{ik}\phi_k(t) + W(t) - \sum_{k=1}^N s_{ik}\phi_k(t) - \sum_{k=1}^N W_k\phi_k(t)$$
$$= W(t) - \sum_{k=1}^N W_k\phi_k(t) \triangleq W'(t),$$
where $\mathbf{W} = [W_1\;\cdots\;W_N]^T$ is the projection of the noise onto the space $S$ (the signal space) and $W'(t)$ is the part of the noise $W(t)$ that does not lie in the signal space $S$. Therefore,
$$x(t) = \sum_{k=1}^N X_k\phi_k(t) + W'(t),$$
which means that we must only worry about the part of the noise that lies in the signal space; namely, the part of the noise which is not in the signal space does not affect the output of the correlators.

Define $W(t) \triangleq W_r(t) + W_p(t)$, where
$$W_p(t) \triangleq \sum_{k=1}^N W_k\phi_k(t) \qquad\text{and}\qquad W_r(t) = W(t) - W_p(t).$$
Then $x(t) = s_i(t) + W_p(t) + W_r(t)$.

Now, if $x(t)$ is a Gaussian random process, then $X_k\big|_{m_i}$ is a Gaussian random variable with mean
$$E\{X_k\mid m_i\} = E\left\{\int_0^{T_s}x(t)\,\phi_k(t)\,dt \,\Big|\, m_i\right\} = E\left\{\sum_{j=1}^N s_{ij}\int_0^{T_s}\phi_j(t)\,\phi_k(t)\,dt \,\Big|\, m_i\right\} + E\left\{\int_0^{T_s}W(t)\,\phi_k(t)\,dt \,\Big|\, m_i\right\}$$
$$= s_{ik} + \int_0^{T_s}E\{W(t)\}\,\phi_k(t)\,dt = s_{ik}$$
and variance

$$\sigma^2_{X_k\mid m_i} = E\{(X_k-s_{ik})^2\mid m_i\} = E\{W_k^2\mid m_i\} = E\left\{\int_0^{T_s}\!\!\int_0^{T_s}W(t)\,W(\tau)\,\phi_k(\tau)\,\phi_k(t)\,dt\,d\tau\right\}$$
$$= \int_0^{T_s}\!\!\int_0^{T_s}E\{W(t)W(\tau)\}\,\phi_k(t)\,\phi_k(\tau)\,dt\,d\tau = \frac{N_0}{2}\int_0^{T_s}\!\!\int_0^{T_s}\delta(t-\tau)\,\phi_k(t)\,\phi_k(\tau)\,dt\,d\tau$$
$$= \frac{N_0}{2}\int_0^{T_s}\phi_k(\tau)\,\phi_k(\tau)\,d\tau = \frac{N_0}{2}, \quad k = 1,\dots,N.$$
Also, for $j \neq k$,
$$E\{(X_j-s_{ij})(X_k-s_{ik})^*\mid m_i\} = E\{W_jW_k^*\mid m_i\} = E\left\{\int_0^{T_s}\!\!\int_0^{T_s}W(t)\,\phi_j(t)\,W^*(\tau)\,\phi_k(\tau)\,dt\,d\tau\right\}$$
$$= \frac{N_0}{2}\int_0^{T_s}\!\!\int_0^{T_s}\delta(t-\tau)\,\phi_j(t)\,\phi_k(\tau)\,dt\,d\tau = \frac{N_0}{2}\int_0^{T_s}\phi_j(\tau)\,\phi_k(\tau)\,d\tau = 0.$$
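These two statistics can be checked by Monte Carlo simulation. A sketch under the usual discretization assumption that white noise of psd $N_0/2$ is approximated by independent samples of variance $N_0/(2\,\Delta t)$; the two basis functions used (half-interval pulses) are hypothetical, chosen only because they are trivially orthonormal:

```python
import math, random

random.seed(42)
N0, Ts, n = 2.0, 1.0, 100
dt = Ts / n
# Two orthonormal functions on [0, Ts]: half-interval pulses of height sqrt(2/Ts).
phi1 = [math.sqrt(2/Ts) if k < n//2 else 0.0 for k in range(n)]
phi2 = [0.0 if k < n//2 else math.sqrt(2/Ts) for k in range(n)]

trials = 5000
w1, w2 = [], []
for _ in range(trials):
    # Discrete white noise: per-sample variance N0/(2*dt) approximates psd N0/2.
    W = [random.gauss(0.0, math.sqrt(N0/(2*dt))) for _ in range(n)]
    w1.append(sum(a*b for a, b in zip(W, phi1)) * dt)   # W_1
    w2.append(sum(a*b for a, b in zip(W, phi2)) * dt)   # W_2

var1 = sum(x*x for x in w1) / trials
cov = sum(a*b for a, b in zip(w1, w2)) / trials
assert abs(var1 - N0/2) < 0.05 * N0     # Var{W_k} ~= N0/2
assert abs(cov) < 0.05 * N0             # E{W_j W_k} ~= 0 for j != k
```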

This means that the $X_k$'s are mutually uncorrelated; since they are Gaussian, the $X_k$'s are statistically independent. Hence, the joint density function of $\mathbf{X} = [X_1\;\cdots\;X_N]^T$, given that message $m_i$ has been transmitted, is given by
$$f_{\mathbf{X}}(\mathbf{x}\mid m_i) = f_{\mathbf{X}}(x_1,\dots,x_N\mid m_i) = \prod_{k=1}^N f_{X_k}(x_k\mid m_i) = \prod_{k=1}^N \frac{1}{\sqrt{\pi N_0}}\,e^{-(x_k-s_{ik})^2/N_0} = (\pi N_0)^{-N/2}\,e^{-\sum_{k=1}^N (x_k-s_{ik})^2/N_0},$$
for $i = 1,\dots,M$. Define the Euclidean distance between vectors $\mathbf{u} = [u_1\;\cdots\;u_N]^T$ and $\mathbf{v} = [v_1\;\cdots\;v_N]^T$ by
$$\|\mathbf{u}-\mathbf{v}\| = \left[(u_1-v_1)^2+\cdots+(u_N-v_N)^2\right]^{1/2}.$$
Then
$$f_{\mathbf{X}}(\mathbf{x}\mid m_i) = (\pi N_0)^{-N/2}\exp\left(-\frac{\|\mathbf{x}-\mathbf{s}_i\|^2}{N_0}\right), \quad i = 1,\dots,M,$$
where $\mathbf{s}_i = [s_{i1}\;s_{i2}\;\cdots\;s_{iN}]^T$.

Let $\mathbf{s}_i$ be transmitted and let $\mathbf{X}$ be the observation vector of the sampled values of the correlators; then
$$\mathbf{X} = [X_1\;\cdots\;X_N]^T = \mathbf{s}_i + \mathbf{W}, \qquad \mathbf{W} = [W_1\;\cdots\;W_N]^T.$$
Decision strategy: given the observation vector $\mathbf{X} = \mathbf{x}$, choose the symbol $\hat m$ so that the probability of making a decision error is minimum. Let symbol (signal) $m_i$ be sent through the channel and let $P_e(m_i\mid\mathbf{x})$ denote the conditional probability of making a decision error given that $\mathbf{x}$ is observed; then
$$P_e(m_i\mid\mathbf{x}) = P\{\text{select } m_k,\,k\neq i \mid \mathbf{x}\} = 1 - P\{\text{select } m_i\mid\mathbf{x}\}.$$
Optimum decision rule: $P_e(m_i\mid\mathbf{x})$ is minimum whenever $P\{\text{select } m_i\mid\mathbf{x}\}$ is maximum. Equivalently, choose $m_i$ if
$$P\{\text{select } m_i\mid\mathbf{x}\} \ge P\{\text{select } m_k\mid\mathbf{x}\}, \quad \forall k\neq i,\; k = 1,\dots,M.$$
This is known as the Maximum A Posteriori (MAP) probability rule. Equivalently, applying Bayes' rule yields:

Choose symbol $m_i$ if $\dfrac{p_i\,f_{\mathbf{X}}(\mathbf{x}\mid m_i)}{f_{\mathbf{X}}(\mathbf{x})}$ is maximum for $i = 1,\dots,M$,

where $p_i$ is the a priori probability of occurrence of the symbol $m_i$, $f_{\mathbf{X}}(\mathbf{x}\mid m_i)$ is the likelihood function that results when $m_i$ is transmitted, and $f_{\mathbf{X}}(\mathbf{x})$ is the unconditional pdf of $\mathbf{X}$.

The equivalent rule comes from the fact that, in the limit, Bayes' rule, as applied to a continuous random variable, is given by
$$P\{A\mid\mathbf{X}=\mathbf{x}\} = \frac{f_{\mathbf{X}}(\mathbf{x}\mid A)\,P\{A\}}{f_{\mathbf{X}}(\mathbf{x})},$$
where $A$ is the event of selecting symbol $m_i$. The distribution of $\mathbf{X}$ is independent of the transmitted signal. Therefore, if $p_i = p = 1/M$, i.e., all symbols are transmitted with equal probability, then the optimum decision rule can be stated as:

Choose $m_i$ if $f_{\mathbf{X}}(\mathbf{x}\mid m_i)$ is maximum for $i = 1,\dots,M$.

This is the maximum likelihood decision rule, and it is based on Bayesian statistics.

Finally, since the likelihood function is non-negative (because it is a probability density function), we can restate the optimum decision rule as: choose $m_i$ if $\ln f_{\mathbf{X}}(\mathbf{x}\mid m_i)$ is maximum for $i = 1,\dots,M$, since $\ln(\cdot)$ is a monotonically increasing function of its argument.

Remark: The maximum likelihood decision rule differs from the MAP decision rule in that it assumes equally likely message symbols.

For an AWGN channel, the conditional pdf of the observation vector $\mathbf{x}$ given that symbol $m_i$ was transmitted is described by
$$f_{\mathbf{X}}(\mathbf{x}\mid m_i) = (\pi N_0)^{-N/2}\,e^{-\|\mathbf{x}-\mathbf{s}_i\|^2/N_0}, \quad i = 1,\dots,M.$$
Hence,
$$\ln f_{\mathbf{X}}(\mathbf{x}\mid m_i) = -\frac{N}{2}\ln(\pi N_0) - \frac{1}{N_0}\|\mathbf{x}-\mathbf{s}_i\|^2, \quad i = 1,\dots,M.$$

But
$$\max_i\big\{\ln f_{\mathbf{X}}(\mathbf{x}\mid m_i)\big\} = \max_i\left\{-\frac{N}{2}\ln(\pi N_0) - \frac{1}{N_0}\|\mathbf{x}-\mathbf{s}_i\|^2\right\} = \max_i\left\{-\frac{1}{N_0}\|\mathbf{x}-\mathbf{s}_i\|^2\right\} = \min_i\left\{\frac{1}{N_0}\|\mathbf{x}-\mathbf{s}_i\|^2\right\},$$
and
$$\min_i\left\{\frac{1}{N_0}\|\mathbf{x}-\mathbf{s}_i\|^2\right\} = \min_i\big\{\|\mathbf{x}-\mathbf{s}_i\|^2\big\},$$
since multiplication by the positive constant $1/N_0$ does not change the location of the minimum.

Geometrically speaking, if we partition the $N$-dimensional signal space into $M$ regions $R_1,\dots,R_M$, then the decision rule can be reformulated as follows:

$\mathbf{X}$ lies inside $R_i$ if $\ln f_{\mathbf{X}}(\mathbf{x}\mid m_i)$ is maximum for $i = 1,\dots,M$. Therefore,

$\mathbf{X}$ lies inside $R_i$ if the Euclidean distance $\|\mathbf{x}-\mathbf{s}_i\|$ is minimum for $i = 1,\dots,M$; i.e., choose $m_i$ if the distance between $\mathbf{x}$ and $\mathbf{s}_i$ is minimum.
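The minimum-distance rule is a one-liner in code. A minimal sketch, using the (hypothetical) vector coordinates $s_1 = (1,0,0)$, $s_2 = (1,1,0)$, $s_3 = (0,1,1)$, $s_4 = (1,1,1)$ from the Gram-Schmidt example above:

```python
def decide(x, constellation):
    """ML decision for AWGN with equal priors: return the index i that
    minimizes the squared Euclidean distance ||x - s_i||^2."""
    def d2(s):
        return sum((xk - sk)**2 for xk, sk in zip(x, s))
    return min(range(len(constellation)), key=lambda i: d2(constellation[i]))

S = [[1, 0, 0], [1, 1, 0], [0, 1, 1], [1, 1, 1]]
assert decide([0.9, 0.2, -0.1], S) == 0   # closest to s_1
assert decide([1.1, 0.8, 0.9], S) == 3    # closest to s_4
```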

Error Performance of MAP Receivers

If symbol $m_i$ (signal vector $\mathbf{s}_i$) is transmitted and $\mathbf{x}$ does not lie in $R_i$, then an error occurs. Therefore, the average probability of symbol error (SER) is
$$P_e = \sum_{i=1}^M P\{\mathbf{X}\text{ does not lie in }R_i\text{ and }m_i\text{ was sent}\} = \sum_{i=1}^M P\{\mathbf{X}\text{ does not lie in }R_i\mid m_i\text{ was sent}\}\,P\{m_i\text{ was sent}\}.$$
If $P\{m_i\text{ was sent}\} = 1/M$, $i = 1,\dots,M$, then
$$P_e = \frac{1}{M}\sum_{i=1}^M P\{\mathbf{X}\text{ does not lie in }R_i\mid m_i\text{ sent}\} = 1 - \frac{1}{M}\sum_{i=1}^M P\{\mathbf{X}\text{ lies in }R_i\mid m_i\text{ sent}\},$$
where
$$P\{\mathbf{X}\text{ lies in }R_i\mid m_i\text{ sent}\} = \int_{R_i} f_{\mathbf{X}}(\mathbf{x}\mid m_i)\,d\mathbf{x}.$$

Example: Let $m(t)$ be a binary signal transmitted over an AWGN channel, represented by a bipolar non-return-to-zero waveform with amplitude $A$. Then
$$s_1(t) = \begin{cases}A, & 0\le t\le T_b\\ 0, & \text{otherwise}\end{cases} \qquad\text{and}\qquad s_2(t) = \begin{cases}-A, & 0\le t\le T_b\\ 0, & \text{otherwise.}\end{cases}$$
Let us apply the Gram-Schmidt procedure. Let $g_1(t) = s_1(t)$; then
$$\|g_1(t)\|^2 = \int_0^{T_b}A^2\,dt = A^2T_b,$$
$$\phi_1(t) = \frac{g_1(t)}{\|g_1(t)\|} = \begin{cases}\dfrac{1}{\sqrt{T_b}}, & 0\le t\le T_b\\[4pt] 0, & \text{otherwise}\end{cases} \;\Rightarrow\; s_1(t) = A\sqrt{T_b}\,\phi_1(t).$$

Let $g_2(t) = s_2(t) - \langle s_2(t),\phi_1(t)\rangle\,\phi_1(t)$; then
$$\langle s_2(t),\phi_1(t)\rangle = \int_0^{T_b}s_2(t)\,\phi_1(t)\,dt = -\int_0^{T_b}\frac{A}{\sqrt{T_b}}\,dt = -A\sqrt{T_b}.$$
Hence,
$$g_2(t) = s_2(t) + A\sqrt{T_b}\,\phi_1(t) = s_2(t) + s_1(t) = 0.$$
Therefore,
$$s_2(t) = -s_1(t) = -A\sqrt{T_b}\,\phi_1(t).$$
This is called antipodal signaling (one binary symbol is represented by a signal which is the negative of the other). Furthermore, only one basis function is needed to represent the two binary signals. Consider the following correlator receiver:

[Figure: $x(t)$ multiplied by $\phi_1(t)$, integrated over $(0, T_b)$, and sampled at $t = T_b$ to produce $X$.]

Then the signal constellation diagram and the observation space are shown in the figure below.

[Figure: one-dimensional constellation along the $\phi_1$ ($X$) axis, with $s_2 = -A\sqrt{T_b}$ and $s_1 = A\sqrt{T_b}$ separated by the distance $d_{12}$; decision region $R_1$ to the right of the origin and $R_2$ to the left.]

Clearly, $R_1 = \{x: x\ge 0\}$ and $R_2 = \{x: x< 0\}$. If the two symbols are equally likely to be transmitted, then the average probability of symbol error is given by
$$P_e = \frac12\sum_{i=1}^2 P\{\mathbf{X}\text{ lies in }R_k,\,k\neq i \mid m_i\text{ sent}\}.$$

The pdf of the observation $X$, given that $m_1$ was transmitted, is
$$f_{X\mid m}(x\mid m_1) = \frac{1}{\sqrt{\pi N_0}}\,e^{-(x-d)^2/N_0}, \qquad d \triangleq A\sqrt{T_b}.$$
Then the probability of a correct decision, given that $m_1$ was transmitted, is
$$P(X\in R_1\mid m_1) = \int_0^\infty \frac{1}{\sqrt{\pi N_0}}\,e^{-(x-d)^2/N_0}\,dx.$$
Let
$$u = \frac{x-d}{\sqrt{N_0/2}}, \qquad du = \frac{dx}{\sqrt{N_0/2}},$$
so that the lower limit $x = 0$ maps to $u = -d\sqrt{2/N_0}$.

Hence,
$$P(X\in R_1\mid m_1) = \int_{-d\sqrt{2/N_0}}^{\infty}\frac{1}{\sqrt{2\pi}}\,e^{-u^2/2}\,du = 1 - Q\!\left(d\sqrt{\frac{2}{N_0}}\right) \qquad\text{and}\qquad P(X\in R_2\mid m_1) = Q\!\left(d\sqrt{\frac{2}{N_0}}\right).$$
Therefore, the average probability of a binary error (BER) is given by
$$P_e = \frac12\,Q\!\left(d\sqrt{\frac{2}{N_0}}\right) + \frac12\,Q\!\left(d\sqrt{\frac{2}{N_0}}\right) = Q\!\left(d\sqrt{\frac{2}{N_0}}\right).$$
Finally, with $d_{12} = 2d = 2A\sqrt{T_b}$, the distance between the signal points $s_1$ and $s_2$,
$$P_e = Q\!\left(\frac{d_{12}}{\sqrt{2N_0}}\right).$$
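This closed-form BER for antipodal signaling is straightforward to verify by Monte Carlo simulation of the scalar correlator output $X = \pm d + W_1$, with $W_1$ Gaussian of variance $N_0/2$ as derived earlier (the specific values $A = T_b = N_0 = 1$ are illustrative):

```python
import math, random

def Q(x):
    return 0.5 * math.erfc(x / math.sqrt(2.0))

random.seed(7)
A, Tb, N0 = 1.0, 1.0, 1.0
d = A * math.sqrt(Tb)                    # signal points at +/- d on the phi_1 axis
sigma = math.sqrt(N0 / 2)                # correlator noise variance N0/2

trials, errors = 200000, 0
for _ in range(trials):
    bit = random.random() < 0.5
    s = d if bit else -d
    x = s + random.gauss(0.0, sigma)
    decided = x >= 0.0                   # threshold at 0 (equally likely symbols)
    errors += (decided != bit)

ber_sim = errors / trials
ber_theory = Q(d * math.sqrt(2 / N0))    # = Q(d_12 / sqrt(2*N0)), d_12 = 2d
assert abs(ber_sim - ber_theory) < 0.01
```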

Effect of rotation: let $\mathbf{s}_{r,i} = \mathbf{Q}\mathbf{s}_i$ and $\mathbf{x}_r = \mathbf{Q}\mathbf{s}_i + \mathbf{W}$; then
$$\|\mathbf{x}_r - \mathbf{s}_{r,i}\| = \|\mathbf{Q}\mathbf{s}_i + \mathbf{W} - \mathbf{Q}\mathbf{s}_i\| = \|\mathbf{W}\| = \|\mathbf{x} - \mathbf{s}_i\|;$$
the distance depends on the noise alone, and $P_e$ is invariant to rotation!

Effect of translation: suppose now that $\mathbf{s}_{t,i} = \mathbf{s}_i - \mathbf{a}$, $i = 1,\dots,M$, and $\mathbf{x}_t = \mathbf{x} - \mathbf{a}$; then
$$\|\mathbf{x}_t - \mathbf{s}_{t,i}\| = \|\mathbf{x} - \mathbf{a} - \mathbf{s}_i + \mathbf{a}\| = \|\mathbf{x} - \mathbf{s}_i\| = \|\mathbf{s}_i + \mathbf{w} - \mathbf{s}_i\| = \|\mathbf{w}\|;$$
the distance again depends on the noise alone, and $P_e$ is invariant to translation.

Remark: Rotational invariance holds only when the rotation is caused by an orthonormal transformation matrix $\mathbf{Q}$, i.e., $\mathbf{x}_r = \mathbf{Q}\mathbf{x}$ with $\mathbf{Q}\mathbf{Q}^T = \mathbf{I}$.

Let $P_e(m_i)$ be the conditional probability of symbol error when symbol $m_i$ is sent. Let $A_{ik}$, $k\neq i$, $k = 1,\dots,M$, denote the event that the observation vector $\mathbf{x}$ is closer to the signal vector $\mathbf{s}_k$ than to $\mathbf{s}_i$ when $m_i$ ($\mathbf{s}_i$) is sent. Then
$$P_e(m_i) = P\left\{\bigcup_{k\neq i}A_{ik}\right\} \le \sum_{k\neq i}P\{A_{ik}\}, \quad i = 1,\dots,M.$$
Equality holds only when the events $A_{ik}$ are mutually exclusive. Note that $P\{A_{ik}\}$ is a pair-wise probability of error of a data transmission system that uses only a pair of signals, $\mathbf{s}_i$ and $\mathbf{s}_k$. This is different from $P\{\hat m = m_k \mid m_i\}$, the probability that the observation vector $\mathbf{x}$ is closer to the signal vector $\mathbf{s}_k$ than to any other when $\mathbf{s}_i$ ($m_i$) is sent.

Example: A message source outputs one of 4 symbols every $T_s$ seconds with equal probability. The 4 symbols have the signal constellation and observation space shown in the figure below.

[Figure: a two-dimensional constellation on the $(\phi_1, \phi_2)$ plane with $s_1$, $s_2$, $s_3$, $s_4$ on the coordinate axes and decision regions $R_1,\dots,R_4$ the four sectors between them.]

Suppose the observation vector $\mathbf{x}$ at the input of the decision device at the receiver lies in the region $R_2$, even though $\mathbf{s}_1$ was sent:

[Figure: the same constellation, with an observation $\mathbf{x}$ falling in $R_2$ while $\mathbf{s}_1$ was transmitted.]

Then $P_e(m_1) = P\{\mathbf{X}\text{ does not lie in }R_1 \mid m_1\}$.

Now, the events $A_{12}$, $A_{13}$, and $A_{14}$ are equivalent to having $\mathbf{x}$ lie in each one of the following regions:

[Figure: for $A_{12}$ and $A_{13}$, the half-plane of points closer to $\mathbf{s}_2$ (respectively $\mathbf{s}_3$) than to $\mathbf{s}_1$, bounded by the perpendicular bisector of the segment joining the two points, at distance $d_{1k}/2$ from each.]

[Figure: the corresponding half-plane for $A_{14}$, the set of points closer to $\mathbf{s}_4$ than to $\mathbf{s}_1$.]

Since these half-planes overlap while together covering the complement of $R_1$,
$$P\{A_{12}\} + P\{A_{13}\} + P\{A_{14}\} \ge P\{\mathbf{X}\text{ does not lie in }R_1\mid m_1\} = P_e(m_1).$$

Finally, $P\{\hat m = m_3 \mid m_1\} = P\{\mathbf{x}\text{ lies in }R_3 \mid m_1\}$, where $R_3$ is depicted below.

[Figure: the decision region $R_3$ of the 4-symbol constellation.]

We already know that in an AWGN channel an error is caused by the noise. Moreover, WGN is identically distributed along any set of orthogonal axes. According to these criteria, an error is made when $m_i$ is sent (vector $\mathbf{s}_i$) and $\mathbf{x}$ lies in the half-plane closer to $\mathbf{s}_k$, i.e., the noise component along the line joining $\mathbf{s}_i$ and $\mathbf{s}_k$ exceeds $d_{ik}/2$:
$$P\{A_{ik}\} = \int_{d_{ik}/2}^{\infty}\frac{1}{\sqrt{\pi N_0}}\,e^{-x^2/N_0}\,dx = \int_{d_{ik}/\sqrt{2N_0}}^{\infty}\frac{1}{\sqrt{2\pi}}\,e^{-u^2/2}\,du, \qquad u = \frac{x}{\sqrt{N_0/2}},$$
where $d_{ik} \triangleq \|\mathbf{s}_i - \mathbf{s}_k\|$. Using the definition of the $Q$ function and the error function complement, we get
$$P\{A_{ik}\} = Q\!\left(\frac{d_{ik}}{\sqrt{2N_0}}\right) = \frac12\,\mathrm{erfc}\!\left(\frac{d_{ik}}{2\sqrt{N_0}}\right),$$
since
$$\mathrm{erfc}(z) \triangleq \frac{2}{\sqrt{\pi}}\int_z^{\infty}e^{-\lambda^2}\,d\lambda, \quad z \ge 0.$$

Thus,
$$P_e(m_i) \le \sum_{\substack{k=1\\ k\neq i}}^M Q\!\left(\frac{d_{ik}}{\sqrt{2N_0}}\right), \quad i = 1,\dots,M,$$
and
$$P_e = \sum_{i=1}^M p_i\,P_e(m_i) \le \sum_{i=1}^M p_i\sum_{\substack{k=1\\ k\neq i}}^M Q\!\left(\frac{d_{ik}}{\sqrt{2N_0}}\right), \qquad p_i = P\{m_i\text{ sent}\}.$$
Up to this point, our signal-space analysis has been carried out assuming a correlator receiver architecture, even though we already claimed that the optimum receiver for an AWGN channel uses a matched filter.
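The union bound is easy to evaluate from the constellation geometry alone. A sketch for the 4-symbol example, assuming (hypothetically) that the four points sit on the coordinate axes at distance $a$ from the origin; for this symmetric constellation the exact SER is also available in closed form, $1 - (1-q)^2$ with $q = Q(a/\sqrt{N_0})$, so the tightness of the bound at high SNR can be checked directly:

```python
import math

def Q(x):
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def union_bound(points, N0):
    """Average union bound on SER for equally likely signal vectors:
    (1/M) * sum_i sum_{k != i} Q(d_ik / sqrt(2*N0))."""
    M = len(points)
    total = 0.0
    for i in range(M):
        for k in range(M):
            if k != i:
                d = math.dist(points[i], points[k])
                total += Q(d / math.sqrt(2 * N0))
    return total / M

a, N0 = 2.0, 0.5
pts = [(a, 0.0), (0.0, a), (-a, 0.0), (0.0, -a)]
bound = union_bound(pts, N0)
q = Q(a / math.sqrt(N0))
exact = 1 - (1 - q)**2          # exact SER for the sector decision regions
assert exact <= bound           # the union bound is an upper bound
assert bound - exact < 0.01     # and it is tight at high SNR
```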

Consider the following matched filter detector for $M$-ary transmission over an AWGN channel.

[Figure: matched filter bank for $M$-ary signaling. $s_i(t)$ plus noise $W(t)$ gives $x(t)$, which feeds $N$ filters with impulse responses $\phi_1(T-t),\dots,\phi_N(T-t)$; the outputs $y_1(t),\dots,y_N(t)$ are sampled at $t = T$ to give $X_1,\dots,X_N$.]

Now,
$$y_k(t) = \int x(\tau)\,\phi_k\big(T-(t-\tau)\big)\,d\tau.$$
At $t = T$,
$$y_k(T) = \int x(\tau)\,\phi_k(\tau)\,d\tau.$$
Moreover, $\phi_k(t) = 0$ for $t\notin[0,T]$, so
$$y_k(T) = \int_0^T x(\tau)\,\phi_k(\tau)\,d\tau = X_k, \quad k = 1,\dots,N,$$
which is the same as the output of the $k$th branch of the correlator receiver! Hence the two receivers are equivalent. This means that we have the choice of using either a generalized correlator receiver or a bank of matched filters matched to each basis function of the set that describes the signal space. In our work, we shall mostly use correlator receiver implementations to analyze and assess communication system performance.
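In discrete time this equivalence is an identity: convolving with the time-reversed basis function and sampling at the final index reproduces the correlator's inner product exactly. A minimal sketch with arbitrary (randomly generated, hypothetical) sampled waveforms:

```python
import random

random.seed(3)
n = 64
phi = [random.gauss(0, 1) for _ in range(n)]   # a sampled basis function on [0, T]
x = [random.gauss(0, 1) for _ in range(n)]     # a sampled received waveform

# Correlator: sum of x[j]*phi[j] over the symbol interval (dt factor omitted,
# since it is common to both receivers).
correlator = sum(a * b for a, b in zip(x, phi))

# Matched filter h(t) = phi(T - t): convolve and sample at t = T (index n-1).
h = phi[::-1]
y_T = sum(x[j] * h[(n - 1) - j] for j in range(n))   # (x * h)[n-1]

assert abs(correlator - y_T) < 1e-9                  # identical outputs
```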