Chapter 7 Channel Capacity and Coding

Contents
7.1 Channel models and channel capacity
  7.1.1 Channel models
    Binary symmetric channel
    Discrete memoryless channels
    Discrete-input, continuous-output channel
    Waveform channels
  7.1.2 Channel capacity

7.1.1 Channel Models: Binary symmetric channel (BSC)

If the channel noise and other disturbances cause statistically independent errors in the transmitted binary sequence with average probability p, then

    P(Y = 0 | X = 1) = P(Y = 1 | X = 0) = p
    P(Y = 1 | X = 1) = P(Y = 0 | X = 0) = 1 - p
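
As an illustration, a minimal simulation sketch of this model (p = 0.1 is an assumed example value, not a value from the text):

    import numpy as np

    rng = np.random.default_rng(0)
    p = 0.1                                     # assumed crossover probability
    x = rng.integers(0, 2, size=100_000)        # transmitted binary sequence
    flip = rng.random(x.size) < p               # statistically independent errors
    y = np.where(flip, 1 - x, x)                # received sequence: each bit flipped with probability p
    print(np.mean(x != y))                      # empirical error rate, close to p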

7.1.1 Channel Models: Discrete memoryless channels (DMC)

The BSC is a special case of a more general discrete-input, discrete-output channel. The output symbols from the channel encoder are q-ary symbols, i.e., X = {x_0, x_1, ..., x_{q-1}}, and the output of the detector consists of Q-ary symbols, where Q ≥ M = q. If the channel and the modulation are memoryless, we have a set of qQ conditional probabilities

    P(Y = y_i | X = x_j) ≡ P(y_i | x_j)

where i = 0, 1, ..., Q-1 and j = 0, 1, ..., q-1. Such a channel is called a discrete memoryless channel (DMC).

Input sequence: u_1, u_2, ..., u_n. Output sequence: v_1, v_2, ..., v_n. For a DMC the conditional probability of the output sequence factors as

    P(Y_1 = v_1, Y_2 = v_2, ..., Y_n = v_n | X_1 = u_1, ..., X_n = u_n) = ∏_{k=1}^{n} P(Y_k = v_k | X_k = u_k)

In general, the conditional probabilities P(y_i | x_j) can be arranged in matrix form P = [p_{ji}], called the probability transition matrix.

[Figure: discrete q-ary input, Q-ary output channel]
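
A minimal sketch of this model (the 2-input, 3-output matrix below is an assumed example, not from the text): the channel is a q x Q row-stochastic matrix, and sequence probabilities factor term by term.

    import numpy as np

    # Hypothetical DMC with q = 2 inputs and Q = 3 detector outputs; row j holds
    # P(y_i | x_j), so every row sums to 1.
    P = np.array([[0.90, 0.05, 0.05],
                  [0.05, 0.05, 0.90]])

    def sequence_prob(P, u, v):
        """P(Y_1=v_1, ..., Y_n=v_n | X_1=u_1, ..., X_n=u_n) = prod_k P(v_k | u_k)."""
        return float(np.prod([P[uk, vk] for uk, vk in zip(u, v)]))

    print(sequence_prob(P, u=[0, 1, 0], v=[0, 2, 1]))   # 0.90 * 0.90 * 0.05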

7.1.1 Channel Models: Discrete-input, continuous-output channel

Discrete input alphabet X = {x_0, x_1, ..., x_{q-1}}; the output of the detector is unquantized (Q = ∞). The most important channel of this type is the additive white Gaussian noise (AWGN) channel, for which

    Y = X + G

where G is a zero-mean Gaussian random variable with variance σ² and X = x_k, k = 0, 1, ..., q-1. Thus

    p(y | X = x_k) = (1 / √(2πσ²)) e^{-(y - x_k)² / (2σ²)}

For a memoryless channel, the conditional PDF of an output sequence factors as

    p(y_1, y_2, ..., y_n | X_1 = u_1, X_2 = u_2, ..., X_n = u_n) = ∏_{i=1}^{n} p(y_i | X_i = u_i)

7.1.1 Channel Models: Waveform channels

Assume that a channel has a given bandwidth W, with ideal frequency response C(f) = 1 within the bandwidth W, and that the signal at its output is corrupted by AWGN:

    y(t) = x(t) + n(t)

Expand y(t), x(t), and n(t) into a complete set of orthonormal functions:

    y(t) = ∑_i y_i f_i(t),   x(t) = ∑_i x_i f_i(t),   n(t) = ∑_i n_i f_i(t)

The expansion coefficients are

    y_i = ∫_0^T y(t) f_i*(t) dt = ∫_0^T [x(t) + n(t)] f_i*(t) dt = x_i + n_i

where

    ∫_0^T f_i*(t) f_j(t) dt = δ_{ij} = 1 (i = j), 0 (i ≠ j)

Since y_i = x_i + n_i, it follows that

    p(y_i | x_i) = (1 / √(2πσ²)) e^{-(y_i - x_i)² / (2σ²)},   i = 1, 2, ...

Since the functions {f_i(t)} are orthonormal, the {n_i} are uncorrelated, with variance σ² = N_0/2 for noise of power spectral density N_0/2; since they are Gaussian, they are also statistically independent. Hence

    p(y_1, y_2, ..., y_N | x_1, x_2, ..., x_N) = ∏_{i=1}^{N} p(y_i | x_i)

Samples of x(t) and y(t) may be taken at the Nyquist rate of 2W samples per second. Thus, in a time interval of length T, there are N = 2WT samples.

7.1.2 Channel Capacity

Consider a DMC with input alphabet X = {x_0, x_1, ..., x_{q-1}}, output alphabet Y = {y_0, y_1, ..., y_{Q-1}}, and the set of transition probabilities P(y_i | x_j). The mutual information provided about the event X = x_j by the occurrence of the event Y = y_i is log[P(y_i | x_j) / P(y_i)], where

    P(y_i) ≡ P(Y = y_i) = ∑_{k=0}^{q-1} P(x_k) P(y_i | x_k)

Hence, the average mutual information provided by the output Y about the input X is

    I(X; Y) = ∑_{j=0}^{q-1} ∑_{i=0}^{Q-1} P(x_j) P(y_i | x_j) log [ P(y_i | x_j) / P(y_i) ]
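
The double sum above translates directly into code; a minimal sketch (the 2x2 channel matrix and p = 0.1 are assumed example values, not from the text):

    import numpy as np

    def mutual_information(p_x, P):
        """p_x[j] = P(x_j); P[j, i] = P(y_i | x_j). Returns I(X;Y) in bits."""
        p_y = p_x @ P                              # P(y_i) = sum_k P(x_k) P(y_i | x_k)
        I = 0.0
        for j in range(P.shape[0]):
            for i in range(P.shape[1]):
                if p_x[j] > 0 and P[j, i] > 0:     # 0 log 0 terms contribute nothing
                    I += p_x[j] * P[j, i] * np.log2(P[j, i] / p_y[i])
        return I

    # BSC with p = 0.1 and equiprobable inputs: I(X;Y) = 1 - H(0.1) ≈ 0.531 bit
    P_bsc = np.array([[0.9, 0.1],
                      [0.1, 0.9]])
    print(mutual_information(np.array([0.5, 0.5]), P_bsc))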

The value of I(X; Y) maximized over the set of input symbol probabilities P(x_j) is a quantity that depends only on the characteristics of the DMC through the conditional probabilities P(y_i | x_j). This quantity is called the capacity of the channel and is denoted by C:

    C = max_{P(x_j)} I(X; Y) = max_{P(x_j)} ∑_{j=0}^{q-1} ∑_{i=0}^{Q-1} P(x_j) P(y_i | x_j) log [ P(y_i | x_j) / P(y_i) ]

The maximization of I(X; Y) is performed under the constraints that P(x_j) ≥ 0 and ∑_{j=0}^{q-1} P(x_j) = 1.
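
This maximization generally has no closed form; one standard numerical route (not part of this chapter) is the Blahut-Arimoto iteration. A minimal sketch, using an assumed asymmetric channel matrix to show that the maximizing input distribution need not be uniform:

    import numpy as np

    def blahut_arimoto(P, n_iter=500):
        """P[j, i] = P(y_i | x_j). Returns (capacity in bits/channel use, optimal P(x_j))."""
        q = P.shape[0]
        r = np.full(q, 1.0 / q)                       # start from equiprobable inputs
        for _ in range(n_iter):
            p_y = r @ P                               # P(y_i) = sum_k P(x_k) P(y_i | x_k)
            with np.errstate(divide="ignore", invalid="ignore"):
                D = np.where(P > 0, P * np.log(P / p_y), 0.0).sum(axis=1)
            r = r * np.exp(D)                         # multiplicative update
            r /= r.sum()                              # renormalize the input distribution
        p_y = r @ P
        with np.errstate(divide="ignore", invalid="ignore"):
            C = (r[:, None] * np.where(P > 0, P * np.log2(P / p_y), 0.0)).sum()
        return C, r

    # Hypothetical asymmetric channel (a Z-channel): the capacity-achieving inputs
    # are not equiprobable, which motivates the conditions given two slides below.
    P_z = np.array([[1.00, 0.00],
                    [0.15, 0.85]])
    print(blahut_arimoto(P_z))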

Example 7.1-1. BSC with transition probabilities P(0|1) = P(1|0) = p. The average mutual information is maximized when the input probabilities are P(0) = P(1) = 1/2. The capacity of the BSC is

    C = 1 + p log_2 p + (1 - p) log_2 (1 - p) = 1 - H(p)

where H(p) is the binary entropy function.
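
A quick numeric check of this formula (a small helper, assuming 0 < p < 1):

    import numpy as np

    def bsc_capacity(p):
        """C = 1 - H(p), in bits per channel use, for 0 < p < 1."""
        H = -p * np.log2(p) - (1 - p) * np.log2(1 - p)   # binary entropy function
        return 1.0 - H

    for p in (0.01, 0.1, 0.5):
        print(p, bsc_capacity(p))      # capacity approaches 1 as p -> 0 and is 0 at p = 0.5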

Consider the discrete-time AWGN memoryless channel described by

    p(y | X = x_k) = (1 / √(2πσ²)) e^{-(y - x_k)² / (2σ²)}

The capacity of this channel in bits per channel use is the maximum average mutual information between the discrete input X = {x_0, x_1, ..., x_{q-1}} and the continuous output Y ∈ (-∞, ∞):

    C = max_{P(x_k)} ∑_{k=0}^{q-1} ∫_{-∞}^{∞} p(y | x_k) P(x_k) log_2 [ p(y | x_k) / p(y) ] dy

where

    p(y) = ∑_{k=0}^{q-1} p(y | x_k) P(x_k)

Example 7.1-2. Consider a binary-input AWGN memoryless channel with possible inputs X = A and X = -A. The average mutual information I(X; Y) is maximized when the input probabilities are P(X = A) = P(X = -A) = 1/2, giving

    C = (1/2) ∫_{-∞}^{∞} p(y | A) log_2 [ p(y | A) / p(y) ] dy + (1/2) ∫_{-∞}^{∞} p(y | -A) log_2 [ p(y | -A) / p(y) ] dy
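
A hedged numerical sketch of this example, evaluating the two integrals by quadrature (A and σ are assumed example values; the finite integration limits approximate (-∞, ∞)):

    import numpy as np
    from scipy.integrate import quad

    def binary_awgn_capacity(A, sigma):
        """Capacity (bits/use) of the binary-input AWGN channel with equiprobable inputs +/-A."""
        def pdf(y, mean):
            return np.exp(-(y - mean) ** 2 / (2 * sigma ** 2)) / np.sqrt(2 * np.pi * sigma ** 2)

        def integrand(y, mean):
            p_cond = pdf(y, mean)
            p_y = 0.5 * pdf(y, A) + 0.5 * pdf(y, -A)
            return p_cond * np.log2(p_cond / p_y)

        lim = abs(A) + 10 * sigma                       # truncate the infinite range
        I_plus, _ = quad(integrand, -lim, lim, args=(A,))
        I_minus, _ = quad(integrand, -lim, lim, args=(-A,))
        return 0.5 * I_plus + 0.5 * I_minus

    print(binary_awgn_capacity(A=1.0, sigma=1.0))       # about 0.49 bit/use at A**2/sigma**2 = 1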

The channel capacity is not always obtained by assuming that the input symbols are equally probable; nothing can be said in general about the input probability assignment that maximizes the average mutual information. It can be shown that the necessary and sufficient conditions for the set of input probabilities {P(x_j)} to maximize I(X; Y) and thus achieve capacity on a DMC are

    I(x_j; Y) = C   for all j with P(x_j) > 0
    I(x_j; Y) ≤ C   for all j with P(x_j) = 0

where C is the capacity of the channel and

    I(x_j; Y) = ∑_{i=0}^{Q-1} P(y_i | x_j) log [ P(y_i | x_j) / P(y_i) ]
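
These conditions are easy to check numerically; a minimal sketch (the BSC matrix and its equiprobable optimum are taken from Example 7.1-1):

    import numpy as np

    def per_input_information(p_x, P):
        """I(x_j; Y) = sum_i P(y_i|x_j) log2[ P(y_i|x_j) / P(y_i) ] for each input x_j."""
        p_y = p_x @ P
        I_j = np.zeros(P.shape[0])
        for j in range(P.shape[0]):
            for i in range(P.shape[1]):
                if P[j, i] > 0:
                    I_j[j] += P[j, i] * np.log2(P[j, i] / p_y[i])
        return I_j

    # For the BSC of Example 7.1-1 with its capacity-achieving input P(0) = P(1) = 1/2,
    # both values equal C = 1 - H(0.1) ≈ 0.531.
    P_bsc = np.array([[0.9, 0.1], [0.1, 0.9]])
    print(per_input_information(np.array([0.5, 0.5]), P_bsc))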

Consider a band-limited waveform channel with AWGN. The capacity of the channel per unit time was defined by Shannon (1948) as

    C = lim_{T→∞} max_{p(x)} (1/T) I(X; Y)

Alternatively, we may use the samples or coefficients {y_i}, {x_i}, and {n_i} in the series expansions of y(t), x(t), and n(t) to determine the average mutual information between x_N = [x_1 x_2 ... x_N] and y_N = [y_1 y_2 ... y_N], where N = 2WT and y_i = x_i + n_i:

    I(X_N; Y_N) = ∫...∫ p(y_N | x_N) p(x_N) log [ p(y_N | x_N) / p(y_N) ] dx_N dy_N
                = ∑_{i=1}^{N} ∫∫ p(y_i | x_i) p(x_i) log [ p(y_i | x_i) / p(y_i) ] dy_i dx_i

where

    p(y_i | x_i) = (1 / √(πN_0)) e^{-(y_i - x_i)² / N_0}

The maximum of I(X; Y) over the input PDFs p(x_i) is obtained when the {x_i} are statistically independent zero-mean Gaussian random variables, i.e.,

    p(x_i) = (1 / √(2πσ_x²)) e^{-x_i² / (2σ_x²)}

which gives

    max_{p(x)} I(X; Y) = ∑_{i=1}^{N} (1/2) log_2 (1 + 2σ_x²/N_0) = (N/2) log_2 (1 + 2σ_x²/N_0) = WT log_2 (1 + 2σ_x²/N_0)

If we place a constraint on the average power in x(t), i.e.,

    P_av = (1/T) E[ ∫_0^T x²(t) dt ] = (1/T) ∑_{i=1}^{N} E(x_i²) = N σ_x² / T

then

    σ_x² = T P_av / N = P_av / (2W)

and

    max_{p(x)} I(X; Y) = WT log_2 (1 + P_av / (W N_0))

Dividing both sides by T, we obtain the capacity of the band-limited AWGN waveform channel with a band-limited and average-power-limited input:

    C = W log_2 (1 + P_av / (W N_0))
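
A small helper for the last formula (the bandwidth and P_av/N_0 values below are assumed examples; they only illustrate that C grows with W, but with diminishing returns at fixed transmitted power):

    import numpy as np

    def awgn_capacity(W, P_av, N0):
        """C = W log2(1 + P_av / (W N0)), in bits/s, for an ideal band-limited AWGN channel."""
        return W * np.log2(1.0 + P_av / (W * N0))

    # Fixed P_av / N0 = 1e4: doubling W raises C, but less than proportionally.
    for W in (1e3, 2e3, 4e3):
        print(W, awgn_capacity(W, P_av=1.0, N0=1e-4))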

[Figure: normalized channel capacity as a function of SNR for the band-limited AWGN channel]
[Figure: channel capacity as a function of bandwidth for a fixed transmitted average power]

Note that as W approaches infinity, the capacity of the channel approaches the asymptotic value

    C_∞ = (P_av / N_0) log_2 e = P_av / (N_0 ln 2)   bits/s

Since P_av represents the average transmitted power and C is the rate in bits/s, it follows that P_av = C ε_b, where ε_b is the energy per bit. Hence

    C/W = log_2 (1 + (C/W)(ε_b/N_0))

and consequently

    ε_b/N_0 = (2^{C/W} - 1) / (C/W)

When C/W = 1, ε_b/N_0 = 1 (0 dB). When C/W → ∞,

    ε_b/N_0 ≈ (W/C) 2^{C/W} = (W/C) exp[ (C/W) ln 2 ]

so the required ε_b/N_0 grows exponentially with C/W. When C/W → 0,

    ε_b/N_0 → lim_{C/W→0} (2^{C/W} - 1) / (C/W) = ln 2 ≈ 0.693 (-1.6 dB)
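
A quick numeric check of these limits (the spectral-efficiency values are arbitrary test points):

    import numpy as np

    def ebn0_required(r):
        """eps_b/N0 needed at spectral efficiency r = C/W (bits/s/Hz): (2**r - 1) / r."""
        return (2.0 ** r - 1.0) / r

    for r in (1e-3, 1.0, 10.0):
        v = ebn0_required(r)
        print(r, v, 10 * np.log10(v), "dB")
    # r -> 0 gives ln 2 ≈ 0.693 (-1.6 dB); r = 1 gives 1 (0 dB); large r needs exponentially more energy per bit.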

The channel capacity formulas serve as upper limits on the transmission rate for reliable communication over a noisy channel.

Noisy channel coding theorem (Shannon, 1948): There exist channel codes (and decoders) that make it possible to achieve reliable communication, with as small an error probability as desired, if the transmission rate R < C, where C is the channel capacity. If R > C, it is not possible to make the probability of error tend toward zero with any code.