Chapter 7 Channel Capacity and Coding

Wireless Information Transmission System Lab. Chapter 7: Channel Capacity and Coding. Institute of Communications Engineering, National Sun Yat-sen University.

Contents
7.1 Channel models and channel capacity
7.1.1 Channel models: binary symmetric channel, discrete memoryless channels, discrete-input continuous-output channel, waveform channels
7.1.2 Channel capacity

Introduction
In Chapter 5, we demonstrated that orthogonal signaling waveforms allow us to make the probability of error arbitrarily small by letting the number of waveforms $M \to \infty$, provided that the SNR per bit $\gamma_b \geq -1.6$ dB. Thus, we can operate at the capacity of the additive white Gaussian noise channel in the limit as the bandwidth expansion factor $B_e = W/R \to \infty$. This is a heavy price to pay, because $B_e$ grows exponentially with the block length $k$. Such inefficient use of channel bandwidth is highly undesirable.

Introduction
Coded waveforms offer the potential for greater bandwidth efficiency than orthogonal $M$-ary waveforms, since their bandwidth expansion factor grows only linearly with $k$. We shall observe that, in general, coded waveforms offer performance advantages not only in power-limited applications where $R/W < 1$, but also in bandwidth-limited systems where $R/W > 1$.

7.1.1 Channel Models
Binary symmetric channel (BSC): If the channel noise and other disturbances cause statistically independent errors in the transmitted binary sequence with average probability $p$, then
$P(Y=0 \mid X=1) = P(Y=1 \mid X=0) = p$
$P(Y=1 \mid X=1) = P(Y=0 \mid X=0) = 1-p$
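As a quick illustration (not from the original slides), the following Python sketch passes a random binary sequence through a BSC and checks that the empirical error rate matches the assumed crossover probability $p = 0.1$:

    import numpy as np

    def bsc(bits, p, rng):
        """Pass a binary sequence through a BSC with crossover probability p."""
        flips = rng.random(bits.shape) < p        # statistically independent errors
        return bits ^ flips.astype(np.uint8)      # flip the bits hit by an error

    rng = np.random.default_rng(0)
    x = rng.integers(0, 2, size=1_000_000, dtype=np.uint8)
    y = bsc(x, p=0.1, rng=rng)
    print("empirical crossover probability:", np.mean(x != y))   # close to 0.1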

7.1.1 Channel Models
Discrete memoryless channels (DMC): The BSC is a special case of a more general discrete-input, discrete-output channel. The output symbols from the channel encoder are $q$-ary symbols, i.e., $X = \{x_0, x_1, \ldots, x_{q-1}\}$, and the output of the detector consists of $Q$-ary symbols, where $Q \geq q$. If the channel and modulation are memoryless, we have a set of $qQ$ conditional probabilities
$P(Y = y_i \mid X = x_j) \equiv P(y_i \mid x_j)$
where $i = 0, 1, \ldots, Q-1$ and $j = 0, 1, \ldots, q-1$. Such a channel is called a discrete memoryless channel (DMC).

7.1.1 Channel Models
Discrete memoryless channels (DMC): Input $u_1, u_2, \ldots, u_n$; output $v_1, v_2, \ldots, v_n$. The conditional probability is given by
$P(Y_1 = v_1, Y_2 = v_2, \ldots, Y_n = v_n \mid X_1 = u_1, \ldots, X_n = u_n) = \prod_{k=1}^{n} P(Y_k = v_k \mid X_k = u_k)$
In general, the conditional probabilities $P(y_j \mid x_i)$ can be arranged in the matrix form $P = [p_{ij}]$, called the probability transition matrix.
[Figure: discrete $q$-ary input, $Q$-ary output channel]
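For illustration (a hypothetical channel, not from the slides), the conditional probability of an output block from a memoryless channel is just a product of entries read from the transition matrix:

    import numpy as np

    # P[i, j] = P(Y = y_j | X = x_i); each row sums to 1.
    # Hypothetical binary-input, ternary-output (soft-decision) channel.
    P = np.array([[0.80, 0.15, 0.05],
                  [0.05, 0.15, 0.80]])

    def block_prob(P, u, v):
        """P(v | u) for a memoryless channel: product of per-symbol transition probabilities."""
        return float(np.prod(P[np.asarray(u), np.asarray(v)]))

    print(block_prob(P, u=[0, 0, 1], v=[0, 1, 2]))   # 0.8 * 0.15 * 0.8 = 0.096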

7.1.1 Channel Models
Discrete-input, continuous-output channel: Discrete input alphabet $X = \{x_0, x_1, \ldots, x_{q-1}\}$; the output of the detector is unquantized ($Q = \infty$). The most important channel of this type is the additive white Gaussian noise (AWGN) channel, for which
$Y = X + G$
where $G$ is a zero-mean Gaussian random variable with variance $\sigma^2$ and $X = x_k$, $k = 0, 1, \ldots, q-1$.
$p(y \mid X = x_k) = \frac{1}{\sqrt{2\pi\sigma^2}}\, e^{-(y-x_k)^2/2\sigma^2}$
For a memoryless channel,
$p(y_1, y_2, \ldots, y_n \mid X_1 = u_1, X_2 = u_2, \ldots, X_n = u_n) = \prod_{i=1}^{n} p(y_i \mid X_i = u_i)$

7.1.1 Channel Models
Waveform channels: Assume that a channel has a given bandwidth $W$, with ideal frequency response $C(f) = 1$ within the bandwidth $W$, and that the signal at its output is corrupted by AWGN: $y(t) = x(t) + n(t)$. Expand $y(t)$, $x(t)$, and $n(t)$ into a complete set of orthonormal functions:
$y(t) = \sum_i y_i f_i(t), \quad x(t) = \sum_i x_i f_i(t), \quad n(t) = \sum_i n_i f_i(t)$
$y_i = \int_0^T y(t) f_i^*(t)\, dt = \int_0^T [x(t) + n(t)] f_i^*(t)\, dt = x_i + n_i$
$\int_0^T f_i(t) f_j^*(t)\, dt = \delta_{ij} = \begin{cases} 1 & (i = j) \\ 0 & (i \neq j) \end{cases}$

7.1.1 Channel Models
Waveform channels: Since $y_i = x_i + n_i$, it follows that
$p(y_i \mid x_i) = \frac{1}{\sqrt{2\pi\sigma^2}}\, e^{-(y_i - x_i)^2/2\sigma^2}, \quad i = 1, 2, \ldots$
Since the functions $\{f_i(t)\}$ are orthonormal, it follows that the $\{n_i\}$ are uncorrelated. Since they are Gaussian, they are also statistically independent:
$p(y_1, y_2, \ldots, y_N \mid x_1, x_2, \ldots, x_N) = \prod_{i=1}^{N} p(y_i \mid x_i)$
Samples of $x(t)$ and $y(t)$ may be taken at the Nyquist rate of $2W$ samples per second. Thus, in a time interval of length $T$, there are $N = 2WT$ samples.

7.1.2 Channel Capacity
Consider a DMC having an input alphabet $X = \{x_0, x_1, \ldots, x_{q-1}\}$, an output alphabet $Y = \{y_0, y_1, \ldots, y_{Q-1}\}$, and the set of transition probabilities $P(y_i \mid x_j)$. The mutual information provided about the event $X = x_j$ by the occurrence of the event $Y = y_i$ is $\log [P(y_i \mid x_j)/P(y_i)]$, where
$P(y_i) \equiv P(Y = y_i) = \sum_{k=0}^{q-1} P(x_k) P(y_i \mid x_k)$
Hence, the average mutual information provided by the output $Y$ about the input $X$ is
$I(X;Y) = \sum_{j=0}^{q-1} \sum_{i=0}^{Q-1} P(x_j) P(y_i \mid x_j) \log \frac{P(y_i \mid x_j)}{P(y_i)}$
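The double sum above is straightforward to evaluate numerically. A minimal Python sketch (assuming base-2 logarithms and the convention $0 \log 0 = 0$), checked here on a BSC with $p = 0.1$ and equiprobable inputs:

    import numpy as np

    def mutual_information(px, P):
        """I(X;Y) in bits, for px[j] = P(x_j) and P[j, i] = P(y_i | x_j)."""
        py = px @ P                                  # P(y_i) = sum_j P(x_j) P(y_i | x_j)
        joint = px[:, None] * P                      # P(x_j, y_i)
        with np.errstate(divide="ignore", invalid="ignore"):
            terms = joint * np.log2(P / py)
        return float(np.sum(np.where(joint > 0, terms, 0.0)))   # 0 log 0 := 0

    p = 0.1
    P = np.array([[1 - p, p],
                  [p, 1 - p]])
    print(mutual_information(np.array([0.5, 0.5]), P))   # about 0.531 bits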

7.1.2 Channel Capacity
The value of $I(X;Y)$ maximized over the set of input symbol probabilities $P(x_j)$ is a quantity that depends only on the characteristics of the DMC through the conditional probabilities $P(y_i \mid x_j)$. This quantity is called the capacity of the channel and is denoted by $C$:
$C = \max_{P(x_j)} I(X;Y) = \max_{P(x_j)} \sum_{j=0}^{q-1} \sum_{i=0}^{Q-1} P(x_j) P(y_i \mid x_j) \log \frac{P(y_i \mid x_j)}{P(y_i)}$
The maximization of $I(X;Y)$ is performed under the constraints that $P(x_j) \geq 0$ and $\sum_{j=0}^{q-1} P(x_j) = 1$.

7.1.2 Channel Capacity
Example 7.1-1: BSC with transition probabilities $P(0 \mid 1) = P(1 \mid 0) = p$. The average mutual information is maximized when the input probabilities are $P(0) = P(1) = 1/2$. The capacity of the BSC is
$C = p \log_2 2p + (1-p) \log_2 2(1-p) = 1 - H(p)$
where $H(p)$ is the binary entropy function.
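A quick numerical check of this formula (a sketch, base-2 logarithms assumed):

    import numpy as np

    def binary_entropy(p):
        """H(p) in bits, with the convention 0 log 0 = 0."""
        if p in (0.0, 1.0):
            return 0.0
        return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

    def bsc_capacity(p):
        return 1.0 - binary_entropy(p)

    for p in (0.0, 0.1, 0.5):
        print(p, bsc_capacity(p))   # 1.0, ~0.531, 0.0 bits per channel use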

7.1.2 Channel Capacity
Consider the discrete-time AWGN memoryless channel described by
$p(y \mid X = x_k) = \frac{1}{\sqrt{2\pi\sigma^2}}\, e^{-(y-x_k)^2/2\sigma^2}$
The capacity of this channel in bits per channel use is the maximum average mutual information between the discrete input $X = \{x_0, x_1, \ldots, x_{q-1}\}$ and the output $Y \in (-\infty, \infty)$:
$C = \max_{P(x_k)} \sum_{k=0}^{q-1} \int_{-\infty}^{\infty} p(y \mid x_k) P(x_k) \log \frac{p(y \mid x_k)}{p(y)}\, dy$
where
$p(y) = \sum_{k=0}^{q-1} p(y \mid x_k) P(x_k)$

7.1.2 Channel Capacity
Example 7.1-2: Consider a binary-input AWGN memoryless channel with possible inputs $X = A$ and $X = -A$. The average mutual information $I(X;Y)$ is maximized when the input probabilities are $P(X = A) = P(X = -A) = 1/2$:
$C = \frac{1}{2} \int_{-\infty}^{\infty} p(y \mid A) \log_2 \frac{p(y \mid A)}{p(y)}\, dy + \frac{1}{2} \int_{-\infty}^{\infty} p(y \mid -A) \log_2 \frac{p(y \mid -A)}{p(y)}\, dy$
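This integral has no simple closed form, but it is easy to evaluate numerically. A sketch using scipy (the values $A = 1$ and $\sigma = 1$ are arbitrary assumptions for the example):

    import numpy as np
    from scipy.integrate import quad

    def bi_awgn_capacity(A, sigma):
        """C in bits/channel use for equiprobable inputs +/-A in Gaussian noise N(0, sigma^2)."""
        g = lambda y, m: np.exp(-(y - m) ** 2 / (2 * sigma ** 2)) / np.sqrt(2 * np.pi * sigma ** 2)

        def integrand(y):
            pa, pm = g(y, A), g(y, -A)
            py = 0.5 * (pa + pm)               # p(y) for equiprobable inputs
            return 0.5 * (pa * np.log2(pa / py) + pm * np.log2(pm / py))

        lim = A + 10 * sigma                   # tails beyond this contribute negligibly
        return quad(integrand, -lim, lim)[0]

    print(bi_awgn_capacity(A=1.0, sigma=1.0))  # about 0.486 bits per channel use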

7.1.2 Channel Capacity
The channel capacity is not always obtained by assuming that the input symbols are equally probable; nothing can be said in general about the input probability assignment that maximizes the average mutual information. It can be shown that the necessary and sufficient conditions for the set of input probabilities $\{P(x_j)\}$ to maximize $I(X;Y)$ and to achieve capacity on a DMC are
$I(x_j; Y) = C$ for all $j$ with $P(x_j) > 0$
$I(x_j; Y) \leq C$ for all $j$ with $P(x_j) = 0$
where $C$ is the capacity of the channel and
$I(x_j; Y) = \sum_{i=0}^{Q-1} P(y_i \mid x_j) \log \frac{P(y_i \mid x_j)}{P(y_i)}$
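These conditions are what the Blahut-Arimoto algorithm exploits to compute the capacity of an arbitrary DMC iteratively. A compact sketch (not part of the slides; the iteration count is an arbitrary choice), checked on a binary erasure channel whose capacity should be $1 - \epsilon$:

    import numpy as np

    def blahut_arimoto(P, n_iter=2000):
        """Capacity (bits/use) and optimizing input pmf for transitions P[j, i] = P(y_i | x_j)."""
        q = P.shape[0]
        px = np.full(q, 1.0 / q)                 # start from the uniform input distribution

        def per_input_info(px):
            py = px @ P                          # output pmf induced by the current px
            # I(x_j; Y) = sum_i P(y_i | x_j) log2[ P(y_i | x_j) / P(y_i) ]
            return np.array([np.sum(P[j][P[j] > 0] *
                                    np.log2(P[j][P[j] > 0] / py[P[j] > 0]))
                             for j in range(q)])

        for _ in range(n_iter):
            D = per_input_info(px)
            px = px * 2.0 ** D                   # re-weight inputs toward larger I(x_j; Y)
            px /= px.sum()
        return float(px @ per_input_info(px)), px

    eps = 0.2                                    # binary erasure channel, erasure prob. 0.2
    P = np.array([[1 - eps, eps, 0.0],
                  [0.0, eps, 1 - eps]])
    print(blahut_arimoto(P))                     # capacity ~0.8 bit/use, uniform input pmf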

7.1.2 Channel Capacity
Consider a band-limited waveform channel with AWGN. The capacity of the channel per unit time has been defined by Shannon (1948) as
$C = \lim_{T \to \infty} \max_{p(x)} \frac{I(X;Y)}{T}$
Alternatively, we may use the samples or the coefficients $\{y_i\}$, $\{x_i\}$, and $\{n_i\}$ in the series expansions of $y(t)$, $x(t)$, and $n(t)$ to determine the average mutual information between $\mathbf{x}_N = [x_1\, x_2\, \cdots\, x_N]$ and $\mathbf{y}_N = [y_1\, y_2\, \cdots\, y_N]$, where $N = 2WT$ and $y_i = x_i + n_i$:
$I(\mathbf{X}_N; \mathbf{Y}_N) = \int \cdots \int p(\mathbf{y}_N \mid \mathbf{x}_N)\, p(\mathbf{x}_N) \log \frac{p(\mathbf{y}_N \mid \mathbf{x}_N)}{p(\mathbf{y}_N)}\, d\mathbf{x}_N\, d\mathbf{y}_N = \sum_{i=1}^{N} \int\!\!\int p(y_i \mid x_i)\, p(x_i) \log \frac{p(y_i \mid x_i)}{p(y_i)}\, dy_i\, dx_i$   (7.1-4)

7.1.2 Channel Capacity
where
$p(y_i \mid x_i) = \frac{1}{\sqrt{\pi N_0}}\, e^{-(y_i - x_i)^2/N_0}$
The maximum of $I(X;Y)$ over the input PDFs $p(x_i)$ is obtained when the $\{x_i\}$ are statistically independent zero-mean Gaussian random variables, i.e.,
$p(x_i) = \frac{1}{\sqrt{2\pi\sigma_x^2}}\, e^{-x_i^2/2\sigma_x^2}$
From (7.1-4),
$\max_{p(x)} I(\mathbf{X}_N; \mathbf{Y}_N) = \sum_{i=1}^{N} \frac{1}{2} \log \left(1 + \frac{2\sigma_x^2}{N_0}\right) = \frac{N}{2} \log \left(1 + \frac{2\sigma_x^2}{N_0}\right) = WT \log \left(1 + \frac{2\sigma_x^2}{N_0}\right)$
since $N = 2WT$.

7.1.2 Channel Capacity
If we put a constraint on the average power in $x(t)$, i.e.,
$P_{av} = \frac{1}{T} E\left[\int_0^T x^2(t)\, dt\right] = \frac{1}{T} \sum_{i=1}^{N} E(x_i^2) = \frac{N \sigma_x^2}{T}$
then
$\sigma_x^2 = \frac{T P_{av}}{N} = \frac{P_{av}}{2W}$
$\max_{p(x)} I(\mathbf{X}_N; \mathbf{Y}_N) = WT \log \left(1 + \frac{P_{av}}{W N_0}\right)$
Dividing both sides by $T$, we obtain the capacity of the band-limited AWGN waveform channel with a band-limited and average-power-limited input:
$C = W \log_2 \left(1 + \frac{P_{av}}{W N_0}\right)$ bits/s
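A small numerical illustration of this formula and of its behaviour as $W$ grows (a sketch; the power and noise values are arbitrary assumptions):

    import numpy as np

    def awgn_capacity(W, P_av, N0):
        """C = W log2(1 + P_av / (W N0)) in bits/s for the band-limited AWGN channel."""
        return W * np.log2(1.0 + P_av / (N0 * W))

    P_av, N0 = 1.0, 1e-2                       # watts and watts/Hz (hypothetical values)
    for W in (10.0, 100.0, 1e3, 1e4):          # bandwidth in Hz
        print(W, awgn_capacity(W, P_av, N0))
    print("W -> infinity asymptote:", P_av / (N0 * np.log(2)))   # P_av/(N0 ln 2) bits/s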

7.1.2 Channel Capacity
[Figures: normalized channel capacity as a function of SNR for the band-limited AWGN channel; channel capacity as a function of bandwidth with a fixed transmitted average power.]

7.1.2 Channel Capacity
Note that as $W$ approaches infinity, the capacity of the channel approaches the asymptotic value
$C_\infty = \frac{P_{av}}{N_0} \log_2 e = \frac{P_{av}}{N_0 \ln 2}$ bits/s
Since $P_{av}$ represents the average transmitted power and $C$ is the rate in bits/s, it follows that
$P_{av} = C \varepsilon_b$
Hence, we have
$\frac{C}{W} = \log_2 \left(1 + \frac{C}{W} \frac{\varepsilon_b}{N_0}\right)$
Consequently,
$\frac{\varepsilon_b}{N_0} = \frac{2^{C/W} - 1}{C/W}$

7.1.2 Channel Capacity
When $C/W = 1$, $\varepsilon_b/N_0 = 1$ (0 dB).
When $C/W \to \infty$,
$\frac{\varepsilon_b}{N_0} = \frac{2^{C/W} - 1}{C/W} \approx \frac{W}{C} \exp\left(\frac{C}{W} \ln 2\right)$
so $\varepsilon_b/N_0$ increases exponentially as $C/W \to \infty$.
When $C/W \to 0$,
$\lim_{C/W \to 0} \frac{\varepsilon_b}{N_0} = \lim_{C/W \to 0} \frac{2^{C/W} - 1}{C/W} = \ln 2$
i.e., $-1.6$ dB.
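A short check of the relation between spectral efficiency $C/W$ and the required $\varepsilon_b/N_0$, including the $\ln 2 \approx -1.59$ dB limit (a sketch):

    import numpy as np

    def ebn0_required(r):
        """Minimum eps_b/N0 (linear scale) for reliable communication at C/W = r bit/s/Hz."""
        return (2.0 ** r - 1.0) / r

    for r in (0.001, 0.5, 1.0, 2.0, 6.0):
        ebn0 = ebn0_required(r)
        print(f"C/W = {r:5.3f}   eps_b/N0 = {ebn0:8.4f}   ({10 * np.log10(ebn0):6.2f} dB)")
    print("limit as C/W -> 0:", np.log(2), "=", 10 * np.log10(np.log(2)), "dB")   # about -1.59 dB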

7.1.2 Channel Capacity
The channel capacity formulas serve as upper limits on the transmission rate for reliable communication over a noisy channel.
Noisy channel coding theorem (Shannon, 1948): There exist channel codes (and decoders) that make it possible to achieve reliable communication, with as small an error probability as desired, if the transmission rate $R < C$, where $C$ is the channel capacity. If $R > C$, it is not possible to make the probability of error tend toward zero with any code.