EE 4TM4: Digital Communications II Scalar Gaussian Channel


I. DIFFERENTIAL ENTROPY

Let X be a continuous random variable with probability density function (pdf) f(x) (in short, X ~ f(x)). The differential entropy of X is defined as

h(X) = -∫ f(x) log f(x) dx.

For example, if X ~ Unif[a, b], then h(X) = log(b - a). As another example, if X ~ N(µ, σ²), then h(X) = (1/2) log(2πeσ²).

The differential entropy h(X) is a concave function of f(x). However, unlike discrete entropy it is not always nonnegative, and hence should not be interpreted directly as a measure of information. The differential entropy is invariant under translation but not under scaling.

Translation: for any constant a, h(X + a) = h(X).
Scaling: for any nonzero constant a, h(aX) = h(X) + log|a|.

The maximum differential entropy of a continuous random variable X ~ f(x) under the average power constraint E(X²) ≤ P is

max_{f(x): E(X²) ≤ P} h(X) = (1/2) log(2πeP),

and is attained when X is Gaussian with zero mean and power P, i.e., X ~ N(0, P). Thus, for any X ~ f(x),

h(X) = h(X - E(X)) ≤ (1/2) log(2πe Var(X)).

February, 2016
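These closed-form values and the maximum-entropy bound are easy to verify numerically. The following Monte Carlo sketch (NumPy assumed; not part of the original notes) estimates h(X) = -E[log f(X)] by averaging -log f over samples, and checks the uniform example against the Gaussian bound:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Monte Carlo estimate of h(X) = -E[log f(X)] (natural log, so nats).
# Gaussian N(mu, sigma^2): closed form is 0.5*log(2*pi*e*sigma^2).
mu, sigma = 0.0, 2.0
x = rng.normal(mu, sigma, n)
log_f = -0.5 * np.log(2 * np.pi * sigma**2) - (x - mu) ** 2 / (2 * sigma**2)
h_gauss = -np.mean(log_f)
h_gauss_exact = 0.5 * np.log(2 * np.pi * np.e * sigma**2)

# Uniform on [a, b]: f = 1/(b-a), so h(X) = log(b-a) exactly.
a, b = 1.0, 3.0
h_unif = np.log(b - a)

# Max-entropy property: h(X) <= 0.5*log(2*pi*e*Var(X)).
# For Unif[a, b], Var(X) = (b-a)^2 / 12.
bound = 0.5 * np.log(2 * np.pi * np.e * (b - a) ** 2 / 12)

print(h_gauss, h_gauss_exact)  # the two agree to roughly 3 decimals
print(h_unif, bound)           # log(b-a) lies strictly below the bound
```

The gap between h_unif and the bound is exactly the divergence D(Unif ‖ Gaussian of equal variance), which the proof below makes explicit.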

Proof: Let g be any density with variance P and let φ be a zero-mean Gaussian density with variance P. We have

0 ≤ D(g‖φ) = ∫ g log(g/φ) = -h(g) - ∫ g log φ = -h(g) - ∫ φ log φ = -h(g) + h(φ),

where the substitution ∫ g log φ = ∫ φ log φ follows from the fact that g and φ yield the same variance (note that log φ(x) = ax² + b).

Let X ~ F(x) be an arbitrary random variable and let Y | {X = x} ~ f(y|x) be continuous for every x. The conditional differential entropy h(Y|X) of Y given X is defined as

h(Y|X) = -E_{X,Y}[log f(Y|X)].

Note that I(X;Y) = h(Y) - h(Y|X). Since I(X;Y) ≥ 0, it follows that h(Y|X) ≤ h(Y).

II. ADDITIVE WHITE GAUSSIAN NOISE CHANNEL

Channel model: Y(t) = X(t) + N(t), where {N(t)}_t is an i.i.d. Gaussian process with N(t) ~ N(0, σ_N²) for all t.

Theorem 1: The capacity of the additive white Gaussian noise channel with average power constraint P is given by

C_AWGN = (1/2) log((P + σ_N²)/σ_N²) = (1/2) log(1 + SNR).

The achievability part follows by discretizing the channel input and output and by a weak convergence argument.

Proof of the converse: First note that the proof of the converse for the DMC with input cost constraint applies to arbitrary (not necessarily discrete) memoryless channels. Therefore, we have

C_AWGN ≤ sup_{F(x): E[X²] ≤ P} I(X;Y).

Now for any X ~ F(x) with E[X²] ≤ P,

I(X;Y) = h(Y) - h(Y|X) = h(Y) - h(N|X) = h(Y) - h(N)
       ≤ (1/2) log(2πe(P + σ_N²)) - (1/2) log(2πeσ_N²) = (1/2) log((P + σ_N²)/σ_N²),
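The capacity expression and the decomposition C = h(Y) - h(N) can be checked directly. A minimal sketch (the helper name awgn_capacity is introduced here for illustration; NumPy assumed, log base 2 so the units are bits per channel use):

```python
import numpy as np

def awgn_capacity(P, sigma2):
    """C = 0.5 * log2(1 + P / sigma2) bits per channel use."""
    return 0.5 * np.log2(1 + P / sigma2)

# With a Gaussian input X ~ N(0, P), the output Y = X + N is N(0, P + sigma2),
# so h(Y) = 0.5*log2(2*pi*e*(P + sigma2)) and h(N) = 0.5*log2(2*pi*e*sigma2).
P, sigma2 = 4.0, 1.0
hY = 0.5 * np.log2(2 * np.pi * np.e * (P + sigma2))
hN = 0.5 * np.log2(2 * np.pi * np.e * sigma2)

# The 2*pi*e terms cancel, leaving exactly 0.5*log2(1 + SNR).
print(hY - hN, awgn_capacity(P, sigma2))  # SNR = 4 -> 0.5*log2(5) ≈ 1.161 bits
```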

where the inequality becomes equality if X ~ N(0, P).

III. SCALAR FADING CHANNELS: ERGODIC CAPACITY

Channel model: Y(t) = H(t)X(t) + N(t), where {H(t)}_t is a stationary and ergodic process with stationary distribution p(h), and {N(t)}_t is an i.i.d. Gaussian process with N(t) ~ N(0, σ_N²) for all t.

1) {H(t)}_t is known at the receiver (coherent channel capacity):

C = max_{E[X²] ≤ P} I(X; Y | H) = max_{E[X²] ≤ P} E_H[I(X; Y | H)] = E_H[(1/2) log(1 + H²P/σ_N²)].

Define the average SNR at the receiver as

SNR = E_H[H²]P/σ_N².

By Jensen's inequality,

E_H[(1/2) log(1 + H²P/σ_N²)] ≤ (1/2) log(1 + E_H[H²]P/σ_N²) = (1/2) log(1 + SNR).

Bad news: the capacity of the fading channel is smaller than that of the AWGN channel with the same receiver SNR.

In the low-SNR regime, we have

E_H[(1/2) log(1 + H²P/σ_N²)] ≈ E_H[H²P/σ_N²] (log e)/2 = SNR (log e)/2 ≈ C_AWGN.

At high SNR,

E_H[(1/2) log(1 + H²P/σ_N²)] ≈ E_H[(1/2) log(H²P/σ_N²)] = (1/2) log SNR + (1/2) E_H[log(H²/E_H[H²])] ≈ C_AWGN + (1/2) E_H[log(H²/E_H[H²])].

2) {H(t)}_t is known at the transmitter and the receiver (dynamic power control and opportunistic communication):

C = max_{E[X²(H)] ≤ P} I(X; Y | H) = max_{E[X²(H)] ≤ P} E_H[max_{X(h)} I(X; Y | H = h)].

Define E[X²(h)] = P(h). We have

C = max_{E_H[P(H)] ≤ P} E_H[(1/2) log(1 + H²P(H)/σ_N²)].

The maximizer P*(h) is the water-filling allocation

P*(h) = max(λ - σ_N²/h², 0)

with the water level λ given by

E_H[P*(H)] = P.

Therefore,

C = ∫_{h ≥ σ_N λ^{-1/2}} (1/2) log(h²λ/σ_N²) p(h) dh.

In the high-SNR regime, λ is very large and P*(h) ≈ λ. Therefore the capacity converges to that of the case with no state information at the transmitter.

Suppose the channel gain h has a peak value G_max. At low SNR, power is allocated only to gains near the peak, so P(h ≈ G_max)(λ - σ_N²/G_max²) ≈ P and thus

E_H[(1/2) log(1 + H²P*(H)/σ_N²)] ≈ P(h ≈ G_max) (1/2) log(1 + G_max²P/(P(h ≈ G_max)σ_N²)) ≈ (G_max²P/σ_N²)(log e)/2 = SNR (G_max²/E_H[H²])(log e)/2 ≈ (G_max²/E_H[H²]) C_AWGN.

Good news: the capacity gain over the AWGN channel depends on the peakedness of the fading process, which can be unbounded.

Since the capacity scales linearly with the power at low SNR ((1/2) log(1 + SNR) ≈ SNR (log e)/2) and logarithmically at high SNR ((1/2) log(1 + SNR) ≈ (1/2) log SNR), power control is more effective in the low-SNR regime. We will see that at high SNR, a more efficient method is to increase the degrees of freedom, say, by using multiple antennas.

When proving the capacity formula for the case where the state process is known at both the transmitter and the receiver, we adopted the multiplexing technique, i.e., using different coding schemes for different channel states. But such a scheme may cause long decoding delay. Furthermore, for fading processes with continuous amplitude, there are essentially infinitely many states, which makes the multiplexing method infeasible. Fortunately, for fading channels with additive Gaussian noise, the multiplexing method can be replaced by dynamic power control. That is because, if we assume the fading process is {√(P*(H(t)))H(t)}_t and the power constraint is E[X²] ≤ 1, then the capacity with state information at the receiver is exactly E_H[(1/2) log(1 + P*(H)H²/σ_N²)]. Since P*(H(t)) is a deterministic function of H(t), the receiver can compute {√(P*(H(t)))H(t)}_t from the fading process {H(t)}_t. Our assumption is thus valid. So in order to achieve the capacity, we only need to use the power control policy P*(h) and a coding scheme with power constraint E[X²] ≤ 1 designed for the effective fading process {√(P*(H(t)))H(t)}_t.
Note: the resulting transmission power is E_H[P*(H)] = P.
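The two cases above can be sketched numerically. In the following, Rayleigh fading with E[H²] = 1 is an assumption made purely for illustration (the notes fix no fading distribution), and water_filling is a hypothetical helper that finds the water level λ by bisection on the average-power constraint:

```python
import numpy as np

def water_filling(h, P, sigma2=1.0, tol=1e-12):
    """P*(h) = max(lambda - sigma2/h^2, 0), with the water level lambda
    found by bisection so that the average allocated power equals P."""
    inv = sigma2 / h**2
    lo, hi = 0.0, np.mean(inv) + P      # mean(max(hi - inv, 0)) >= P
    while hi - lo > tol:
        lam = 0.5 * (lo + hi)
        if np.mean(np.maximum(lam - inv, 0.0)) > P:
            hi = lam
        else:
            lo = lam
    return np.maximum(lam - inv, 0.0)

rng = np.random.default_rng(0)
P, sigma2 = 1.0, 1.0
h = rng.rayleigh(scale=np.sqrt(0.5), size=200_000)   # E[H^2] = 1

snr = np.mean(h**2) * P / sigma2
c_awgn = 0.5 * np.log2(1 + snr)                           # AWGN benchmark
c_csir = np.mean(0.5 * np.log2(1 + h**2 * P / sigma2))    # case 1): constant power
p_star = water_filling(h, P, sigma2)                      # case 2): power control
c_csit = np.mean(0.5 * np.log2(1 + h**2 * p_star / sigma2))

print(c_csir, c_awgn, c_csit)
```

As the section predicts, c_csir falls below c_awgn (Jensen's inequality), while water-filling satisfies the power budget exactly and yields c_csit ≥ c_csir.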
