
Tutorial 4

Exercises on Differential Entropy

1. Evaluate the differential entropy $h(X) = -\int f \ln f$ for the following: (a) The uniform distribution, $f(x) = \frac{1}{b-a}$, $a \le x \le b$. (b) The exponential density, $f(x) = \lambda e^{-\lambda x}$, $x \ge 0$. (c) The Laplace density, $f(x) = \frac{1}{2}\lambda e^{-\lambda|x|}$. (d) The sum of $X_1$ and $X_2$, where $X_1$ and $X_2$ are independent normal random variables with means $\mu_i$ and variances $\sigma_i^2$, $i = 1, 2$.

(a) Uniform distribution.
$$h(f) = -\int_a^b \frac{1}{b-a}\ln\frac{1}{b-a}\,dx = \ln(b-a)\ \text{nats} = \log(b-a)\ \text{bits}.$$

(b) Exponential distribution.
$$h(f) = -\int_0^\infty \lambda e^{-\lambda x}\ln\left(\lambda e^{-\lambda x}\right)dx = -\int_0^\infty \lambda e^{-\lambda x}\left[\ln\lambda - \lambda x\right]dx = 1 - \ln\lambda\ \text{nats} = \log\frac{e}{\lambda}\ \text{bits}.$$

(c) Laplace density.
$$h(f) = -\int_{-\infty}^\infty \tfrac{1}{2}\lambda e^{-\lambda|x|}\ln\left(\tfrac{1}{2}\lambda e^{-\lambda|x|}\right)dx = -\int_{-\infty}^\infty \tfrac{1}{2}\lambda e^{-\lambda|x|}\left[\ln\tfrac{1}{2} + \ln\lambda - \lambda|x|\right]dx = \ln 2 - \ln\lambda + 1\ \text{nats} = \log\frac{2e}{\lambda}\ \text{bits}.$$

(d) The sum of two normal distributions. The sum of two independent normal random variables is also normal, so applying the result derived in class for the normal distribution, since $X_1 + X_2 \sim \mathcal{N}(\mu_1 + \mu_2, \sigma_1^2 + \sigma_2^2)$,
$$h(f) = \frac{1}{2}\log 2\pi e\left(\sigma_1^2 + \sigma_2^2\right)\ \text{bits}.$$

Remark: If $X \sim \mathcal{N}(\mu, \sigma^2)$, then $h(X) = \frac{1}{2}\log(2\pi e\sigma^2)$.
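As a sanity check, the closed forms in (a) through (c) can be reproduced by numerical integration. A minimal sketch assuming NumPy and SciPy are available; the parameter values a, b, and lam are arbitrary illustrations, not part of the problem:

```python
# Numerical check of the closed-form differential entropies in (a)-(c).
# The log-density is supplied analytically so the integrand stays finite
# where the density underflows to zero.
import numpy as np
from scipy.integrate import quad

a, b, lam = 0.0, 3.0, 2.0  # illustrative parameters

def h(pdf, logpdf, lo, hi):
    """Differential entropy -integral pdf*ln(pdf) over [lo, hi], in nats."""
    return quad(lambda x: -pdf(x) * logpdf(x), lo, hi)[0]

h_unif = h(lambda x: 1 / (b - a), lambda x: -np.log(b - a), a, b)
h_expo = h(lambda x: lam * np.exp(-lam * x),
           lambda x: np.log(lam) - lam * x, 0, np.inf)
h_lapl = h(lambda x: lam / 2 * np.exp(-lam * abs(x)),
           lambda x: np.log(lam / 2) - lam * abs(x), -np.inf, np.inf)

print(h_unif, np.log(b - a))        # (a) ln(b-a)
print(h_expo, 1 - np.log(lam))      # (b) 1 - ln(lambda)
print(h_lapl, 1 + np.log(2 / lam))  # (c) ln(2e/lambda)
```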

2. Consider a continuous random variable $X$ defined over the interval $[a, b]$. (a) What is the maximum value of $h(X)$? (b) What is the corresponding distribution of $X$?

Let $u(x) = \frac{1}{b-a}$ be the uniform probability density function over $[a, b]$, and let $p(x)$ be the probability density function for $X$. Then
$$D(p\|u) = \int p(x)\log\frac{p(x)}{u(x)}\,dx = \int p(x)\log p(x)\,dx - \int p(x)\log u(x)\,dx = -h(X) - \int p(x)\log\frac{1}{b-a}\,dx = -h(X) + \log(b-a).$$
Since $D(p\|u) \ge 0$,
$$h(X) \le \log(b-a),$$
where the equality holds when $p(x)$ is the uniform probability density function over $[a, b]$.

3. Consider an additive channel whose input alphabet $\mathcal{X} = \{0, \pm 1, \pm 2\}$, and whose output $Y = X + Z$, where $Z$ is uniformly distributed over the interval $[-1, 1]$. Thus the input of the channel is a discrete random variable, while the output is continuous. Calculate the capacity $C = \max_{p(x)} I(X; Y)$ of the channel.

We can expand the mutual information as
$$I(X; Y) = h(Y) - h(Y|X) = h(Y) - h(Z),$$
and $h(Z) = \log 2$, since $Z \sim U(-1, 1)$.

The output $Y$ is the sum of a discrete and a continuous random variable, and if the probabilities of $X$ are $p_{-2}, p_{-1}, \dots, p_2$, then the output distribution of $Y$ is a mixture: uniform with weight $p_{-2}$ for $-3 \le Y \le -1$ when $X = -2$, uniform with weight $p_{-1}$ for $-2 \le Y \le 0$ when $X = -1$, uniform with weight $p_0$ for $-1 \le Y \le 1$ when $X = 0$, uniform with weight $p_1$ for $0 \le Y \le 2$ when $X = 1$, and uniform with weight $p_2$ for $1 \le Y \le 3$ when $X = 2$. Thus we have the density function of $Y$ as follows:
$$p_Y(y) = \begin{cases} p_{-2}/2 & y \in [-3, -2) \\ (p_{-2}+p_{-1})/2 & y \in [-2, -1) \\ (p_{-1}+p_0)/2 & y \in [-1, 0) \\ (p_0+p_1)/2 & y \in [0, 1) \\ (p_1+p_2)/2 & y \in [1, 2) \\ p_2/2 & y \in [2, 3) \end{cases}$$
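Before maximizing, this piecewise density makes $h(Y)$ easy to evaluate for any input pmf. A minimal sketch assuming NumPy; the two candidate pmfs are illustrative, and the second one anticipates the optimal input derived next:

```python
# h(Y) in bits for an input pmf p = (p_-2, p_-1, p_0, p_1, p_2), computed
# from the piecewise-constant density above (each interval has length 1).
import numpy as np

def h_Y(p):
    heights = [p[0] / 2, (p[0] + p[1]) / 2, (p[1] + p[2]) / 2,
               (p[2] + p[3]) / 2, (p[3] + p[4]) / 2, p[4] / 2]
    return -sum(c * np.log2(c) for c in heights if c > 0)

print(h_Y((0.2, 0.2, 0.2, 0.2, 0.2)))           # uniform input: below log2(6)
print(h_Y((1/3, 0, 1/3, 0, 1/3)), np.log2(6))   # attains log2(6) = 2.585 bits
```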

Given that $Y$ ranges over $[-3, 3]$, the maximum entropy it can have is that of a uniform distribution over this range. This can be achieved if the distribution of $X$ is $(1/3, 0, 1/3, 0, 1/3)$. Then $h(Y) = \log 6$, and the capacity of this channel is
$$C = \log 6 - \log 2 = \log 3\ \text{bits}.$$

4. Suppose that $(X, Y, Z)$ are jointly Gaussian and that $X \to Y \to Z$ forms a Markov chain. Let $X$ and $Y$ have correlation coefficient $\rho_{xy}$ and let $Y$ and $Z$ have correlation coefficient $\rho_{yz}$. Find $I(X; Z)$.

Note that for a constant $a$, $h(a + X) = h(X)$. Thus, without loss of generality, we assume that the means of $X$, $Y$ and $Z$ are zero. Let
$$\Lambda = \begin{pmatrix} \sigma_x^2 & \rho_{xz}\sigma_x\sigma_z \\ \rho_{xz}\sigma_x\sigma_z & \sigma_z^2 \end{pmatrix}$$
be the covariance matrix of $X$ and $Z$, where $\rho_{xz}$ is the correlation coefficient between $X$ and $Z$. Since $(X, Y, Z)$ are jointly Gaussian, $X$ and $Z$ are individually marginally Gaussian, and $(X, Z)$ is jointly Gaussian. Thus, we have
$$I(X; Z) = h(X) + h(Z) - h(X, Z) = \frac{1}{2}\log(2\pi e\sigma_x^2) + \frac{1}{2}\log(2\pi e\sigma_z^2) - \frac{1}{2}\log\left((2\pi e)^2|\Lambda|\right) = -\frac{1}{2}\log\left(1 - \rho_{xz}^2\right).$$
Now, since $X$ and $Z$ are conditionally independent given $Y$ by the Markov chain,
$$\rho_{xz} = \frac{E[XZ]}{\sigma_x\sigma_z} = \frac{E\big[E[XZ|Y]\big]}{\sigma_x\sigma_z} = \frac{E\big[E[X|Y]\,E[Z|Y]\big]}{\sigma_x\sigma_z} = \frac{1}{\sigma_x\sigma_z}E\left[\frac{\sigma_x\rho_{xy}}{\sigma_y}Y \cdot \frac{\sigma_z\rho_{yz}}{\sigma_y}Y\right] = \frac{\rho_{xy}\rho_{yz}}{\sigma_y^2}E[Y^2] = \rho_{xy}\rho_{yz},$$
since $E[Y^2] = \operatorname{Var}(Y) = \sigma_y^2$. We can conclude that
$$I(X; Z) = -\frac{1}{2}\log\left(1 - \rho_{xy}^2\rho_{yz}^2\right).$$
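This closed form can be checked numerically. A minimal sketch assuming NumPy, with illustrative correlation values and unit variances; the Markov structure enters only through $\rho_{xz} = \rho_{xy}\rho_{yz}$:

```python
# Numeric check of Problem 4 (illustrative correlations, unit variances).
import numpy as np

rho_xy, rho_yz = 0.8, -0.5
# For zero-mean, unit-variance jointly Gaussian X -> Y -> Z, Markovity
# forces rho_xz = rho_xy * rho_yz (zero partial correlation given Y).
rho_xz = rho_xy * rho_yz
Kxz = np.array([[1.0, rho_xz],
                [rho_xz, 1.0]])           # covariance of the pair (X, Z)

I_det = -0.5 * np.log(np.linalg.det(Kxz))                # h(X)+h(Z)-h(X,Z)
I_formula = -0.5 * np.log(1 - (rho_xy * rho_yz) ** 2)    # closed form above
print(I_det, I_formula)   # both in nats; they agree
```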

Remark: If $(X, Y)$ is jointly Gaussian, the conditional distribution of $X$ given $Y = y$ is
$$X \mid Y = y \sim \mathcal{N}\left(\mu_x + \frac{\sigma_x}{\sigma_y}\rho_{xy}(y - \mu_y),\ \left(1 - \rho_{xy}^2\right)\sigma_x^2\right).$$

Exercises on Gaussian Channel

1. Let $Y_1$ and $Y_2$ be conditionally independent and conditionally identically distributed given $X$. (a) Show $I(X; Y_1, Y_2) = 2I(X; Y_1) - I(Y_1; Y_2)$. (b) Conclude that the capacity of the channel $X \to (Y_1, Y_2)$ is less than twice the capacity of the channel $X \to Y_1$.

(a)
$$\begin{aligned} I(X; Y_1, Y_2) &= H(Y_1, Y_2) - H(Y_1, Y_2|X) \\ &= H(Y_1) + H(Y_2) - I(Y_1; Y_2) - \left(H(Y_1|X) + H(Y_2|X, Y_1)\right) \\ &= H(Y_1) + H(Y_2) - I(Y_1; Y_2) - H(Y_1|X) - H(Y_2|X) \\ &= H(Y_1) - H(Y_1|X) + H(Y_2) - H(Y_2|X) - I(Y_1; Y_2) \\ &= I(X; Y_1) + I(X; Y_2) - I(Y_1; Y_2) \\ &= 2I(X; Y_1) - I(Y_1; Y_2), \end{aligned}$$
where the third line uses the conditional independence of $Y_1$ and $Y_2$ given $X$, and the last line uses the fact that they are conditionally identically distributed.

(b) The capacity of the single-look channel $X \to Y_1$ is
$$C_1 = \max_{p(x)} I(X; Y_1).$$
The capacity of the channel $X \to (Y_1, Y_2)$ is
$$C_2 = \max_{p(x)} I(X; Y_1, Y_2) = \max_{p(x)}\left[2I(X; Y_1) - I(Y_1; Y_2)\right] \le \max_{p(x)} 2I(X; Y_1) = 2C_1.$$
Hence, the two independent looks cannot be more than twice as good as one look.
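Since the identity in part (a) holds for any channel with two conditionally i.i.d. looks, it can be verified exactly on a small discrete example. A sketch under stated assumptions, none of which come from the problem: a Bernoulli(1/2) input observed through two independent binary symmetric channels with crossover probability eps, with all mutual informations computed from the joint pmf:

```python
# Exact check of I(X; Y1, Y2) = 2 I(X; Y1) - I(Y1; Y2) on a discrete stand-in.
import numpy as np
from itertools import product

eps = 0.1
px = {0: 0.5, 1: 0.5}
bsc = lambda y, x: 1 - eps if y == x else eps   # BSC transition probability

# Joint pmf p(x, y1, y2) = p(x) p(y1|x) p(y2|x): conditionally i.i.d. looks.
p = {(x, y1, y2): px[x] * bsc(y1, x) * bsc(y2, x)
     for x, y1, y2 in product((0, 1), repeat=3)}

def I(joint, A, B):
    """Mutual information (nats) between coordinate groups A and B."""
    pa, pb, pab = {}, {}, {}
    for k, v in joint.items():
        a = tuple(k[i] for i in A); b = tuple(k[i] for i in B)
        pa[a] = pa.get(a, 0) + v; pb[b] = pb.get(b, 0) + v
        pab[(a, b)] = pab.get((a, b), 0) + v
    return sum(v * np.log(v / (pa[a] * pb[b])) for (a, b), v in pab.items())

lhs = I(p, (0,), (1, 2))                        # I(X; Y1, Y2)
rhs = 2 * I(p, (0,), (1,)) - I(p, (1,), (2,))   # 2 I(X;Y1) - I(Y1;Y2)
print(lhs, rhs)  # identical, confirming the identity
```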

2. Consider the ordinary Gaussian channel with two correlated looks at $X$, i.e., $Y = (Y_1, Y_2)$, where
$$Y_1 = X + Z_1, \qquad Y_2 = X + Z_2,$$
with a power constraint $P$ on $X$, and $(Z_1, Z_2) \sim \mathcal{N}(0, K)$, where
$$K = \begin{bmatrix} N & N\rho \\ N\rho & N \end{bmatrix}.$$
Find the capacity $C$ for (a) $\rho = 1$, (b) $\rho = 0$, (c) $\rho = -1$.

It is clear that the input distribution that maximizes the capacity is $X \sim \mathcal{N}(0, P)$. Evaluating the mutual information for this distribution,
$$C = \max I(X; Y_1, Y_2) = h(Y_1, Y_2) - h(Y_1, Y_2|X) = h(Y_1, Y_2) - h(Z_1, Z_2|X) = h(Y_1, Y_2) - h(Z_1, Z_2).$$
Now since
$$(Z_1, Z_2) \sim \mathcal{N}\left(0, \begin{bmatrix} N & N\rho \\ N\rho & N \end{bmatrix}\right),$$
we have
$$h(Z_1, Z_2) = \frac{1}{2}\log\left((2\pi e)^2|K|\right) = \frac{1}{2}\log\left((2\pi e)^2 N^2(1 - \rho^2)\right).$$
Since $Y_1 = X + Z_1$ and $Y_2 = X + Z_2$, we have
$$(Y_1, Y_2) \sim \mathcal{N}\left(0, \begin{bmatrix} P + N & P + N\rho \\ P + N\rho & P + N \end{bmatrix}\right),$$
and
$$h(Y_1, Y_2) = \frac{1}{2}\log\left((2\pi e)^2|K_Y|\right) = \frac{1}{2}\log\left((2\pi e)^2\left(N^2(1 - \rho^2) + 2PN(1 - \rho)\right)\right).$$
Hence the capacity is
$$C = h(Y_1, Y_2) - h(Z_1, Z_2) = \frac{1}{2}\log\frac{N^2(1 - \rho^2) + 2PN(1 - \rho)}{N^2(1 - \rho^2)} = \frac{1}{2}\log\left(1 + \frac{2P}{N(1 + \rho)}\right).$$

(a) $\rho = 1$. In this case, $C = \frac{1}{2}\log\left(1 + \frac{P}{N}\right)$, which is the capacity of a single-look channel. This is not surprising, since in this case $Y_1 = Y_2$.

(b) $\rho = 0$. In this case, $C = \frac{1}{2}\log\left(1 + \frac{2P}{N}\right)$, which corresponds to using twice the power in a single look.

(c) $\rho = -1$. In this case, $C = \infty$, which is not surprising, since if we add $Y_1$ and $Y_2$ we can recover $X$ exactly: the perfectly anticorrelated noise terms cancel, so $Y_1 + Y_2 = 2X$.

Remark: The capacity of the above channel in all cases is the same as the capacity of the channel $X \to (Y_1 + Y_2)$.
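The capacity expression can be cross-checked numerically as $C = \frac{1}{2}\log(|K_Y|/|K_Z|)$. A minimal sketch assuming NumPy, with illustrative values of P, N, and rho:

```python
# Determinant check of the two-look capacity formula.
import numpy as np

P, N, rho = 2.0, 1.0, 0.5  # illustrative values
Kz = N * np.array([[1, rho], [rho, 1]])
Ky = Kz + P * np.ones((2, 2))   # cov of (X+Z1, X+Z2) with X ~ N(0, P)

C_det = 0.5 * np.log(np.linalg.det(Ky) / np.linalg.det(Kz))
C_formula = 0.5 * np.log(1 + 2 * P / (N * (1 + rho)))
print(C_det, C_formula)   # nats; the two expressions agree
```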

3. Output power constraint. Consider an additive white Gaussian noise channel with an expected output power constraint $P$. Thus $Y = X + Z$, $Z \sim \mathcal{N}(0, 1)$, $Z$ is independent of $X$, and $E[Y^2] \le P$. Find the channel capacity.

$$C = \max_{p(x):\,E[(X+Z)^2] \le P} I(X; Y) = \max_{p(x):\,E[(X+Z)^2] \le P} h(Y) - h(Y|X) = \max_{p(x):\,E[(X+Z)^2] \le P} h(Y) - h(Z|X) = \max_{p(x):\,E[(X+Z)^2] \le P} h(Y) - h(Z).$$

Given a constraint on the output power of $Y$, the maximum differential entropy is achieved by a normal distribution, and we can achieve this by taking $X \sim \mathcal{N}(0, P - 1)$, so that $Y \sim \mathcal{N}(0, P)$. In this case,
$$C = \frac{1}{2}\log 2\pi e P - \frac{1}{2}\log 2\pi e = \frac{1}{2}\log P.$$
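A quick Monte Carlo check of this solution, assuming NumPy; the value of P and the sample size are illustrative:

```python
# With X ~ N(0, P-1) and Z ~ N(0, 1), the output power constraint is met
# with equality and C = h(Y) - h(Z) = 0.5 ln P nats.
import numpy as np

rng = np.random.default_rng(0)
P, n = 4.0, 10**6
Y = rng.normal(0, np.sqrt(P - 1), n) + rng.normal(0, 1, n)

print(np.mean(Y**2))                                   # ~ P: E[Y^2] = P
h = lambda var: 0.5 * np.log(2 * np.pi * np.e * var)   # Gaussian h, nats
print(h(P) - h(1.0), 0.5 * np.log(P))                  # C = 0.5 ln P
```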

4. Fading channel. Consider an additive fading channel $Y = XV + Z$, where $Z$ is additive noise, $V$ is a random variable representing fading, and $Z$ and $V$ are independent of each other and of $X$. Argue that knowledge of the fading factor $V$ improves capacity by showing $I(X; Y|V) \ge I(X; Y)$.

Expanding $I(X; Y, V)$ in two ways, we get
$$I(X; Y, V) = I(X; V) + I(X; Y|V) = I(X; Y) + I(X; V|Y),$$
i.e.
$$I(X; V) + I(X; Y|V) = I(X; Y) + I(X; V|Y).$$
Since $X$ and $V$ are independent, $I(X; V) = 0$, and since $I(X; V|Y) \ge 0$,
$$I(X; Y|V) = I(X; Y) + I(X; V|Y) \ge I(X; Y).$$

5. Consider the additive white Gaussian channel $Y_i = X_i + Z_i$, where $Z_i \sim \mathcal{N}(0, 1)$, and the input signal has average power constraint $P$.

(a) Suppose we use all the power at time 1, i.e. $E[X_1^2] = nP$ and $E[X_i^2] = 0$ for $i = 2, 3, \dots, n$. Find
$$\max_{p(x^n)} \frac{I(X^n; Y^n)}{n},$$
where the maximization is over all distributions $p(x^n)$ subject to the constraint $E[X_1^2] = nP$ and $E[X_i^2] = 0$ for $i = 2, 3, \dots, n$.

(b) Find
$$\max_{p(x^n):\,E\left[\frac{1}{n}\sum_i X_i^2\right] \le P} \frac{I(X^n; Y^n)}{n}$$
and compare to part (a).

(a)
$$\max_{p(x^n)} \frac{I(X^n; Y^n)}{n} = \max_{p(x^n)} \frac{I(X_1; Y_1)}{n} = \frac{1}{n}\cdot\frac{1}{2}\log(1 + nP),$$
where the first equality comes from the constraint that all our power, $nP$, be used at time 1, and the second equality comes from the fact that, given Gaussian noise and power constraint $nP$, $\max I(X; Y) = \frac{1}{2}\log(1 + nP)$.

(b)
$$\max_{p(x^n)} \frac{I(X^n; Y^n)}{n} = \max_{p(x^n)} \frac{n\,I(X; Y)}{n} = \max_{p(x)} I(X; Y) = \frac{1}{2}\log(1 + P),$$
where the first equality comes from the fact that the channel is memoryless.

Notice that the quantity in part (a) goes to $0$ as $n \to \infty$, while the quantity in part (b) stays constant.

Remark: The impulse scheme is suboptimal.
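The gap between the two schemes is easy to tabulate. A minimal sketch assuming NumPy; the value of P is illustrative:

```python
# Per-symbol rate of the impulse scheme (a) vs. constant power (b), in nats.
import numpy as np

P = 1.0
for n in (1, 10, 100, 1000, 10000):
    impulse = 0.5 * np.log(1 + n * P) / n   # part (a): all power at time 1
    spread  = 0.5 * np.log(1 + P)           # part (b): power spread evenly
    print(f"n={n:6d}  impulse={impulse:.5f}  spread={spread:.5f}")
# The impulse rate decays like log(n)/n while the spread rate is constant.
```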