Lecture 9: Converse of Shannon's Capacity Theorem


Error Correcting Codes: Combinatorics, Algorithms and Applications (Fall 2007)
September 17, 2007
Lecturer: Atri Rudra | Scribe: Thanh-Nhan Nguyen & Atri Rudra

In the last lecture, we stated Shannon's capacity theorem for the $\mathrm{BSC}_p$, which we restate here:

Theorem 0.1. Let $0 \le p < 1/2$ be a real number. For every $0 < \varepsilon \le 1/2 - p$, the following statements are true for large enough integer $n$:

(i) There exists a real $\delta > 0$, an encoding function $E : \{0,1\}^k \to \{0,1\}^n$, and a decoding function $D : \{0,1\}^n \to \{0,1\}^k$, where $k \le (1 - H(p + \varepsilon))n$, such that the following holds for every $m \in \{0,1\}^k$:

  $\Pr_{\text{noise } e \text{ of } \mathrm{BSC}_p}[D(E(m) + e) \ne m] \le 2^{-\delta n}.$

(ii) If $k \ge (1 - H(p) + \varepsilon)n$, then for every pair of encoding and decoding functions $E : \{0,1\}^k \to \{0,1\}^n$ and $D : \{0,1\}^n \to \{0,1\}^k$, the following is true for some $m \in \{0,1\}^k$:

  $\Pr_{\text{noise } e \text{ of } \mathrm{BSC}_p}[D(E(m) + e) \ne m] \ge 1/2.$

In today's lecture, we will prove part (ii) of Theorem 0.1.

1 Preliminaries

Before we begin with the proof, we will need a few results, which we discuss first.

1.1 Chernoff Bound

The Chernoff bound is a bound on the tail of a certain distribution that will be useful for us. Here we state the version of the Chernoff bound that we will need.

Proposition 1.1. For $i = 1, \ldots, n$, let $X_i$ be a binary random variable that takes the value 1 with probability $p$ and the value 0 with probability $1 - p$. Then the following bounds are true:

(i) $\Pr\left[\sum_{i=1}^{n} X_i \ge (1 + \varepsilon)pn\right] \le e^{-\varepsilon^2 pn/3}$

(ii) $\Pr\left[\sum_{i=1}^{n} X_i \le (1 - \varepsilon)pn\right] \le e^{-\varepsilon^2 pn/3}$

Note that the expectation of the sum $\sum_{i=1}^{n} X_i$ is $pn$. The bounds above state that the probability mass is tightly concentrated around the mean.
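To get a feel for Proposition 1.1, the following small sanity check in Python (with illustrative parameters $p = 0.1$ and $\varepsilon = 0.3$ that are not from the notes) computes the exact binomial tail probabilities and compares them against the stated bounds.

import math

# Sanity check of Proposition 1.1: compare the exact tails of a Binomial(n, p)
# (the distribution of sum_i X_i) with the Chernoff bound e^{-eps^2 * p * n / 3}.
# The parameters p, eps and the values of n are illustrative choices only.

def binom_pmf(n, i, p):
    return math.comb(n, i) * p**i * (1 - p)**(n - i)

def upper_tail(n, p, t):
    # Pr[sum X_i >= t]; the small shift guards against float rounding of t.
    return sum(binom_pmf(n, i, p) for i in range(math.ceil(t - 1e-9), n + 1))

def lower_tail(n, p, t):
    # Pr[sum X_i <= t]
    return sum(binom_pmf(n, i, p) for i in range(0, math.floor(t + 1e-9) + 1))

p, eps = 0.1, 0.3
for n in (200, 500, 1000):
    bound = math.exp(-eps**2 * p * n / 3)
    up = upper_tail(n, p, (1 + eps) * p * n)
    lo = lower_tail(n, p, (1 - eps) * p * n)
    print(f"n={n:4d}  upper tail={up:.2e}  lower tail={lo:.2e}  bound={bound:.2e}")

Both exact tails decay exponentially in $n$ and stay below the bound, as the proposition asserts.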

1.2 Volume of Hamming Balls

We will also need good upper and lower bounds on the volume of a Hamming ball. Recall that

  $\mathrm{Vol}_q(\rho n, n) = |B_q(0, \rho n)| = \sum_{i=0}^{\rho n} \binom{n}{i}(q-1)^i.$

We will prove the following result:

Proposition 1.2. Let $q \ge 2$ be an integer and $0 \le p \le 1 - \frac{1}{q}$ be a real. Then for large enough $n$:

(i) $\mathrm{Vol}_q(pn, n) \le q^{H_q(p)n}$

(ii) $\mathrm{Vol}_q(pn, n) \ge q^{H_q(p)n - o(n)}$,

where recall that $H_q(x) = x\log_q(q-1) - x\log_q x - (1-x)\log_q(1-x)$.

Proof. We start with the proof of (i). Consider the following sequence of relations:

  $1 = (p + (1-p))^n$
  $= \sum_{i=0}^{n} \binom{n}{i} p^i (1-p)^{n-i}$   (1)
  $\ge \sum_{i=0}^{pn} \binom{n}{i} p^i (1-p)^{n-i}$   (2)
  $= \sum_{i=0}^{pn} \binom{n}{i} (q-1)^i \left(\frac{p}{q-1}\right)^i (1-p)^{n-i}$
  $= \sum_{i=0}^{pn} \binom{n}{i} (q-1)^i \left(\frac{p}{(q-1)(1-p)}\right)^i (1-p)^{n}$
  $\ge \sum_{i=0}^{pn} \binom{n}{i} (q-1)^i \left(\frac{p}{(q-1)(1-p)}\right)^{pn} (1-p)^{n}$   (3)
  $= \left(\frac{p}{q-1}\right)^{pn} (1-p)^{(1-p)n} \sum_{i=0}^{pn} \binom{n}{i} (q-1)^i.$   (4)

In the above, (1) follows from the binomial expansion, and (2) follows by dropping some terms from the summation. (3) follows from the fact that $\frac{p}{(q-1)(1-p)} \le 1$ (which holds since $p \le 1 - \frac{1}{q}$), so replacing the exponent $i \le pn$ by $pn$ can only decrease each term. The rest of the steps follow by rearranging terms.

As $q^{-H_q(p)n} = \left(\frac{p}{q-1}\right)^{pn}(1-p)^{(1-p)n}$, (4) implies that $1 \ge \mathrm{Vol}_q(pn, n)\cdot q^{-H_q(p)n}$, which proves (i).

We now turn to the proof of part (ii). For this part, we will need Stirling's approximation for $n!$:

  $\sqrt{2\pi n}\left(\frac{n}{e}\right)^{n} e^{\lambda_1(n)} < n! < \sqrt{2\pi n}\left(\frac{n}{e}\right)^{n} e^{\lambda_2(n)},$

where $\lambda_1(n) = \frac{1}{12n+1}$ and $\lambda_2(n) = \frac{1}{12n}$. By Stirling's approximation, we have the following inequality:

  $\binom{n}{pn} = \frac{n!}{(pn)!\,((1-p)n)!} > \frac{(n/e)^n}{(pn/e)^{pn}\,((1-p)n/e)^{(1-p)n}} \cdot \frac{e^{\lambda_1(n) - \lambda_2(pn) - \lambda_2((1-p)n)}}{\sqrt{2\pi p(1-p)n}} = \frac{\ell(n)}{p^{pn}(1-p)^{(1-p)n}},$   (5)

where $\ell(n) = \frac{e^{\lambda_1(n) - \lambda_2(pn) - \lambda_2((1-p)n)}}{\sqrt{2\pi p(1-p)n}}$. Now consider the following sequence of relations, which completes the proof:

  $\mathrm{Vol}_q(pn, n) = \sum_{i=0}^{pn} \binom{n}{i}(q-1)^i \ge \binom{n}{pn}(q-1)^{pn}$   (6)
  $> \ell(n)\cdot\frac{(q-1)^{pn}}{p^{pn}(1-p)^{(1-p)n}}$   (7)
  $= q^{H_q(p)n - o(n)}.$   (8)

In the above, (6) follows by looking at only one term of the sum, (7) follows from (5), while (8) follows from the definition of $H_q(\cdot)$ and the fact that, for large enough $n$, $\ell(n)$ is $q^{-o(n)}$.
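The two bounds in Proposition 1.2 say that, on the exponential scale, $\mathrm{Vol}_q(pn, n)$ is exactly $q^{H_q(p)n}$. The following sketch (with illustrative choices $q = 2$ and $p = 0.2$, not from the notes) computes $\frac{1}{n}\log_q \mathrm{Vol}_q(pn, n)$ for a few values of $n$ and compares it with $H_q(p)$; the ratio approaches $H_q(p)$ from below as $n$ grows.

import math

# Compare Vol_q(pn, n) = sum_{i <= pn} C(n, i)(q-1)^i with q^{H_q(p) n}
# (Proposition 1.2). The choices q = 2 and p = 0.2 are illustrative only.

def H_q(x, q):
    # q-ary entropy function, for 0 < x < 1.
    return (x * math.log(q - 1, q) - x * math.log(x, q)
            - (1 - x) * math.log(1 - x, q))

def vol(q, radius, n):
    # Volume of a q-ary Hamming ball of radius `radius` in [q]^n.
    return sum(math.comb(n, i) * (q - 1) ** i for i in range(radius + 1))

q, p = 2, 0.2
for n in (100, 400, 1000):
    v = vol(q, int(p * n), n)
    print(f"n={n:5d}  log_q(Vol)/n = {math.log(v, q) / n:.4f}   H_q(p) = {H_q(p, q):.4f}")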

where 1 λ 1 (n) 12n + 1 and λ 2(n) 1 12n. By the Strlng s aroxmaton, we have the followng nequalty: n! n (n)!((1 )n)! (n/e) n > (n/e) n ((1 )n/e) 1 e λ 1(n) λ 2 (n) λ 2 ((1 )n) (1 )n 2π(1 )n 1 l(n), (5) n (1 ) (1 )n where l(n) eλ 1 (n) λ 2 (n) λ 2 ((1 )n) 2π(1 )n. Now consder the followng sequence of relatons that comlete the roof: V ol q (0, n) (q 1) n (6) n (q 1) n > l(n) (7) n (1 ) (1 )n q Hq()n o(n). (8) In the above (6) follows by only lookng at one term. (7) follows from (5) whle (8) follows from the defnton of H q ( ) and the fact that for large enough n, l(n) s q o(n). 2 Converse of Shannon s Caacty Theorem for BSC We wll now rove art () of Theorem 0.1: the roof of the other art wll be done n the next lecture. Frst, we note that there s nothng to rove f 0, so for the rest of the roof we wll assume that > 0. For the sake of contradcton, assume that the followng holds for every m {0, 1} k : P r [D(E(m) + e) m] 1/2. nose e of BSC Fx an arbtrary message m {0, 1} k. Defne D m to be the set of receved words that are decoded to m by D, that s, D m {y D(y) m}. Note that by our assumton, the followng s true (where from now on we omt the exlct deendence of the robablty on the BSC nose for clarty): P r [E(m) + e D m ] 1/2. (9) 3

Further, by the Chernoff bound,

  $\Pr[E(m) + e \notin S_m] \le 2^{-\Omega(\gamma^2 n)},$   (10)

where $S_m$ is the shell of radius $[(1-\gamma)pn, (1+\gamma)pn]$ around $E(m)$, that is,

  $S_m = B_2(E(m), (1+\gamma)pn) \setminus B_2(E(m), (1-\gamma)pn).$

(We will set $\gamma > 0$ in terms of $\varepsilon$ and $p$ at the end of the proof.) Now (9) and (10), along with the union bound, imply the following:

  $\Pr[E(m) + e \in D_m \cap S_m] \ge \frac{1}{2} - 2^{-\Omega(\gamma^2 n)} \ge \frac{1}{4},$   (11)

where the last inequality holds for large enough $n$. Next, we upper bound the probability above to obtain a lower bound on $|D_m \cap S_m|$. It is easy to see that

  $\Pr[E(m) + e \in D_m \cap S_m] \le |D_m \cap S_m| \cdot p_{\max},$

where

  $p_{\max} = \max_{y \in S_m} \Pr[E(m) + e = y] = \max_{d \in [(1-\gamma)pn,\,(1+\gamma)pn]} p^d (1-p)^{n-d}.$

It is easy to check that $p^d(1-p)^{n-d}$ is decreasing in $d$ for $p \le 1/2$. Thus, we have

  $p_{\max} = p^{(1-\gamma)pn}(1-p)^{n-(1-\gamma)pn} = \left(\tfrac{1}{p} - 1\right)^{\gamma pn} p^{pn}(1-p)^{(1-p)n} = \left(\tfrac{1}{p} - 1\right)^{\gamma pn} 2^{-nH(p)}.$

Thus, we have shown that

  $\Pr[E(m) + e \in D_m \cap S_m] \le |D_m \cap S_m| \cdot \left(\tfrac{1}{p} - 1\right)^{\gamma pn} 2^{-nH(p)},$

which by (11) implies that

  $|D_m \cap S_m| \ge \frac{1}{4}\left(\tfrac{1}{p} - 1\right)^{-\gamma pn} 2^{nH(p)}.$   (12)
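Before finishing the argument, here is a concrete illustration of the shell $S_m$, the bound (10), and the value of $p_{\max}$: a sketch with illustrative parameters $p = 0.1$, $\gamma = 0.2$ and $n = 10{,}000$ (my choices, not values from the notes). It computes the exact probability that the error weight misses $[(1-\gamma)pn, (1+\gamma)pn]$ and the base-2 exponent of $p_{\max}$, which sits only $\gamma p n \log(1/p - 1)$ bits above $-nH(p)$.

import math

# Illustration of (10) and p_max. The weight of the BSC_p error pattern e is
# Binomial(n, p) distributed; we compute Pr[wt(e) misses the shell] and the
# largest single-word probability p^d (1-p)^(n-d) over the shell.
# The parameters p, gamma, n are illustrative choices only.

p, gamma, n = 0.1, 0.2, 10_000
# Integer shell radii; round() guards against floating-point error.
lo, hi = round((1 - gamma) * p * n), round((1 + gamma) * p * n)

def wt_prob(d):
    # Pr[wt(e) = d], computed in log space to avoid overflow for large n.
    log_pr = (math.lgamma(n + 1) - math.lgamma(d + 1) - math.lgamma(n - d + 1)
              + d * math.log(p) + (n - d) * math.log(1 - p))
    return math.exp(log_pr)

outside = sum(wt_prob(d) for d in range(n + 1) if not lo <= d <= hi)

# p^d (1-p)^(n-d) is decreasing in d for p < 1/2, so its maximum over the
# shell is attained at the smallest allowed weight d = (1-gamma)pn.
log2_pmax = lo * math.log2(p) + (n - lo) * math.log2(1 - p)
H = -p * math.log2(p) - (1 - p) * math.log2(1 - p)

print(f"Pr[wt(e) outside the shell] = {outside:.2e}")            # cf. (10)
print(f"log2(p_max) = {log2_pmax:.0f},  -n*H(p) = {-n * H:.0f}")
print(f"gap = gamma*p*n*log2(1/p - 1) = {gamma * p * n * math.log2(1 / p - 1):.0f} bits")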

Next, we consider the following sequence of relations:

  $2^n \ge \sum_{m \in \{0,1\}^k} |D_m|$   (13)
  $\ge \sum_{m \in \{0,1\}^k} |D_m \cap S_m|$
  $\ge \sum_{m \in \{0,1\}^k} \frac{1}{4}\left(\tfrac{1}{p} - 1\right)^{-\gamma pn} 2^{H(p)n}$   (14)
  $= 2^{k + H(p)n - \gamma p \log(1/p - 1)n - 2}$
  $> 2^{k + H(p)n - \varepsilon n}.$   (15)

In the above, (13) follows from the fact that for $m_1 \ne m_2$, $D_{m_1}$ and $D_{m_2}$ are disjoint, and (14) follows from (12). (15) follows for large enough $n$ if we pick

  $\gamma = \frac{\varepsilon}{2p\log\left(\tfrac{1}{p} - 1\right)}.$

(Note that as $0 < p < 1/2$, we have $\gamma = \Theta(\varepsilon)$.)

(15) implies that $k < (1 - H(p) + \varepsilon)n$, which is a contradiction. The proof of part (ii) of Theorem 0.1 is complete.

Remark 2.1. It can be verified that the proof above also works if the decoding error probability is bounded by $2^{-\beta n}$ (instead of the $1/2$ in part (ii) of Theorem 0.1) for small enough $\beta = \beta(\varepsilon) > 0$.
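Finally, here is a small numerical sanity check of the counting step (13)-(15), again with illustrative parameters $p = 0.1$, $\varepsilon = 0.05$ and an assumed message length $k = \lceil(1 - H(p) + \varepsilon)n\rceil$ (all my choices): with $\gamma$ picked as in the proof, the base-2 exponent of the lower bound in (15) exceeds $n$, which is exactly the contradiction with $2^n \ge \sum_m |D_m|$.

import math

# Check of (13)-(15): with gamma = eps / (2 p log2(1/p - 1)) and
# k >= (1 - H(p) + eps) n, the bound 2^(k + H(p)n - gamma*p*log2(1/p - 1)*n - 2)
# exceeds 2^n, contradicting 2^n >= sum_m |D_m|. Parameters are illustrative only.

def H(x):
    return -x * math.log2(x) - (1 - x) * math.log2(1 - x)

p, eps = 0.1, 0.05
gamma = eps / (2 * p * math.log2(1 / p - 1))   # the choice made in the proof

for n in (1_000, 10_000, 100_000):
    k = math.ceil((1 - H(p) + eps) * n)
    lower_exponent = k + H(p) * n - gamma * p * math.log2(1 / p - 1) * n - 2
    print(f"n={n:7d}  k={k:7d}  exponent of lower bound = {lower_exponent:10.1f}  >  n = {n}")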