An upper bound on ℓ_q norms of noisy functions

Alex Samorodnitsky

Abstract

Let T_ε, 0 ≤ ε ≤ 1/2, be the noise operator acting on functions on the boolean cube {0,1}^n. Let f be a nonnegative function on {0,1}^n and let q ≥ 1. We upper bound the ℓ_q norm of T_ε f by the average ℓ_q norm of conditional expectations of f, given sets of roughly (1−2ε)^{r(q)}·n variables, where r is an explicitly defined function of q. We describe some applications for error-correcting codes and for matroids. In particular, we derive an upper bound on the weight distribution of duals of BEC-capacity-achieving binary linear codes. This improves the known bounds on the linear-weight components of the weight distribution of constant-rate binary Reed-Muller codes for almost all rates.

1 Introduction

This paper considers the well-known problem of quantifying the decrease in the ℓ_q norm of a function on the boolean cube when this function is acted on by the noise operator. Given a noise parameter 0 ≤ ε ≤ 1/2, the noise operator T_ε acts on functions on the boolean cube as follows: for f : {0,1}^n → R, T_ε f at a point x is the expected value of f at y, where y is ε-correlated with x. That is, y is a random binary vector whose i-th coordinate is 1 − x_i with probability ε and x_i with probability 1 − ε, independently for different coordinates. Namely,

T_ε f(x) = Σ_{y ∈ {0,1}^n} ε^{|y−x|} (1−ε)^{n−|y−x|} f(y),

where |·| denotes the Hamming distance. We will write f_ε for T_ε f, for brevity. Note that f_ε is a convex combination of shifted copies of f. Hence, the noise operator decreases norms. An effective way to quantify this decrease for ℓ_q norms is given by the hypercontractive inequality [3, 7, 2] (see also [10]):

‖f_ε‖_q ≤ ‖f‖_{1+(1−2ε)²(q−1)}.   (1)

Entropy provides another example of a convex homogeneous functional on nonnegative functions on the boolean cube. For a nonnegative function f let the entropy of f be given by

Ent(f) = E[f log f] − E[f] · log E[f],

where the expectation is taken w.r.t. the uniform measure on {0,1}^n. The entropy of f is closely related to Shannon's entropy of the corresponding distribution f/σ(f) on {0,1}^n, and similarly the entropy of f_ε is related to Shannon's entropy of the output of a binary symmetric channel with error probability ε on input distributed according to f/σ(f).
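(Illustrative aside, not part of the original text: the definition of T_ε above can be implemented directly for small n. The following minimal Python sketch — all function and variable names are ours — computes T_ε f from the formula and checks, on a random nonnegative f, that noise decreases ℓ_q norms and that the hypercontractive inequality (1) holds in this instance.)

```python
import itertools
import numpy as np

def noise_operator(f, eps):
    """Apply T_eps to a function f, given as an array indexed by {0,1}^n."""
    n = int(np.log2(len(f)))
    points = list(itertools.product([0, 1], repeat=n))
    Tf = np.zeros_like(f, dtype=float)
    for ix, x in enumerate(points):
        for iy, y in enumerate(points):
            d = sum(xi != yi for xi, yi in zip(x, y))  # Hamming distance |y - x|
            Tf[ix] += (eps ** d) * ((1 - eps) ** (n - d)) * f[iy]
    return Tf

def lq_norm(f, q):
    """l_q norm with respect to the uniform measure on {0,1}^n."""
    return np.mean(np.abs(f) ** q) ** (1.0 / q)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, q, eps = 4, 3.0, 0.2
    f = rng.random(2 ** n)                 # a nonnegative function on {0,1}^n
    f_eps = noise_operator(f, eps)
    # noise is an averaging operator, so it can only decrease l_q norms
    print(lq_norm(f_eps, q), "<=", lq_norm(f, q))
    # hypercontractive inequality (1): ||f_eps||_q <= ||f||_{1+(1-2eps)^2 (q-1)}
    print(lq_norm(f_eps, q), "<=", lq_norm(f, 1 + (1 - 2 * eps) ** 2 * (q - 1)))
```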

The decrease in entropy (or, correspondingly, the increase in Shannon's entropy) after noise is quantified in Mrs. Gerber's Lemma [15]:

Ent(f_ε) ≤ n · E[f] · φ( Ent(f) / (n E[f]), ε ),

where φ is an explicitly given function on [0,1] × [0,1/2].

There is a well-known connection between the ℓ_q norms of a nonnegative function f and its entropy (see e.g. [6]): Assume, as we may by homogeneity, that E f = 1. Then

Ent(f) = lim_{q→1} (q/(q−1)) · log ‖f‖_q.

The quantity Ent_q(f) = (q/(q−1)) · log ‖f‖_q is known as the q-th Rényi entropy of f [12].

The starting point for this paper is an alternative way to quantify the decrease in entropy after noise, given in [13]. To state this result, let us introduce some notation. For 0 ≤ λ ≤ 1, let T ∼ λ denote a random subset T of [n] in which each element is chosen independently with probability λ. Let E(f|T) be the conditional expectation of f with respect to T, that is, a function on {0,1}^n defined by E(f|T)(x) = E_{y : y_T = x_T} f(y). Then [13]:

Ent(f_ε) ≤ E_{T ∼ (1−2ε)²} Ent( E(f|T) ).   (2)

In light of the connection between entropy and ℓ_q norms, it is natural to ask whether (2) can be extended to an inequality between ℓ_q norms (equivalently, Rényi entropies). The main result of this paper is a positive answer to this question.

Theorem 1.1: Let f be a nonnegative function on {0,1}^n. Then, for any q > 1 holds

log ‖f_ε‖_q ≤ E_{T ∼ λ} log ‖E(f|T)‖_q,   (3)

with λ = λ(q, ε) = (1−2ε)^{r(q)}, where the exponent r = r(q) is an explicitly defined function of q, given by the two-case expression (4), with one formula for 1 < q ≤ 2 and another for q ≥ 2.

Note that (3) is in fact an extension of (2), since assuming E f = 1, dividing both sides of (3) by q − 1 and taking the limit as q → 1 recovers (2). The only thing to observe is that lim_{q→1} r(q) = 2.

Remark 1.2: Since a conditional expectation of a function f is a convex combination of shifted copies of f, its norm is at most that of f, and hence (3) measures the decrease in ℓ_q norm under noise, similarly to (1). In fact, the proof of (3) follows the approach of [7] to the proof of (1). In this approach, we view both sides of the corresponding inequality as functions of ε for a fixed q and compare the derivatives. Since noise operators form a semigroup, it suffices to compare the derivatives at zero, and this is done via an appropriate logarithmic Sobolev inequality (see (7)).

1.1 Applications

Let C ⊆ {0,1}^n be a binary linear code. Let r_C denote the rank function of the binary matroid defined by C. That is, r_C(T) is the rank of the column submatrix of a generating matrix of C which contains the columns indexed by T. Applying (3) to the scaled characteristic function of C, that is, f = (2^n/|C|) · 1_C, gives the following claim, connecting the linear structure of C and its behavior under noise.
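(Another illustrative aside: the conditional expectation E(f|T) and the right-hand side of (3) are easy to evaluate numerically. In the sketch below — our own code, with λ treated as a free parameter since the explicit formula (4) for r(q) is not reproduced here — the RHS of (3) is estimated by sampling random subsets T.)

```python
import itertools
import numpy as np

def cond_expectation(f, T, n):
    """E(f|T)(x): average of f over the coordinates outside T."""
    points = list(itertools.product([0, 1], repeat=n))
    g = np.zeros_like(f, dtype=float)
    for ix, x in enumerate(points):
        block = [iy for iy, y in enumerate(points) if all(y[i] == x[i] for i in T)]
        g[ix] = np.mean(f[block])
    return g

def rhs_of_3(f, lam, q, n, samples=2000, seed=1):
    """Monte Carlo estimate of E_{T~lam} log ||E(f|T)||_q, the RHS of (3)."""
    rng = np.random.default_rng(seed)
    vals = []
    for _ in range(samples):
        T = [i for i in range(n) if rng.random() < lam]
        g = cond_expectation(f, T, n)
        vals.append(np.log(np.mean(g ** q)) / q)   # log of the l_q norm of E(f|T)
    return float(np.mean(vals))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, q, lam = 4, 3.0, 0.5
    f = rng.random(2 ** n)
    print(rhs_of_3(f, lam, q, n))
```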

Proposition 1.3: For any 0 ≤ λ ≤ 1 holds

λn − E_{T∼λ} r_C(T) ≥ max_{1<q<∞} (1/(q−1)) · log E[f_ε^q],   for ε = (1 − λ^{1/r(q)})/2.   (5)

(Here and in the rest of this section, log denotes the logarithm to base 2.)

Let us discuss the bound given by (5). First, it seems useful to extend it to the limiting values of q as well. Let F(λ, q) = (1/(q−1)) · log E[f_ε^q], for 1 < q < ∞, with ε = ε(q) = (1 − λ^{1/r(q)})/2 as in (5). Set F(λ, 1) = lim_{q→1} F(λ, q) = Ent( f_{(1−√λ)/2} ) / ln 2 and F(λ, ∞) = lim_{q→∞} F(λ, q) = log ‖ f_{(1−λ^{2 ln 2})/2} ‖_∞. Then we get

λn − E_{T∼λ} r_C(T) ≥ max_{1 ≤ q ≤ ∞} F(λ, q).

For any λ, the three values of F(λ, q) which seem to be easier to understand are F(λ, 1), F(λ, ∞), and F(λ, 2) = log E[ f²_{(1−λ^{ln 2})/2} ]. While simple examples show that F(λ, 1) can be both smaller and greater than F(λ, 2), intriguingly we always have F(λ, 2) = F(λ, ∞), and moreover this quantity is easily expressible in terms of the weight distribution of the dual code C^⊥.

Lemma 1.4: Let b_0, ..., b_n be the weight distribution of C^⊥. That is, b_i = |{ x ∈ C^⊥ : |x| = i }|, for i = 0, ..., n. Then

F(λ, 2) = F(λ, ∞) = log Σ_{i=0}^n b_i · λ^{2 ln 2 · i}.

We proceed to discuss some applications of (5).

1.1.1 Codes achieving capacity for the binary erasure channel

This is a family of codes for which we can effectively upper bound the LHS of (5), for λ close to the rate of the code. Recall that the rate of a linear code, R(C), is defined as (1/n) log |C| = dim(C)/n. A code C ⊆ {0,1}^n (more precisely, a family of codes indexed by n) achieves capacity for the BEC (binary erasure channel) if there exist two functions δ₁, δ₂ which go to zero with n such that for λ = R(C) + δ₁ the probability of decoding error, given by p_e = Pr_{T∼λ} { r_C(T) < R(C)·n }, is upper bounded by δ₂. This immediately implies (writing R for R(C)) that

λn − E_{T∼λ} r_C(T) ≤ λn − (1−δ₂)·Rn = δ₁·n + δ₂·Rn = o(n).

Hence we have the following corollary of (5).

Corollary 1.5: Let C be a BEC capacity-achieving code of rate R. Then there exists λ = R + o(1) such that for all 1 ≤ q ≤ ∞ holds F(λ, q) ≤ o(n).

The inequality F(λ, 1) ≤ o(n), that is Ent( f_{(1−√λ)/2} ) ≤ o(n), has been interpreted [11], using (2), as indicating that C is well-dispersed in {0,1}^n, in the sense that the output of the binary symmetric channel with error probability ε = (1−√λ)/2, λ = R + o(1), whose input is a random codeword from C, is very close in the Kullback–Leibler distance to the uniform distribution.
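(Illustrative aside: the quantities in Proposition 1.3 and Corollary 1.5 — the rank function r_C(T) of the binary matroid of a code, the slack λn − E_{T∼λ} r_C(T), and the BEC decoding-error probability Pr_{T∼λ}{ r_C(T) < dim C } — can be estimated by Monte Carlo from an explicit generator matrix. The sketch below is ours; the toy [7,4] Hamming generator matrix is just one standard choice.)

```python
import numpy as np

def gf2_rank(vectors):
    """Rank over GF(2) of a list of integer bitmasks (greedy XOR elimination)."""
    basis = []
    for v in vectors:
        for b in basis:
            v = min(v, v ^ b)
        if v:
            basis.append(v)
    return len(basis)

def rank_of_columns(G, T):
    """r_C(T): GF(2)-rank of the columns of the generator matrix G indexed by T."""
    cols = [int("".join(str(bit) for bit in G[:, j]), 2) for j in T]
    return gf2_rank(cols)

def estimate_slack(G, lam, samples=5000, seed=0):
    """Estimate lambda*n - E_{T~lam} r_C(T) and Pr[r_C(T) < dim C] by sampling."""
    rng = np.random.default_rng(seed)
    k, n = G.shape
    ranks = []
    for _ in range(samples):
        T = [j for j in range(n) if rng.random() < lam]
        ranks.append(rank_of_columns(G, T))
    ranks = np.array(ranks)
    return lam * n - ranks.mean(), float((ranks < k).mean())

if __name__ == "__main__":
    # generator matrix of the [7,4] Hamming code (one standard systematic choice)
    G = np.array([[1, 0, 0, 0, 1, 1, 0],
                  [0, 1, 0, 0, 1, 0, 1],
                  [0, 0, 1, 0, 0, 1, 1],
                  [0, 0, 0, 1, 1, 1, 1]])
    for lam in (0.5, 0.6, 0.8):
        slack, p_err = estimate_slack(G, lam)
        print(lam, slack, p_err)
```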

We observe that Corollary 1.5 can be interpreted as providing increasingly stronger measures of proximity of the channel output to uniform, as the noise increases beyond (1−√R)/2.

The inequality F(λ, 2) ≤ o(n), together with Lemma 1.4, provides the following bound on the components of the weight distribution of a code C which is the dual of a code achieving BEC capacity.

Proposition 1.6: Let C be the dual of a linear code C′ achieving BEC capacity. Let R′ = R(C′) be the rate of C′. Let b_0, ..., b_n be the distance distribution of C. Then for all 0 ≤ i ≤ n holds

b_i ≤ 2^{o(n)} · (1/R′)^{2 ln 2 · i}.

In particular, since the dual of a Reed-Muller code is a Reed-Muller code, and since Reed-Muller codes achieve BEC capacity [9], the bound in Proposition 1.6 holds for Reed-Muller codes. The bounds in the literature [1], [8], [14] seem to focus mostly on Reed-Muller codes of rates close to 0 or 1, or on weights which increase sublinearly in n, but do extend to all weights and to all rates. Comparing with these bounds, it seems that Proposition 1.6 improves the bounds on {b_i}, for i growing linearly in n, for constant-rate Reed-Muller codes of almost all rates: 0 < R < 1 − δ₀, for some absolute constant δ₀ < 1.

1.1.2 Bounds on weight distribution of linear codes

The following more general claim is an immediate corollary of the inequality λn − E_{T∼λ} r_C(T) ≥ F(λ, 2) and of Lemma 1.4.

Corollary 1.7: Let C be a binary linear code and let b_0, ..., b_n be the distance distribution of the dual code C^⊥. Then for any 0 ≤ λ ≤ 1 and for any 0 ≤ i ≤ n holds

b_i · λ^{2 ln 2 · i} ≤ 2^{λn − E_{T∼λ} r_C(T)}.   (6)

This claim gives upper bounds on the components of the weight distribution of C^⊥, provided we can upper bound λn − E_{T∼λ} r_C(T).

1.1.3 Ranks of random subsets in a binary matroid

We give a different way to write the inequality λn − E_{T∼λ} r_C(T) ≥ F(λ, 2), stating it as a lemma since it requires a simple argument.

Lemma 1.8: Let r_C be the rank function of the binary matroid on {1, ..., n} defined by a generating matrix of a linear code C of length n. Let 0 ≤ p ≤ 1 and let t = p^{1/(2 ln 2)}. Then

log E_{S∼p} 2^{|S| − r_C(S)} ≤ E_{T∼t} ( |T| − r_C(T) ).
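(Illustrative aside: for a small code the weight distribution b_0, ..., b_n of the dual code can be computed by brute force, after which the quantities in Lemma 1.4 and Corollary 1.7 are immediate. The sketch below is ours; the exponent 2 ln 2 · i is the one appearing in Lemma 1.4 as stated above.)

```python
import itertools
import numpy as np

def dual_code_weights(G):
    """Weight distribution (b_0, ..., b_n) of the dual code C^perp, by brute force."""
    k, n = G.shape
    b = [0] * (n + 1)
    for x in itertools.product([0, 1], repeat=n):
        # x is in the dual code iff it is orthogonal (mod 2) to every row of G
        if all(sum(gi * xi for gi, xi in zip(row, x)) % 2 == 0 for row in G):
            b[sum(x)] += 1
    return b

if __name__ == "__main__":
    G = np.array([[1, 0, 0, 0, 1, 1, 0],
                  [0, 1, 0, 0, 1, 0, 1],
                  [0, 0, 1, 0, 0, 1, 1],
                  [0, 0, 0, 1, 1, 1, 1]])
    b = dual_code_weights(G)
    lam = 0.7
    # left-hand side factors of (6): b_i * lam^(2*ln2*i); the right-hand side
    # 2^{lam*n - E r_C(T)} can be estimated as in the previous sketch
    for i, bi in enumerate(b):
        if bi:
            print(i, bi, bi * lam ** (2 * np.log(2) * i))
```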

Let the dimension of C be k. Recall that the Tutte polynomial [5] of the matroid defined by C is

T_C(x, y) = Σ_{S ⊆ {1,...,n}} (x−1)^{k − r_C(S)} · (y−1)^{|S| − r_C(S)}.

It is easy to see that an alternative way to write the inequality of Lemma 1.8 is

log ( p^k (1−p)^{n−k} · T_C( 1/p, (1+p)/(1−p) ) ) ≤ t^{k+1} (1−t)^{n−k−1} · (∂T_C/∂y)( 1/t, 1/(1−t) ).

As an immediate implication of Lemma 1.8 we get the following tail bound.

Corollary 1.9: For any a ≥ 0 holds

Pr_{S∼p} { |S| − r_C(S) ≥ E_{T∼t}( |T| − r_C(T) ) + a } ≤ 2^{−a}.

Remark 1.10: Consider a different approach to obtaining tail bounds for the function S ↦ |S| − r_C(S). Let f(S) = |S| − r_C(S). Then |f(x) − f(y)| ≤ |x − y| for any x, y ∈ {0,1}^n (here |x − y| stands for the Hamming distance between x and y) and hence, by the bounded differences inequality [4], for any a ≥ 0 holds

Pr_{S∼p} { f(S) ≥ E_{T∼p} f(T) + a } ≤ e^{−2a²/n}.

Let µ : p ↦ E_{S∼p} f(S). By the Margulis–Russo formula [10],

µ′(p) = (1/p) · E_{S∼p} Σ_{i∈S} ( f(S) − f(S \ {i}) ).

Hence, since f is monotone increasing, µ′ ≥ 0 and µ is increasing. Moreover, since f is supermodular (by submodularity of the rank), it is easy to see that µ is convex. In particular, since µ(0) = 0, we have µ(t) ≥ (t/p)·µ(p) for t ≥ p. Taking everything into account, we have

Pr_{S∼p} { f(S) ≥ E_{T∼t} f(T) + a } ≤ e^{−2(µ(t) − µ(p) + a)²/n} ≤ e^{−2(a + ((t−p)/p)·µ(p))²/n}.

Note that this does not seem to recover the bound of Corollary 1.9 when µ(t) is small, as in the case of BEC capacity-achieving codes.

To conclude, we consider the special case of graphic matroids. For a graph G = (V, E) with n edges, let M be the matroid on {1, ..., n} whose independent sets are forests in G. This is a binary matroid, and Lemma 1.8 specializes as follows.

Corollary 1.11: Let G = (V, E) be a graph. For a subset of edges S ⊆ E, let c(S) denote the number of the connected components in the subgraph (V, S). Then

log E_{S∼p} 2^{|S| + c(S)} ≤ t·|E| + E_{T∼t} c(T).

This paper is organized as follows. We prove Theorem 1.1 in Section 2. All the remaining proofs are in Section 3.

2 Proof of Theorem 1.1

We start with a log-Sobolev-type inequality. Recall that the Dirichlet form E(f, g) for functions f and g on the boolean cube is defined by E(f, g) = E_x Σ_{y∼x} ( f(x) − f(y) )( g(x) − g(y) ). Here y ∼ x means that x and y differ in precisely one coordinate.
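(Illustrative aside: Corollary 1.11 is easy to test numerically on a small graph. The sketch below is our code, with t = p^{1/(2 ln 2)} taken from Lemma 1.8 as stated above; it estimates both sides by Monte Carlo, and the left-hand side should come out no larger than the right-hand side.)

```python
import numpy as np

def num_components(num_vertices, edges, subset):
    """Number of connected components of (V, S) via union-find."""
    parent = list(range(num_vertices))
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]   # path halving
            a = parent[a]
        return a
    comps = num_vertices
    for idx in subset:
        u, v = edges[idx]
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            comps -= 1
    return comps

def check_corollary(num_vertices, edges, p, samples=20000, seed=0):
    """Monte Carlo estimates of both sides of Corollary 1.11."""
    rng = np.random.default_rng(seed)
    t = p ** (1.0 / (2 * np.log(2)))        # t = p^{1/(2 ln 2)}, as in Lemma 1.8
    m = len(edges)
    lhs_terms, rhs_terms = [], []
    for _ in range(samples):
        S = [i for i in range(m) if rng.random() < p]
        T = [i for i in range(m) if rng.random() < t]
        lhs_terms.append(2.0 ** (len(S) + num_components(num_vertices, edges, S)))
        rhs_terms.append(num_components(num_vertices, edges, T))
    lhs = np.log2(np.mean(lhs_terms))
    rhs = t * m + np.mean(rhs_terms)
    return lhs, rhs

if __name__ == "__main__":
    # complete graph K4: 4 vertices, 6 edges
    edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
    print(check_corollary(4, edges, p=0.4))
```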

Theorem 2.1: Let f be a nonnegative function on {0,1}^n. Then for any q > 1 holds

E( f, f^{q−1} ) ≥ 4 r(q) · E[f^q] · ( n ln ‖f‖_q − Σ_{|T|=n−1} ln ‖E(f|T)‖_q ),   (7)

where r(q) is given in (4).

Moreover, for 1 < q < 2 equality is attained if and only if f is a constant function, and for q ≥ 2 if and only if there is a collection of subcubes of distance at least 2 from each other such that f is constant on each such subcube.

Corollary 2.2: Let δ > 0 be arbitrarily small. For any nonnegative non-constant function f on {0,1}^n and for any q > 1 holds

E( f, f^{q−1} ) > 4 ( r(q) − δ ) · E[f^q] · ( n ln ‖f‖_q − Σ_{|T|=n−1} ln ‖E(f|T)‖_q ).

Theorem 2.1 is proved below. For now we assume that it holds and proceed with the proof of Theorem 1.1. Observe that the claim of the theorem is immediate for constant functions, since both sides of (3) are log E f. So we may and will assume that f is not constant (note that this implies f_ε is non-constant for all 0 ≤ ε < 1/2).

Fix q > 1. Let δ > 0. We will prove (3) for λ(q, ε) = (1−2ε)^{r(q)−δ}. Taking δ to zero will then imply (3) for λ = (1−2ε)^{r(q)} as well. The proof proceeds by induction on n. The claim clearly holds for n = 0. Let n > 0. We assume that the claim holds for all dimensions smaller than n, and show that it holds for n as well.

For fixed q and δ, both sides of (3) are functions of f and ε. Let F(f, ε) denote the LHS and G(f, ε) the RHS. Clearly F(f, 0) = G(f, 0) = log ‖f‖_q. We will argue that if F(f, ε) = G(f, ε) for some 0 ≤ ε < 1/2 then necessarily F′(f, ε) < G′(f, ε) (here and below the derivatives are taken w.r.t. ε). It is easy to see that this implies F(f, ε) ≤ G(f, ε) for all 0 ≤ ε ≤ 1/2, which is precisely what we need to show. Due to the fact that the noise operators form a semigroup under composition, T_ρ T_ε = T_{ε+ρ−2ερ}, it will suffice to compare the derivatives at zero. This comparison is done in the following lemma.

Lemma 2.3: Let g be a nonnegative non-constant function on {0,1}^n with E g = 1. Then F′(g, 0) < G′(g, 0).

Proof: As shown in [7], we have F′(f, 0) = − E( f, f^{q−1} ) / ( 2 E[f^q] ). We claim that

G′(f, 0) = −2 ( r(q) − δ ) · ( n log ‖f‖_q − Σ_{|T|=n−1} log ‖E(f|T)‖_q ),

which will conclude the proof of the lemma, by Corollary 2.2.
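(Illustrative aside: the semigroup identity T_ρ T_ε = T_{ε+ρ−2ερ} used above can be checked directly on the transition matrices of the noise operator; the short sketch below — our code — does this for small n.)

```python
import itertools
import numpy as np

def noise_matrix(n, eps):
    """Transition matrix of T_eps acting on functions on {0,1}^n."""
    points = list(itertools.product([0, 1], repeat=n))
    M = np.zeros((len(points), len(points)))
    for ix, x in enumerate(points):
        for iy, y in enumerate(points):
            d = sum(a != b for a, b in zip(x, y))       # Hamming distance
            M[ix, iy] = eps ** d * (1 - eps) ** (n - d)
    return M

if __name__ == "__main__":
    n, eps, rho = 3, 0.15, 0.25
    composed = noise_matrix(n, rho) @ noise_matrix(n, eps)
    direct = noise_matrix(n, eps + rho - 2 * eps * rho)  # T_rho T_eps = T_{eps+rho-2*eps*rho}
    print(np.allclose(composed, direct))                 # expect True
```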

In fact, note that λ(0) = 1 and that λ′(0) = −2( r(q) − δ ). Hence we have:

G′(f, 0) = (d/dε)|_{ε=0} E_{T∼λ} log ‖E(f|T)‖_q = (d/dε)|_{ε=0} ( λ^n · log ‖f‖_q + λ^{n−1}(1−λ) · Σ_{|T|=n−1} log ‖E(f|T)‖_q + ... ) = −2( r(q) − δ ) · ( n log ‖f‖_q − Σ_{|T|=n−1} log ‖E(f|T)‖_q ),

completing the proof. (The terms corresponding to |T| ≤ n−2 carry a factor of (1−λ)² and do not contribute to the derivative at ε = 0.)

We can now conclude the proof of Theorem 1.1. Assume that F(f, ε) = G(f, ε) for some 0 ≤ ε < 1/2. Consider the functions F(f_ε, ρ) and G(f_ε, ρ), for 0 ≤ ρ ≤ 1/2. We have

F(f_ε, ρ) = log ‖(f_ε)_ρ‖_q = log ‖f_{ε+ρ−2ερ}‖_q = F(f, ε + ρ − 2ερ),

and

G(f_ε, ρ) = E_{R∼λ(q,ρ)} log ‖E(f_ε|R)‖_q ≤ E_{R∼λ(q,ρ)} E_{T⊆R, T∼λ(q,ε)} log ‖E(f|T)‖_q = E_{T∼λ(q,ρ)·λ(q,ε)} log ‖E(f|T)‖_q = E_{T∼λ(q, ε+ρ−2ερ)} log ‖E(f|T)‖_q = G(f, ε + ρ − 2ερ).

The inequality follows from the induction hypothesis for R ≠ [n] and from the assumption F(f, ε) = G(f, ε) for R = [n]. Hence, by Lemma 2.3,

F′(f, ε) = (1/(1−2ε)) · (∂/∂ρ) F(f_ε, 0) < (1/(1−2ε)) · (∂/∂ρ) G(f_ε, 0) ≤ G′(f, ε),

concluding the proof.

2.1 Proof of Theorem 2.1

We start with the base case n = 1. This case is dealt with in the following claim.

Proposition 2.4: Let g be a nonnegative function on a 2-point space with E g = 1. Then

E( g, g^{q−1} ) ≥ 4 r(q) · E[g^q] · log ‖g‖_q,   (8)

where r(q) is given in (4). Moreover, for 1 < q < 2 equality is attained if and only if g is a constant function, and for q ≥ 2 if and only if g is a constant function, or a multiple of a characteristic function of a point.

Proposition 2.4 is proved below. Here we assume its validity and proceed with the proof of Theorem 2.1.

First, we restate the 2-point inequality (8) in an equivalent form, with two modifications: replacing ‖g‖_q with E[g^q], and extending the inequality by homogeneity to general nonnegative functions. Let g be a nonnegative function on a 2-point space. Then

E( g, g^{q−1} ) ≥ (4 r(q)/q) · E[g^q] · ln ( E[g^q] / (E g)^q ).

We can now extend this inequality to general n by the following standard argument. Let f be a nonnegative function on {0,1}^n. Then, denoting by e_i the i-th unit vector, we have

E( f, f^{q−1} ) = E_x Σ_{i=1}^n ( f(x) − f(x+e_i) ) ( f^{q−1}(x) − f^{q−1}(x+e_i) ).

For x and i, let f_{x,i} denote the restriction of f to the 2-point space {x, x+e_i}. Then the RHS is

Σ_{i=1}^n (1/2^{n−1}) Σ_{x : x_i = 0} E( f_{x,i}, f_{x,i}^{q−1} ).

By the 2-point inequality, this is at least

(4 r(q)/q) · Σ_{i=1}^n (1/2^{n−1}) Σ_{x : x_i = 0} E[f_{x,i}^q] · ln ( E[f_{x,i}^q] / (E f_{x,i})^q ).

Let θ_{x,i} = E[f_{x,i}^q] / Σ_{y : y_i = 0} E[f_{y,i}^q]. Then Σ_{x : x_i = 0} θ_{x,i} = 1. Note also that (1/2^{n−1}) Σ_{y : y_i = 0} E[f_{y,i}^q] = E[f^q]. By convexity of −ln, the RHS above is at least

(4 r(q)/q) · Σ_{i=1}^n E[f^q] · ln ( E[f^q] / E[ ( E(f | [n] \ {i}) )^q ] ) = 4 r(q) · E[f^q] · ( n ln ‖f‖_q − Σ_{|T|=n−1} ln ‖E(f|T)‖_q ),

proving (7) (here we used that E[ ( E(f | [n] \ {i}) )^q ] = (1/2^{n−1}) Σ_{x : x_i = 0} (E f_{x,i})^q).

It remains to consider the cases of equality. Tracking back the conditions for equality in the proof above, it is easy to see that equality holds for f if and only if it holds for all one-dimensional restrictions f_{x,i}. For 1 < q < 2 this means that all these restrictions are constant functions, implying f is constant. For q ≥ 2 this means these restrictions are either constant, or multiples of a characteristic function of a point. It is easy to see (e.g., by induction on the dimension) that this implies that there is a collection of subcubes of distance at least 2 from each other such that f is constant on each such subcube.

2.2 Proof of Proposition 2.4

There are two cases to consider, 1 < q < 2 and q ≥ 2. These cases are dealt with in the following subsections, starting with the somewhat easier case q ≥ 2.
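(Illustrative aside: the decomposition of the Dirichlet form E(f, f^{q−1}) into contributions of the one-dimensional restrictions f_{x,i}, which drives the tensorization step above, can be verified numerically; the sketch below is ours.)

```python
import itertools
import numpy as np

def dirichlet_form(f, g, n):
    """E(f,g) = E_x sum_{y~x} (f(x)-f(y)) (g(x)-g(y)) on {0,1}^n."""
    points = list(itertools.product([0, 1], repeat=n))
    index = {x: i for i, x in enumerate(points)}
    total = 0.0
    for x in points:
        for i in range(n):
            y = x[:i] + (1 - x[i],) + x[i + 1:]
            total += (f[index[x]] - f[index[y]]) * (g[index[x]] - g[index[y]])
    return total / len(points)

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    n, q = 3, 2.5
    points = list(itertools.product([0, 1], repeat=n))
    index = {x: i for i, x in enumerate(points)}
    f = rng.random(2 ** n)
    # sum of the 2-point Dirichlet forms of the restrictions f_{x,i}, normalized by 2^{n-1}
    per_edge = 0.0
    for i in range(n):
        for x in points:
            if x[i] == 0:
                y = x[:i] + (1,) + x[i + 1:]
                fx, fy = f[index[x]], f[index[y]]
                per_edge += (fx - fy) * (fx ** (q - 1) - fy ** (q - 1))
    per_edge /= 2 ** (n - 1)
    print(np.isclose(dirichlet_form(f, f ** (q - 1), n), per_edge))   # expect True
```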

2.2.1 The case q ≥ 2

We will show that for a nonnegative function g on a 2-point space with E g = 1 holds

E( g, g^{q−1} ) ≥ ( 2 / ((q−1) ln 2) ) · E[g^q] · ln E[g^q].   (9)

Observe that equality holds in two cases: if g is a constant-1 function, or a multiple of a characteristic function of a point. We will show that these are the only two cases for which equality holds.

Assume, w.l.o.g., that g(0) ≤ g(1). Let 0 ≤ x ≤ 1, and let g(0) = 1−x and g(1) = 1+x. Then (9) becomes:

2x ( (1+x)^{q−1} − (1−x)^{q−1} ) ≥ ( 2 / ((q−1) ln 2) ) · ( (1−x)^q + (1+x)^q )/2 · ln ( ( (1−x)^q + (1+x)^q )/2 ).

Setting y = (q−1) · x · ( (1+x)^{q−1} − (1−x)^{q−1} ), z = ( (1−x)^q + (1+x)^q )/2, and rearranging, this is the same as y ≥ z log₂ z. Note that since x = 0 is one of the cases for which equality holds, we may assume that x > 0. Now, let t = (1+x)/(1−x). Then t ≥ 1,

y = (q−1) · 2^{q−1} · (t−1)(t^{q−1}−1) / (t+1)^q,   and   z = 2^{q−1} · (t^q+1) / (t+1)^q.

Rearranging and simplifying, (9) becomes an inequality in t and q, for t ≥ 1. By convexity of the function a ↦ a^q, we have z ≥ 1. By concavity of the logarithm, log₂ z ≤ (z − 1)/ln 2. We remark that the only case in which equality holds in this last inequality is z = 1, that is t = 1, and this implies g is constant. So, it would suffice to prove the inequality obtained from y ≥ z log₂ z by replacing log₂ z with (z − 1)/ln 2. Simplifying and rearranging, the resulting inequality can be written in terms of the norms of the function h on the two-point space with values t and 1. Recall that the function F : p ↦ ln ‖h‖_{1/p} is convex (this is a consequence of Hölder's inequality). Rewriting the last inequality in terms of F, we get an inequality of the form F(a) + F(b) ≥ 2 F((a+b)/2), which is true by the convexity of F. This completes the proof of (9). Tracing back the conditions for equality in (9), it is easy to see that it holds only in the two cases mentioned above.
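(Illustrative aside: inequality (9), in the form stated above — the constant 2/((q−1) ln 2) is the one appearing there — can be checked numerically on a grid of x; the sketch below is ours, and prints the worst-case slack, which should be nonnegative up to floating-point error and vanish at x = 0 and x = 1.)

```python
import numpy as np

def two_point_check(q, num=2001):
    """Minimum over x in [0,1] of LHS - RHS of (9) for g(0)=1-x, g(1)=1+x."""
    worst = np.inf
    for x in np.linspace(0.0, 1.0, num):
        lhs = 2 * x * ((1 + x) ** (q - 1) - (1 - x) ** (q - 1))
        egq = ((1 - x) ** q + (1 + x) ** q) / 2          # E[g^q]
        rhs = 2.0 / ((q - 1) * np.log(2)) * egq * np.log(egq)
        worst = min(worst, lhs - rhs)
    return worst

if __name__ == "__main__":
    for q in (2.0, 3.0, 5.0, 10.0):
        print(q, two_point_check(q))
```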

2.2.2 The case 1 < q < 2

We will show that

E( g, g^{q−1} ) ≥ (4 r(q)/q) · E[g^q] · ln E[g^q],   (10)

with r = r(q) as in (4), and that equality holds iff g is a constant-1 function.

Let us first deal with the case g(0) = 0. In this case g is a multiple of the characteristic function of a point, the inequality reduces to an elementary numerical inequality in q, and it is easy to verify that this is a strict inequality for 1 < q < 2.

Proceeding as in the proof of the first case, using the same notation, and writing r for r(q), the inequality (10) is equivalent to an inequality of the form y ≥ r · z log₂ z, up to a normalization depending on q. Substituting t = g(1)/g(0) (note that we may assume g(0) > 0, having dealt with the case g(0) = 0) and rearranging, (10) becomes an explicit inequality in t and q, for t ≥ 1. Since z ≥ 1, concavity of the logarithm implies, as before, that it suffices to prove the inequality obtained by replacing the logarithm with its linear upper bound; equality in this step holds only if t = 1, which means that g is constant.

Multiplying out and rearranging, the resulting inequality can be written in the form F(t) ≥ G(t) + H(t), where F, G and H are explicit functions of t (depending on q and r) satisfying F(1) = G(1) + H(1) and F′(1) = G′(1) + H′(1). So it suffices to show F″(t) ≥ G″(t) + H″(t), which amounts to, after some simplification and rearranging, an inequality between two explicit functions of the variable y = (t+1)/t, to be proved for 1 < y ≤ 2. Denote its left-hand side by h(y). Then h″(y) is proportional, with a coefficient depending on q and r, to y^{q−3}; substituting the value of r = r(q) from (4), the coefficient is seen to be non-positive for 1 < q < 2. Hence h″ ≤ 0 for 1 < y ≤ 2, and it suffices to verify the required inequality at y = 2. Substituting y = 2 we get an identity in q. This completes the proof of (10). Tracing back the conditions for equality, it is easy to see that it holds only if g is a constant function.

3 Remaining proofs

Proof of Proposition 1.3: We will need some facts from Fourier analysis on the boolean cube [10]. First, recall that for a function f on {0,1}^n and for a subset T ⊆ [n] holds E(f|T) = Σ_{R⊆T} f̂(R) W_R, where {W_R} are the Walsh–Fourier characters. Recall also that if f = (2^n/|C|)·1_C, where C is a linear code, then f̂ = 1_{C^⊥}. We also recall a fact from linear algebra. For T ⊆ [n], let C^⊥(T) be the subspace { R ⊆ T : R ∈ C^⊥ }. It is well known and easy to see that dim C^⊥(T) = |T| − r_C(T), where r_C(T) is the dimension of the subset of columns indexed by T in a generating matrix of C. Hence, for x ∈ {0,1}^n:

E(f|T)(x) = Σ_{R ∈ C^⊥(T)} W_R(x) = |C^⊥(T)| = 2^{|T| − r_C(T)} if x ∈ (C^⊥(T))^⊥, and 0 otherwise.

And hence ‖E(f|T)‖_q^q = E[ (E(f|T))^q ] = 2^{(q−1)(|T| − r_C(T))}. Using this in (3) gives, for any q > 1, that

(1/(q−1)) · log E[f_ε^q] ≤ E_{T∼λ} ( |T| − r_C(T) ) = λn − E_{T∼λ} r_C(T),

where λ = (1−2ε)^{r(q)}. Rearranging, we get the claim of the proposition.

Proof of Lemma 1.4: Recall that for a function f on {0,1}^n and for 0 ≤ ε ≤ 1/2 holds f̂_ε(R) = f̂(R)·(1−2ε)^{|R|}. Let b_0, ..., b_n be the distance distribution of C^⊥. Then, since 1−2ε = λ^{ln 2} for q = 2, we have

F(λ, 2) = log Σ_{R∈{0,1}^n} f̂²(R) · (1−2ε)^{2|R|} = log Σ_{i=0}^n b_i · λ^{2 ln 2 · i}.

On the other hand, it is easy to see that for f = (2^n/|C|)·1_C and for any 0 ≤ ε ≤ 1/2 holds

‖f_ε‖_∞ = max_x | Σ_{R∈{0,1}^n} f̂(R)(1−2ε)^{|R|} W_R(x) | = Σ_{R∈{0,1}^n} f̂(R)(1−2ε)^{|R|} = f_ε(0),

since f̂ is nonnegative. Hence, recalling that 1−2ε = λ^{2 ln 2} for q = ∞,

F(λ, ∞) = log ‖f_ε‖_∞ = log f_ε(0) = log Σ_{R∈{0,1}^n} f̂(R)(1−2ε)^{|R|} = log Σ_{i=0}^n b_i · λ^{2 ln 2 · i}.
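(Illustrative aside: the two Fourier facts used in this section — that E(f|T) keeps exactly the Walsh–Fourier coefficients supported inside T, and that noise multiplies f̂(R) by (1−2ε)^{|R|} — are easy to verify numerically; the sketch below, our code, checks the first of them.)

```python
import itertools
import numpy as np

def walsh_coefficients(f, n):
    """hat f(R) = E_x f(x) * (-1)^{sum_{i in R} x_i}, for every R subset of [n]."""
    points = list(itertools.product([0, 1], repeat=n))
    subsets = [R for k in range(n + 1) for R in itertools.combinations(range(n), k)]
    return {R: np.mean([f[i] * (-1) ** sum(x[j] for j in R)
                        for i, x in enumerate(points)]) for R in subsets}

def cond_expectation(f, T, n):
    """E(f|T)(x): average of f over the coordinates outside T."""
    points = list(itertools.product([0, 1], repeat=n))
    g = np.zeros_like(f, dtype=float)
    for ix, x in enumerate(points):
        block = [iy for iy, y in enumerate(points) if all(y[i] == x[i] for i in T)]
        g[ix] = np.mean(f[block])
    return g

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    n, T = 3, (0, 2)
    f = rng.random(2 ** n)
    fc = walsh_coefficients(f, n)
    gc = walsh_coefficients(cond_expectation(f, T, n), n)
    # E(f|T) keeps exactly the Fourier coefficients supported inside T
    ok = all(np.isclose(gc[R], fc[R] if set(R) <= set(T) else 0.0) for R in fc)
    print(ok)   # expect True
```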

Proof of Lemma 1.8: Let ε be given by (1−2ε)² = p. Using (3) with q = 2 we have, for f = (2^n/|C|)·1_C, that

E[f_ε²] = Σ_{S∈{0,1}^n} f̂²(S)·(1−2ε)^{2|S|} = Σ_{S∈{0,1}^n} f̂²(S)·p^{|S|} = E_{S∼p} E[ (E(f|S))² ] = E_{S∼p} 2^{|S| − r_C(S)}.

Hence the claim of the lemma follows from the inequality λn − E_{T∼λ} r_C(T) ≥ F(λ, 2) with λ = (1−2ε)^{r(2)} = (1−2ε)^{1/ln 2} = p^{1/(2 ln 2)} = t.

Proof of Corollary 1.11: For S ⊆ E, the matroid rank of S is given by r(S) = Σ_{i=1}^{c(S)} ( |C_i| − 1 ) = |V| − c(S), where {C_i} are the connected components of (V, S). Hence, |S| − r(S) = |S| + c(S) − |V|. Substituting this into the claim of Lemma 1.8 gives the claim of this corollary.

Acknowledgments

I am grateful to Or Ordentlich for many very helpful conversations and valuable remarks.

References

[1] E. Abbe, A. Shpilka, and A. Wigderson, Reed–Muller codes for random erasures and errors, IEEE Trans. Information Theory 61(10): 5229–5252, 2015.

[2] W. Beckner, Inequalities in Fourier analysis, Annals of Math. 102 (1975), 159–182.

[3] A. Bonami, Étude des coefficients de Fourier des fonctions de L^p(G), Annales de l'Institut Fourier 20 (1970), 335–402.

[4] S. Boucheron, G. Lugosi, and P. Massart, Concentration Inequalities, Oxford University Press, 2013.

[5] T. H. Brylawski and J. G. Oxley, The Tutte polynomial and its applications, in Matroid Applications, Encyclopedia Math. Appl. 40, N. White, ed., Cambridge University Press, 1992.

[6] T. Cover and J. Thomas, Elements of Information Theory, Wiley, 2006.

[7] L. Gross, Logarithmic Sobolev inequalities, Amer. J. Math. 97 (1975), 1061–1083.

[8] S. Kudekar, S. Kumar, M. Mondelli, H. D. Pfister, and R. L. Urbanke, Comparing the bit-MAP and block-MAP decoding thresholds of Reed–Muller codes on BMS channels, ISIT 2016.

[9] S. Kudekar, S. Kumar, M. Mondelli, H. D. Pfister, E. Şaşoğlu, and R. L. Urbanke, Reed–Muller codes achieve capacity on erasure channels, IEEE Trans. Information Theory 63(7), 2017.

[10] R. O'Donnell, Analysis of Boolean Functions, Cambridge University Press, 2014.

[11] O. Ordentlich, Y. Polyanskiy, and R. Urbanke, personal communication, 2016.

[12] A. Rényi, On measures of entropy and information, in Proc. 4th Berkeley Symp. on Math. Statist. Probability, Berkeley, CA, 1961, vol. 1, pp. 547–561.

[13] A. Samorodnitsky, On the entropy of a noisy function, IEEE Trans. Information Theory 62(10), 2016.

[14] A. Shpilka and O. Sberlo, Weight distribution of Reed–Muller codes with linear degrees, manuscript, 2018. See also O. Sberlo, MSc thesis, WIS, 2018.

[15] A. D. Wyner and J. Ziv, A theorem on the entropy of certain binary sequences and applications: Part I, IEEE Trans. Inform. Theory, vol. 19, no. 6, pp. 769–772, 1973.
