On Extracting Common Random Bits From Correlated Sources on Large Alphabets
University of Pennsylvania ScholarlyCommons, Statistics Papers, Wharton Faculty Research

On Extracting Common Random Bits From Correlated Sources on Large Alphabets

Siu On Chan, Elchanan Mossel (University of Pennsylvania), and Joe Neeman

Recommended citation: Chan, S. O., Mossel, E., & Neeman, J. (2014). On Extracting Common Random Bits From Correlated Sources on Large Alphabets. IEEE Transactions on Information Theory, 60(3).
On extracting common random bits from correlated sources on large alphabets

Siu On Chan², Elchanan Mossel¹,², and Joe Neeman¹

¹ Department of Statistics, UC Berkeley
² Department of Computer Science, UC Berkeley

January 24, 2014

Abstract

Suppose Alice and Bob receive strings X = (X₁, …, X_n) and Y = (Y₁, …, Y_n), each uniformly random in [s]^n, but so that X and Y are correlated. For each symbol i, we have that Y_i = X_i with probability 1 − ε, and otherwise Y_i is chosen independently and uniformly from [s]. Alice and Bob wish to use their respective strings to extract a uniformly chosen common sequence from [s]^k, but without communicating. How well can they do? The trivial strategy of outputting the first k symbols yields an agreement probability of (1 − ε + ε/s)^k. In a recent work by Bogdanov and Mossel it was shown that in the binary case, where s = 2 and k = k(ε) is large enough, it is possible to extract k bits with a better agreement probability rate. In particular, it is possible to achieve agreement probability (kε)^{−1/2} 2^{−kε/(2(1−ε/2))} using a random construction based on Hamming balls, and this is optimal up to lower-order terms. In the current paper we consider the same problem over larger alphabet sizes s, and we show that the agreement probability rate changes dramatically as the alphabet grows. In particular, we show that no strategy can achieve agreement probability better than (1 − ε)^k (1 + δ(s))^k, where δ(s) → 0 as s → ∞. We also show that Hamming-ball-based constructions have a much lower agreement probability rate than the trivial algorithm as s → ∞. Our proofs and results are intimately related to subtle properties of hypercontractive inequalities.

The authors were supported by NSF grant DMS and DOD ONR grant N
1 Introduction

For an integer s ≥ 2, consider two [s]^n-valued random variables X, Y (where [s] = {0, 1, …, s − 1}) which are sampled by first choosing X uniformly and then, independently for every coordinate i, taking Y_i to be a copy of X_i with probability 1 − ε and an independent uniform sample from [s] otherwise. We will write P_ε for this joint distribution on X and Y. Note that X and Y are both uniformly distributed in [s]^n. The non-interactive correlation distillation (NICD) problem is defined as follows: suppose that one party (Alice) receives X and another (Bob) receives Y. Without any communication, each party chooses a string that is uniformly distributed in [s]^k, with the goal of maximizing the probability that the two strings chosen by Alice and Bob are identical.

1.1 Motivation and Related Work

This problem was studied in [1] in the case s = 2, with motivation from various areas. One major motivation comes from the goal of extracting a unique identification string from process variations [3, 12], particularly in a noisy setup [9]. The case where the goal of the two parties is to extract a single bit was studied independently a number of times; in this case the optimal protocol is for the two parties to use the first bit. See [11] for references and for a study of the problem of extracting one bit from two correlated sequences with different correlation structures. In [4, 5] a related question is studied: if m parties receive noisy versions of a common random string, where the noise of each party is independent, what is the strategy for the m parties that maximizes the probability that the parties agree on a single random bit of output without communicating? [4] shows that for large m, using the majority function on all bits is superior to using a single bit, and [5] uses hypercontractive inequalities to show that for large m, majority is close to being optimal. Both results were recently extended to general string spaces in [6].
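The distribution P_ε just defined is easy to simulate. The following sketch (our own code; the function names are not from the paper) samples from P_ε and estimates the agreement probability of the strategy that outputs the first k symbols, comparing it against the exact value (1 − (1 − 1/s)ε)^k discussed below.

```python
import random

def sample_pair(n, s, eps, rng):
    """Sample (X, Y) from P_eps: X is uniform in [s]^n, and each Y_i is a
    copy of X_i with probability 1 - eps, else an independent uniform symbol."""
    x = [rng.randrange(s) for _ in range(n)]
    y = [xi if rng.random() < 1 - eps else rng.randrange(s) for xi in x]
    return x, y

def trivial_agreement(k, s, eps, trials=100_000, seed=0):
    """Empirical probability that X and Y agree on their first k symbols."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        x, y = sample_pair(k, s, eps, rng)
        hits += (x == y)
    return hits / trials

k, s, eps = 5, 4, 0.2
print(trivial_agreement(k, s, eps))       # Monte Carlo estimate
print((1 - (1 - 1 / s) * eps) ** k)       # exact: (1 - (1 - 1/s) eps)^k
```

Per coordinate, a symbol agrees with probability (1 − ε) + ε/s, which is exactly the base of the exponential above.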
For any k ∈ ℕ, one protocol, which we will call the trivial protocol, is for both parties to take the first k symbols of their strings. The success probability of this protocol is (1 − (1 − 1/s)ε)^k ≈ exp(−kε(1 − 1/s)). When s = 2 and the protocol outputs a single bit (i.e., k = 1), it is known (see e.g. [4]) that the optimal protocol is for both parties to choose the first bit. For larger k, this is no longer true. Bogdanov and Mossel [1] studied the case s = 2, and showed that any protocol which outputs a uniformly random length-k string
has a success probability of at most exp(−kε(ln 2)/2). In other words, if p is the success probability of the trivial algorithm for choosing a k-bit string, then every protocol with success probability at least p outputs at most k/ln 2 bits. Bogdanov and Mossel showed that their bound was sharp by providing an example (for a restricted range of ε and k) whose success probability, for any δ > 0, is at least 2^{−kε(1+δ)/2} for small ε and large k. In other words, if p is the success probability of the trivial algorithm for choosing a k-bit string, then they gave a protocol that succeeds with probability p and produces a string of length k/((1 + δ) ln 2). Their construction was built by taking random translations of Hamming balls; we will return to it in more detail later.

1.2 Our results

We study an extension of the upper bound of [1] to a larger alphabet. In our main result we show that in the case of large alphabets, the constant-factor gap between the upper bound and the performance of the trivial algorithm vanishes; hence, the trivial algorithm is almost optimal for large alphabets. In particular, we show that no strategy can achieve agreement probability better than (1 − ε)^k (1 + δ(s))^k, where δ(s) → 0 as s → ∞. We then turn to analyze generalizations of the Hamming-ball-based construction of [1]. Interestingly, we show that these have a much lower agreement probability rate than the trivial algorithm as s → ∞. In this respect it is interesting to compare the case of a large number of parties that extract a single symbol to the case of two parties who extract a longer string. In the first case, the results of [6] generalize those of [4, 5] to show that Hamming-ball-based protocols are almost optimal for all values of s when the number of parties m is large. In the case presented here, Hamming ball type constructions quickly deteriorate as s increases and the trivial protocol becomes almost optimal.
The difference between the two phenomena may be explained by the fact that the problem studied in [4, 5] is closely related to reverse-hypercontractive inequalities, which hold uniformly in s [6], while the problem studied here is closely related to hypercontractive inequalities, which deteriorate as s increases. Our results show that the trivial algorithm is optimal up to a factor of (1 + δ(s))^k, where δ(s) → 0 as s → ∞. An interesting open problem is to find an almost optimal algorithm for large s, i.e., an algorithm whose agreement probability is provably optimal up to a factor of 2^{o(k)}. It is quite possible
that the trivial protocol is optimal for some large fixed values of s and all large enough k.

2 Definitions and results

A protocol for NICD is defined by two functions f, g : [s]^n → [s]^k. Upon receiving their strings X, Y ∈ [s]^n, the two parties compute f(X) and g(Y) respectively. The protocol is successful if both parties agree on the same output; that is, if f(X) = g(Y). Therefore, finding an optimal NICD algorithm is equivalent to finding functions f, g : [s]^n → [s]^k which maximize P_ε(f(X) = g(Y)). In the introduction, we mentioned the requirement that f(X) and g(Y) are uniformly distributed on [s]^k. In fact, we will require less for our negative results and guarantee more in our positive results. In particular, for our negative results, we will only assume that f and g have min-entropy at least k, meaning that P(f(X) = z) ≤ s^{−k} for all z ∈ [s]^k, and similarly for g. Of course, if f(X) is uniformly distributed on [s]^k then f has min-entropy k.

2.1 Reduction to a question about sets

Using an observation of [1], we can reduce the NICD problem to the problem of finding sets A ⊆ [s]^n which maximize P_ε(Y ∈ A | X ∈ A). On the one hand, if we are given good functions f and g then we can find a set A such that P_ε(Y ∈ A | X ∈ A) is large:

Theorem 2.1. For any functions f, g : [s]^n → [s]^k having min-entropy k there is a set A ⊆ [s]^n with |A| ≤ s^{n−k} such that for every 0 ≤ ε ≤ 1,

P_ε(Y ∈ A | X ∈ A) ≥ P_ε(f(X) = g(Y)).

On the other hand, if we have a good set A then we can construct a function f by taking certain translates of A.

Theorem 2.2. If A ⊆ [s]^n with (1/8) s^{n−k} ≤ |A| ≤ (1/4) s^{n−k} then there is a function f : [s]^n → [s]^k such that

1. f(X) is uniformly distributed on [s]^k;
2. f(X) is uniformly distributed on [s]^k conditioned on f(X) = f(Y);
3. for every 0 ≤ ε ≤ 1, P_ε(f(X) = f(Y)) ≥ (1/16) P_ε(Y ∈ A | X ∈ A).
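Theorem 2.1 can be sanity-checked by brute force in a tiny case. The sketch below (our code; the choice of f and g is arbitrary) enumerates [2]^3, takes two balanced functions of min-entropy k = 1, and confirms that some preimage A with |A| ≤ s^{n−k} satisfies P_ε(Y ∈ A | X ∈ A) ≥ P_ε(f(X) = g(Y)).

```python
import itertools

def p_pair(x, y, s, eps):
    """P_eps(X = x, Y = y): X uniform, then coordinatewise noise."""
    p = (1 / s) ** len(x)
    for xi, yi in zip(x, y):
        p *= (1 - eps) * (xi == yi) + eps / s
    return p

s, n, k, eps = 2, 3, 1, 0.3
pts = list(itertools.product(range(s), repeat=n))
f = {x: x[0] for x in pts}               # min-entropy 1: preimages of size 4
g = {x: (x[1] + x[2]) % s for x in pts}  # likewise

agree = sum(p_pair(x, y, s, eps) for x in pts for y in pts if f[x] == g[y])

def cond_prob(A):
    """P_eps(Y in A | X in A) for a subset A of [s]^n."""
    joint = sum(p_pair(x, y, s, eps) for x in A for y in A)
    return joint / (len(A) / s ** n)

best = max(cond_prob([x for x in pts if h[x] == z])
           for h in (f, g) for z in range(s))
assert agree <= best + 1e-12   # a set promised by Theorem 2.1 exists
print(agree, best)
```

The best preimage is found by exhaustive search here; the paper's proof (Section 3) obtains it via Cauchy-Schwarz.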
Note that the f that we produce in Theorem 2.2 satisfies stronger requirements than the ones that we require in Theorem 2.1. Indeed, the f from Theorem 2.2 is uniformly distributed instead of only having large min-entropy. Moreover, f(X) is uniformly distributed given f(X) = f(Y), which means that a successful execution of the protocol will result in the two parties holding uniformly random strings.

2.2 Negative results on the performance of NICD

In view of Theorems 2.1 and 2.2, the NICD problem reduces to the study of P_ε(Y ∈ A | X ∈ A) over sets A ⊆ [s]^n with a given cardinality. Actually, it turns out to be more convenient to normalize the cardinality instead of restricting it:

Definition 2.3. For A ⊆ [s]^n, define

M_ε(A) = ln P_ε(Y ∈ A | X ∈ A) / ln P(A).

To illustrate the definition, consider the set A = {x : x₁ = ⋯ = x_k = 0}, which corresponds to the trivial algorithm that selects the first k symbols. In this case, P_ε(Y ∈ A | X ∈ A) = (1 − (1 − s^{−1})ε)^k. Since P(A) = s^{−k}, it follows that

M_ε(A) = (1/ln s) ln(1/(1 − (1 − s^{−1})ε)).  (1)

Our main result is that the above example is optimal as s → ∞.

Theorem 2.4. For every δ, ε > 0 there exists S < ∞ such that for all n ∈ ℕ and all s ≥ S, any set A ⊆ [s]^n satisfies

M_ε(A) ≥ (1/ln s)(ln(1/(1 − ε)) − δ).

Note that since ln P(A) is negative, Theorem 2.4 provides an upper bound on P_ε(Y ∈ A | X ∈ A) for all sets A of a fixed probability, and therefore an upper bound on the agreement probability of any NICD protocol. We remark that our proof extends to the case where the X_i are chosen independently from some distributions whose smallest atoms are at most α. In this case, the theorem holds with s replaced by 1/α. As a corollary of Theorems 2.1 and 2.4, we obtain a bound on the performance of any NICD protocol.
Corollary 2.5. For any δ, ε > 0, there exists S < ∞ such that for all n, k ∈ ℕ, for any s ≥ S, and for any NICD protocol f, g on [s]^n with min-entropy at least k, the probability that the protocol succeeds with noise ε is at most (1 − ε)^k e^{δk}.

Since the success rate of the trivial protocol with min-entropy k is bigger than (1 − ε)^k, this shows that for large s, no protocol can succeed with much higher probability than the trivial protocol.

Proof. Fix a protocol f, g and let A be a set such that |A| ≤ s^{n−k} and P_ε(Y ∈ A | X ∈ A) ≥ P_ε(f(X) = g(Y)) (such an A exists by Theorem 2.1). Then Theorem 2.4 implies (recalling that ln P(A) ≤ −k ln s is negative)

ln P_ε(Y ∈ A | X ∈ A) ≤ (ln P(A) / ln s)(ln(1/(1 − ε)) − δ) ≤ −k(ln(1/(1 − ε)) − δ).

Taking the exponential of both sides yields the corollary.

Of course, we can also restate Corollary 2.5 for a fixed probability of success and a varying k:

Corollary 2.6. For any δ, ε > 0, there exists S < ∞ such that for all n ∈ ℕ, for all 0 < p < 1, for any s ≥ S, and for any NICD protocol f, g that succeeds with probability at least p, if k is the min-entropy of the protocol then the trivial protocol on k (log(1 − ε) + δ)/log(1 − ε) symbols also succeeds with probability at least p.

In other words, for a fixed probability of failure, a trivial protocol can recover almost as many symbols as any other protocol (when s is large). The dependence of S on δ and ε is not made explicit in our proof. However, our proof does provide a way to approximate S(δ, ε) on a computer; therefore, we produced a plot (Figure 1) showing the approximate value of S for various values of δ and ε.

2.3 An example: the Hamming ball

As we have already mentioned, [1] showed that when s = 2, the trivial algorithm is optimal up to a constant factor; as we have just seen, this constant factor converges to 1 as s → ∞. However, [1] also gave a positive result: they gave an example that achieves optimal performance (at least, up to lower-order terms and for a particular range of k and ε).
Since their example can be generalized to s > 2, we can examine its performance as s → ∞, and compare it to the trivial algorithm.
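The computer approximation behind Figure 1 boils down to evaluating Oleszkiewicz's hypercontractive constant σ(α, p) (Theorem 4.1 below) and numerically solving σ²(α, p) = 1 − ε for p. The sketch below is our reconstruction of such a computation, not the authors' code; the bisection bracket and tolerances are our choices.

```python
import math

def sigma(alpha, p):
    """Oleszkiewicz's constant sigma(alpha, p) from Theorem 4.1."""
    beta = 1 - alpha
    num = beta ** (2 - 2 / p) - alpha ** (2 - 2 / p)
    den = alpha ** (1 - 2 / p) * beta - beta ** (1 - 2 / p) * alpha
    return math.sqrt(num / den)

def solve_p(alpha, eps, tol=1e-12):
    """Bisect for p in (1, 2) with sigma(alpha, p)^2 = 1 - eps;
    on this range sigma increases with p and sigma(alpha, 2) = 1."""
    lo, hi = 1.0 + 1e-9, 2.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if sigma(alpha, mid) ** 2 > 1 - eps:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

s, eps = 1000, 0.2
p = solve_p(1 / s, eps)
# The exponent 2/p - 1 approaches ln(1/(1-eps))/ln(s) for large s, which is
# the rate of the trivial protocol -- the content of Theorem 2.4.
print(2 / p - 1, math.log(1 / (1 - eps)) / math.log(s))
```

In the proof of Theorem 2.4 below, 2/p − 1 is exactly the exponent bounding M_ε(A).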
Figure 1: The relationship, in log-log scale, between S and δ in Theorem 2.4 for various values of ε: 0.5 (solid), 0.1 (dashed), and 10⁻³ (dotted). For each of these values of ε, for every point (s, δ) above the corresponding line and every n ∈ ℕ, all sets A ⊆ [s]^n satisfy M_ε(A) ≥ (1/ln s)(ln(1/(1 − ε)) − δ).

Define the set

A_{s,α,n} = {x ∈ [s]^n : #{i : x_i ≠ 0} ≤ n(s − 1)/s − α√n}.

In other words, A_{s,α,n} is a Hamming ball around zero of radius n(s − 1)/s − α√n. When s = 2, [1] showed that M_ε(A_{2,α,n}) → ε/2 as n, α → ∞ and ε → 0 (note that this does not contradict Theorem 2.4, which only holds for sufficiently large s). Since the trivial algorithm has M_ε(A) ≈ ε/(2 ln 2) for small ε, this shows that the Hamming ball NICD protocol is better than the trivial one for s = 2. The situation reverses, however, as s grows:

Proposition 2.7. There exists a constant c such that for any s, α and ε,

lim_{n→∞} M_ε(A_{s,α,n}) ≥ cε.

Since the trivial algorithm has M_ε(A) ≈ ε/ln s, it is better than the Hamming ball protocol when s is large. In terms of the agreement probability, an argument like the proof of Corollary 2.5 shows that the agreement probability of the Hamming ball protocol is at most (1 − ε)^{ck ln s}. In terms of the number of recovered symbols, the Hamming ball protocol with the same agreement probability as the k-symbol trivial protocol can only recover ck/ln s symbols.
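Proposition 2.7 can be probed by simulation. The sketch below (our code; the parameter values are arbitrary) Monte Carlo-estimates M_ε(A_{s,α,n}) for a moderate n and compares it with the trivial rate from equation (1) — the Hamming ball's rate should come out noticeably larger (i.e., worse) once s is moderately large.

```python
import math, random

def in_ball(x, s, alpha):
    """Membership in A_{s,alpha,n}: at least n/s + alpha*sqrt(n) zero symbols."""
    n = len(x)
    return x.count(0) >= n / s + alpha * math.sqrt(n)

def noisy_copy(x, s, eps, rng):
    return [xi if rng.random() < 1 - eps else rng.randrange(s) for xi in x]

def m_hamming(s, alpha, n, eps, trials=20_000, seed=1):
    """Monte Carlo estimate of M_eps(A_{s,alpha,n})."""
    rng = random.Random(seed)
    members = [x for x in ([rng.randrange(s) for _ in range(n)]
                           for _ in range(trials)) if in_ball(x, s, alpha)]
    p_a = len(members) / trials
    p_cond = sum(in_ball(noisy_copy(x, s, eps, rng), s, alpha)
                 for x in members) / len(members)
    return math.log(p_cond) / math.log(p_a)

def m_trivial(s, eps):
    """Equation (1); note it does not depend on k."""
    return math.log(1 / (1 - (1 - 1 / s) * eps)) / math.log(s)

s, alpha, n, eps = 8, 0.5, 100, 0.2
print(m_hamming(s, alpha, n, eps))   # Hamming ball rate (larger is worse)
print(m_trivial(s, eps))             # trivial rate
```

Recall that a larger M_ε means a smaller agreement probability at a given set size, so the Hamming ball's larger value illustrates the proposition.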
3 Reduction to a single set

In this section, we will prove Theorems 2.1 and 2.2, which reduce the NICD problem to a question about optimal subsets of [s]^n. The proof of Theorem 2.1 is straightforward, and essentially follows directly from the Cauchy-Schwarz inequality.

Proof of Theorem 2.1. Suppose that f, g : [s]^n → [s]^k have min-entropy k. For z ∈ [s]^k, let f_z : [s]^n → {0, 1} be the function

f_z(x) = 1 if f(x) = z, and f_z(x) = 0 otherwise.

Define g_z similarly. Then

P_ε(f(X) = g(Y)) = Σ_{z ∈ [s]^k} P_ε(f(X) = g(Y) = z)
= Σ_{z ∈ [s]^k} E f_z(X) g_z(Y)
≤ Σ_{z ∈ [s]^k} √(E f_z(X) f_z(Y)) √(E g_z(X) g_z(Y))
≤ √(Σ_{z ∈ [s]^k} E f_z(X) f_z(Y)) √(Σ_{z ∈ [s]^k} E g_z(X) g_z(Y)),

where both inequalities are Cauchy-Schwarz. For each z ∈ [s]^k, let A_z be the set f^{−1}(z). Since f has min-entropy k, |A_z| ≤ s^{n−k} for all z. Let A be the A_z which maximizes P_ε(Y ∈ A_z | X ∈ A_z). Then

Σ_{z ∈ [s]^k} E f_z(X) f_z(Y) = Σ_{z ∈ [s]^k} P_ε(f(X) = f(Y) = z)
= Σ_{z ∈ [s]^k} P(f(X) = z) P_ε(Y ∈ A_z | X ∈ A_z)
≤ P_ε(Y ∈ A | X ∈ A),

and the analogous bound holds for g; taking A to be the best preimage over both f and g completes the proof.

The idea behind Theorem 2.2 is, given a set A ⊆ [s]^n with (1/8) s^{n−k} ≤ |A| ≤ (1/4) s^{n−k}, to construct a partition of [s]^n out of randomly translated copies of A. Let C ⊆ [s]^n, |C| = s^k, be the set of centers. We will choose C randomly; we will say how to choose it later. Let f_C : [s]^n → C be some
function with the property that if x ∈ A + c for a unique c ∈ C then f_C(x) = c. Clearly, then,

P_ε(f_C(X) = f_C(Y)) ≥ P_ε(∃! c ∈ C such that X, Y ∈ A + c).  (2)

The goal is to find a C which makes the right-hand side large; this will allow us to prove property 3 of Theorem 2.2. Note, by the way, that it is sufficient to prove Theorem 2.2 with [s]^k replaced by an arbitrary set C satisfying |C| = s^k. Since such a C is in bijection with [s]^k, the theorem as stated will follow.

Lemma 3.1. Suppose that C is chosen (randomly) such that for any a, b ∈ [s]^n, P(a, b ∈ C) = s^{−2(n−k)}. Then

E_C P_ε(f_C(X) = f_C(Y)) ≥ (1/16) P_ε(Y ∈ A | X ∈ A).

In particular, there exists a fixed C such that f_C satisfies property 3 of Theorem 2.2.

Proof. We begin from the right-hand side of (2):

P_ε(∃! c ∈ C such that X, Y ∈ A + c)  (3)
≥ E_C Σ_{c ∈ C} P_ε(X, Y ∈ A + c) (1 − Σ_{c' ≠ c} P_ε(X or Y ∈ A + c' | X, Y ∈ A + c))
= s^k E_c [P_ε(X, Y ∈ A + c) (1 − (s^k − 1) E_{c'} P_ε(X or Y ∈ A + c' | X, Y ∈ A + c))].  (4)

By our assumption on the distribution of C, c' is uniformly random given c. Thus

E_{c'} P_ε(X or Y ∈ A + c' | X, Y ∈ A + c) ≤ 2 E_{c'} P_ε(X ∈ A + c' | X, Y ∈ A + c) = 2 P_ε(X ∈ A) ≤ s^{−k}/2,

where the last inequality follows because |A| ≤ s^{n−k}/4. Plugging this into (4), and using that P_ε(X, Y ∈ A + c) = P_ε(X, Y ∈ A) by translation invariance of the distribution of (X, Y),

E_C P_ε(f_C(X) = f_C(Y)) ≥ (s^k/2) P_ε(X, Y ∈ A) ≥ (s^k/2)(s^{−k}/8) P_ε(Y ∈ A | X ∈ A) = (1/16) P_ε(Y ∈ A | X ∈ A),

where we used |A| ≥ s^{n−k}/8.

To check properties 1 and 2, we need to be a little more specific about our choice of f_C. So far, we have only assumed that f_C(x) = c if c is the only member of C with x ∈ A + c. Now, take ≼ to be some total order
on [s]^n with the property that x ≼ y whenever x ∈ A, y ∉ A. Then define f_C(x) = arg min_{c ∈ C} (x − c), where the arg min is taken with respect to the ordering ≼. This defines f_C on all of [s]^n, and it has the property that we required before: if x ∈ A + c for a unique c, then x − c ∈ A and x − c' ∉ A for every c' ≠ c. By our requirement on ≼, x − c ≼ x − c' for every c' ≠ c, and so f_C(x) = c.

Lemma 3.2. If there is a subgroup G ≤ ([s]^n, +) and some a ∈ [s]^n such that C = G + a, then f_C satisfies properties 1 and 2 of Theorem 2.2.

Proof. For any g ∈ G,

f_C(x + g) = arg min_{c ∈ C} (x + g − c) = g + arg min_{c ∈ C − g} (x − c) = f_C(x) + g,

since C − g = C. Moreover, note that the distribution of (X, Y) is invariant under translation, in the sense that for any fixed g ∈ [s]^n, (X, Y) + g has the same distribution as (X, Y). Hence

P(f_C(X) = c + g) = P(f_C(X + g) = c + g) = P(f_C(X) + g = c + g) = P(f_C(X) = c)

for any c ∈ C, g ∈ G. Since G acts transitively on C, this implies that P(f_C(X) = c) = 1/|C| = s^{−k}; in other words, f_C(X) is distributed uniformly on C. Similarly, P(f_C(X) = f_C(Y) = c) = P(f_C(X) = f_C(Y) = c + g) for any c ∈ C, g ∈ G, and so P(f_C(X) = f_C(Y) = c) = s^{−k} P(f_C(X) = f_C(Y)); in other words, f_C(X) is uniformly distributed on C conditioned on f_C(X) = f_C(Y).

Proof of Theorem 2.2. To prove Theorem 2.2, we need to find a set C which satisfies the hypotheses of Lemmas 3.1 and 3.2. In [1], they chose C to be a uniformly random k-dimensional affine subspace of [2]^n, but since [s]^n is not a vector space for every s, we will need something slightly more complicated. Let s = ∏_{i=1}^m p_i^{j_i} be the prime factorization of s. By the Chinese remainder theorem, the group ([s]^n, +) is isomorphic to ∏_{i=1}^m ([p_i]^{n j_i}, +); let φ : ∏_{i=1}^m ([p_i]^{n j_i}, +) → [s]^n be an isomorphism. Independently for each i = 1, …, m and j = 1, …, j_i, let G_{i,j} be a uniformly random k-dimensional subspace of [p_i]^n (which is a vector space), and let a_{i,j} be a uniformly random element of [p_i]^n. Finally, define

C = φ(∏_{i,j} (a_{i,j} + G_{i,j})) = φ(∏_{i,j} a_{i,j}) + φ(∏_{i,j} G_{i,j}).
Since φ(∏_{i,j} G_{i,j}) is a subgroup of [s]^n, the condition of Lemma 3.2 is satisfied with probability 1. To check the condition of Lemma 3.1, note that for any b = ∏_{i,j} b_{i,j} and c = ∏_{i,j} c_{i,j} in ∏_{i=1}^m [p_i]^{n j_i},

P(b_{i,j}, c_{i,j} ∈ a_{i,j} + G_{i,j}) = p_i^{−2(n−k)}

because G_{i,j} is a uniformly random k-dimensional subspace of [p_i]^n. Since the a_{i,j} and G_{i,j} are independent, it follows that

P(φ(b), φ(c) ∈ C) = ∏_{i,j} P(b_{i,j}, c_{i,j} ∈ a_{i,j} + G_{i,j}) = s^{−2(n−k)}.

That is, the distribution of C satisfies the condition of Lemma 3.1. In particular, there exists a non-random C that belongs to the support of C and satisfies condition 3 of Theorem 2.2. By the previous paragraph, the fact that it belongs to the support of C implies that it also satisfies conditions 1 and 2.

4 An upper bound on agreement

The proof of Theorem 2.4 uses a hypercontractive inequality in much the same way as it was used in [1]. The difference here is that [1] used only the hypercontractive inequality over the two-point space with the uniform measure, while we need one that applies to spaces with more than two points. Before stating this hypercontractive inequality, we need to define the appropriate Bonami-Beckner-type operator: for a function g : [s] → ℝ and some 0 < τ < 1, define

S_τ g = τ g + (1 − τ) E g.

Thus, for any 0 < τ < 1 and any 1 ≤ p, q ≤ ∞, S_τ is an operator L_p([s]) → L_q([s]). We define T_τ : L_p([s]^n) → L_q([s]^n) by T_τ = S_τ^{⊗n}. The operator T_τ can also be written in terms of the Fourier expansion of f; see [10] for details. For us, the crucial property of T_τ is that

E_ε f(X) f(Y) = E (T_τ f)²  (5)

when τ = √(1 − ε). This fact was used in [1] for s = 2 to establish Theorem 2.4 in that case. The following hypercontractive inequality is due to Oleszkiewicz [8]:

Theorem 4.1. Fix s ∈ ℕ and set α = 1/s, β = 1 − α. Define

σ(α, p) = ((β^{2−2/p} − α^{2−2/p}) / (α^{1−2/p} β − β^{1−2/p} α))^{1/2}.
Then for any f : [s]^n → ℝ, if τ ≤ σ(α, p) then ||T_τ f||₂ ≤ ||f||_p.

We remark that the reason for not having an explicit S(δ) in Theorem 2.4 and its corollaries is that we do not know how to solve for p in terms of σ(α, p). However, an approximate solution can easily be found on a computer, and we used such an approximation to produce Figure 1. To obtain Theorem 2.4, it suffices to study the limit of σ(α, p) as α → 0. Essentially, σ²(α, p) ≈ α^{2/p − 1} for small α, and so if we take p to be slightly larger than what is needed to solve α^{2/p − 1} = 1 − ε, then we will have σ²(α, p) ≥ 1 − ε. This will allow us to apply Theorem 4.1 with τ = √(1 − ε).

Lemma 4.2. Let p = p(α, δ, ε) solve α^{(2/p − 1) − δ/ln α} = 1 − ε. Then for any δ > 0 and ε' ∈ (0, 1), there is an A(δ, ε') > 0 such that α < A(δ, ε') implies that for all ε ∈ (0, ε'), σ²(α, p(α, δ, ε)) ≥ 1 − ε.

Proof. Note that the definition of p ensures that p < 2 for all α, δ, ε. By the definition of σ,

σ²(α, p) α^{1 − 2/p} = (β^{2−2/p} − α^{2−2/p}) / (β − α^{2/p} β^{1−2/p}).  (6)

Fix ε' and δ, and note that as α → 0, 2 − 2/p → 1 uniformly for all ε ∈ (0, ε'). Hence, the right-hand side of (6) converges to 1 (uniformly in ε) as α → 0. Plugging in the definition of p,

(1 − ε)/σ²(α, p) = e^{−δ} / (σ²(α, p) α^{1 − 2/p}) = (1 + o(1)) e^{−δ}.

In particular, the limit of the right-hand side is strictly smaller than one, and so σ²(α, p) ≥ 1 − ε for sufficiently small α.

Proof of Theorem 2.4. Fix ε, δ > 0. Let A and p be as in Lemma 4.2 and define S = 1/A. If s ≥ S then α = 1/s ≤ A, and so Lemma 4.2 implies that σ²(α, p) ≥ 1 − ε. Thus, (5) and Theorem 4.1 imply that

P_ε(X ∈ A, Y ∈ A) = ||T_{√(1−ε)} 1_A||₂² ≤ ||1_A||_p² = P(A)^{2/p}.
Hence,

P_ε(Y ∈ A | X ∈ A) ≤ P(A)^{2/p − 1}.

Taking the logarithm and dividing by ln P(A) (which is negative), we have

M_ε(A) = ln P_ε(Y ∈ A | X ∈ A) / ln P(A) ≥ 2/p − 1 = ln(1/(1 − ε))/ln s − δ/ln s.

5 Hamming ball

In this section, we consider the example of the Hamming ball A_{s,α,n} consisting of the x ∈ [s]^n such that #{i : x_i = 0} ≥ n/s + α√n. This is an interesting example because [1] showed that if α is sufficiently large (depending on ε), then as n → ∞, A_{2,α,n} achieves the upper bound of Theorem 2.4. We will show, however, that this is no longer true for large s. Note that 1_{X₁=0} has mean 1/s and variance (s − 1)/s². Thus, the Berry-Esseen theorem implies that for any fixed α and s,

P(A_{s,α,n}) → P(Z ≥ αs/√(s − 1))  (7)

as n → ∞, where Z ~ N(0, 1). Moreover, if (Z₁, Z₂) is a centered Gaussian vector with Var(Z₁) = Var(Z₂) = 1 and Cov(Z₁, Z₂) = 1 − ε, then

P_ε(X ∈ A_{s,α,n}, Y ∈ A_{s,α,n}) → P(Z₁ ≥ αs/√(s − 1), Z₂ ≥ αs/√(s − 1)).  (8)

In particular, by studying normal probabilities we can use (7) and (8) to compute lim_{n→∞} M_ε(A_{s,α,n}).

Lemma 5.1. Suppose that (Z₁, Z₂) is a centered Gaussian vector with Var(Z₁) = Var(Z₂) = 1 and Cov(Z₁, Z₂) = 1 − ε. There is a sufficiently small constant c such that for all t > 0 and 0 < ε < 1,

P(Z₁ ≥ t | Z₂ ≥ t) ≤ P(Z₁ ≥ t)^{cε}.

Lemma 5.1 has the following immediate consequence for M_ε(A_{s,α,n}):

Corollary 5.2. There exists a constant c such that for any s and α,

lim_{n→∞} M_ε(A_{s,α,n}) ≥ cε.

By comparison, the trivial protocol A = {x : x₁ = ⋯ = x_k = 0} has

M_ε(A) = (1/ln s) ln(1/(1 − (1 − s^{−1})ε)) ≤ Cε/ln s.

In particular, for a fixed success probability and a sufficiently large alphabet s, the trivial protocol recovers c ln s times as many symbols as the Hamming ball protocol.
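Lemma 5.1 can be probed by simulation. In the sketch below (our code), we estimate the conditional tail for unit normals with correlation 1 − ε and compare it against P(Z ≥ t)^{cε} with c = 1/2, which is the constant our reading of the proof below yields (via the exponent 2/(2 − ε) ≥ 1 + ε/2), not a value stated explicitly by the authors.

```python
import math, random

def joint_tail(rho, t, trials=200_000, seed=3):
    """Monte Carlo estimate of P(Z1 >= t, Z2 >= t) for standard normals
    with correlation rho."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        z1 = rng.gauss(0.0, 1.0)
        z2 = rho * z1 + math.sqrt(1 - rho * rho) * rng.gauss(0.0, 1.0)
        hits += (z1 >= t) and (z2 >= t)
    return hits / trials

eps, t = 0.3, 1.0
single = 0.5 * math.erfc(t / math.sqrt(2))   # P(Z >= t), exactly
cond = joint_tail(1 - eps, t) / single       # ~ P(Z1 >= t | Z2 >= t)
print(cond)                   # estimated conditional tail
print(single ** (0.5 * eps))  # hypercontractive bound with c = 1/2 (assumed)
```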
Proof of Corollary 5.2. According to (7) and (8), writing t = αs/√(s − 1),

lim_{n→∞} M_ε(A_{s,α,n}) = log P(Z₁ ≥ t | Z₂ ≥ t) / log P(Z₁ ≥ t).

Now apply Lemma 5.1 to the numerator (recalling that the denominator is negative):

lim_{n→∞} M_ε(A_{s,α,n}) ≥ cε log P(Z₁ ≥ t) / log P(Z₁ ≥ t) = cε.

Proof of Lemma 5.1. The proof makes use of the Ornstein-Uhlenbeck semigroup P_τ, defined by

(P_τ f)(x) = E f(e^{−τ} x + √(1 − e^{−2τ}) Z),

where Z ~ N(0, 1). The Nelson-Gross hypercontractive inequality [2, 7] states that

(E |P_τ f(Z)|^q)^{1/q} ≤ (E |f(Z)|^p)^{1/p}  (9)

whenever q ≤ 1 + e^{2τ}(p − 1). If we set f(x) = 1_{x ≥ t} and τ = −log(1 − ε), then

P(Z₁ ≥ t, Z₂ ≥ t) = E f(Z₁) f(Z₂) = E f(Z) P_τ f(Z) = E (P_{τ/2} f(Z))².

Thus, (9) applied to P_{τ/2} with q = 2 and p = 1 + e^{−τ} = 1 + (1 − ε) implies that

P(Z₁ ≥ t, Z₂ ≥ t) ≤ (E f(Z))^{2/(1 + (1 − ε))} = P(Z₂ ≥ t)^{2/(2 − ε)} ≤ P(Z₂ ≥ t)^{1 + cε}.

Hence,

P(Z₁ ≥ t | Z₂ ≥ t) = P(Z₁ ≥ t, Z₂ ≥ t) / P(Z₂ ≥ t) ≤ P(Z₂ ≥ t)^{cε}.

References

[1] A. Bogdanov and E. Mossel. On extracting common random bits from correlated sources. IEEE Transactions on Information Theory, 57(10). arXiv.
[2] Leonard Gross. Logarithmic Sobolev inequalities. Amer. J. Math., 97(4).
[3] D. Lim, J. W. Lee, B. Gassend, G. E. Suh, M. van Dijk, and S. Devadas. Extracting secret keys from integrated circuits. IEEE Transactions on Very Large Scale Integration (VLSI) Systems, 13(10).
[4] E. Mossel, R. O'Donnell, and K. Oleszkiewicz. Noise stability of functions with low influences: invariance and optimality (extended abstract). In 46th Annual IEEE Symposium on Foundations of Computer Science (FOCS 2005), October 2005, Pittsburgh, PA, USA, Proceedings. IEEE Computer Society.
[5] E. Mossel, R. O'Donnell, O. Regev, J. E. Steif, and B. Sudakov. Non-interactive correlation distillation, inhomogeneous Markov chains, and the reverse Bonami-Beckner inequality. Israel J. Math., 154.
[6] E. Mossel, K. Oleszkiewicz, and A. Sen. On reverse hypercontractivity.
[7] Edward Nelson. The free Markoff field. J. Functional Analysis, 12.
[8] K. Oleszkiewicz. On a nonsymmetric version of the Khinchine-Kahane inequality. Progress in Probability, 56.
[9] Y. Su, J. Holleman, and B. P. Otis. A digital 1.6 pJ/bit chip identification circuit using process variations. IEEE Journal of Solid-State Circuits, 43(1):69-77.
[10] P. Wolff. Hypercontractivity of simple random variables. Studia Mathematica.
[11] Ke Yang. On the (im)possibility of non-interactive correlation distillation. Theoretical Computer Science, 382(2).
[12] H. Yu, P. H. W. Leong, H. Hinkelmann, L. Moller, M. Glesner, and P. Zipf. Towards a unique FPGA-based identification circuit using process variations. In 19th International Conference on Field Programmable Logic and Applications. IEEE.
University of California, Los Angeles CS 289A Communication Complexity Instructor: Alexander Sherstov Scribe: Matt Brown Date: January 25, 2012 LECTURE 5 Nondeterminism In this lecture, we introduce nondeterministic
More information1 Randomized Computation
CS 6743 Lecture 17 1 Fall 2007 1 Randomized Computation Why is randomness useful? Imagine you have a stack of bank notes, with very few counterfeit ones. You want to choose a genuine bank note to pay at
More informationCS Communication Complexity: Applications and New Directions
CS 2429 - Communication Complexity: Applications and New Directions Lecturer: Toniann Pitassi 1 Introduction In this course we will define the basic two-party model of communication, as introduced in the
More informationDecomposing Bent Functions
2004 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 49, NO. 8, AUGUST 2003 Decomposing Bent Functions Anne Canteaut and Pascale Charpin Abstract In a recent paper [1], it is shown that the restrictions
More informationHomework 1 Solutions
MATH 171 Spring 2016 Problem 1 Homework 1 Solutions (If you find any errors, please send an e-mail to farana at stanford dot edu) Presenting your arguments in steps, using only axioms of an ordered field,
More informationIntroductory Analysis I Fall 2014 Homework #9 Due: Wednesday, November 19
Introductory Analysis I Fall 204 Homework #9 Due: Wednesday, November 9 Here is an easy one, to serve as warmup Assume M is a compact metric space and N is a metric space Assume that f n : M N for each
More informationA Noisy-Influence Regularity Lemma for Boolean Functions Chris Jones
A Noisy-Influence Regularity Lemma for Boolean Functions Chris Jones Abstract We present a regularity lemma for Boolean functions f : {, } n {, } based on noisy influence, a measure of how locally correlated
More informationRandom Bernstein-Markov factors
Random Bernstein-Markov factors Igor Pritsker and Koushik Ramachandran October 20, 208 Abstract For a polynomial P n of degree n, Bernstein s inequality states that P n n P n for all L p norms on the unit
More informationRamsey-type problem for an almost monochromatic K 4
Ramsey-type problem for an almost monochromatic K 4 Jacob Fox Benny Sudakov Abstract In this short note we prove that there is a constant c such that every k-edge-coloring of the complete graph K n with
More informationRandom Feature Maps for Dot Product Kernels Supplementary Material
Random Feature Maps for Dot Product Kernels Supplementary Material Purushottam Kar and Harish Karnick Indian Institute of Technology Kanpur, INDIA {purushot,hk}@cse.iitk.ac.in Abstract This document contains
More informationPERFECTLY secure key agreement has been studied recently
IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 45, NO. 2, MARCH 1999 499 Unconditionally Secure Key Agreement the Intrinsic Conditional Information Ueli M. Maurer, Senior Member, IEEE, Stefan Wolf Abstract
More information= 1 2x. x 2 a ) 0 (mod p n ), (x 2 + 2a + a2. x a ) 2
8. p-adic numbers 8.1. Motivation: Solving x 2 a (mod p n ). Take an odd prime p, and ( an) integer a coprime to p. Then, as we know, x 2 a (mod p) has a solution x Z iff = 1. In this case we can suppose
More informationCross-Intersecting Sets of Vectors
Cross-Intersecting Sets of Vectors János Pach Gábor Tardos Abstract Given a sequence of positive integers p = (p 1,..., p n ), let S p denote the set of all sequences of positive integers x = (x 1,...,
More informationCS Foundations of Communication Complexity
CS 49 - Foundations of Communication Complexity Lecturer: Toniann Pitassi 1 The Discrepancy Method Cont d In the previous lecture we ve outlined the discrepancy method, which is a method for getting lower
More informationEvidence that the Diffie-Hellman Problem is as Hard as Computing Discrete Logs
Evidence that the Diffie-Hellman Problem is as Hard as Computing Discrete Logs Jonah Brown-Cohen 1 Introduction The Diffie-Hellman protocol was one of the first methods discovered for two people, say Alice
More informationOn John type ellipsoids
On John type ellipsoids B. Klartag Tel Aviv University Abstract Given an arbitrary convex symmetric body K R n, we construct a natural and non-trivial continuous map u K which associates ellipsoids to
More informationLecture 4: Proof of Shannon s theorem and an explicit code
CSE 533: Error-Correcting Codes (Autumn 006 Lecture 4: Proof of Shannon s theorem and an explicit code October 11, 006 Lecturer: Venkatesan Guruswami Scribe: Atri Rudra 1 Overview Last lecture we stated
More informationValuations. 6.1 Definitions. Chapter 6
Chapter 6 Valuations In this chapter, we generalize the notion of absolute value. In particular, we will show how the p-adic absolute value defined in the previous chapter for Q can be extended to hold
More informationEconomics 204 Fall 2011 Problem Set 1 Suggested Solutions
Economics 204 Fall 2011 Problem Set 1 Suggested Solutions 1. Suppose k is a positive integer. Use induction to prove the following two statements. (a) For all n N 0, the inequality (k 2 + n)! k 2n holds.
More informationPOWER SERIES AND ANALYTIC CONTINUATION
POWER SERIES AND ANALYTIC CONTINUATION 1. Analytic functions Definition 1.1. A function f : Ω C C is complex-analytic if for each z 0 Ω there exists a power series f z0 (z) := a n (z z 0 ) n which converges
More informationMAT 570 REAL ANALYSIS LECTURE NOTES. Contents. 1. Sets Functions Countability Axiom of choice Equivalence relations 9
MAT 570 REAL ANALYSIS LECTURE NOTES PROFESSOR: JOHN QUIGG SEMESTER: FALL 204 Contents. Sets 2 2. Functions 5 3. Countability 7 4. Axiom of choice 8 5. Equivalence relations 9 6. Real numbers 9 7. Extended
More informationECE 4400:693 - Information Theory
ECE 4400:693 - Information Theory Dr. Nghi Tran Lecture 8: Differential Entropy Dr. Nghi Tran (ECE-University of Akron) ECE 4400:693 Lecture 1 / 43 Outline 1 Review: Entropy of discrete RVs 2 Differential
More informationKatarzyna Mieczkowska
Katarzyna Mieczkowska Uniwersytet A. Mickiewicza w Poznaniu Erdős conjecture on matchings in hypergraphs Praca semestralna nr 1 (semestr letni 010/11 Opiekun pracy: Tomasz Łuczak ERDŐS CONJECTURE ON MATCHINGS
More informationA NEW SET THEORY FOR ANALYSIS
Article A NEW SET THEORY FOR ANALYSIS Juan Pablo Ramírez 0000-0002-4912-2952 Abstract: We present the real number system as a generalization of the natural numbers. First, we prove the co-finite topology,
More informationBoolean Functions: Influence, threshold and noise
Boolean Functions: Influence, threshold and noise Einstein Institute of Mathematics Hebrew University of Jerusalem Based on recent joint works with Jean Bourgain, Jeff Kahn, Guy Kindler, Nathan Keller,
More informationLocal Fields. Chapter Absolute Values and Discrete Valuations Definitions and Comments
Chapter 9 Local Fields The definition of global field varies in the literature, but all definitions include our primary source of examples, number fields. The other fields that are of interest in algebraic
More informationThe decomposability of simple orthogonal arrays on 3 symbols having t + 1 rows and strength t
The decomposability of simple orthogonal arrays on 3 symbols having t + 1 rows and strength t Wiebke S. Diestelkamp Department of Mathematics University of Dayton Dayton, OH 45469-2316 USA wiebke@udayton.edu
More information2. Metric Spaces. 2.1 Definitions etc.
2. Metric Spaces 2.1 Definitions etc. The procedure in Section for regarding R as a topological space may be generalized to many other sets in which there is some kind of distance (formally, sets with
More informationMATH 202B - Problem Set 5
MATH 202B - Problem Set 5 Walid Krichene (23265217) March 6, 2013 (5.1) Show that there exists a continuous function F : [0, 1] R which is monotonic on no interval of positive length. proof We know there
More informationMASTERS EXAMINATION IN MATHEMATICS SOLUTIONS
MASTERS EXAMINATION IN MATHEMATICS PURE MATHEMATICS OPTION SPRING 010 SOLUTIONS Algebra A1. Let F be a finite field. Prove that F [x] contains infinitely many prime ideals. Solution: The ring F [x] of
More informationAhlswede Khachatrian Theorems: Weighted, Infinite, and Hamming
Ahlswede Khachatrian Theorems: Weighted, Infinite, and Hamming Yuval Filmus April 4, 2017 Abstract The seminal complete intersection theorem of Ahlswede and Khachatrian gives the maximum cardinality of
More informationCombinatorial Proof of the Hot Spot Theorem
Combinatorial Proof of the Hot Spot Theorem Ernie Croot May 30, 2006 1 Introduction A problem which has perplexed mathematicians for a long time, is to decide whether the digits of π are random-looking,
More informationLecture 4 Noisy Channel Coding
Lecture 4 Noisy Channel Coding I-Hsiang Wang Department of Electrical Engineering National Taiwan University ihwang@ntu.edu.tw October 9, 2015 1 / 56 I-Hsiang Wang IT Lecture 4 The Channel Coding Problem
More informationLecture 21: P vs BPP 2
Advanced Complexity Theory Spring 206 Prof. Dana Moshkovitz Lecture 2: P vs BPP 2 Overview In the previous lecture, we began our discussion of pseudorandomness. We presented the Blum- Micali definition
More informationConstructive bounds for a Ramsey-type problem
Constructive bounds for a Ramsey-type problem Noga Alon Michael Krivelevich Abstract For every fixed integers r, s satisfying r < s there exists some ɛ = ɛ(r, s > 0 for which we construct explicitly an
More informationconverges as well if x < 1. 1 x n x n 1 1 = 2 a nx n
Solve the following 6 problems. 1. Prove that if series n=1 a nx n converges for all x such that x < 1, then the series n=1 a n xn 1 x converges as well if x < 1. n For x < 1, x n 0 as n, so there exists
More informationQuantum Communication Complexity
Quantum Communication Complexity Ronald de Wolf Communication complexity has been studied extensively in the area of theoretical computer science and has deep connections with seemingly unrelated areas,
More informationTHIS paper is aimed at designing efficient decoding algorithms
IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 45, NO. 7, NOVEMBER 1999 2333 Sort-and-Match Algorithm for Soft-Decision Decoding Ilya Dumer, Member, IEEE Abstract Let a q-ary linear (n; k)-code C be used
More informationb = 10 a, is the logarithm of b to the base 10. Changing the base to e we obtain natural logarithms, so a = ln b means that b = e a.
INTRODUCTION TO CRYPTOGRAPHY 5. Discrete Logarithms Recall the classical logarithm for real numbers: If we write b = 10 a, then a = log 10 b is the logarithm of b to the base 10. Changing the base to e
More informationFormal Groups. Niki Myrto Mavraki
Formal Groups Niki Myrto Mavraki Contents 1. Introduction 1 2. Some preliminaries 2 3. Formal Groups (1 dimensional) 2 4. Groups associated to formal groups 9 5. The Invariant Differential 11 6. The Formal
More informationCartesian Products and Relations
Cartesian Products and Relations Definition (Cartesian product) If A and B are sets, the Cartesian product of A and B is the set A B = {(a, b) : (a A) and (b B)}. The following points are worth special
More information< k 2n. 2 1 (n 2). + (1 p) s) N (n < 1
List of Problems jacques@ucsd.edu Those question with a star next to them are considered slightly more challenging. Problems 9, 11, and 19 from the book The probabilistic method, by Alon and Spencer. Question
More informationCharacterising Probability Distributions via Entropies
1 Characterising Probability Distributions via Entropies Satyajit Thakor, Terence Chan and Alex Grant Indian Institute of Technology Mandi University of South Australia Myriota Pty Ltd arxiv:1602.03618v2
More informationMAGIC010 Ergodic Theory Lecture Entropy
7. Entropy 7. Introduction A natural question in mathematics is the so-called isomorphism problem : when are two mathematical objects of the same class the same (in some appropriately defined sense of
More informationOn the (Im)possibility of Non-interactive Correlation Distillation
On the (Im)possibility of Non-interactive Correlation Distillation Ke Yang Computer Science Department, Carnegie Mellon University, 5000 Forbes Ave. Pittsburgh, PA 15213, USA; yangke@cs.cmu.edu December
More informationP-adic Functions - Part 1
P-adic Functions - Part 1 Nicolae Ciocan 22.11.2011 1 Locally constant functions Motivation: Another big difference between p-adic analysis and real analysis is the existence of nontrivial locally constant
More informationLINEAR EQUATIONS WITH UNKNOWNS FROM A MULTIPLICATIVE GROUP IN A FUNCTION FIELD. To Professor Wolfgang Schmidt on his 75th birthday
LINEAR EQUATIONS WITH UNKNOWNS FROM A MULTIPLICATIVE GROUP IN A FUNCTION FIELD JAN-HENDRIK EVERTSE AND UMBERTO ZANNIER To Professor Wolfgang Schmidt on his 75th birthday 1. Introduction Let K be a field
More information6.842 Randomness and Computation Lecture 5
6.842 Randomness and Computation 2012-02-22 Lecture 5 Lecturer: Ronitt Rubinfeld Scribe: Michael Forbes 1 Overview Today we will define the notion of a pairwise independent hash function, and discuss its
More informationFunctional Analysis. Franck Sueur Metric spaces Definitions Completeness Compactness Separability...
Functional Analysis Franck Sueur 2018-2019 Contents 1 Metric spaces 1 1.1 Definitions........................................ 1 1.2 Completeness...................................... 3 1.3 Compactness......................................
More informationChapter 9: Basic of Hypercontractivity
Analysis of Boolean Functions Prof. Ryan O Donnell Chapter 9: Basic of Hypercontractivity Date: May 6, 2017 Student: Chi-Ning Chou Index Problem Progress 1 Exercise 9.3 (Tightness of Bonami Lemma) 2/2
More informationSome Sieving Algorithms for Lattice Problems
Foundations of Software Technology and Theoretical Computer Science (Bangalore) 2008. Editors: R. Hariharan, M. Mukund, V. Vinay; pp - Some Sieving Algorithms for Lattice Problems V. Arvind and Pushkar
More informationPolar Codes for Arbitrary DMCs and Arbitrary MACs
Polar Codes for Arbitrary DMCs and Arbitrary MACs Rajai Nasser and Emre Telatar, Fellow, IEEE, School of Computer and Communication Sciences, EPFL Lausanne, Switzerland Email: rajainasser, emretelatar}@epflch
More informationCONTINUED FRACTIONS, PELL S EQUATION, AND TRANSCENDENTAL NUMBERS
CONTINUED FRACTIONS, PELL S EQUATION, AND TRANSCENDENTAL NUMBERS JEREMY BOOHER Continued fractions usually get short-changed at PROMYS, but they are interesting in their own right and useful in other areas
More informationBounds for pairs in partitions of graphs
Bounds for pairs in partitions of graphs Jie Ma Xingxing Yu School of Mathematics Georgia Institute of Technology Atlanta, GA 30332-0160, USA Abstract In this paper we study the following problem of Bollobás
More informationTesting Equality in Communication Graphs
Electronic Colloquium on Computational Complexity, Report No. 86 (2016) Testing Equality in Communication Graphs Noga Alon Klim Efremenko Benny Sudakov Abstract Let G = (V, E) be a connected undirected
More information08a. Operators on Hilbert spaces. 1. Boundedness, continuity, operator norms
(February 24, 2017) 08a. Operators on Hilbert spaces Paul Garrett garrett@math.umn.edu http://www.math.umn.edu/ garrett/ [This document is http://www.math.umn.edu/ garrett/m/real/notes 2016-17/08a-ops
More informationBayesian Nonparametric Point Estimation Under a Conjugate Prior
University of Pennsylvania ScholarlyCommons Statistics Papers Wharton Faculty Research 5-15-2002 Bayesian Nonparametric Point Estimation Under a Conjugate Prior Xuefeng Li University of Pennsylvania Linda
More informationFast Rates for Estimation Error and Oracle Inequalities for Model Selection
Fast Rates for Estimation Error and Oracle Inequalities for Model Selection Peter L. Bartlett Computer Science Division and Department of Statistics University of California, Berkeley bartlett@cs.berkeley.edu
More informationMATH 131A: REAL ANALYSIS (BIG IDEAS)
MATH 131A: REAL ANALYSIS (BIG IDEAS) Theorem 1 (The Triangle Inequality). For all x, y R we have x + y x + y. Proposition 2 (The Archimedean property). For each x R there exists an n N such that n > x.
More informationDepartment of Mathematics, University of California, Berkeley. GRADUATE PRELIMINARY EXAMINATION, Part A Fall Semester 2016
Department of Mathematics, University of California, Berkeley YOUR 1 OR 2 DIGIT EXAM NUMBER GRADUATE PRELIMINARY EXAMINATION, Part A Fall Semester 2016 1. Please write your 1- or 2-digit exam number on
More informationHOMEWORK ASSIGNMENT 6
HOMEWORK ASSIGNMENT 6 DUE 15 MARCH, 2016 1) Suppose f, g : A R are uniformly continuous on A. Show that f + g is uniformly continuous on A. Solution First we note: In order to show that f + g is uniformly
More informationA Threshold of ln(n) for approximating set cover
A Threshold of ln(n) for approximating set cover October 20, 2009 1 The k-prover proof system There is a single verifier V and k provers P 1, P 2,..., P k. Binary code with k codewords, each of length
More informationHigher-order Fourier analysis of F n p and the complexity of systems of linear forms
Higher-order Fourier analysis of F n p and the complexity of systems of linear forms Hamed Hatami School of Computer Science, McGill University, Montréal, Canada hatami@cs.mcgill.ca Shachar Lovett School
More informationIsomorphisms between pattern classes
Journal of Combinatorics olume 0, Number 0, 1 8, 0000 Isomorphisms between pattern classes M. H. Albert, M. D. Atkinson and Anders Claesson Isomorphisms φ : A B between pattern classes are considered.
More informationGroups of Prime Power Order with Derived Subgroup of Prime Order
Journal of Algebra 219, 625 657 (1999) Article ID jabr.1998.7909, available online at http://www.idealibrary.com on Groups of Prime Power Order with Derived Subgroup of Prime Order Simon R. Blackburn*
More informationON THE COMPUTABILITY OF PERFECT SUBSETS OF SETS WITH POSITIVE MEASURE
ON THE COMPUTABILITY OF PERFECT SUBSETS OF SETS WITH POSITIVE MEASURE C. T. CHONG, WEI LI, WEI WANG, AND YUE YANG Abstract. A set X 2 ω with positive measure contains a perfect subset. We study such perfect
More informationA note on general sliding window processes
A note on general sliding window processes Noga Alon Ohad N. Feldheim September 19, 214 Abstract Let f : R k [r] = {1, 2,..., r} be a measurable function, and let {U i } i N be a sequence of i.i.d. random
More informationGraphs with few total dominating sets
Graphs with few total dominating sets Marcin Krzywkowski marcin.krzywkowski@gmail.com Stephan Wagner swagner@sun.ac.za Abstract We give a lower bound for the number of total dominating sets of a graph
More informationON SPACE-FILLING CURVES AND THE HAHN-MAZURKIEWICZ THEOREM
ON SPACE-FILLING CURVES AND THE HAHN-MAZURKIEWICZ THEOREM ALEXANDER KUPERS Abstract. These are notes on space-filling curves, looking at a few examples and proving the Hahn-Mazurkiewicz theorem. This theorem
More informationAsymptotic redundancy and prolixity
Asymptotic redundancy and prolixity Yuval Dagan, Yuval Filmus, and Shay Moran April 6, 2017 Abstract Gallager (1978) considered the worst-case redundancy of Huffman codes as the maximum probability tends
More informationNotes on Zero Knowledge
U.C. Berkeley CS172: Automata, Computability and Complexity Handout 9 Professor Luca Trevisan 4/21/2015 Notes on Zero Knowledge These notes on zero knowledge protocols for quadratic residuosity are based
More informationThe 123 Theorem and its extensions
The 123 Theorem and its extensions Noga Alon and Raphael Yuster Department of Mathematics Raymond and Beverly Sackler Faculty of Exact Sciences Tel Aviv University, Tel Aviv, Israel Abstract It is shown
More informationChapter 8. P-adic numbers. 8.1 Absolute values
Chapter 8 P-adic numbers Literature: N. Koblitz, p-adic Numbers, p-adic Analysis, and Zeta-Functions, 2nd edition, Graduate Texts in Mathematics 58, Springer Verlag 1984, corrected 2nd printing 1996, Chap.
More informationThus f is continuous at x 0. Matthew Straughn Math 402 Homework 6
Matthew Straughn Math 402 Homework 6 Homework 6 (p. 452) 14.3.3, 14.3.4, 14.3.5, 14.3.8 (p. 455) 14.4.3* (p. 458) 14.5.3 (p. 460) 14.6.1 (p. 472) 14.7.2* Lemma 1. If (f (n) ) converges uniformly to some
More informationEntropy and Ergodic Theory Lecture 15: A first look at concentration
Entropy and Ergodic Theory Lecture 15: A first look at concentration 1 Introduction to concentration Let X 1, X 2,... be i.i.d. R-valued RVs with common distribution µ, and suppose for simplicity that
More informationBERNOULLI ACTIONS AND INFINITE ENTROPY
BERNOULLI ACTIONS AND INFINITE ENTROPY DAVID KERR AND HANFENG LI Abstract. We show that, for countable sofic groups, a Bernoulli action with infinite entropy base has infinite entropy with respect to every
More informationNotes 6 : First and second moment methods
Notes 6 : First and second moment methods Math 733-734: Theory of Probability Lecturer: Sebastien Roch References: [Roc, Sections 2.1-2.3]. Recall: THM 6.1 (Markov s inequality) Let X be a non-negative
More informationProofs for Large Sample Properties of Generalized Method of Moments Estimators
Proofs for Large Sample Properties of Generalized Method of Moments Estimators Lars Peter Hansen University of Chicago March 8, 2012 1 Introduction Econometrica did not publish many of the proofs in my
More informationLecture 3: Error Correcting Codes
CS 880: Pseudorandomness and Derandomization 1/30/2013 Lecture 3: Error Correcting Codes Instructors: Holger Dell and Dieter van Melkebeek Scribe: Xi Wu In this lecture we review some background on error
More informationOn Locating-Dominating Codes in Binary Hamming Spaces
Discrete Mathematics and Theoretical Computer Science 6, 2004, 265 282 On Locating-Dominating Codes in Binary Hamming Spaces Iiro Honkala and Tero Laihonen and Sanna Ranto Department of Mathematics and
More informationMATH 409 Advanced Calculus I Lecture 12: Uniform continuity. Exponential functions.
MATH 409 Advanced Calculus I Lecture 12: Uniform continuity. Exponential functions. Uniform continuity Definition. A function f : E R defined on a set E R is called uniformly continuous on E if for every
More information18.5 Crossings and incidences
18.5 Crossings and incidences 257 The celebrated theorem due to P. Turán (1941) states: if a graph G has n vertices and has no k-clique then it has at most (1 1/(k 1)) n 2 /2 edges (see Theorem 4.8). Its
More information