Brute force searching, the typical set and Guesswork


Mark M. Christiansen and Ken R. Duffy
Hamilton Institute, National University of Ireland, Maynooth
{mark.christiansen, ken.duffy}@nuim.ie

Flávio du Pin Calmon and Muriel Médard
Research Laboratory of Electronics, Massachusetts Institute of Technology
{flavio, medard}@mit.edu

Abstract—Consider the situation where a word is chosen probabilistically from a finite list. If an attacker knows the list and can inquire about each word in turn, then selecting the word via the uniform distribution maximizes the attacker's difficulty, its Guesswork, in identifying the chosen word. It is tempting to use this property in cryptanalysis of computationally secure ciphers by assuming coded words are drawn from a source's typical set and so, for all intents and purposes, uniformly distributed within it. It is this equipartition ansatz that we investigate here, by applying recent results on Guesswork for i.i.d. sources. In particular, we demonstrate that the expected Guesswork for a source conditioned to create words in the typical set grows, with word length, at a lower exponential rate than that of the uniform approximation, suggesting use of the approximation is ill-advised.

I. INTRODUCTION

Consider the problem of identifying the value of a discrete random variable by only asking questions of the sort: is its value $x$? That this is a time-consuming task is a cornerstone of computationally secure ciphers [1]. It is tempting to appeal to the Asymptotic Equipartition Property (AEP) [2], and the resulting assignment of code words only to elements of the typical set of the source, to justify restricting consideration to a uniform source, e.g. [3], [4], [5]. This assumed uniformity has many desirable properties, including maximum obfuscation and difficulty for the inquisitor, e.g. [6]. In typical set coding it is necessary to generate codes for words whose logarithmic probability is within a small distance of the word length times the specific Shannon entropy. As a result, while all these words have near-equal likelihood, the distribution is not precisely uniform. It is the consequence of this lack of perfect uniformity that we investigate here, by proving that results on Guesswork [7], [8], [9], [10], [11] extend to this setting. We establish that for source words originally constructed from an i.i.d. sequence of letters, as a function of word length it is exponentially easier to guess a word conditioned to be in the source's typical set than its counterpart under the equipartition approximation. This raises questions about the wisdom of appealing to the AEP to justify sole consideration of the uniform distribution in cryptanalysis, and provides alternate results in its place.

II. THE TYPICAL SET AND GUESSWORK

Let $A = \{0, \dots, m-1\}$ be a finite alphabet and consider a stochastic sequence of words, $\{W_k\}$, where $W_k$ is a word of length $k$ taking values in $A^k$. The process $\{W_k\}$ has specific Shannon entropy

  $H_W := -\lim_{k\to\infty} \frac{1}{k} \sum_{w \in A^k} P(W_k = w) \log P(W_k = w),$

and we shall take all logs to base $e$. For $\epsilon > 0$, the typical set of words of length $k$ is

  $T_k := \left\{ w \in A^k : e^{-k(H_W + \epsilon)} \le P(W_k = w) \le e^{-k(H_W - \epsilon)} \right\}.$

For most reasonable sources [2], $P(W_k \in T_k) > 0$ for all $k$ sufficiently large, and typical set encoding results in a new source of words of length $k$, $W_k^\epsilon$, with statistics

  $P(W_k^\epsilon = w) = \begin{cases} P(W_k = w)/P(W_k \in T_k) & \text{if } w \in T_k, \\ 0 & \text{if } w \notin T_k. \end{cases}$  (1)

Appealing to the AEP, these distributions are often substituted for their more readily manipulated uniformly random counterpart, $U_k$,

  $P(U_k = w) := \begin{cases} 1/|T_k| & \text{if } w \in T_k, \\ 0 & \text{if } w \notin T_k, \end{cases}$  (2)

where $|T_k|$ is the number of elements in $T_k$. While the distribution of $W_k^\epsilon$ is near-uniform for large $k$, it is not perfectly uniform unless the original $W_k$ was uniformly distributed on a subset of $A^k$. Is a word selected using the distribution of $W_k^\epsilon$ easier to guess than if it was selected uniformly, via $U_k$?
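As a concrete, minimal sketch (ours; nothing below is from the paper beyond equations (1) and (2), and all function and variable names are illustrative), these objects can be enumerated exactly for a short word length:

```python
from itertools import product
from math import log, exp

def source_and_typical_set(p, k, eps):
    """Enumerate all words of length k for an i.i.d. source with letter
    distribution p, returning their probabilities and the typical set T_k."""
    H = -sum(pa * log(pa) for pa in p)  # specific Shannon entropy h(p)
    lo, hi = exp(-k * (H + eps)), exp(-k * (H - eps))
    words = {}
    for w in product(range(len(p)), repeat=k):
        prob = 1.0
        for a in w:
            prob *= p[a]
        words[w] = prob
    T = {w: q for w, q in words.items() if lo <= q <= hi}
    return words, T

p, k, eps = (0.8, 0.2), 12, 0.1
words, T = source_and_typical_set(p, k, eps)
Z = sum(T.values())                      # P(W_k in T_k)
cond = {w: q / Z for w, q in T.items()}  # conditioned source W_k^eps, eq. (1)
unif = {w: 1.0 / len(T) for w in T}      # uniform approximation U_k, eq. (2)
print(len(T), max(cond.values()), min(cond.values()), 1.0 / len(T))
```

Even at $k = 12$ the largest conditioned probability exceeds the smallest by a factor of four, while the uniform approximation assigns $1/|T_k|$ to every typical word; the remainder of the paper quantifies the asymptotic consequences of this gap.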
Given knowledge of $A$, the source statistics of words, say those of $W_k$, and an oracle against which a word can be tested one at a time, an attacker's optimal strategy is to generate a partial order of the words from most likely to least likely and guess them in turn [12], [7]. That is, the attacker generates a function $G: A^k \to \{1, \dots, m^k\}$ such that $G(w') < G(w)$ if $P(W_k = w') > P(W_k = w)$. The integer $G(w)$ is the number of guesses until the word $w$ is guessed, its Guesswork.

For fixed $k$ it is shown in [12] that the Shannon entropy of the underlying distribution bears little relation to the expected Guesswork, $E(G(W_k))$, the average number of guesses required to guess a word chosen with distribution $W_k$ using the optimal strategy.
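The optimal strategy is easy to realize for the small example above (again a sketch with our own function names, reusing `words`, `cond` and `unif` from the previous block):

```python
def guesswork_order(dist):
    """Rank words from most likely to least likely: G(w) is the rank."""
    ranked = sorted(dist, key=dist.get, reverse=True)
    return {w: i + 1 for i, w in enumerate(ranked)}

def expected_guesswork(dist):
    G = guesswork_order(dist)
    return sum(dist[w] * G[w] for w in dist)

print(expected_guesswork(words))  # unconditioned source W_k
print(expected_guesswork(cond))   # conditioned source W_k^eps
print(expected_guesswork(unif))   # uniform U_k, equals (|T_k| + 1)/2
```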

In a series of subsequent papers [7], [8], [9], [10], under ever less restrictive stochastic assumptions (from words made up of i.i.d. letters, to Markovian letters, to sofic shifts), an asymptotic relationship as word length grows between scaled moments of the Guesswork and specific Rényi entropy was identified:

  $\lim_{k\to\infty} \frac{1}{k} \log E(G(W_k)^\alpha) = \alpha R_W\left(\frac{1}{1+\alpha}\right)$  (3)

for $\alpha > -1$, where $R_W(\beta)$ is the specific Rényi entropy for the process $\{W_k\}$ with parameter $\beta > 0$,

  $R_W(\beta) := \lim_{k\to\infty} \frac{1}{k} \frac{1}{1-\beta} \log \left( \sum_{w\in A^k} P(W_k = w)^\beta \right).$

These results have recently [11] been built on to prove that $\{k^{-1}\log G(W_k)\}$ satisfies a Large Deviation Principle (LDP), e.g. [13]. Define the scaled Cumulant Generating Function (sCGF) of $\{k^{-1}\log G(W_k)\}$ by

  $\Lambda_W(\alpha) := \lim_{k\to\infty} \frac{1}{k} \log E\left(e^{\alpha \log G(W_k)}\right)$ for $\alpha \in \mathbb{R}$,

and make the following two assumptions.

Assumption 1: For $\alpha > -1$, the sCGF $\Lambda_W(\alpha)$ exists, is equal to $\alpha R_W(1/(1+\alpha))$, and has a continuous derivative in that range.

Assumption 2: The limit

  $g_W := \lim_{k\to\infty} \frac{1}{k} \log P(G(W_k) = 1)$  (4)

exists in $(-\infty, 0]$.

Should Assumptions 1 and 2 hold, Theorem 3 of [11] establishes that $\Lambda_W(\alpha) = g_W$ for all $\alpha \le -1$ and that the sequence $\{k^{-1}\log G(W_k)\}$ satisfies an LDP with rate function given by the Legendre-Fenchel transform of the sCGF, $\Lambda_W^*(x) := \sup_{\alpha\in\mathbb{R}} \{x\alpha - \Lambda_W(\alpha)\}$. Assumption 1 is motivated by equation (3), while Assumption 2 is a regularity condition on the probability of the most likely word. With

  $\gamma_W := \lim_{\alpha\downarrow -1} \frac{d}{d\alpha} \Lambda_W(\alpha),$  (5)

where the order of the size of the set of maximum probability words of $W_k$ is $\exp(k\gamma_W)$ [11], $\Lambda_W^*(x)$ can be identified as

  $\Lambda_W^*(x) = \begin{cases} -x - g_W & \text{if } x \in [0, \gamma_W], \\ \sup_{\alpha\in\mathbb{R}} \{x\alpha - \Lambda_W(\alpha)\} & \text{if } x \in (\gamma_W, \log m], \\ +\infty & \text{if } x \notin [0, \log m]. \end{cases}$  (6)

Corollary 5 of [11] uses this LDP to prove a result suggested in [14], [15], that

  $\lim_{k\to\infty} \frac{1}{k} E(\log G(W_k)) = H_W,$  (7)

making clear that the specific Shannon entropy determines the expectation of the logarithm of the number of guesses required to guess the word $W_k$. The growth rate of the expected Guesswork is a distinct quantity whose scaling rules can be determined directly from the sCGF in equation (3),

  $\lim_{k\to\infty} \frac{1}{k} \log E(G(W_k)) = \Lambda_W(1).$

From these expressions and Jensen's inequality, it is clear that the growth rate of the expected Guesswork is no smaller than $H_W$. Finally, as a corollary to the LDP, [11] provides the following approximation to the Guesswork distribution for large $k$:

  $P(G(W_k) = n) \approx \frac{1}{n} \exp\left(-k \Lambda_W^*\left(\frac{1}{k}\log n\right)\right)$  (8)

for $n \in \{1, \dots, m^k\}$. Thus to approximate the Guesswork distribution, it is sufficient to know the specific Rényi entropy of the source and the decay rate of the likelihood of the sequence of most likely words.

Here we show that if $\{W_k\}$ is constructed from i.i.d. letters, then both of the processes $\{U_k\}$ and $\{W_k^\epsilon\}$ also satisfy Assumptions 1 and 2, so that, with the appropriate rate functions, the approximation in equation (8) can be used with $U_k$ or $W_k^\epsilon$ in lieu of $W_k$. This enables us to compare the Guesswork distribution for typical set encoded words with their assumed uniform counterpart. Even in the simple binary alphabet case we establish that, apart from edge cases, a word chosen via $W_k^\epsilon$ is exponentially easier in $k$ to guess on average than one chosen via $U_k$.

III. STATEMENT OF MAIN RESULTS

Assume that the words $\{W_k\}$ are made of i.i.d. letters, defining $p = (p_0, \dots, p_{m-1})$ by $p_a = P(W_1 = a)$. We shall employ the following shorthand:

  $h(l) := -\sum_a l_a \log l_a$ for $l = (l_0, \dots, l_{m-1}) \in [0,1]^m$ with $\sum_a l_a = 1$,

so that $H_W = h(p)$, and

  $D(l\|p) := -\sum_a l_a \log(p_a/l_a).$

Furthermore, define $l^-, l^+ \in [0,1]^m$ by

  $l^- := \arg\max \{ h(l) : h(l) + D(l\|p) = h(p) + \epsilon \},$  (9)
  $l^+ := \arg\max \{ h(l) : h(l) + D(l\|p) = h(p) - \epsilon \},$  (10)

should they exist. Note that $h(l) + D(l\|p) = -\sum_a l_a \log p_a$, so that $l^-$ is the maximum-entropy type on the low-probability boundary of the typical set and $l^+$ that on the high-probability boundary. For $\alpha > -1$, also define $l^{W^\epsilon}(\alpha)$ and $\eta(\alpha)$ by

  $l_a^{W^\epsilon}(\alpha) := \frac{p_a^{1/(1+\alpha)}}{\sum_{b\in A} p_b^{1/(1+\alpha)}}$ for all $a \in A$, and  (11)

  $\eta(\alpha) := -\sum_{a} \frac{p_a^{1/(1+\alpha)}}{\sum_{b\in A} p_b^{1/(1+\alpha)}} \log p_a.$  (12)

Assume that $h(p) + \epsilon < \log m$. If this is not the case, $\log m$ should be substituted in place of $h(l^-)$ in the $U_k$ results. Proofs of the following are deferred to the Appendix.

Lemma 1: Assumption 1 holds for $\{U_k\}$ and $\{W_k^\epsilon\}$ with

  $\Lambda_U(\alpha) := \alpha h(l^-)$

and

  $\Lambda_{W^\epsilon}(\alpha) := \alpha h(l(\alpha)) - D(l(\alpha)\|p),$

where

  $l(\alpha) = \begin{cases} l^+ & \text{if } \eta(\alpha) \le h(p) - \epsilon, \\ l^{W^\epsilon}(\alpha) & \text{if } \eta(\alpha) \in (h(p)-\epsilon,\, h(p)+\epsilon), \\ l^- & \text{if } \eta(\alpha) \ge h(p) + \epsilon. \end{cases}$  (13)
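Lemma 1 lends itself to a direct numerical sketch in the binary case (ours, not the paper's code; the closed forms for the boundary types $l_0^\mp = p_0 \mp \epsilon/(\log p_0 - \log(1-p_0))$ are those derived in Section IV below):

```python
from math import log

def h(l0):
    """Shannon entropy (nats) of the binary distribution (l0, 1 - l0)."""
    return -l0 * log(l0) - (1 - l0) * log(1 - l0)

def D(l0, p0):
    """Relative entropy D((l0, 1-l0) || (p0, 1-p0))."""
    return l0 * log(l0 / p0) + (1 - l0) * log((1 - l0) / (1 - p0))

def scgf_cond(alpha, p0, eps):
    """Lambda_{W^eps}(alpha) = alpha*h(l(alpha)) - D(l(alpha)||p) of Lemma 1,
    for a binary alphabet and alpha > -1."""
    hp = h(p0)
    beta = 1.0 / (1.0 + alpha)
    # tilted letter distribution l^{W^eps}(alpha), eq. (11), and eta(alpha), eq. (12)
    t0 = p0 ** beta / (p0 ** beta + (1 - p0) ** beta)
    eta = -t0 * log(p0) - (1 - t0) * log(1 - p0)
    delta = log(p0) - log(1 - p0)
    if eta <= hp - eps:
        l0 = p0 + eps / delta   # l^+: high-probability boundary binds, eq. (13)
    elif eta >= hp + eps:
        l0 = p0 - eps / delta   # l^-: low-probability boundary binds
    else:
        l0 = t0                 # unconstrained tilted optimizer is feasible
    return alpha * h(l0) - D(l0, p0)

def scgf_unif(alpha, p0, eps):
    """Lambda_U(alpha) = alpha * h(l^-), alpha > -1."""
    l_minus = p0 - eps / (log(p0) - log(1 - p0))
    return alpha * h(l_minus)

print(scgf_cond(1.0, 0.8, 0.1), scgf_unif(1.0, 0.8, 0.1))
```

At $\alpha = 1$ these return the growth rates of the expected Guesswork; for $p_0 = 0.8$ and $\epsilon = 0.1$ the conditioned source's exponent is already visibly below the uniform one, anticipating the Example below.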

Lemma 2: Assumption 2 holds for $\{U_k\}$ and $\{W_k^\epsilon\}$ with

  $g_U = -h(l^-)$ and $g_{W^\epsilon} = \min\left( -h(p) + \epsilon,\ \log \max_a p_a \right).$

Thus, by direct evaluation of the sCGFs at $\alpha = 1$,

  $\lim_k \frac{1}{k} \log E(G(U_k)) = h(l^-)$ and $\lim_k \frac{1}{k} \log E(G(W_k^\epsilon)) = \Lambda_{W^\epsilon}(1).$

As the conditions of Theorem 3 of [11] are satisfied,

  $\lim_k \frac{1}{k} E(\log G(U_k)) = \Lambda_U'(0) = h(l^-)$ and $\lim_k \frac{1}{k} E(\log G(W_k^\epsilon)) = \Lambda_{W^\epsilon}'(0) = h(p),$

and we have the approximations

  $P(G(U_k) = n) \approx \frac{1}{n} \exp\left(-k\Lambda_U^*\left(\tfrac{1}{k}\log n\right)\right)$ and $P(G(W_k^\epsilon) = n) \approx \frac{1}{n} \exp\left(-k\Lambda_{W^\epsilon}^*\left(\tfrac{1}{k}\log n\right)\right).$

IV. EXAMPLE

Consider a binary alphabet $A = \{0, 1\}$ and words $\{W_k\}$ constructed of i.i.d. letters with $p_0 = P(W_1 = 0) > 1/2$. In this case there are unique $l^-$ and $l^+$ satisfying equations (9) and (10), determined by

  $l_0^- = p_0 - \frac{\epsilon}{\log(p_0) - \log(1-p_0)}, \qquad l_0^+ = p_0 + \frac{\epsilon}{\log(p_0) - \log(1-p_0)}.$

Selecting

  $0 < \epsilon < (\log(p_0) - \log(1-p_0))\, \min(p_0 - 1/2,\ 1 - p_0)$

ensures that the typical set is growing more slowly than $2^k$ and that $1/2 < l_0^- < p_0 < l_0^+ < 1$. With $l^{W^\epsilon}(\alpha)$ defined in equation (11), from equations (3) and (4) we have that

  $\Lambda_W(\alpha) = \begin{cases} \log p_0 & \text{if } \alpha \le -1, \\ \alpha h(l^{W^\epsilon}(\alpha)) - D(l^{W^\epsilon}(\alpha)\|p) = (1+\alpha)\log\left( p_0^{1/(1+\alpha)} + (1-p_0)^{1/(1+\alpha)} \right) & \text{if } \alpha > -1. \end{cases}$

From Lemmas 1 and 2 we obtain

  $\Lambda_U(\alpha) = \begin{cases} -h(l^-) & \text{if } \alpha \le -1, \\ \alpha h(l^-) & \text{if } \alpha > -1, \end{cases}$

and

  $\Lambda_{W^\epsilon}(\alpha) = \begin{cases} -h(p) + \epsilon & \text{if } \alpha \le -1, \\ \alpha h(l(\alpha)) - D(l(\alpha)\|p) & \text{if } \alpha > -1, \end{cases}$

where $l(\alpha)$ is defined in equation (13) and $\eta(\alpha)$ in equation (12); the condition on $\epsilon$ above ensures the minimum in Lemma 2 is attained by $-h(p)+\epsilon$. With $\gamma$ defined in equation (5), we have $\gamma_W = 0$, $\gamma_U = h(l^-)$ and $\gamma_{W^\epsilon} = h(l^+)$, so that, as $h(l^-) > h(l^+)$, the ordering of the growth rates with word length of the set of most likely words, from smallest to largest, is: unconditioned source, conditioned source, uniform approximation.

From these sCGF equations we can determine the average growth rates and estimates of the Guesswork distribution. In particular, we have that

  $\lim_k \frac{1}{k} E(\log G(W_k)) = \Lambda_W'(0) = h(p),$
  $\lim_k \frac{1}{k} E(\log G(W_k^\epsilon)) = \Lambda_{W^\epsilon}'(0) = h(p),$
  $\lim_k \frac{1}{k} E(\log G(U_k)) = \Lambda_U'(0) = h(l^-).$

As $h((x, 1-x))$ is monotonically decreasing for $x > 1/2$ and $1/2 < l_0^- < p_0$, the expectation of the logarithm of the Guesswork is growing faster for the uniform approximation than for either the unconditioned or conditioned word source.

The growth rate of the expected Guesswork reveals more features. In particular, with $A := \eta(1) - (h(p) + \epsilon)$,

  $\lim_k \frac{1}{k} \log E(G(W_k)) = 2\log\left(\sqrt{p_0} + \sqrt{1-p_0}\right),$
  $\lim_k \frac{1}{k} \log E(G(W_k^\epsilon)) = \begin{cases} 2\log\left(\sqrt{p_0} + \sqrt{1-p_0}\right) & \text{if } A \le 0, \\ h(l^-) - D(l^-\|p) & \text{if } A > 0, \end{cases}$
  $\lim_k \frac{1}{k} \log E(G(U_k)) = h(l^-).$

From these it can be shown that there is no strict order between the growth rates of the unconditioned and uniform sources, but there is a strict ordering between the uniform approximation and the true conditioned distribution, with the former being strictly larger.
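Before turning to the figures, these growth rates can be tabulated numerically. A sketch (ours, reusing `h`, `D` and `scgf_cond` from the earlier block), with $\epsilon = 1/10$ and $p_0$ restricted so that the condition on $\epsilon$ above holds:

```python
from math import log, sqrt

def growth_rates(p0, eps):
    """lim (1/k) log E[G] for the unconditioned, conditioned and uniform sources."""
    l_minus = p0 - eps / (log(p0) - log(1 - p0))
    rate_W = 2 * log(sqrt(p0) + sqrt(1 - p0))  # Lambda_W(1)
    rate_cond = scgf_cond(1.0, p0, eps)        # Lambda_{W^eps}(1)
    rate_U = h(l_minus)                        # Lambda_U(1)
    return rate_W, rate_cond, rate_U

for p0 in (0.70, 0.75, 0.80, 0.85, 0.90):
    rW, rC, rU = growth_rates(p0, 0.1)
    # rU - rC is always positive; rU - rW changes sign near p0 = 0.8
    print(p0, round(rU - rC, 4), round(rU - rW, 4))
```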

With $\epsilon = 1/10$ and for a range of $p_0$, these formulae are illustrated in Figure 1. The top line plots

  $\lim_k \frac{1}{k}\left( E(\log G(U_k)) - E(\log G(W_k)) \right) = \lim_k \frac{1}{k}\left( E(\log G(U_k)) - E(\log G(W_k^\epsilon)) \right) = h(l^-) - h(p),$

showing that the expected growth rate of the logarithm of the Guesswork is always higher for the uniform approximation than for both the conditioned and unconditioned sources. The second highest line plots the difference in growth rates of the expected Guesswork of the uniform approximation and the true conditioned source,

  $\lim_k \frac{1}{k} \log \frac{E(G(U_k))}{E(G(W_k^\epsilon))} = \begin{cases} h(l^-) - 2\log\left(\sqrt{p_0} + \sqrt{1-p_0}\right) & \text{if } \eta(1) \le h(p) + \epsilon, \\ D(l^-\|p) & \text{if } \eta(1) > h(p) + \epsilon. \end{cases}$

That this difference is always positive, which can be established readily analytically, shows that the expected Guesswork of the true conditioned source grows at a slower exponential rate than that of the uniform approximation. The second line and the lowest line, the difference in growth rates of the uniform and unconditioned expected Guesswork,

  $\lim_k \frac{1}{k} \log \frac{E(G(U_k))}{E(G(W_k))} = h(l^-) - 2\log\left(\sqrt{p_0} + \sqrt{1-p_0}\right),$

initially agree. This quantity can, depending on $p_0$ and $\epsilon$, be either positive or negative. It is negative if the typical set is particularly small in comparison to the number of unconditioned words. For $p_0 = 8/10$, the typical set is growing sufficiently quickly that a word selected from the uniform approximation is easier to guess than one from the unconditioned source.

[Figure 1 about here.] Fig. 1. Bernoulli$(p_0, 1-p_0)$ source: difference in exponential growth rates of Guesswork between the uniform approximation, unconditioned and conditioned distributions, with $\epsilon = 1/10$. Top curve: the difference in expected logarithms between the uniform approximation and both the conditioned and unconditioned word sources. Bottom curve: the log-ratio of the expected Guesswork of the uniform and unconditioned word sources, with the latter harder to guess for large $p_0$. Middle curve: the log-ratio of the uniform and conditioned word sources, which initially follows the lower line before separating and staying positive, showing that the conditioned source is always easier to guess than the typically used uniform approximation.

For this value of $p_0$ we illustrate the difference in Guesswork distributions between the unconditioned $W_k$, conditioned $W_k^\epsilon$ and uniform $U_k$ word sources. If we used the approximation in (8) directly, the graph would not be informative, as the range of the unconditioned source grows exponentially faster than the other two. Instead, Figure 2 plots $-x - \Lambda^*(x)$ for each of the three processes. That is, using equation (8) and its equivalents for the other two processes, it plots $x = k^{-1}\log G(w)$, for $G(w) \in \{1, \dots, 2^k\}$, against the large deviation approximations to $k^{-1}\log P(W_k = w)$, $k^{-1}\log P(W_k^\epsilon = w)$ and $k^{-1}\log P(U_k = w)$, as the resulting plot is unchanging in $k$. The source of the discrepancy in expected Guesswork is apparent, with the unconditioned source having substantially more words (evident despite the logarithmic x-scale). Both it and the true conditioned source have higher probability words that skew their Guesswork.

[Figure 2 about here.] Fig. 2. Bernoulli$(8/10, 2/10)$ source, $\epsilon = 1/10$: Guesswork distribution approximations. For large $k$, the x-axis is $x = k^{-1}\log G(w)$ for $G(w) \in \{1, \dots, 2^k\}$ and the y-axis is the large deviation approximation $k^{-1}\log P(X_k = w) \approx -x - \Lambda_X^*(x)$ for $X = W$, $W^\epsilon$ and $U$. The first plateau for the conditioned and uniform distributions corresponds to those words of maximum probability (slowest exponential decay rate).
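The curves in Figure 2 can be regenerated from the sCGFs above via a numerical Legendre-Fenchel transform. The following sketch (ours) approximates $\Lambda^*$ by a supremum over a finite grid of $\alpha$; the grid starts slightly above $-1$ to avoid floating-point underflow in the tilted distribution, which also automatically produces the linear segment $-x - g$ of equation (6):

```python
import numpy as np

def rate_function(scgf, xs, alphas=np.linspace(-0.999, 60.0, 4000)):
    """Numerical Legendre-Fenchel transform: Lambda*(x) ~ max_a {x*a - Lambda(a)}.
    Beyond each source's support the true transform is +infinity; the finite
    alpha grid merely caps it at a large value there."""
    vals = np.array([scgf(a) for a in alphas])
    return np.array([np.max(x * alphas - vals) for x in xs])

p0, eps = 0.8, 0.1
xs = np.linspace(0.0, np.log(2.0), 201)  # x = (1/k) log G(w), binary alphabet
y_cond = -xs - rate_function(lambda a: scgf_cond(a, p0, eps), xs)
y_unif = -xs - rate_function(lambda a: scgf_unif(a, p0, eps), xs)
print(y_cond[0], y_unif[0])  # plateau heights: g_{W^eps} and g_U
```

Plotting `y_cond` and `y_unif` (and the analogous curve for the unconditioned source) against `xs` reproduces the qualitative picture of Figure 2: an initial plateau at the decay rate of the most likely words, followed by the strictly convex section of the rate function.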

V. CONCLUSION

By establishing that the expected Guesswork of a source conditioned on the typical set grows with a smaller exponent than its usual uniform approximation, we have demonstrated that appealing to the AEP to justify the latter in cryptanalysis is erroneous, and we have instead provided a correct methodology for identifying the Guesswork growth rate.

APPENDIX

Note that, by the definition of $T_k$ as a typical set, $P(W_k \in T_k) \to 1$ as $k \to \infty$ and thus $\lim_k k^{-1}\log P(W_k \in T_k) = 0$, which we will use in the proofs of both lemmas. The proportion of the letter $a \in A$ in a word $w = (w_1, \dots, w_k) \in A^k$ is given by

  $n(w, a) := \frac{1}{k}\left| \{ i : w_i = a \} \right|.$

The number of words in a type $l$, where $l_a \in \{0, 1/k, \dots, 1\}$ for all $a \in A$ and $\sum_a l_a = 1$, is given by

  $N_k(l) := \left| \{ w \in A^k : n(w, a) = l_a \text{ for all } a \in A \} \right|.$

The set of all types, those just in the typical set, and a smooth approximation to those in the typical set are denoted

  $L_k := \{ l : \exists\, w \in A^k \text{ such that } n(w, a) = l_a \ \forall a \in A \},$
  $L_k^\epsilon := \{ l : \exists\, w \in T_k \text{ such that } n(w, a) = l_a \ \forall a \in A \},$
  $L^\epsilon := \{ l : -\sum_a l_a \log p_a \in [h(p) - \epsilon,\ h(p) + \epsilon] \},$

where it can readily be seen that $L_k^\epsilon \subseteq L^\epsilon$ for all $k$. For $U_k$ we need the following lemma.

Lemma 3: The exponential growth rate of the size of the typical set is

  $\lim_{k\to\infty} \frac{1}{k} \log |T_k| = \begin{cases} \log m & \text{if } \log m \le h(p) + \epsilon, \\ h(l^-) & \text{otherwise}, \end{cases}$

where $l^-$ is defined in equation (9).

Proof of Lemma 3: For fixed $k$, by the union bound,

  $\max_{l \in L_k^\epsilon} \frac{k!}{\prod_a (k l_a)!} \le |T_k| \le (k+1)^m \max_{l \in L_k^\epsilon} \frac{k!}{\prod_a (k l_a)!}.$

For the logarithmic limit these two bounds coincide, so consider the optimization problem $\max_{l \in L_k^\epsilon} k^{-1}\log(k!/\prod_a (k l_a)!)$. We can upper bound it by replacing $L_k^\epsilon$ with its smoother superset $L^\epsilon$. Using Stirling's bounds we have that

  $\limsup_k \frac{1}{k} \log \max_{l \in L_k^\epsilon} \frac{k!}{\prod_a (k l_a)!} \le \sup_{l \in L^\epsilon} h(l) = \begin{cases} \log m & \text{if } h(p) + \epsilon \ge \log m, \\ h(l^-) & \text{if } h(p) + \epsilon < \log m. \end{cases}$

For the lower bound, we need to construct a sequence $\{l^{(k)}\}$ such that $l^{(k)} \in L_k^\epsilon$ for all $k$ sufficiently large and $h(l^{(k)})$ converges to either $\log m$ or $h(l^-)$, as appropriate. Let $l = (1/m, \dots, 1/m)$ or $l = l^-$ respectively, let $c \in \arg\max_a p_a$, and define

  $l_a^{(k)} := \frac{\lceil k l_a \rceil}{k}$ if $a \ne c$, and $l_c^{(k)} := 1 - \sum_{b \ne c} \frac{\lceil k l_b \rceil}{k},$

so that the rounding surplus is absorbed by a most likely letter. Then $l^{(k)} \in L_k^\epsilon$ for all $k$ sufficiently large and $h(l^{(k)}) \to h(l)$, as required.
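Lemma 3 can be checked numerically in the binary case by exact type counting, since $|T_k|$ is the sum of $N_k(l)$ over the types in the band; a sketch (ours):

```python
from math import comb, log

def log_Tk_size(p0, eps, k):
    """(1/k) log |T_k| computed exactly by summing binomial type classes."""
    hp = -p0 * log(p0) - (1 - p0) * log(1 - p0)
    total = 0
    for j in range(k + 1):                          # words with j zeros
        logP = j * log(p0) + (k - j) * log(1 - p0)  # per-word log-probability
        if -k * (hp + eps) <= logP <= -k * (hp - eps):
            total += comb(k, j)                     # N_k(l) for l = (j/k, 1-j/k)
    return log(total) / k

p0, eps = 0.8, 0.1
l_minus = p0 - eps / (log(p0) - log(1 - p0))
print("h(l^-) =", -l_minus * log(l_minus) - (1 - l_minus) * log(1 - l_minus))
for k in (50, 200, 800, 3200):
    print(k, log_Tk_size(p0, eps, k))
```

The computed values climb toward $h(l^-) \approx 0.585$ rather than toward $\log 2 \approx 0.693$ or $h(p) \approx 0.500$, as the lemma predicts.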

Proof of Lemma 1: Considering $\{U_k\}$ first,

  $\alpha R_U\left(\frac{1}{1+\alpha}\right) = \lim_k \frac{\alpha}{k} \log |T_k| = \alpha h(l^-),$

by Lemma 3. To evaluate $\Lambda_U(\alpha)$, note that for any $n \in \mathbb{N}$ and $\alpha > 0$,

  $\frac{n^{1+\alpha}}{1+\alpha} = \int_0^n x^\alpha\, dx \le \sum_{i=1}^n i^\alpha \le n^{1+\alpha}.$

Again using Lemma 3, we therefore have

  $\alpha h(l^-) = \lim_k \frac{1}{k} \log \frac{|T_k|^\alpha}{1+\alpha} \le \Lambda_U(\alpha) = \lim_k \frac{1}{k} \log \left( \frac{1}{|T_k|} \sum_{i=1}^{|T_k|} i^\alpha \right) \le \lim_k \frac{\alpha}{k} \log |T_k| = \alpha h(l^-).$

The reverse of these bounds holds for $\alpha \in (-1, 0]$, giving the result.

We break the argument for $\{W_k^\epsilon\}$ into three steps. Step 1 is to show that, for $\alpha > -1$, the existence of $\Lambda_{W^\epsilon}(\alpha)$ and of $\alpha R_{W^\epsilon}(1/(1+\alpha))$ is equivalent to the existence of the limit

  $\lim_k \frac{1}{k} \log \max_{l \in L_k^\epsilon} N_k(l)^{1+\alpha} \prod_a p_a^{k l_a}.$  (14)

Step 2 then establishes this limit and identifies it. Step 3 shows that $\Lambda_{W^\epsilon}(\alpha)$ is continuous for $\alpha > -1$. To achieve steps 1 and 2, we adopt and adapt the method of types argument employed in the extended web version of [8].

Step 1. Two changes from the bounds of [8, Lemma 5.5] are necessary: the consideration of non-i.i.d. sources, by restriction to $T_k$; and the extension of the $\alpha$ range to include $\alpha \in (-1, 0]$ from the $\alpha \ge 0$ given in that document. Adjusted for conditioning on the typical set, for $\alpha \ge 0$ we get

  $\frac{1}{1+\alpha}\, \frac{\max_{l\in L_k^\epsilon} N_k(l)^{1+\alpha} \prod_a p_a^{k l_a}}{\sum_{w\in T_k} P(W_k = w)} \le E\left(e^{\alpha \log G(W_k^\epsilon)}\right) \le (k+1)^{m(1+\alpha)}\, \frac{\max_{l\in L_k^\epsilon} N_k(l)^{1+\alpha} \prod_a p_a^{k l_a}}{\sum_{w\in T_k} P(W_k = w)}.$  (15)

The necessary modification of these inequalities for $\alpha \in (-1, 0]$ gives

  $\frac{\max_{l\in L_k^\epsilon} N_k(l)^{1+\alpha} \prod_a p_a^{k l_a}}{\sum_{w\in T_k} P(W_k = w)} \ge E\left(e^{\alpha \log G(W_k^\epsilon)}\right) \ge \frac{1+\alpha}{(k+1)^m}\, \frac{\max_{l\in L_k^\epsilon} N_k(l)^{1+\alpha} \prod_a p_a^{k l_a}}{\sum_{w\in T_k} P(W_k = w)}.$  (16)

To show that the lower bound holds for $\alpha \in (-1, 0]$, let $l^* \in \arg\max_{l\in L_k^\epsilon} N_k(l)^{1+\alpha} \prod_a p_a^{k l_a}$ and restrict the expectation to words of type $l^*$. Taking $\liminf_k k^{-1}\log$ and $\limsup_k k^{-1}\log$ of equations (15) and (16) establishes that if the limit (14) exists, then $\Lambda_{W^\epsilon}(\alpha)$ exists and equals it. Similar inequalities provide the same result for $\alpha R_{W^\epsilon}(1/(1+\alpha))$.

Step 2. The problem has been reduced to establishing the existence of

  $\lim_k \frac{1}{k} \log \max_{l\in L_k^\epsilon} N_k(l)^{1+\alpha} \prod_a p_a^{k l_a}$

and identifying it. The method of proof is similar to that employed in Lemma 3: we provide an upper bound on the limsup and then establish a corresponding lower bound. If $l^{(k)} \to l$ with $l^{(k)} \in L_k^\epsilon$, then using Stirling's bounds we have that $k^{-1}\log N_k(l^{(k)}) \to h(l)$. This convergence occurs uniformly in $l$ and so, as $L_k^\epsilon \subseteq L^\epsilon$ for all $k$,

  $\limsup_k \frac{1}{k} \log \max_{l\in L_k^\epsilon} N_k(l)^{1+\alpha} \prod_a p_a^{k l_a} \le \sup_{l\in L^\epsilon} \left( (1+\alpha) h(l) + \sum_a l_a \log p_a \right) = \sup_{l\in L^\epsilon} \left( \alpha h(l) - D(l\|p) \right).$  (17)

This is a concave optimization problem in $l$ with convex constraints. Not requiring $l \in L^\epsilon$, the unconstrained optimum over all $l$ is attained at $l^{W^\epsilon}(\alpha)$ defined in equation (11), which determines $\eta(\alpha)$ in equation (12). Thus the optimizer of the constrained problem (17) can be identified as that given in equation (13), and we have that

  $\limsup_k \frac{1}{k} \log \max_{l\in L_k^\epsilon} N_k(l)^{1+\alpha} \prod_a p_a^{k l_a} \le \alpha h(l(\alpha)) - D(l(\alpha)\|p),$

where $l(\alpha)$ is defined in equation (13). We complete the proof by generating a matching lower bound. To do so, for a given $l(\alpha)$ we need only create a sequence $\{l^{(k)}\}$ such that $l^{(k)} \to l(\alpha)$ and $l^{(k)} \in L_k^\epsilon$ for all $k$. If $l(\alpha) = l^-$, then the sequence used in the proof of Lemma 3 suffices. For $l(\alpha) = l^+$, we use the same sequence but with floors in lieu of ceilings and the rounding surplus distributed to a least likely letter instead of a most likely letter. For $l(\alpha) = l^{W^\epsilon}(\alpha)$, either of these sequences can be used.

Step 3. As $\Lambda_{W^\epsilon}(\alpha) = \alpha h(l(\alpha)) - D(l(\alpha)\|p)$, with $l(\alpha)$ defined in equation (13),

  $\frac{d}{d\alpha} \Lambda_{W^\epsilon}(\alpha) = h(l(\alpha)) + \left. \frac{\partial}{\partial l}\left( \alpha h(l) - D(l\|p) \right) \right|_{l = l(\alpha)} \cdot \frac{d}{d\alpha} l(\alpha).$

Thus to establish continuity it suffices to establish continuity of $l(\alpha)$ and its derivative, which can be done readily by calculus.

Proof of Lemma 2: First consider $g_U$:

  $g_U = \lim_k \frac{1}{k} \log \max_{w\in T_k} P(U_k = w) = -\lim_k \frac{1}{k} \log |T_k| = -h(l^-),$

using Lemma 3. For $W_k^\epsilon$, if $g_W < -h(p)+\epsilon$ the result follows simply, so assume that this is not the case. By the property mentioned at the beginning of this section, the normalization does not play a rôle in the limit, i.e.

  $\limsup_k \frac{1}{k} \log \max_{w\in T_k} P(W_k^\epsilon = w) = \limsup_k \frac{1}{k} \log \max_{w\in T_k} P(W_k = w),$

with an analogous equality for the liminf. As $P(W_k = w) \le \exp(-k(h(p)-\epsilon))$ for all $w \in T_k$, the upper bound follows immediately, and we need the corresponding lower bound on $\liminf_k k^{-1}\log \max_{w\in T_k} P(W_k = w)$.

If $g_W > -h(p)+\epsilon$, there exists $K \in \mathbb{N}$ such that for all $k > K$, $k^{-1}\log\max_{w\in A^k} P(W_k = w) > -h(p)+\epsilon$. Then, taking $k > K$,

  $\max_{w\in T_k} P(W_k = w) \ge e^{-k(h(p)-\epsilon)}\, \frac{\min_a p_a}{\max_b p_b}.$

To prove this, we use proof by contradiction. Assume

  $\max_{w\in T_k} P(W_k = w) < e^{-k(h(p)-\epsilon)}\, \frac{\min_a p_a}{\max_b p_b}.$

Take a word $w^* \in \arg\max_{w\in T_k} P(W_k = w)$; there exists at least one letter in $w^*$, $b \in A$, such that $p_b < \max_a p_a$, as $P(W_k = w^*) < \max_{w\in A^k} P(W_k = w)$. We then replace one occurrence of $b$ in $w^*$ with an element of $\arg\max_a p_a$ to make the $k$-letter word $w'$. Then $\log P(W_k = w') > \log P(W_k = w^*)$. As for each $k$ we have only changed one letter, by the assumption above

  $\frac{1}{k} \log P(W_k = w') \le \frac{1}{k} \log P(W_k = w^*) + \frac{1}{k} \log \frac{\max_b p_b}{\min_a p_a} < -h(p)+\epsilon,$

while $P(W_k = w') > P(W_k = w^*) \ge e^{-k(h(p)+\epsilon)}$. This implies $w' \in T_k$ and contravenes our choice of $w^*$. So

  $\liminf_k \frac{1}{k} \log \max_{w\in T_k} P(W_k = w) \ge -h(p)+\epsilon$

if $g_W > -h(p)+\epsilon$. Lastly, if $g_W = -h(p)+\epsilon$,

  $\max_{w\in T_k} P(W_k = w) \ge \begin{cases} \max_{w\in A^k} P(W_k = w) & \text{if } \max_{w\in A^k} P(W_k = w) < e^{-k(h(p)-\epsilon)}, \\ e^{-k(h(p)-\epsilon)}\, \frac{\min_a p_a}{\max_b p_b} & \text{otherwise}. \end{cases}$

The result follows as

  $-h(p)+\epsilon \ge \liminf_k \frac{1}{k} \log \max_{w\in T_k} P(W_k = w) \ge -h(p)+\epsilon + \liminf_k \frac{1}{k} \log \frac{\min_a p_a}{\max_b p_b} = -h(p)+\epsilon.$

ACKNOWLEDGMENT

MC and KD are supported by Science Foundation Ireland Grant No. 11/PI/1177 and the Irish Higher Educational Authority (HEA) PRTLI Network Mathematics Grant. FdPC and MM are sponsored by the Department of Defense under Air Force Contract FA8721-05-C-0002. Opinions, interpretations, recommendations, and conclusions are those of the authors and are not necessarily endorsed by the United States Government. Specifically, this work was supported by Information Systems of ASD(R&E).

REFERENCES

[1] A. Menezes, S. Vanstone, and P. V. Oorschot, Handbook of Applied Cryptography. CRC Press, Inc., 1996.
[2] T. M. Cover and J. A. Thomas, Elements of Information Theory. John Wiley & Sons, 1991.
[3] J. Pliam, "On the incomparability of entropy and marginal guesswork in brute-force attacks," in INDOCRYPT, 2000.
[4] S. Draper, A. Khisti, E. Martinian, A. Vetro, and J. Yedidia, "Secure storage of fingerprint biometrics using Slepian-Wolf codes," in ITA Workshop, 2007.
[5] Y. Sutcu, S. Rane, J. Yedidia, S. Draper, and A. Vetro, "Feature extraction for a Slepian-Wolf biometric system using LDPC codes," in ISIT, 2008.
[6] F. du Pin Calmon, M. Médard, L. Zeger, J. Barros, M. Christiansen, and K. Duffy, "Lists that are smaller than their parts: A coding approach to tunable secrecy," in Proc. 50th Allerton Conference, 2012.
[7] E. Arikan, "An inequality on guessing and its application to sequential decoding," IEEE Trans. Inf. Theory, vol. 42, no. 1, pp. 99–105, 1996.
[8] D. Malone and W. Sullivan, "Guesswork and entropy," IEEE Trans. Inf. Theory, vol. 50, no. 4, 2004; extended web version available as dwmalone/p/guess02.pdf.
[9] C.-E. Pfister and W. Sullivan, "Rényi entropy, guesswork moments and large deviations," IEEE Trans. Inf. Theory, vol. 50, no. 11, 2004.
[10] M. K. Hanawal and R. Sundaresan, "Guessing revisited: A large deviations approach," IEEE Trans. Inf. Theory, vol. 57, no. 1, pp. 70–78, 2011.
[11] M. M. Christiansen and K. R. Duffy, "Guesswork, large deviations and Shannon entropy," IEEE Trans. Inf. Theory, vol. 59, no. 2, 2013.
[12] J. L. Massey, "Guessing and entropy," in Proc. IEEE Int. Symp. Inf. Theory, 1994.
[13] A. Dembo and O. Zeitouni, Large Deviations Techniques and Applications. Springer-Verlag, 1998.
[14] E. Arikan and N. Merhav, "Guessing subject to distortion," IEEE Trans. Inf. Theory, vol. 44, 1998.
[15] R. Sundaresan, "Guessing based on length functions," in Proc. 2007 IEEE International Symposium on Information Theory, 2007.
