Entropy 2011, 13, 595-611; doi:10.3390/e13030595

OPEN ACCESS
entropy
ISSN 1099-4300
www.mdpi.com/journal/entropy

Article

Entropy Measures vs. Kolmogorov Complexity

Andreia Teixeira 1,2,*, Armando Matos 1,3, André Souto 1,2 and Luís Antunes 1,2

1 Computer Science Department, Faculty of Sciences, University of Porto, Rua Campo Alegre 1021/1055, Porto, Portugal; E-Mails: (A.M.); (A.S.); (L.A.)
2 Instituto de Telecomunicações, Av. Rovisco Pais, Lisboa, Portugal
3 Laboratório de Inteligência Artificial e Ciência de Computadores, Rua Campo Alegre 1021/1055, Porto, Portugal

* Author to whom correspondence should be addressed; E-Mail: andreiasofia@ncc.up.pt

Received: 4 January 2011; in revised form: 5 February 2011 / Accepted: 6 February 2011 / Published: 3 March 2011

Abstract: Kolmogorov complexity and Shannon entropy are conceptually different measures. However, for any recursive probability distribution, the expected value of Kolmogorov complexity equals its Shannon entropy, up to a constant. We study whether a similar relationship holds for Rényi and Tsallis entropies of order α, showing that it only holds for α = 1. Regarding a time-bounded analogue relationship, we show that, for some distributions, we have a similar result. We prove that, for the universal time-bounded distribution m^t(x), the Tsallis and Rényi entropies converge if and only if α is greater than 1. We also establish the uniform continuity of these entropies.

Keywords: Kolmogorov complexity; Shannon entropy; Rényi entropy; Tsallis entropy

1. Introduction

The Kolmogorov complexity K(x) measures the amount of information contained in an individual object x (usually a string) by the size of the smallest program that generates it. It naturally characterizes a probability distribution over Σ* (the set of all finite binary strings), assigning a probability of c·2^{−K(x)} to any string x, where c is a constant. This probability distribution is called the universal probability distribution and it is denoted by m.

The Shannon entropy H(X) of a random variable X is a measure of its average uncertainty. It is the smallest number of bits required, on average, to describe the output of the random variable X.

Kolmogorov complexity and Shannon entropy are conceptually different, as the former is based on the length of programs and the latter on probability distributions. However, for any recursive probability distribution (i.e., a distribution that is computable by a Turing machine), the expected value of the Kolmogorov complexity equals the Shannon entropy, up to a constant term depending only on the distribution (see [1]).

Several information measures, or entropies, have been introduced since Shannon's seminal paper [2]. We are interested in two different generalizations of Shannon entropy:

- Rényi entropy [3], an additive measure based on a specific form of mean of the elementary information gain: instead of using the arithmetic mean, Rényi used the Kolmogorov-Nagumo mean associated with the function f(x) = c_1·b^{(1−α)x} + c_2, where c_1 and c_2 are constants, b is a real greater than 1 and α is a non-negative parameter;

- Tsallis entropy [4], a non-additive measure, often called a non-extensive measure, in which the probabilities are scaled by a positive power α, which may either reinforce the large (if α > 1) or the small (if α < 1) probabilities.

Let R_α(P) and T_α(P) denote, respectively, the Rényi and Tsallis entropies associated with the probability distribution P. Both are continuous functions of the parameter α and both are (quite different) generalizations of the Shannon entropy, in the sense that R_1(P) = T_1(P) = H(P) (see [5]).

It is well known (see [6]) that for any recursive probability distribution P over Σ*, the average value of K(x) and the Shannon entropy H(P) are close, in the sense that

0 ≤ Σ_x P(x)·K(x) − H(P) ≤ K(P)    (1)

where K(P) is the length of the shortest program that describes the distribution P. We study whether this property also holds for Rényi and Tsallis entropies. The answer is no: if we replace H by R_α or by T_α, the inequalities (1) are no longer true (unless α = 1).

We also analyze the validity of the relationship (1) when Kolmogorov complexity is replaced by its time-bounded version, proving that it holds for distributions whose cumulative probability distribution is computable in an allotted time. So, for these distributions, the Shannon entropy equals the expected value of the time-bounded Kolmogorov complexity, up to a constant.

We also study the convergence of the Tsallis and Rényi entropies of the universal time-bounded distribution m^t(x) = c·2^{−K^t(x)}, proving that both entropies converge if and only if α > 1. Finally, we prove the uniform continuity of the Tsallis, Rényi and Shannon entropies.

The rest of this paper is organized as follows. In the next section, we present the notions and results that will be used. In Section 3, we study whether the inequalities (1) can be generalized to Rényi and Tsallis entropies, and we also establish a similar relationship for the time-bounded Kolmogorov complexity. In Section 4, we analyze the entropies of the universal time-bounded distribution. In Section 5, we establish the uniform continuity of the entropies, a mathematical result that may be useful in further research.

2. Preliminaries

Σ* = {0, 1}* is the set of all finite binary strings. The empty string is denoted by ε. Σ^n is the set of strings of length n, and |x| denotes the length of the string x. Strings are lexicographically ordered. The logarithm of x in base 2 is denoted by log(x). The real interval between a and b, including a and excluding b, is represented by [a, b). A sequence of real numbers r_1, r_2, ... is denoted by (r_n)_{n∈N}.

2.1. Kolmogorov Complexity

Kolmogorov complexity was introduced independently, in the 1960s, by Solomonoff [7], Kolmogorov [8], and Chaitin [9]. Only essential definitions and basic results are given here; for further details see [1]. The model of computation used is the prefix-free Turing machine, i.e., a Turing machine with a prefix-free domain. A set of strings A is prefix-free if no string in A is a prefix of another string of A. Kraft's inequality guarantees that for any prefix-free set A, Σ_{x∈A} 2^{−|x|} ≤ 1. In this work all resource bounds t considered are time constructible, i.e., there is a Turing machine whose running time is exactly t(n) on every input of size n.

Definition 2.1. (Kolmogorov complexity) Let U be a fixed prefix-free universal Turing machine. For any two strings x, y ∈ Σ*, the Kolmogorov complexity of x given y is K(x|y) = min_p { |p| : U(p, y) = x }, where U(p, y) is the output of the program p with auxiliary input y when it is run on the machine U. For any time constructible t, the t-time-bounded Kolmogorov complexity of x given y is K^t(x|y) = min_p { |p| : U(p, y) = x in at most t(|x|) steps }.

The default value for the auxiliary input y is the empty string ε; for simplicity, we denote K(x|ε) and K^t(x|ε) by K(x) and K^t(x), respectively. The choice of the universal Turing machine affects the running time of a program by, at most, a logarithmic factor, and the program length by, at most, a constant number of extra bits.

Definition 2.2. Let c be a non-negative integer. We say that x ∈ Σ^n is c-Kolmogorov random if K(x) ≥ n − c.

Proposition 2.3. For all strings x, y ∈ Σ*, we have:
- K(x) ≤ K^t(x) + O(1) ≤ |x| + 2·log(|x|) + O(1);
- K(x|y) ≤ K(x) + O(1) and K^t(x|y) ≤ K^t(x) + O(1);
- There are at least 2^n·(1 − 2^{−c}) c-Kolmogorov random strings of length n.

As Solovay [10] observed, for infinitely many x, the time-bounded version of Kolmogorov complexity equals the unbounded version. Formally, we have:

Theorem 2.4. (Solovay [10]) For all time-bounds t(n) ≥ n + O(1), we have K^t(x) = K(x) + O(1) for infinitely many x.

As a consequence of this result, there are strings of arbitrarily large Kolmogorov complexity such that K^t(x) = K(x) + O(1).
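As a side remark, K(x) is not computable, but the length of any lossless encoding of x bounds it from above, up to the additive constants of Proposition 2.3. The following Python sketch is an illustration only; zlib is an arbitrary choice of compressor and the exact figures are machine- and compressor-dependent. It contrasts a highly regular string with a typical one:

```python
import os
import zlib

def complexity_upper_bound_bits(x: bytes) -> int:
    # K(x) is uncomputable; the size of any lossless compression of x only
    # upper-bounds it (plus an additive constant for the decompressor).
    return 8 * len(zlib.compress(x, 9))

print(complexity_upper_bound_bits(b"0" * 1000))       # highly compressible: far below 8000 bits
print(complexity_upper_bound_bits(os.urandom(1000)))  # "random-looking": close to 8000 bits
```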

Definition 2.5. A semi-measure over a discrete set X is a function f : X → [0, 1] such that Σ_{x∈X} f(x) ≤ 1. We say that a semi-measure is a measure if the equality holds. A semi-measure is constructive if it is semi-computable from below, i.e., for each x, there is a Turing machine that produces a monotone increasing sequence of rationals converging to f(x).

An important constructive semi-measure based on Kolmogorov complexity is defined by m(x) = 2^{−K(x)}. This semi-measure dominates any other constructive semi-measure μ (see [11,12]), in the sense that there is a constant c_μ = 2^{−K(μ)} such that, for all x, m(x) ≥ c_μ·μ(x). For this reason, this semi-measure is called universal. Since it is natural to consider time-bounds on Kolmogorov complexity, we can define m^t(x), a time-bounded version of m(x).

Definition 2.6. We say that a function f is computable in time t if there is a Turing machine that, on input x, computes the output f(x) in exactly t(|x|) steps.

Definition 2.7. The t-time-bounded universal distribution is m^t(x) = c·2^{−K^t(x)}, where c is a real number such that Σ_{x∈Σ*} m^t(x) = 1.

In [1], the authors proved that m^{t'} dominates distributions computable in time t, where t' is a time-bound that only depends on t. Formally:

Theorem 2.8. (Claim 7.6.1 in [1]) If μ*, the cumulative probability distribution of μ, is computable in time t(n), then for all x ∈ Σ*, m^{t'}(x) ≥ 2^{−K^{t'}(μ*)}·μ(x), where t'(n) = n·t(n)·log(n·t(n)).

2.2. Entropies

Information Theory was introduced in 1948 by C.E. Shannon [2]. Shannon entropy quantifies the uncertainty of the results of an experiment; it quantifies the average number of bits necessary to describe an outcome from an event.

Definition 2.9. (Shannon entropy [2]) Let X be a finite or infinitely countable set and let X be a random variable taking values in X with distribution P. The Shannon entropy of the random variable X is

H(X) = −Σ_{x∈X} P(x)·log P(x)    (2)

Definition 2.10. (Rényi entropy [3]) Let X be a finite or infinitely countable set, let X be a random variable taking values in X with distribution P, and let α be a non-negative real number. The Rényi entropy of order α of the random variable X is

R_α(X) = (1/(1−α))·log ( Σ_{x∈X} P(x)^α )    (3)

Definition 2.11. (Tsallis entropy [4]) Let X be a finite or infinitely countable set, let X be a random variable taking values in X with distribution P, and let α be a non-negative real number. The Tsallis entropy of order α of the random variable X is

T_α(X) = (1/(α−1))·( 1 − Σ_{x∈X} P(x)^α )    (4)
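As a concrete illustration of Definitions 2.9-2.11, the short Python sketch below evaluates the three entropies of a small finite distribution; the function names are ours, chosen only for this example. Numerically, R_α(P) approaches H(P) as α → 1; with the normalization of Definition 2.11, T_α(P) also converges as α → 1, to the Shannon entropy up to the choice of logarithm base.

```python
import math

def shannon(p):
    return -sum(q * math.log2(q) for q in p if q > 0)

def renyi(p, alpha):
    # Definition 2.10, for alpha >= 0, alpha != 1
    return math.log2(sum(q ** alpha for q in p if q > 0)) / (1 - alpha)

def tsallis(p, alpha):
    # Definition 2.11, for alpha >= 0, alpha != 1
    return (1 - sum(q ** alpha for q in p if q > 0)) / (alpha - 1)

P = [0.5, 0.25, 0.125, 0.125]
print(shannon(P))  # 1.75 bits
for a in (0.5, 0.9, 0.99, 1.01, 2.0):
    print(a, renyi(P, a), tsallis(P, a))  # renyi(P, a) -> 1.75 as a -> 1
```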

It is easy to prove that

lim_{α→1} R_α(X) = lim_{α→1} T_α(X) = H(X)    (5)

Given the conceptual differences in the definitions of Kolmogorov complexity and Shannon entropy, it is interesting that, under some weak restrictions on the distribution of the strings, they are related, in the sense that the value of the Shannon entropy equals the expected value of the Kolmogorov complexity, up to a constant term that only depends on the distribution.

Theorem 2.12. Let P(x) be a recursive probability distribution such that H(P) < ∞. Then,

0 ≤ Σ_x P(x)·K(x) − H(P) ≤ K(P)    (6)

Proof. (Sketch, see [1] for details.) The first inequality follows directly from the Noiseless Coding Theorem, which, for these distributions, states that H(P) ≤ Σ_x P(x)·K(x). Since m is universal, m(x) ≥ 2^{−K(P)}·P(x) for all x, which is equivalent to −log P(x) ≥ K(x) − K(P). Thus, we have:

Σ_x P(x)·K(x) − H(P) = Σ_x P(x)·(K(x) + log P(x))    (7)
  ≤ Σ_x P(x)·(K(x) + K(P) − K(x))    (8)
  = K(P)    (9)

3. Kolmogorov Complexity and Entropy: How Close?

Since Rényi and Tsallis entropies are generalizations of Shannon entropy, we now study whether Theorem 2.12 can be generalized to these entropies. Then, we prove that for distributions such that the cumulative probability distribution is computable in time t(n), the Shannon entropy equals the expected value of the t'-time-bounded Kolmogorov complexity, up to a constant.

First, we observe that the interval [0, K(P)] of the inequalities of Theorem 2.12 is tight, up to a constant term that only depends on the universal Turing machine chosen as reference. A similar study has been included in [1] and in [13]. The following examples illustrate the tightness of this interval. We present a probability distribution that satisfies

Σ_x P(x)·K(x) − H(P) = K(P) + O(1), with K(P) ≈ n    (10)

and a probability distribution that satisfies

Σ_x P(x)·K(x) − H(P) = O(1) and K(P) ≈ n    (11)

Example 3.1. Fix x_0 ∈ Σ^n. Consider the distribution concentrated on x_0, i.e.,

P_n(x) = 1 if x = x_0;  0 otherwise

Notice that describing this distribution is equivalent to describing x_0. So, K(P_n) = K(x_0) + O(1). On the other hand, Σ_x P_n(x)·K(x) − H(P_n) = K(x_0). Thus, if x_0 is c-Kolmogorov random, i.e., K(x_0) ≥ n − c, then K(P_n) ≈ n and Σ_x P_n(x)·K(x) − H(P_n) ≈ n.

Example 3.2. Let y be a string of length n that is c-Kolmogorov random, i.e., K(y) ≥ n − c, and consider the following probability distribution over Σ*:

P_n(x) = 0.y if x = x_0;  1 − 0.y if x = x_1;  0 otherwise

where 0.y represents the real number between 0 and 1 whose binary representation is y. Notice that we can choose x_0 and x_1 such that K(x_0) = K(x_1) = c', where c' is a constant greater than 1 and hence does not depend on n. Thus,

1. K(P_n) ≥ n − c, since describing P_n is essentially equivalent to describing x_0, x_1 and y;
2. Σ_x P_n(x)·K(x) = (0.y)·K(x_0) + (1 − 0.y)·K(x_1) = 0.y·c' + (1 − 0.y)·c' = c';
3. H(P_n) = −0.y·log(0.y) − (1 − 0.y)·log(1 − 0.y) ≤ 1.

Thus,

0 ≤ Σ_x P_n(x)·K(x) − H(P_n) ≤ c' << K(P_n)    (12)

and

K(P_n) ≥ n − c    (13)

Now we address the question whether an analogue of Theorem 2.12 holds for Rényi and Tsallis entropies. We show that the Shannon entropy is the only entropy that verifies simultaneously both inequalities of Theorem 2.12, and thus is the only one suitable to deal with information. For every ε > 0, 0 < ε' < 1, and any probability distribution P with finite support (see [5]), we have:

R_{1+ε}(P) ≤ H(P) ≤ R_{1−ε'}(P)    (14)

Thus,

1. For α ≥ 1, 0 ≤ Σ_x P(x)·K(x) − R_α(P);
2. For α ≤ 1, Σ_x P(x)·K(x) − R_α(P) ≤ K(P).

It is known that, for a given probability distribution with finite support, the Rényi and Tsallis entropies are monotonic increasing functions of each other with respect to α (see [14]). Thus, for every ε > 0 and 0 < ε' < 1, we also have a similar relationship for the Tsallis entropy, i.e.,

T_{1+ε}(P) ≤ H(P) ≤ T_{1−ε'}(P)    (15)

Hence, it follows that:

1. For α ≥ 1, 0 ≤ Σ_x P(x)·K(x) − T_α(P);
2. For α ≤ 1, Σ_x P(x)·K(x) − T_α(P) ≤ K(P).

In the next result we show that the inequalities above are, in general, false for the other values of α.

Proposition 3.3. There are recursive probability distributions P such that:

1. Σ_x P(x)·K(x) − R_α(P) > K(P), where α > 1;
2. Σ_x P(x)·K(x) − R_α(P) < 0, where α < 1;
3. Σ_x P(x)·K(x) − T_α(P) > K(P), where α > 1;
4. Σ_x P(x)·K(x) − T_α(P) < 0, where α < 1.

Proof. For each n, consider the following probability distribution over Σ*:

P_n(x) = 1/2 if x = 0^n;  2^{−n} if x ∈ {0,1}^{n−1};  0 otherwise

It is clear that this distribution is recursive. We use this distribution, for some specific n, to prove all the items.

1. First observe that:

H(P_n) = −Σ_x P_n(x)·log P_n(x)    (16)
= −( (1/2)·log(1/2) + 2^{n−1}·2^{−n}·log(2^{−n}) )    (17)
= 1/2 + n·(2^{n−1}/2^n)    (18)
= (n+1)/2    (19)

Notice also that to describe P_n it is sufficient to give n, so K(P_n) ≤ c·log n, where c is a real number. By Theorem 2.12, we have

Σ_x P_n(x)·K(x) − H(P_n) ≥ 0    (20)

which implies that:

Σ_x P_n(x)·K(x) ≥ (n+1)/2    (21)

On the other hand, by definition:

R_α(P_n) = (1/(1−α))·log Σ_x P_n(x)^α    (22)
= (1/(1−α))·log( 1/2^α + 2^{n−1}/2^{nα} )    (23)
= (1/(1−α))·( log(2^{(n−1)α} + 2^{n−1}) − nα )    (24)

To prove that Σ_x P_n(x)·K(x) − R_α(P_n) > K(P_n), it is sufficient to prove that:

lim_{n→∞} ( Σ_x P_n(x)·K(x) − R_α(P_n) − K(P_n) ) > 0    (25)

i.e., that the limit of the following expression is bigger than 0:

(n+1)/2 − (1/(1−α))·( log(2^{(n−1)α} + 2^{n−1}) − nα ) − c·log n    (26)

But,

lim_{n→∞} ( (n+1)/2 − log(2^{(n−1)α} + 2^{n−1})/(1−α) + nα/(1−α) − c·log n )    (27)
≥ lim_{n→∞} ( (n+1)/2 + log(2^{(n−1)α})/(α−1) − nα/(α−1) − c·log n )    (28)
= lim_{n→∞} ( (n+1)/2 − α/(α−1) − c·log n ) = +∞    (29)

2. To prove this item we use the other inequality of Theorem 2.12:

Σ_x P_n(x)·K(x) − H(P_n) ≤ K(P_n)    (30)

which implies that

Σ_x P_n(x)·K(x) ≤ (n+1)/2 + c·log n    (31)

So,

Σ_x P_n(x)·K(x) − R_α(P_n) ≤ (n+1)/2 + c·log n − (1/(1−α))·( log(2^{(n−1)α} + 2^{n−1}) − nα )    (32)
≤ (n+1)/2 + c·log n − ( log(2^{n−1}) − nα )/(1−α)    (33)
= (n+1)/2 + c·log n − n + 1/(1−α)    (34)

Thus, taking n sufficiently large, the conclusion follows.

3. The Tsallis entropy of order α of the distribution P_n is

T_α(P_n) = (1/(α−1))·( 1 − Σ_x P_n(x)^α )    (35)
= (1/(α−1))·( 1 − 1/2^α − 2^{n−1}/2^{nα} )    (36)
= (2^α − 1)/(2^α·(α−1)) − 2^{n(1−α)−1}/(α−1)    (37)

Using the inequality (21), we get

Σ_x P_n(x)·K(x) − T_α(P_n) ≥ (n+1)/2 − T_α(P_n)    (38)
= (n+1)/2 − (2^α − 1)/(2^α·(α−1)) + 2^{n(1−α)−1}/(α−1)    (39)

Since α > 1, for n sufficiently large,

(n+1)/2 − (2^α − 1)/(2^α·(α−1)) + 2^{n(1−α)−1}/(α−1) ≥ (n+1)/2 − (2^α − 1)/(2^α·(α−1))    (40)
> c·log n ≥ K(P_n)    (41)

4. Using the inequality (31), we get

Σ_x P_n(x)·K(x) − T_α(P_n) ≤ (n+1)/2 + c·log n − (2^α − 1)/(2^α·(α−1)) + 2^{n(1−α)−1}/(α−1)    (42)

Since α < 1, for n sufficiently large, we conclude that:

(n+1)/2 + c·log n − (2^α − 1)/(2^α·(α−1)) + 2^{n(1−α)−1}/(α−1) < 0    (43)

We generalize this result, obtaining the following theorem, whose proof is similar to that of the previous proposition.

Theorem 3.4. For every Δ > 0 and α > 1 there are recursive probability distributions P such that:

1. Σ_x P(x)·K(x) − R_α(P) ≥ (K(P))^α;
2. Σ_x P(x)·K(x) − R_α(P) ≥ K(P) + Δ;
3. Σ_x P(x)·K(x) − T_α(P) ≥ (K(P))^α;
4. Σ_x P(x)·K(x) − T_α(P) ≥ K(P) + Δ.
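To make the behaviour exploited in the proof of Proposition 3.3 concrete, the sketch below evaluates, for the distribution P_n reconstructed above (mass 1/2 on 0^n and 2^{−n} on each string of length n − 1), the Shannon entropy and one Rényi entropy. The Shannon entropy grows like (n+1)/2, in agreement with Equation (19), while R_2(P_n) stays bounded near 2, which is what drives the gap in items 1 and 3. This is a numerical illustration only, not part of the proof.

```python
import math

def shannon(p):
    return -sum(q * math.log2(q) for q in p if q > 0)

def renyi(p, alpha):
    return math.log2(sum(q ** alpha for q in p)) / (1 - alpha)

for n in (6, 10, 14):
    # mass 1/2 on the string 0^n, and 2^-n on each of the 2^(n-1) strings of length n-1
    P_n = [0.5] + [2.0 ** -n] * (2 ** (n - 1))
    print(n, sum(P_n), shannon(P_n), (n + 1) / 2, renyi(P_n, 2.0))
    # the distribution sums to 1; H(P_n) equals (n+1)/2; R_2(P_n) stays close to 2
```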

The previous results show that only the Shannon entropy satisfies the inequalities of Theorem 2.12. Now, we analyze whether a similar relationship holds in a time-bounded Kolmogorov complexity scenario. If, instead of considering K(P) and K(x) in the inequalities of Theorem 2.12, we use their time-bounded versions and impose some computational restrictions on the distributions, we obtain a similar result. Notice that, for the class of distributions mentioned in the following theorem, the entropy equals (up to a constant) the expected value of time-bounded Kolmogorov complexity.

Theorem 3.5. Let P be a probability distribution over Σ^n such that P*, the cumulative probability distribution of P, is computable in time t(n). Setting t'(n) = O( n·t(n)·log(n·t(n)) ), we have

0 ≤ Σ_x P(x)·K^{t'}(x) − H(P) ≤ K^{t'}(P*)    (44)

Proof. The first inequality follows directly from the similar inequality of Theorem 2.12 and from the fact that K^{t'}(x) ≥ K(x). From Theorem 2.8, if P is a probability distribution such that P* is computable in time t(n), then for all x ∈ Σ^n and t'(n) = n·t(n)·log(n·t(n)), K^{t'}(x) + log P(x) ≤ K^{t'}(P*). Then, summing over all x, we get:

Σ_x P(x)·(K^{t'}(x) + log P(x)) ≤ Σ_x P(x)·K^{t'}(P*)    (45)

which is equivalent to Σ_x P(x)·K^{t'}(x) − H(P) ≤ K^{t'}(P*).

4. On the Entropies of the Time-Bounded Universal Distribution

If the time that a program can use to produce a string is bounded, we get the so-called time-bounded universal distribution, m^t(x) = c·2^{−K^t(x)}. In this section, we study the convergence of the three entropies with respect to this distribution.

Theorem 4.1. The Shannon entropy of the distribution m^t diverges.

Proof. Note that if x ≥ 2, then f(x) = x·2^{−x} is a decreasing function. Let A be the set of strings x such that −log m^t(x) ≥ 2; this set is recursively enumerable.

H(m^t) = −Σ_{x∈Σ*} m^t(x)·log m^t(x)    (46)
≥ −Σ_{x∈A} m^t(x)·log m^t(x)    (47)
= Σ_{x∈A} c·2^{−K^t(x)}·(K^t(x) − log c)    (48)
= −c·log c·Σ_{x∈A} 2^{−K^t(x)} + c·Σ_{x∈A} K^t(x)·2^{−K^t(x)}    (49)

So, if we prove that Σ_{x∈A} K^t(x)·2^{−K^t(x)} diverges, the result follows. Assume, by contradiction, that Σ_{x∈A} K^t(x)·2^{−K^t(x)} < d for some d ∈ R. Then, considering

r(x) = (1/d)·K^t(x)·2^{−K^t(x)} if x ∈ A;  0 otherwise    (50)

we conclude that r is a semi-measure. Thus, there is a constant c_1 such that, for all x, r(x) ≤ c_1·m(x). Hence, for x ∈ A, we have

(1/d)·K^t(x)·2^{−K^t(x)} ≤ c_1·2^{−K(x)}    (51)

So, K^t(x) ≤ c_1·d·2^{K^t(x)−K(x)}, which is a contradiction since, by Theorem 2.4, A contains infinitely many strings, of arbitrarily large time-bounded Kolmogorov complexity, such that K^t(x) = K(x) + O(1). The contradiction follows from the assumption that the sum Σ_{x∈A} K^t(x)·2^{−K^t(x)} converges. So, H(m^t) diverges.

Now we show that, similarly to the behavior of the Rényi and Tsallis entropies of the universal distribution m (see [15]), we have R_α(m^t) < ∞ iff α > 1 and T_α(m^t) < ∞ iff α > 1. First observe that:

1. If α > 1, T_α(P) ≤ 1/(α−1) + R_α(P);
2. If α < 1, T_α(P) ≥ 1/(α−1) + R_α(P).

Theorem 4.2. The Tsallis entropy of order α of the time-bounded universal distribution m^t converges iff α > 1.

Proof. From Theorem 8 of [15], we have that Σ_{x∈Σ*} (m(x))^α converges if α > 1. Since m^t is a probability distribution, there is a constant λ such that, for all x, m^t(x) ≤ λ·m(x). So, (m^t(x))^α ≤ (λ·m(x))^α, which implies that

Σ_{x∈Σ*} (m^t(x))^α ≤ λ^α·Σ_{x∈Σ*} (m(x))^α    (52)

from which we conclude that, for α > 1, T_α(m^t) converges.

For α < 1, the proof is analogous to the proof of Theorem 4.1. Suppose that Σ_{x∈Σ*} (m^t(x))^α < d for some d ∈ R. Hence, r(x) = (1/d)·(m^t(x))^α is a constructive semi-measure. Then, there is a constant τ such that, for all x ∈ Σ*,

r(x) = (1/d)·(c·2^{−K^t(x)})^α ≤ τ·2^{−K(x)}    (53)

which is equivalent to

c^α/(d·τ) ≤ 2^{α·K^t(x) − K(x)}    (54)

By Theorem 2.4, there are infinitely many strings x such that K^t(x) = K(x) + O(1). Then it would follow that, for these strings, c^α/(d·τ) ≤ 2^{(α−1)·K(x)+O(1)}, which is false since these particular strings can have arbitrarily large Kolmogorov complexity.

Theorem 4.3. The Rényi entropy of order α of the time-bounded universal distribution m^t converges iff α > 1.

Proof. For α > 1, since Σ_x 2^{−K^t(x)} < +∞, we have Σ_x (2^{−K^t(x)})^α < +∞. Thus, R_α(m^t) converges. For α < 1, suppose, without loss of generality, that α is rational (otherwise take another rational slightly larger than α). Assume that Σ_x (2^{−K^t(x)})^α < +∞. Then, by the universality of m and since (2^{−K^t(x)})^α is computable, we would have m(x) ≥ 2^{−α·K^t(x)−O(1)}, which, by taking logarithms, is equivalent to K^t(x) ≥ (1/α)·(K(x) − O(1)). Since 1/α > 1, this would contradict Solovay's Theorem (Theorem 2.4).

5. Uniform Continuity of the Entropies

Shannon, Rényi, and Tsallis entropies are very useful measures in Information Theory and Physics. In this section we look at these entropies as functions of the corresponding probability distribution, and establish an important property: all three entropies are uniformly continuous. We think that this mathematical result may be useful in future research on this topic. It should be mentioned that the uniform continuity of the Tsallis entropy for certain values of α has already been proved in [16].

5.1. Tsallis Entropy

In order to prove the uniform continuity of the Tsallis entropy, we need some technical lemmas.

Lemma 5.1. Let 0 ≤ b < a and 0 < α < 1. We have

(a − b)^α ≥ a^α − b^α    (55)

Proof. Consider the function f(x) = x^α for 0 < α < 1 and x ≥ 0. So, f'(x) = α·x^{α−1} is positive for all x > 0 and f''(x) = α(α−1)·x^{α−2} is negative. We want to prove that:

f(a − b) ≥ f(a) − f(b)    (56)

If we shift a and b to the left,

a' = a − e,  b' = b − e    (57)

it is easy to show that the value of f(a') − f(b') (for e ≥ 0) is greater than or equal to f(a) − f(b). In particular, if e = b,

f(a') − f(b') = f(a − b) − f(0) = f(a − b) ≥ f(a) − f(b)    (58)

which means that (a − b)^α ≥ a^α − b^α.
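Lemma 5.1 is elementary and can be spot-checked numerically; the grid search below is, of course, an illustration and not a substitute for the proof.

```python
import itertools

# Spot-check of Lemma 5.1: (a - b)^alpha >= a^alpha - b^alpha for 0 <= b < a and 0 < alpha < 1.
grid = [i / 100 for i in range(0, 101)]
ok = all((a - b) ** alpha >= a ** alpha - b ** alpha - 1e-12  # small tolerance for rounding
         for alpha in (0.1, 0.3, 0.5, 0.7, 0.9)
         for a, b in itertools.product(grid, grid)
         if b < a)
print(ok)  # True
```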

Lemma 5.2. Let 0 ≤ b < a ≤ 1 and α > 1. We have

a^α − b^α ≤ α·(a − b)    (59)

Proof. Consider the function f(x) = x^α for α > 1 and x ≥ 0. Let 0 ≤ a ≤ 1, 0 ≤ b ≤ 1. The function f is continuous in [b, a] and differentiable in (b, a). So, by Lagrange's Theorem,

∃ c ∈ (b, a) : a^α − b^α = f'(c)·(a − b)    (60)

Note that f'(c) = α·c^{α−1} ≤ α. Thus, f'(c)·(a − b) ≤ α·(a − b).

Theorem 5.3. Let P and Q be two probability distributions over Σ^n. Let T_α(P) and T_α(Q) be the Tsallis entropies of the distributions P and Q, respectively, where α > 0, α ≠ 1. Then,

∀ε > 0, ∃δ > 0, ∀P, Q : max_x |P(x) − Q(x)| ≤ δ ⟹ |T_α(P) − T_α(Q)| ≤ ε    (61)

Proof. If α < 1, we have:

|T_α(P) − T_α(Q)| = | (1/(α−1))·(1 − Σ_x P(x)^α) − (1/(α−1))·(1 − Σ_x Q(x)^α) |    (62)
= (1/(1−α))·| Σ_x P(x)^α − Σ_x Q(x)^α |    (63)
≤ (1/(1−α))·Σ_x | P(x)^α − Q(x)^α |    (64)
≤ (1/(1−α))·Σ_x | P(x) − Q(x) |^α, by Lemma 5.1    (65)
≤ (1/(1−α))·2^n·δ^α    (66)

Thus, it is sufficient to consider δ = ( (1−α)·ε/2^n )^{1/α}.

If α > 1, we have:

|T_α(P) − T_α(Q)| = | (1/(α−1))·(1 − Σ_x P(x)^α) − (1/(α−1))·(1 − Σ_x Q(x)^α) |    (67)
= (1/(α−1))·| Σ_x P(x)^α − Σ_x Q(x)^α |    (68)
≤ (1/(α−1))·Σ_x | P(x)^α − Q(x)^α |    (69)
≤ (α/(α−1))·Σ_x | P(x) − Q(x) |, by Lemma 5.2    (70)
≤ (α/(α−1))·2^n·δ    (71)

Thus, it is sufficient to consider δ = (α−1)·ε/(α·2^n).
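The proof of Theorem 5.3 is constructive: for each ε it exhibits an explicit δ. The sketch below takes the δ prescribed for the case α > 1 and checks the bound on one perturbed pair of distributions; the particular P and Q are ours, chosen only for illustration.

```python
def tsallis(p, alpha):
    return (1 - sum(q ** alpha for q in p)) / (alpha - 1)

n, alpha, eps = 3, 2.5, 1e-3
delta = (alpha - 1) * eps / (alpha * 2 ** n)  # the delta chosen in the proof for alpha > 1

N = 2 ** n
P = [1.0 / N] * N                         # uniform distribution over Sigma^n
Q = [P[0] + delta, P[1] - delta] + P[2:]  # perturbation with max_x |P(x) - Q(x)| = delta
print(abs(tsallis(P, alpha) - tsallis(Q, alpha)) <= eps)  # True
```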

5.2. Shannon Entropy

Lemma 5.4. The function x·log x is uniformly continuous in [0, 1].

Proof. The function is continuous in [0, 1] and this is a compact set, so the function is uniformly continuous.

Theorem 5.5. Let P and Q be two probability distributions over Σ^n. Let H(P) and H(Q) be the Shannon entropies of the distributions P and Q, respectively. Then,

∀ε > 0, ∃δ > 0, ∀P, Q : max_x |P(x) − Q(x)| ≤ δ ⟹ |H(P) − H(Q)| ≤ ε    (72)

Proof. By Lemma 5.4, we have that:

∀γ > 0, ∃β > 0, ∀x, y : |x − y| ≤ β ⟹ |x·log x − y·log y| ≤ γ    (73)

So,

|H(P) − H(Q)| = | −Σ_x P(x)·log P(x) + Σ_x Q(x)·log Q(x) |    (74)
= | Σ_x ( P(x)·log P(x) − Q(x)·log Q(x) ) |    (75)
≤ Σ_x | P(x)·log P(x) − Q(x)·log Q(x) |    (76)
≤ 2^n·γ    (77)

It is sufficient to consider γ = ε/2^n and then to take δ = β.

5.3. Rényi Entropy

Lemma 5.6. Let a and b be two real numbers greater than or equal to 1. Then,

|log a − log b| ≤ 2·|a − b|    (78)

Proof. Consider the function f(x) = log x with x ≥ 1. Let a ≥ 1, b ≥ 1. The function f is continuous and differentiable for x ≥ 1. So, by Lagrange's Theorem, there is c between a and b such that

log a − log b = f'(c)·(a − b)    (79)

Note that f'(c) = 1/(c·ln 2) < 2. Thus, |log a − log b| = (1/(c·ln 2))·|a − b| < 2·|a − b|.

Lemma 5.7. The function x^α is uniformly continuous in [0, 1]. Thus,

∀γ > 0, ∃β > 0, ∀x, y : |x − y| ≤ β ⟹ |x^α − y^α| ≤ γ    (80)

Proof. The function is continuous in [0, 1] and this is a compact set, so the function is uniformly continuous.

Theorem 5.8. Let P and Q be two probability distributions over Σ^n. Let R_α(P) and R_α(Q) be the Rényi entropies of the distributions P and Q, respectively, where α > 0, α ≠ 1. Then,

∀ε > 0, ∃δ > 0, ∀P, Q : max_x |P(x) − Q(x)| ≤ δ ⟹ |R_α(P) − R_α(Q)| ≤ ε    (81)

Proof. If α < 1: Consider γ = ε·(1−α)/2^{n+1}. By Lemma 5.7, we know that there is β such that if |x − y| ≤ β then |x^α − y^α| ≤ γ. So, consider δ = β. We have to show that if max_x |P(x) − Q(x)| ≤ δ then |R_α(P) − R_α(Q)| ≤ ε:

|R_α(P) − R_α(Q)| = | (1/(1−α))·log Σ_x P(x)^α − (1/(1−α))·log Σ_x Q(x)^α |    (82)
= (1/(1−α))·| log Σ_x P(x)^α − log Σ_x Q(x)^α |    (83)
≤ (2/(1−α))·| Σ_x P(x)^α − Σ_x Q(x)^α |, by Lemma 5.6    (84)
≤ (2/(1−α))·Σ_x | P(x)^α − Q(x)^α |    (85)
≤ (2/(1−α))·2^n·( ε·(1−α)/2^{n+1} ), by Lemma 5.7    (86)
= ε    (87)

(Lemma 5.6 applies because, for α < 1, both Σ_x P(x)^α and Σ_x Q(x)^α are at least 1.)

If α > 1: One of the P(x) is at least 1/2^n, so that Σ_x P(x)^α ≥ 1/2^{αn} and |d/dz log z| = 1/(z·ln 2) ≤ 2^{αn}/ln 2 for z ≥ 2^{−αn}; assume that max_x |P(x) − Q(x)| ≤ δ. We have

| log Σ_x P(x)^α − log Σ_x Q(x)^α | ≤ (2^{αn}/ln 2)·| Σ_x P(x)^α − Σ_x Q(x)^α |    (88)

But, by Lemma 5.2, we have

| Σ_x P(x)^α − Σ_x Q(x)^α | ≤ Σ_x | P(x)^α − Q(x)^α | ≤ α·2^n·δ    (89)

Thus,

| log Σ_x P(x)^α − log Σ_x Q(x)^α | ≤ (α·2^{n(α+1)}/ln 2)·δ    (90)

We get

|R_α(P) − R_α(Q)| ≤ ( α·2^{n(α+1)}/(ln 2·(α−1)) )·δ    (91)

It is sufficient to consider δ = ( ln 2·(α−1)/(α·2^{n(α+1)}) )·ε.

6. Conclusions

We proved that, among the three entropies we have studied, the Shannon entropy is the only one that satisfies the relationship with the expected value of Kolmogorov complexity stated in [1], by exhibiting a probability distribution for which the relationship fails for some values of α of the Tsallis and Rényi entropies. Furthermore, under the assumption that the cumulative probability distribution is computable in an allotted time, a time-bounded version of the same relationship holds for the Shannon entropy. Since it is natural to define a probability distribution based on time-bounded Kolmogorov complexity, we studied the convergence of this distribution under the Shannon entropy and its two generalizations: the Tsallis and Rényi entropies. We also proved that the three entropies considered in this work are uniformly continuous.

Acknowledgements

This work was partially supported by the project CSI (PTDC/EIA-CCO/09995/008), the grants SFRH/BD/3334/007 and SFRH/BD/849/006 from Fundação para a Ciência e a Tecnologia (FCT), and by funds granted to Instituto de Telecomunicações and Laboratório de Inteligência Artificial e Ciência de Computadores through the Programa de Financiamento Plurianual.

References

1. Li, M.; Vitányi, P. An Introduction to Kolmogorov Complexity and Its Applications, 3rd ed.; Springer Publishing Company, Incorporated: New York, NY, USA, 2008.
2. Shannon, C.E. A mathematical theory of communication. Bell Syst. Tech. J. 1948, 27, 379-423, 623-656.
3. Rényi, A. On measures of entropy and information. In Proceedings of the 4th Berkeley Symposium on Mathematics, Statistics and Probability, Statistical Laboratory of the University of California, Berkeley, CA, USA, 20 June-30 July 1960; pp. 547-561.
4. Tsallis, C. Possible generalization of Boltzmann-Gibbs statistics. J. Stat. Phys. 1988, 52, 479-487.
5. Cachin, C. Entropy Measures and Unconditional Security in Cryptography, 1st ed.; ETH Series in Information Security and Cryptography; Hartung-Gorre Verlag: Konstanz, Germany, 1997.
6. Cover, T.; Gács, P.; Gray, R.M. Kolmogorov's contributions to information theory and algorithmic complexity. Ann. Probab. 1989, 17, 840-865.
7. Solomonoff, R. A formal theory of inductive inference. Part I. Inform. Contr. 1964, 7, 1-22.
8. Kolmogorov, A.N. Three approaches to the quantitative definition of information. Probl. Inform. Transm. 1965, 1, 1-7.
9. Chaitin, G.J. On the length of programs for computing finite binary sequences. J. ACM 1966, 13, 547-569.
10. Solovay, R.M. Draft of a paper (or series of papers) on Chaitin's work. IBM Thomas J. Watson Research Center: Yorktown Heights, NY, USA. Unpublished manuscript, 1975.

11. Levin, L. Laws of information conservation (non-growth) and aspects of the foundation of probability theory. Probl. Peredachi Inf. 1974, 10, 30-35.
12. Gács, P. On the symmetry of algorithmic information. Soviet Math. Dokl. 1974, 15, 1477-1480.
13. Grünwald, P.; Vitányi, P. Shannon information and Kolmogorov complexity. arXiv 2004, arXiv:cs/0410002.
14. Ramshaw, J. Thermodynamic stability conditions for the Tsallis and Rényi entropies. Phys. Lett. A 1995, 198, 119-121.
15. Tadaki, K. The Tsallis entropy and the Shannon entropy of a universal probability. In Proceedings of the IEEE International Symposium on Information Theory (ISIT 2008), Toronto, ON, Canada, 6-11 July 2008.
16. Zhang, Z. Uniform estimates on the Tsallis entropies. Lett. Math. Phys. 2007, 80, 171-181.

© 2011 by the authors; licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/3.0/).
