How much randomness is needed for statistics?


How much randomness is needed for statistics?

Bjørn Kjos-Hanssen^a, Antoine Taveneaux^b, Neil Thapen^c

a University of Hawai'i at Mānoa, Honolulu, HI 96822, USA
b LIAFA, Université Paris Diderot - Paris 7, Paris Cedex 13, France
c Academy of Sciences of the Czech Republic, Praha 1, Czech Republic

Abstract

In algorithmic randomness, when one wants to define a randomness notion with respect to some non-computable measure λ, a choice needs to be made. One approach is to allow randomness tests to access the measure λ as an oracle (which we call the "classical" approach). The other approach is the opposite one, where the randomness tests are completely effective and do not have access to the information contained in λ (we call this approach "Hippocratic"). While the Hippocratic approach is in general much more restrictive, there are cases where the two coincide. The first author showed in 2010 that in the particular case where the notion of randomness considered is Martin-Löf randomness and the measure λ is a Bernoulli measure, classical randomness and Hippocratic randomness coincide. In this paper, we prove that this result no longer holds for other notions of randomness, namely computable randomness and stochasticity.

Keywords: Hippocratic randomness, martingales, Bernoulli measures

1. Introduction

In algorithmic randomness theory we are interested in which almost sure properties of an infinite sequence of bits are effective or computable in some sense. Martin-Löf defined randomness with respect to the uniform fair-coin measure µ on 2^ω as follows. A sequence X ∈ 2^ω is Martin-Löf random if X ∉ ⋂_{n∈ℕ} U_n for every sequence (U_n)_{n∈ℕ} of uniformly Σ^0_1 (or effectively open) subsets of 2^ω such that µ(U_n) ≤ 2^{−n}. Now if we wish to consider Martin-Löf randomness for a Bernoulli measure µ_p (that is, a measure for which the i-th bit is the result of a Bernoulli trial with parameter p), we have two possible ways to extend the previous definition.
Email addresses: bjoernkh@hawaii.edu (Bjørn Kjos-Hanssen), taveneaux@calculabilite.fr (Antoine Taveneaux), thapen@math.cas.cz (Neil Thapen)

Preprint submitted to Elsevier, November 1, 2012

The first option is to consider p as an oracle (with an oracle for p we can compute µ_p) and relativize everything to this oracle. Then X is µ_p-Martin-Löf random if for every sequence (U_n)_{n∈ℕ} of uniformly Σ^0_1[p] sets such that µ_p(U_n) ≤ 2^{−n} we have X ∉ ⋂_{n∈ℕ} U_n. We will call this approach the classical notion of Martin-Löf randomness relative to µ_p. Another option is to keep the measure µ_p hidden from the process which describes the sequence (U_n). One merely replaces µ by µ_p in Martin-Löf's definition but still requires (U_n) to be uniformly Σ^0_1 in the unrelativized sense. This notion of randomness was introduced by Kjos-Hanssen [K10], who called it Hippocratic randomness; Bienvenu, Doty and Stephan [BDS09] used the term blind randomness. Kjos-Hanssen showed that for Bernoulli measures, Hippocratic and classical randomness coincide in the case of Martin-Löf randomness. Bienvenu, Gács, Hoyrup, Rojas and Shen [BGHRS11] extended Kjos-Hanssen's result to other classes of measures. Here we go in a different direction and consider weaker randomness notions, such as computable randomness and stochasticity. We discover the contours of a dividing line for the type of betting strategy that is needed in order to render the probability distribution superfluous as a computational resource. We view statistics as the discipline concerned with determining the underlying probability distribution µ_p by looking at the bits of a random sequence. In the case of Martin-Löf randomness it is possible to determine p ([K10]), and therefore Hippocratic randomness and classical randomness coincide. In this sense, Martin-Löf randomness is sufficient for statistics to be possible, and it is natural to ask whether smaller amounts of randomness, such as computable randomness, are also sufficient. An earlier version of this paper appeared in the proceedings of the Computability in Europe 2012 conference [KTT12].

Notation. Our notation generally follows Nies' monograph [Nie09].
We write 2^n for {0,1}^n, and for sequences σ ∈ 2^ω we will also use σ to denote the real with binary expansion 0.σ, that is, the real Σ_{i≥1} σ(i)·2^{−i}. We use ε to denote the empty word, σ(n) for the n-th element of a sequence and σ↾n for the sequence formed by the first n elements. For sequences ρ, σ we write σ ≺ ρ if σ is a proper prefix of ρ, and denote the concatenation of σ and ρ by σ.ρ or simply σρ. Throughout the paper we set ⌜n⌝ = n(n−1)/2.

1.1. Hippocratic martingales

Formally a martingale is a function M : 2^{<ω} → ℝ^{≥0} satisfying

M(σ) = (M(σ0) + M(σ1))/2.

Intuitively, such a function arises from a betting strategy for a fair game played with an unbiased coin (a sequence of Bernoulli trials with parameter 1/2). In each round of the game we can choose our stake, that is, how much of our capital

we will bet, and whether we bet on heads (1) or tails (0). A coin is tossed, and if we bet correctly we win back twice our stake. Suppose that our betting strategy is given by some fixed function S of the history σ of the game up to that point. Then it is easy to see that the function M(σ) giving our capital after a play σ satisfies the above equation. On the other hand, from any M satisfying the equation we can recover a corresponding strategy S. More generally, consider a biased coin which comes up heads with probability p ∈ (0, 1). In a fair game played with this coin, we would expect to win back 1/p times our stake if we bet correctly on heads, and 1/(1−p) times our stake if we bet correctly on tails. Hence we define a p-martingale to be a function satisfying

M(σ) = p·M(σ1) + (1−p)·M(σ0).

We can generalize this further, and for any probability measure µ on 2^ω define a µ-martingale to be a function satisfying

µ(σ)M(σ) = µ(σ1)M(σ1) + µ(σ0)M(σ0).

For the Bernoulli measure with parameter p, we say that a sequence X ∈ 2^ω is p-computably random if for every total, p-computable p-martingale M, the sequence (M(X↾n))_n is bounded. This is the classical approach to p-computable randomness. Under the Hippocratic approach, the bits of the parameter p should not be available as a computational resource. The obvious change to the definition would be to restrict to p-martingales M that are computable without an oracle for p. However this does not give a useful definition, as p can easily be recovered from any non-trivial p-martingale. Instead we will define p-Hippocratic computable martingales in terms of their stake function (or strategy) S. We formalize S as a function 2^{<ω} → [−1, 1] ∩ ℚ. The absolute value |S(σ)| gives the fraction of our capital we put up as our stake, and we bet on 1 if S(σ) ≥ 0 and on 0 if S(σ) < 0.
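To make the stake-function convention concrete, here is a minimal Python sketch (our own illustration, not part of the paper) of the capital process driven by a stake function against a coin with bias p; the update rule is chosen so that the fairness condition M(σ) = p·M(σ1) + (1−p)·M(σ0) holds:

```python
def capital(stakes, bits, p):
    """Capital of the betting strategy after playing `bits` against bias p.

    stakes: a function from the history (a tuple of bits) to a stake in
    [-1, 1]; a nonnegative stake bets on 1 and a negative stake bets on 0,
    matching the sign convention of the stake function S in the text.
    """
    m = 1.0
    for i, b in enumerate(bits):
        s = stakes(tuple(bits[:i]))
        if b == 1:
            # win back 1/p times the stake if we bet on 1 and 1 came up
            m *= 1 - abs(s) + (abs(s) / p if s >= 0 else 0.0)
        else:
            # win back 1/(1-p) times the stake if we bet on 0 and 0 came up
            m *= 1 - abs(s) + (abs(s) / (1 - p) if s < 0 else 0.0)
    return m
```

One can check that for any fixed history the p-weighted average of the two possible next capitals equals the current capital, which is exactly the p-martingale equation.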
Given α ∈ (0, 1), the α-martingale M_α arising from S is then defined inductively by

M_α(ε) = 1
M_α(σ1) = M_α(σ)·(1 − |S(σ)| + (|S(σ)|/α)·1_{S(σ)≥0})
M_α(σ0) = M_α(σ)·(1 − |S(σ)| + (|S(σ)|/(1−α))·1_{S(σ)<0})

where, for a formula T, we use the notation 1_{T} to mean the function which takes the value 1 if T is true and 0 if T is false. We define a p-Hippocratic computable martingale to be a p-martingale M_p arising from some total computable (without access to p) stake function S. We say that a sequence X ∈ 2^ω is p-Hippocratic computably random if for every p-Hippocratic computable martingale M, the sequence (M(X↾n))_n is bounded. In Section 2 below we show that for all p ∈ MLR the set of p-Hippocratic computably random sequences is strictly bigger than the set of p-computable

random sequences. More precisely, we show that we can compute a sequence Q ∈ 2^ω from p such that Q is p-Hippocratic computably random. In a nutshell, the proof works as follows. We use the number p in two ways. To compute the i-th bit of Q, the first i bits of p are treated as a parameter r = 0.p_1...p_i, and we pick the i-th bit of Q to look like it has been chosen at random in a Bernoulli trial with bias r. To do this, we use some fresh bits of p (which have not been used so far in the construction of Q) and compare them to r, to simulate the trial. Since these bits of p were never used before, if we know only the first i−1 bits of Q they appear random, and thus the i-th bit of Q indeed appears to be chosen at random with bias r. Since r = 0.p_1 p_2...p_i converges quickly to p (this convergence is faster than the deviations created by statistical noise in a real sequence of Bernoulli trials with parameter p), we are able to show that Q overall looks p-random as long as we do not have access to p; in other words, Q is p-Hippocratic computably random.

1.2. Hippocratic stochasticity and KL randomness

In Section 3 we consider another approach to algorithmic randomness, known as stochasticity. It is reasonable to require that a random sequence satisfies the law of large numbers, that is, that the proportion of 1s in the sequence converges to the bias p. But, for an unbiased coin, the string 0101... satisfies this law yet is clearly not random. Following this idea, we say that a sequence X is p-Kolmogorov-Loveland (or p-KL) stochastic if there is no p-computable way to select infinitely many bits from X, where we are not allowed to know the value of a bit before we select it, without the selected sequence satisfying the law of large numbers (see Definition 1 for a formal approach). For this paradigm the Hippocratic approach is clear: we consider only selection functions which are computable without an oracle for p.
We show in Theorem 7 that for p ∈ MLR ∩ ∆^0_2 there exists a sequence Q which is p-Hippocratic KL stochastic but not p-KL stochastic. Again we use p as a random bit generator and create a sequence Q that appears random for a sequence of Bernoulli trials, where the bias of the i-th trial is q_i for a certain sequence (q_i)_i converging to p. Intuitively, the convergence is so slow that it is impossible to do (computable) statistics with Q to recover p, and we are able to show that without access to p the sequence Q appears p-KL stochastic, that is, Q is p-Hippocratic KL stochastic. At the end of Section 3 we consider another notion, Kolmogorov-Loveland randomness. We give a simple argument to show that if we can compute p from every p-Hippocratic KL random sequence, then the p-Hippocratic KL random sequences and the p-KL random sequences are the same (and vice versa).

2. Computable randomness

In this section we show that for any Martin-Löf random bias p, p-computable randomness is a stronger notion than p-Hippocratic computable randomness.

Theorem 1. Let α ∈ MLR. There exists a sequence Q ∈ 2^ω, computable in polynomial time from α, such that Q is α-Hippocratic computably random.

Before giving the proof, we remark that a stronger version of the theorem is true: the sequence Q is in fact α-Hippocratic partial computably random (meaning that we allow the martingale to be a partial computable function; see the definition in [DH10]). Also, a sceptic could (reasonably) complain that it is not really natural for us to make bets without any idea about our current capital. However, if we add an oracle giving the integer part of our capital at each step (or even an approximation with accuracy 2^{−n} when we bet on the n-th bit), Theorem 1 remains true and the proof is the same. In the same spirit one could object that it is more natural to have a stake function giving the amount of our bet (to be placed only if our capital is large enough) rather than the proportion of our capital. For this definition of a Hippocratic computable martingale the theorem likewise remains true, with the same proof.

Proof. Let α ∈ MLR. Then α is not rational, so it cannot be represented by a finite binary sequence, and we may suppose that 0 < α < 1/2. Recall that ⌜n⌝ = n(n−1)/2 and that we freely identify a sequence X (finite or infinite) with the real number with binary expansion 0.X. The proof has the following structure. First, we describe an algorithm to compute a sequence Q from α. To compute each bit Q_n of Q we will use a finite initial segment of α as an approximation of α, and we will compare this with some other fresh bits of α which we treat as though they were produced by a random bit generator. In this way Q_n will approximate the outcome of a Bernoulli trial with bias α. Second, we will suppose for a contradiction that there is an α-Hippocratic computable martingale (that is, a martingale that arises from a stake function computable without α) such that the capital of this martingale is not bounded on Q.
We will show that we can use this stake function to construct a Martin-Löf test (U_n)_n which α does not pass. So let Q = Q_1 Q_2 ... be defined by the condition that:

Q_n = 0 if 0.α_{⌜n⌝+1}...α_{⌜n⌝+n} ≥ 0.α_1...α_n, and Q_n = 1 otherwise.

We can compute Q in polynomial time from α, as we can compute each bit Q_n in time O(n^2). Now let S : 2^{<ω} → [−1, 1] ∩ ℚ be a computable stake function. We will write M_X for the X-martingale arising from S. Suppose for a contradiction that

lim sup_n M_α(Q↾n) = ∞.

Our goal is to use this to define a Martin-Löf test which α fails. The classical argument (see [DH10]) would be to consider the sequence of sets

V_j = {X ∈ 2^ω : ∃n M_α(X↾n) > 2^j},

but without oracle access to α this is not Σ^0_1, and does not define a Martin-Löf test. However it turns out that we can use a similar sequence of sets, based on the idea that, although we cannot compute M_α precisely, we can approximate it using the approximation α_1...α_n of α. For this we will use the following lemma. The proof is rather technical and we postpone it to later in this section.

Lemma 2. For α ∈ MLR, there exists m ∈ ℕ such that if σ and τ extend α↾m, then for all η ∈ 2^{<ω} and all n ≥ m we have: if 0 < τ − σ < 2^{−⌜n⌝} and |η| ≤ n + 1 then |M_σ(η) − M_τ(η)| ≤ 2^{−n}.

Let m be given by Lemma 2 and let ρ be α↾m. Without loss of generality we may assume 2^{−m} < ρ. Let Γ be the operator which converts α_1...α_n into Q_1...Q_n. That is, Γ(α_1...α_k) = Q_1...Q_n where n is the biggest integer such that ⌜n+1⌝ ≤ k. This notation naturally extends to infinite sequences, so we may write Γ(α) = Q. We consider the uniform sequence of Σ^0_1 sets

U'_j = {X_1...X_k : ρ ≼ X_1...X_k and M_{X_1...X_k}(Γ(X_1...X_k)) > 2^j}.

We let U_j denote the set of infinite sequences with a prefix in U'_j. By Lemma 2, the difference

|M_α(Γ(α↾k)) − M_{α↾k}(Γ(α↾k))|

tends to 0 as k → ∞. Since M_α increases unboundedly on Q = Γ(α), it follows that α ∈ U_j for all j. To show that (U_j) is a Martin-Löf test, it remains to show that the measure of U_j is small. Since σ ↦ M_σ(Γ(σ)) is almost an α-martingale, where σ runs over the prefixes of α, we will use a lemma similar to the Kolmogorov inequality (see [DH10]). Again we postpone the proof to later in this section.

Lemma 3. For any number n ≥ m, any extension σ ≽ ρ of length ⌜n⌝ and any prefix-free set Z ⊆ ⋃_{k∈ℕ} {0,1}^{⌜k⌝} of extensions of σ, we have

Σ_{τ∈Z} 2^{−|τ|} M_τ(Γ(τ)) ≤ 2^{−|σ|} e^2 [1 + M_σ(Γ(σ))].

Now fix j and let W_j be a prefix-free subset of U'_j with the property that the set of infinite sequences with a prefix in W_j is exactly U_j. Then by the definition of U'_j, if τ ∈ W_j then M_τ(Γ(τ)) ≥ 2^j.
Hence by Lemma 3 we have:

µ(U_j) = Σ_{τ∈W_j} 2^{−|τ|} ≤ Σ_{τ∈W_j} 2^{−|τ|} M_τ(Γ(τ)) 2^{−j} ≤ 2^{−|ρ|} e^2 (1 + M_ρ(Γ(ρ))) 2^{−j}.

Since 2^{−|ρ|} e^2 (1 + M_ρ(Γ(ρ))) is a constant, this shows that (U_j) is a Martin-Löf test. As α ∈ ⋂_j U_j, it follows that α ∉ MLR. This is a contradiction.
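The bit-by-bit construction of Q used in this proof can be sketched in Python as follows (an illustrative rendering of our own; the helper names and the exact direction of the comparison are our reading of the construction):

```python
def corner(n):
    """⌜n⌝ = n(n-1)/2: the number of "fresh" bits consumed before trial n."""
    return n * (n - 1) // 2

def build_Q(alpha_bits, N):
    """First N bits of Q.  Bit n simulates a Bernoulli trial whose bias is
    the approximation 0.alpha_1...alpha_n, using the previously untouched
    bits alpha_{⌜n⌝+1}, ..., alpha_{⌜n⌝+n} as a uniform random number."""
    Q = []
    for n in range(1, N + 1):
        fresh = alpha_bits[corner(n):corner(n) + n]   # n unused bits of alpha
        approx = alpha_bits[:n]                       # n-bit approximation
        # lexicographic comparison of equal-length bit lists agrees with
        # comparison of the binary reals 0.fresh and 0.approx
        Q.append(1 if fresh < approx else 0)
    return Q
```

Since the fresh block for trial n is disjoint from all earlier blocks, bit n looks like an independent trial with bias close to α to any observer who does not know α itself.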

Notice that this proof makes use of the fact that in our betting strategy we have to proceed monotonically from left to right through the string, making a decision for each bit in turn as we come to it. This is why our construction is able to use α as a random bit generator: at each step it can use bits that were not used to compute the previous bits of Q. Following this idea the question naturally arises: if we are allowed to use a non-monotone strategy, then are the classical and Hippocratic random sequences the same? We explore this question in Section 3. We now return to the postponed proofs. We will need a couple of technical lemmas, the first one giving, roughly speaking, a modulus of continuity for the map (α, X) ↦ M_α(X).

Lemma 4. Let ɛ > 0. Then there exists r ∈ ℕ such that for all sufficiently large k, for all α, β ∈ 2^ω with ɛ < α < β < 1 − ɛ and for all non-empty σ ∈ 2^{<ω},

0 < β − α < 2^{−k} ⟹ |M_α(σ) − M_β(σ)| < 2^{−k+r|σ|}.

Proof. Since 0 < ɛ < α < β < α + 2^{−k} we have

1/(α + 2^{−k}) < 1/β < 1/α < 1/ɛ

and hence

0 < 1/α − 1/β < 1/α − 1/(α + 2^{−k}) = 2^{−k}/(α(α + 2^{−k})) < 2^{−k}/α^2 < 2^{−k}/ɛ^2.

It follows, since |S(σ)| ≤ 1, that

0 ≤ (1 − |S(σ)| + (|S(σ)|/α)·1_{S(σ)≥0}) − (1 − |S(σ)| + (|S(σ)|/β)·1_{S(σ)≥0}) ≤ 2^{−k}/ɛ^2

and symmetrically, since 0 < ɛ < 1 − β < 1 − α < (1 − β) + 2^{−k}, also that

0 ≤ (1 − |S(σ)| + (|S(σ)|/(1−β))·1_{S(σ)<0}) − (1 − |S(σ)| + (|S(σ)|/(1−α))·1_{S(σ)<0}) ≤ 2^{−k}/ɛ^2.

Hence if we write R^X_i for M_X(σ↾i)/M_X(σ↾(i−1)) (with the convention 0/0 = 0) we have for all i ≤ |σ| that |R^α_i − R^β_i| < 2^{−k}/ɛ^2. Furthermore, take s to be a positive integer such that 2^s > 1 + 1/ɛ. Then we know that R^α_i and R^β_i are both always smaller than 2^s. We can now bound |M_α(σ) − M_β(σ)|. Consider the case when

M_α(σ) ≥ M_β(σ) (the other case is symmetrical). Then, writing n for |σ|,

M_α(σ) − M_β(σ) = ∏_{i=1}^n R^α_i − ∏_{i=1}^n R^β_i
≤ ∏_{i=1}^n (R^β_i + 2^{−k}/ɛ^2) − ∏_{i=1}^n R^β_i
= Σ_{∅≠Z⊆[1,n]} (2^{−k}/ɛ^2)^{|Z|} ∏_{i∉Z} R^β_i
≤ 2^n (2^{−k}/ɛ^2)(2^s)^n,

where for the last inequality we are assuming that k is large enough that 2^{−k}/ɛ^2 < 1. The result follows.

Lemma 5. For s ∈ ℝ, s > 0 we have ∏_{n=1}^∞ (1 + s·2^{−n}) < e^s.

Proof. It is enough to show that

Σ_{n=1}^∞ ln((2^n + s)/2^n) = Σ_{n=1}^∞ [ln(2^n + s) − ln(2^n)] < s.

The derivative of ln is the decreasing function x ↦ 1/x, so by the mean value theorem we have that ln(2^n + s) − ln(2^n) < s/2^n, which gives the inequality.

We are now able to prove the lemmas used in the proof of Theorem 1.

Restatement of Lemma 2. For α ∈ MLR, there exists m ∈ ℕ such that if σ and τ extend α↾m, then for all η ∈ 2^{<ω} and all n ≥ m we have: if 0 < τ − σ < 2^{−⌜n⌝} and |η| ≤ n + 1 then |M_σ(η) − M_τ(η)| ≤ 2^{−n}.

Proof. Since α ∈ MLR we can find m_0 ∈ ℕ and ɛ > 0 such that ɛ < 0.α↾m_0 and (0.α↾m_0) + 2^{−m_0} < 1 − ɛ. Let τ, σ, η and n satisfy the assumptions of the lemma, using m_0 in place of m. We must have ɛ < σ < τ < 1 − ɛ, hence by Lemma 4 there exists r depending only on ɛ such that

|M_σ(η) − M_τ(η)| ≤ 2^{−⌜n⌝+r|η|} ≤ 2^{−⌜n⌝+r(n+1)}.

It is clear that we can find m_1 ∈ ℕ such that, for n ≥ m_1,

r(n+1) − ⌜n⌝ = r(n+1) − n(n−1)/2 < −n.

For the lemma take m = max(m_0, m_1).

Now we suppose α ∈ MLR, α < 1/2, and let m be as given by Lemma 2. We write ρ for α↾m. Without loss of generality we may assume 2^{−m} < ρ.

Restatement of Lemma 3. For any number n ≥ m, any extension σ ≽ ρ of length ⌜n⌝ and any prefix-free set Z ⊆ ⋃_{k∈ℕ} {0,1}^{⌜k⌝} of extensions of σ, we have

Σ_{τ∈Z} 2^{−|τ|} M_τ(Γ(τ)) ≤ 2^{−|σ|} e^2 [1 + M_σ(Γ(σ))].

Proof. It is enough to show this for every finite Z. We will use induction on the size p of Z, with the inductive hypothesis that for all n ≥ m, all extensions σ ≽ ρ of length ⌜n⌝ and all suitable sets Z of size p,

Σ_{τ∈Z} 2^{−|τ|} M_τ(Γ(τ)) ≤ 2^{−|σ|} [2e^2 Σ_{i=n}^∞ 2^{−i} + M_σ(Γ(σ))].

Note that the right hand side is bounded by 2^{−|σ|} e^2 [1 + M_σ(Γ(σ))] (as long as n ≥ 2). The base case |Z| = 0 is trivial. Now suppose that the hypothesis is true for all sets of size less than or equal to p, and suppose that |Z| = p + 1. Let ν be the longest extension of σ which has length of the form ⌜k⌝ for some k ∈ ℕ and which is such that all strings in Z are extensions of ν. Then for each string θ of length k there are fewer than p + 1 strings in Z beginning with νθ. Recall that |νθ| = ⌜k⌝ + k = ⌜k+1⌝, and that Γ(ν) and Γ(νθ) are strings of length k − 1 and k respectively. Applying the inductive hypothesis, we have

Σ_{τ∈Z} 2^{−|τ|} M_τ(Γ(τ)) = Σ_{θ∈{0,1}^k} Σ_{τ∈Z, τ≽νθ} 2^{−|τ|} M_τ(Γ(τ))
≤ Σ_{θ∈{0,1}^k} 2^{−|νθ|} [2e^2 Σ_{i=k+1}^∞ 2^{−i} + M_{νθ}(Γ(νθ))]
≤ Σ_{θ∈{0,1}^k} 2^{−|νθ|} [2e^2 Σ_{i=k+1}^∞ 2^{−i} + e^2 2^{−k} + M_ν(Γ(νθ))],

where for the last inequality we use that, by Lemma 2, M_{νθ}(Γ(νθ)) ≤ M_ν(Γ(νθ)) + 2^{−k}. Rearranging the last line, we get

2^{−|ν|} [2e^2 Σ_{i=k+1}^∞ 2^{−i} + e^2 2^{−k} + 2^{−k} Σ_{θ∈{0,1}^k} M_ν(Γ(νθ))].

Now we will find an upper bound for the last term in the round brackets. We will write ν̄ for 0.(ν↾k) and S for S(Γ(ν)). By the definition of Γ, if θ < ν̄ (as real numbers) then Γ(νθ) = Γ(ν).1. Hence, by the definition of M and the

fact that ν̄ ≤ ν, summing only over those θ ∈ {0,1}^k with θ < ν̄, we have

2^{−k} Σ_{θ<ν̄} M_ν(Γ(νθ)) = ν̄ · M_ν(Γ(ν)) (1 − |S| + (|S|/ν)·1_{S≥0})
≤ M_ν(Γ(ν)) (ν(1 − |S|) + |S|·1_{S≥0}).

Observing that ν ≤ ν̄ + 2^{−k} and 1 − ν ≥ 1/2, and that hence

(1 − ν̄)/(1 − ν) ≤ (1 − ν + 2^{−k})/(1 − ν) ≤ 1 + 2^{−k+1},

we similarly get that

2^{−k} Σ_{θ≥ν̄} M_ν(Γ(νθ)) = (1 − ν̄) · M_ν(Γ(ν)) (1 − |S| + (|S|/(1−ν))·1_{S<0})
≤ M_ν(Γ(ν)) ((1 − ν)(1 − |S|) + |S|·1_{S<0} + 2^{−k+1}).

Summing these gives

2^{−k} Σ_{θ∈{0,1}^k} M_ν(Γ(νθ)) ≤ M_ν(Γ(ν)) (1 − |S| + |S|·1_{S≥0} + |S|·1_{S<0} + 2^{−k+1}) = M_ν(Γ(ν))(1 + 2^{−k+1}).

Combining this with our earlier bound, we now have

Σ_{τ∈Z} 2^{−|τ|} M_τ(Γ(τ)) ≤ 2^{−|ν|} [2e^2 Σ_{i=k+1}^∞ 2^{−i} + e^2 2^{−k} + (1 + 2^{−k+1}) M_ν(Γ(ν))].

Finally, let r = k − n, so that |ν| = ⌜k⌝ = ⌜n+r⌝ ≥ ⌜n⌝ + nr = |σ| + nr. Recall that 1 − ν > ν ≥ ρ > 2^{−n}, which means that a ν-martingale can multiply its capital by at most 2^n in one round. Hence, also using Lemma 2,

2^{−|ν|} M_ν(Γ(ν)) ≤ 2^{−|σ|−nr} (2^n)^r M_ν(Γ(σ)) ≤ 2^{−|σ|} (2^{−n} + M_σ(Γ(σ))).

This gives us the bound

2^{−|σ|} [2e^2 Σ_{i=k+1}^∞ 2^{−i} + e^2 2^{−k} + e^2 2^{−n} + M_σ(Γ(σ))]

(absorbing the error factors 1 + 2^{−k+1}, whose accumulated product over the induction is ∏_k (1 + 2·2^{−k}) < e^2 by Lemma 5, into the e^2 constants), which is less than or equal to

2^{−|σ|} [2e^2 Σ_{i=n}^∞ 2^{−i} + M_σ(Γ(σ))].

This completes the induction.

3. Kolmogorov-Loveland stochasticity and randomness

We define Kolmogorov-Loveland stochasticity and show that, in this setting, the Hippocratic and classical approaches give different sets. We also consider whether this is true for Kolmogorov-Loveland randomness, and relate this to a statistical question.

3.1. Definitions

For a finite string σ ∈ {0,1}^n, we write #0(σ) for |{k < n : σ(k) = 0}| and #1(σ) for |{k < n : σ(k) = 1}|. We write Φ(σ) for #1(σ)/n, the frequency of 1s in σ.

Definition 1 (Selection function). A KL selection function is a partial function f : 2^{<ω} → {scan, select} × ℕ. We write f(σ) as a pair (s(σ), n(σ)), and in this paper we insist that for all σ and ρ ≺ σ we have n(ρ) ≠ n(σ), so that each bit is read at most once. Given input X, we write (V^X_f(k))_k for the sequence of strings seen (with bits either scanned or selected) by f, so that

V^X_f(0) = X(n(ε))
V^X_f(k+1) = V^X_f(k).X(n(V^X_f(k))).

We write U^X_f for the subsequence of bits selected by f. Formally U^X_f is the limit of the monotone sequence of strings (T^X_f(k))_k where

T^X_f(0) = ε
T^X_f(k+1) = T^X_f(k) if s(V^X_f(k)) = scan
T^X_f(k+1) = T^X_f(k).X(n(V^X_f(k))) if s(V^X_f(k)) = select.

Informally, the function is used to select bits from X in a non-monotone way. If V is the string of bits we have read so far, n(V) gives the location of the next bit of X to be read. Then s(V) = scan means that we will just read this bit, whereas s(V) = select means that we will also add it to our string T of selected bits.

Definition 2 (p-KL stochastic sequence). A sequence X is p-KL stochastic if for all p-computable KL selection functions f (notice that f can be a partial function) such that the limit U^X_f of (T^X_f(k))_k is infinite, we have

lim_k Φ(T^X_f(k)) = p.

A sequence X is p-Hippocratic KL stochastic if for all KL selection functions f, computable without an oracle for p, such that U^X_f is infinite, we have

lim_k Φ(T^X_f(k)) = p.
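As an illustration of Definition 1, the following Python sketch (our own, with simplified bookkeeping: total functions only, and the read-at-most-once constraint checked at run time) runs a selection function over a sequence and collects the selected bits:

```python
def run_selection(f, X, rounds):
    """Run a KL selection function over the sequence X for `rounds` reads.

    f maps the tuple of bits seen so far to a pair (action, position) with
    action in {'scan', 'select'}; each position may be read at most once.
    Returns the list of selected bits (the analogue of T_f^X)."""
    seen, selected, used = [], [], set()
    for _ in range(rounds):
        action, pos = f(tuple(seen))
        assert pos not in used, "each bit may be read at most once"
        used.add(pos)
        bit = X[pos]
        seen.append(bit)                 # every read bit is "seen"
        if action == 'select':
            selected.append(bit)         # only selected bits count for Φ
    return selected

def frequency(sigma):
    """Φ(σ): the frequency of 1s in σ."""
    return sigma.count(1) / len(sigma)
```

For example, on the alternating sequence 1010..., the monotone rule "select every even position" extracts a selected subsequence of all 1s, whose frequency does not tend to 1/2; this is the sense in which 0101... fails to be stochastic.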

Definition 3 (Generalized Bernoulli measure). A generalized Bernoulli measure λ on 2^ω is determined by a sequence (b^λ_i) of real numbers in (0, 1). For each i, the event {X : X(i) = 1} has probability b^λ_i, and these events are all independent. In other words, for every finite string w the set [w] of sequences beginning with w has measure

λ([w]) = ∏_{i<|w|, w(i)=1} b^λ_i · ∏_{i<|w|, w(i)=0} (1 − b^λ_i).

We say the measure is computable if the sequence (b^λ_i) is uniformly computable. In some sense a generalized Bernoulli measure treats sequences as though they arise from a sequence of independent Bernoulli trials with parameter b^λ_i for the i-th bit. Recall that for a measure λ, a λ-martingale is a function M : 2^{<ω} → ℝ^{≥0} satisfying

λ(σ)M(σ) = λ(σ1)M(σ1) + λ(σ0)M(σ0).

We now define the notion of a KL martingale, which will be able to select which bit it bets on next, in a generalized Bernoulli measure. We use the notation from Definition 1.

Definition 4 (λ-KL randomness). Let λ be a generalized Bernoulli measure. A λ-KL martingale is a pair (f, M) where f is a selection function (s, n) and M is a function 2^{<ω} → ℝ^{≥0} such that, for every sequence X ∈ 2^ω for which f selects infinitely many bits of X,

M(T^X_f(k)) = b^λ_i M(T^X_f(k).1) + (1 − b^λ_i) M(T^X_f(k).0)

for all k ∈ ℕ, where i = n(V^X_f(k)). We say that X is λ-KL random if, for every λ-KL martingale computable with oracle (b^λ_i), the sequence (M(T^X_f(k)))_k is bounded. For a sequence Y, we say that X is λ-KL Y-random if this is true even when the λ-KL martingale is also given oracle access to Y.

3.2. Hippocratic stochasticity is not stochasticity

We will show that, despite the fact that we now allow non-monotone strategies, once again there exist sequences computable from α which are α-Hippocratic KL stochastic, for α ∈ MLR ∩ ∆^0_2 (recall that Chaitin's constant Ω is the prototypical example of such an α).
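Returning to the displayed product in Definition 3, the measure of a cylinder [w] is a finite product over the bits of w; a quick concrete rendering (our own illustration, with a hypothetical function name):

```python
def cylinder_measure(b, w):
    """λ([w]) for the generalized Bernoulli measure with bias sequence b:
    the product over i < |w| of b[i] when w[i] = 1 and (1 - b[i]) when
    w[i] = 0."""
    m = 1.0
    for bias, bit in zip(b, w):
        m *= bias if bit == 1 else 1.0 - bias
    return m
```

With the constant sequence b_i = 1/2 this recovers the uniform measure µ, under which every cylinder of length n has measure 2^{−n}.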
We remark that our proof also shows that for α ∈ MLR ∩ ∆^0_2 the Hippocratic and classical versions of MWC-stochasticity are different (see [DH10] for a formal definition). We first need a lemma:

Lemma 6 ([MSU98]; see also [DH10], p. 311). If X is Martin-Löf random for a computable generalized Bernoulli measure λ, then X is λ-KL random.

Proof (see [DH10]). Consider the set of sequences on which the player achieves capital greater than j, having started with capital 1. For obvious reasons, this is an effectively open set of measure less than 1/j.

Theorem 7. Let α ∈ MLR ∩ ∆^0_2. There exists a sequence Q ∈ 2^ω, computable from α, such that Q is α-Hippocratic KL stochastic.

Proof. We will first define the sequence Q, and then show that Q is λ-KL random for a certain generalized Bernoulli measure λ whose parameters (b^λ_i) converge to α. Finally we will show that it follows that Q is actually α-Hippocratic KL stochastic. Since α ∈ ∆^0_2, by Shoenfield's limit lemma α is the limit of a computable sequence of real numbers (although the convergence must be extremely slow, since α is not computable). In particular there exists a computable sequence of finite strings (β^k)_k such that β^k ∈ {0,1}^k and

lim_k 0.β^k = α.

We define Q_k by

Q_k = 1 if 0.α_{⌜k⌝+1}...α_{⌜k⌝+k} < 0.β^k, and Q_k = 0 otherwise.

We set Q = Q_1 Q_2 .... Intuitively, as in the proof of Theorem 1, we are using α as a random bit generator to simulate a sequence of Bernoulli trials, where trial k has parameter 0.β^k. Notice that the transformation mapping α to Q is a total computable function. We know that in general if g is total computable, and X is (Martin-Löf) random for the uniform measure µ, then g(X) is random for the measure µ ∘ g^{−1} (see [S86] for a proof of this fact). Since α ∈ MLR, in our case this tells us that Q is random for exactly the generalized Bernoulli measure λ given by b^λ_i = 0.β^i. It follows from Lemma 6 that Q is λ-KL random. Finally, by Lemma 8 below we can conclude that Q is α-Hippocratic KL stochastic, completing the argument.

Lemma 8. Let λ be a computable generalized Bernoulli measure and suppose lim_i b^λ_i = p. Then every λ-KL random sequence is p-Hippocratic KL stochastic.

Proof. We prove the contrapositive. Without loss of generality we assume that 0 < p < 1/2. Suppose that the sequence X is not p-Hippocratic KL stochastic.
Then there is a selection function f, computable without an oracle for p, for which the selected sequence U^X_f is infinite and Φ(T^X_f(k)) does not tend to the limit p. We will define a λ-KL martingale which wins on X.
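Before the formal estimate, here is a numerical sanity check (our own sketch, not the paper's construction) of why betting a constant fraction δ of one's capital on 1 gains exponentially when the frequency of 1s among the selected bits exceeds the bias bound:

```python
import math

def log2_capital(k, ones, delta, bias_bound):
    """Base-2 log of the capital after k selected bits, of which `ones`
    were 1, for the strategy that always bets the fraction delta on 1 in a
    fair game whose payout on 1 is 1/bias_bound (illustrative sketch)."""
    win = 1 - delta + delta / bias_bound     # factor applied on each 1
    lose = 1 - delta                         # factor applied on each 0
    return ones * math.log2(win) + (k - ones) * math.log2(lose)
```

For instance, with bias bound 1/2 and δ = 0.1, the log-capital after 1000 bits is positive when 600 of them are 1s and negative when exactly half are: the strategy profits precisely from a surplus of 1s over the bias.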

Without loss of generality, there is a rational number τ > 0 such that p + 2τ < 1 and

lim sup_k Φ(T^X_f(k)) ≥ p + 2τ.

Since (b^λ_k) converges to p, by changing the selection function if necessary we may assume without loss of generality that b^λ_k < p + τ for all locations k read by the selection function. We let

γ = (p + 2τ)/(p + τ) − 1 > 0.

We let δ be a rational in (0, 1) satisfying both

δ (1 − p − τ)^2/(p + τ)^2 ≤ τ and log(1 − δ) > −δ (1 + γ/2)/ln 2

(where log is to base 2). Such a δ exists because log(1 − δ)/δ converges to −1/ln 2 as δ > 0 tends to 0. Note that δ(1 − p − τ)/(p + τ) < 1. Let M be the λ-KL martingale which begins with capital 1 and then, using selection function f, bets every turn a fraction δ of its current capital on the next bit being 1. Formally, writing T_k for T^X_f(k) and i for n(V^X_f(k)), we put M(ε) = 1 and for each k

M(T_k.0) = M(T_k)(1 − δ),
M(T_k.1) = M(T_k)(1 − δ + δ/b^λ_i) ≥ M(T_k)(1 + δ(1 − p − τ)/(p + τ)).

We do not care how M is defined elsewhere. By induction,

M(T_k) ≥ (1 + δ(1 − p − τ)/(p + τ))^{#1(T_k)} (1 − δ)^{#0(T_k)}

and thus

log(M(T_k))/k ≥ (#1(T_k)/k) log(1 + δ(1 − p − τ)/(p + τ)) + (#0(T_k)/k) log(1 − δ).

In the following we use standard properties of the logarithm together with our definitions of τ, δ and γ. In particular, note that if we let x = δ(1 − p −

τ)/(p + τ) then 0 < x < 1, so we have ln(1 + x) > x − x^2/2 > x − x^2. We have

lim sup_k log(M(T_k))/k
≥ (p + 2τ) log(1 + δ(1 − p − τ)/(p + τ)) + (1 − p − 2τ) log(1 − δ)
≥ (p + 2τ)(δ(1 − p − τ)/(p + τ) − δ^2(1 − p − τ)^2/(p + τ)^2)/ln 2 − δ(1 + γ/2)(1 − p − 2τ)/ln 2
≥ (δ/ln 2)((1 + γ)(1 − p − τ) − δ(1 − p − τ)^2/(p + τ)^2 − (1 + γ/2)(1 − p − 2τ))
≥ (δ/ln 2)(τ + γ(1 − p − τ)/2 − τ)
= δγ(1 − p − τ)/(2 ln 2) > 0.

Hence log(M(T_k)) ≥ ck infinitely often, for some strictly positive constant c. Therefore our martingale is unbounded on U^X_f.

3.3. Kolmogorov-Loveland randomness

We have shown that, for computable randomness and non-monotone stochasticity, whether a string is random can depend on whether or not we have access to the actual bias of the coin. It is natural to ask if this remains true for Kolmogorov-Loveland randomness.

Lemma 9 ([MMNRS06]). For sequences X, Y ∈ 2^ω, X ⊕ Y is p-KL random if and only if both X is p-KL Y-random and Y is p-KL X-random, and this remains true in the Hippocratic setting (that is, where the KL martingales do not have oracle access to p).

The proof is a straightforward adaptation of the proof given in [MMNRS06] (Proposition 11). Using Lemma 9 we can show an equivalence between our question and a statistical question.

Theorem 10. The following two sentences are equivalent.

1. The p-Hippocratic KL random and p-KL random sequences are the same.
2. From every p-Hippocratic KL random sequence X, we can compute p.

Proof. For (1) ⟹ (2), we know that if X is p-KL random then it must satisfy the law of the iterated logarithm (see [W00] for this result). Hence we know how quickly Φ(X↾k) converges to p, and using this we can (non-uniformly) compute p from X. For (2) ⟹ (1), suppose that X is p-Hippocratic KL random but not p-KL random. Let X = Y ⊕ Z. Then by Lemma 9, Y (say) is not p-KL Z-random, meaning that there is a KL martingale (M, f) which, given oracle access to p and Z, wins on Y.
On the other hand, both Y and Z remain p-Hippocratic KL random, so in particular by (2), if we have oracle access to Z then we can compute

p. But this means that we can easily convert (M, f) into a p-Hippocratic KL martingale which wins on X, since to answer oracle queries to either Z or p it is enough to scan bits of Z and do some computation.

Acknowledgements

The authors would like to thank Samuel Buss for his invitation to the University of California, San Diego. Without his help and the university's support this paper would never have existed. Taveneaux's research was helped by a travel grant from the Fondation Sciences Mathématiques de Paris. Kjos-Hanssen's research was partially supported by NSF (USA) grant no. DMS. Thapen's research was partially supported by grant IAA of GA AV ČR, by Center of Excellence CE-ITI under grant P202/12/G061 of GA ČR and RVO:, and by a visiting fellowship at the Isaac Newton Institute for the Mathematical Sciences in the programme Semantics and Syntax.

References

[BDS09] Laurent Bienvenu, David Doty, and Frank Stephan. Constructive dimension and Turing degrees. Theory of Computing Systems, Vol. 45(4), 2009.

[BGHRS11] Laurent Bienvenu, Peter Gács, Mathieu Hoyrup, Cristóbal Rojas and Alexander Shen. Algorithmic tests and randomness with respect to a class of measures. Proceedings of the Steklov Institute of Mathematics, Vol. 271, 2011.

[DH10] Rod G. Downey and Denis Hirschfeldt. Algorithmic Randomness and Complexity. Springer, 2010.

[K10] Bjørn Kjos-Hanssen. The probability distribution as a computational resource for randomness testing. Journal of Logic and Analysis, Vol. 2, 2010.

[KTT12] Bjørn Kjos-Hanssen, Antoine Taveneaux and Neil Thapen. How much randomness is needed for statistics? Computability in Europe 2012, LNCS Vol. 7318, 2012.

[Nie09] André Nies. Computability and Randomness. Oxford University Press, 2009.

[MMNRS06] Wolfgang Merkle, Joseph S. Miller, André Nies, Jan Reimann and Frank Stephan. Kolmogorov-Loveland randomness and stochasticity. Annals of Pure and Applied Logic, Vol. 138, 2006.

[MSU98] Andrei A. Muchnik, Alexei L. Semenov and Vladimir A. Uspensky.
Mathematical metaphysics of randomness. Theoretical Computer Science, Vol. 207, 1998.
[S86] Alexander Shen. One more definition of random sequence with respect to computable measure. In First World Congress of the Bernoulli Society on Mathematical Statistics and Probability Theory, Tashkent.
[W00] Yongge Wang. Resource bounded randomness and computational complexity. Theoretical Computer Science, Vol. 237, 2000.
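The step in the concluding argument where oracle access to Z yields p rests on a simple idea: for a sequence Z that is random for the Bernoulli measure with parameter p, the law of large numbers forces the frequency of 1s in long prefixes to converge to p, so p can be approximated to any precision by scanning Z and counting. The sketch below is only an illustration of this frequency-counting idea, not the paper's construction; the function name and the heuristic prefix length are ours.

```python
import random

def estimate_p(Z, precision_bits):
    """Approximate the Bernoulli parameter p by scanning a prefix of Z
    and counting the frequency of 1s.

    Z is modelled as oracle access to the sequence: a function n -> bit.
    For a p-random Z the law of large numbers guarantees convergence;
    the prefix length below is a heuristic choice, not an effective bound.
    """
    n = 4 ** precision_bits  # standard deviation of the mean is ~2^-precision_bits
    ones = sum(Z(i) for i in range(n))
    return ones / n

# Usage: sample a "typical" sequence for p = 0.3 and recover p approximately.
p = 0.3
rng = random.Random(0)
prefix = [1 if rng.random() < p else 0 for _ in range(4 ** 8)]
Z = lambda i: prefix[i]
p_hat = estimate_p(Z, 8)
print(p_hat)  # close to 0.3
```

Note that this only recovers p in the limit; the paper's point is the finer one that such approximations can be carried out by a completely effective (Hippocratic) martingale that never consults p directly.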