Kolmogorov Complexity and Diophantine Approximation
Jan Reimann
Institut für Informatik, Universität Heidelberg
Algorithmic Information Theory

Algorithmic Information Theory classifies objects by their descriptive complexity.

Examples:
- A sequence of 10000 zeroes has low complexity, as there is a much shorter description.
- Any (infinite) sequence computable by an algorithm, such as π, has a short description (the algorithm).
- An outcome of an (unbiased) coin toss has high complexity: there is no shorter description than the sequence itself.

Rigorous formulation: Kolmogorov complexity.
Kolmogorov Complexity

Let U be a universal Turing machine. For a binary string σ ∈ {0,1}*, define

    C(σ) = C_U(σ) = min{ |p| : p ∈ {0,1}*, U(p) = σ },

i.e. C(σ) is the length of the shortest program (for U) that outputs σ.

Kolmogorov's invariance theorem: C is independent of the choice of U (up to an additive constant).

The pigeonhole principle yields that for every length there are incompressible strings, C(σ) ≥ |σ| (in fact, most strings of any given length are close to incompressible).
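C itself is not computable, but any real-world compressor gives a computable upper bound on description length. A small illustration of the low/high complexity contrast from the previous slide (not from the talk; zlib stands in as a crude proxy for the shortest-program length):

```python
import os
import zlib

def description_length(s: bytes) -> int:
    # Length of a zlib-compressed description: a computable
    # upper bound (up to codec overhead) on the complexity of s.
    return len(zlib.compress(s, 9))

regular = b"0" * 10000        # "10000 zeroes": admits a short description
noise = os.urandom(10000)     # coin tosses: essentially incompressible

assert description_length(regular) < 200    # compresses to a few dozen bytes
assert description_length(noise) > 9000     # no shorter than the data itself
```

The compressor plays the role of a (very weak) universal machine: it can only certify upper bounds on complexity, never lower bounds.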
Digression: Euclid Revisited

Suppose there are only finitely many primes p_1, ..., p_n. Then each number N can be factored as

    N = p_1^{k_1} · · · p_n^{k_n},   each k_i ≤ log N.

Therefore, N can be described by a program of length O(n log log N). (Identify natural numbers with their binary representations.)

This yields a contradiction if N is incompressible, C(N) ≥ log N.
Kolmogorov Complexity and Coding

A prefix-free Turing machine is a TM with prefix-free domain. The prefix-free version of C (allow only prefix-free TMs) is denoted by K.

Kraft-Chaitin Theorem: Let {σ_i}_{i∈N} be a set of strings and {l_1, l_2, ...} a sequence of natural numbers ("lengths") such that

    Σ_{i∈N} 2^{−l_i} ≤ 1.

Then one can construct (primitive recursively) a prefix-free TM M and strings {τ_i}_{i∈N} such that |τ_i| = l_i and M(τ_i) = σ_i.
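The combinatorial heart of the theorem is that any lengths satisfying Σ 2^{−l_i} ≤ 1 can be realized by a prefix-free code. A sketch of the offline version (the theorem itself works online, receiving the lengths one at a time; this simplified construction sorts them first):

```python
from fractions import Fraction

def kraft_code(lengths):
    """Assign binary codewords of the given lengths; works whenever
    the Kraft inequality sum(2**-l) <= 1 holds."""
    assert sum(Fraction(1, 2**l) for l in lengths) <= 1, "Kraft inequality violated"
    # Process lengths in increasing order; the running sum marks the left
    # endpoint of the next free dyadic interval of width 2**-l.
    order = sorted(range(len(lengths)), key=lambda i: lengths[i])
    codewords = [None] * len(lengths)
    acc = Fraction(0)
    for i in order:
        l = lengths[i]
        codewords[i] = format(int(acc * 2**l), "0%db" % l)
        acc += Fraction(1, 2**l)
    return codewords

print(kraft_code([1, 2, 3, 3]))   # -> ['0', '10', '110', '111']
```

Because each codeword is the left endpoint of a fresh dyadic interval disjoint from all earlier ones, no codeword can be a prefix of another.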
Kolmogorov Complexity and Coding

Semimeasures: m : {0,1}* → [0, ∞) with Σ_{σ∈{0,1}*} m(σ) ≤ 1.

There exists a maximal enumerable semimeasure m, i.e. m is enumerable from below, and for any enumerable semimeasure m' it holds that m' ≤ c_{m'} · m for some constant c_{m'}.

Coding Theorem (Zvonkin-Levin): K(σ) = −log m(σ) + O(1).

Identify randomness with incompressibility: say an infinite binary sequence ξ is random if for some constant c and all n, K(ξ↾n) ≥ n − c.
Diophantine Approximation

Diophantine approximation classifies real numbers by how well they may be approximated by rational numbers.

Fundamental theorem of Dirichlet: For any irrational α there exist infinitely many p/q (p, q relatively prime) such that

    |α − p/q| ≤ 1/q².

Such a sequence of rationals is obtained from the continued fraction expansion

    α = [a_0, a_1, a_2, ...] = a_0 + 1/(a_1 + 1/(a_2 + 1/(a_3 + ...))),   a_i ∈ N.

The expansion is finite if and only if α is rational.
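Dirichlet's approximants can be computed directly: a sketch (not from the talk) that extracts the continued fraction of √2 from a high-precision rational stand-in and checks |α − p/q| ≤ 1/q² along the convergents, in exact Fraction arithmetic. The 60-digit precision is an arbitrary choice:

```python
from fractions import Fraction
from math import isqrt

def cf_terms(x: Fraction, n: int):
    """First n partial quotients [a_0; a_1, ...] of x."""
    a = []
    for _ in range(n):
        a0 = x.numerator // x.denominator
        a.append(a0)
        if x == a0:
            break
        x = 1 / (x - a0)
    return a

def convergents(a):
    """Convergents p_k/q_k via p_k = a_k p_{k-1} + p_{k-2} (same for q)."""
    p_prev, q_prev, p, q = 1, 0, a[0], 1
    out = [(p, q)]
    for ak in a[1:]:
        p_prev, q_prev, p, q = p, q, ak * p + p_prev, ak * q + q_prev
        out.append((p, q))
    return out

sqrt2 = Fraction(isqrt(2 * 10**120), 10**60)   # sqrt(2) to ~60 digits
a = cf_terms(sqrt2, 12)                        # [1, 2, 2, 2, ...]
for p, q in convergents(a)[:-1]:               # skip the last (truncation effects)
    assert abs(sqrt2 - Fraction(p, q)) < Fraction(1, q * q)
```

Every convergent in fact satisfies the stronger bound |α − p_k/q_k| < 1/(q_k q_{k+1}), from which Dirichlet's 1/q² follows.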
Diophantine Approximation

In general, one cannot improve the exponent 2 in Dirichlet's theorem.

A number β is badly approximable if there exists a constant K > 0 such that for all p/q ∈ Q,

    |β − p/q| ≥ K/q².

Examples of badly approximable numbers:
- The golden mean (1 + √5)/2 = [1,1,1,1,...]; any 0 < K < 1/√5 works.
- √2 = [1,2,2,2,...]; in fact all irrational square roots.
- e mod 1 = [1,2,1,1,4,1,1,6,...] is not badly approximable.
- π mod 1 = [7,15,1,292,1,1,...] ???
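The golden mean's extremal role can be observed numerically: along its convergents (ratios of consecutive Fibonacci numbers), the quantity q²·|φ − p/q| oscillates toward 1/√5 ≈ 0.447 and never falls much below it, so any K < 1/√5 witnesses bad approximability. A sketch in exact arithmetic (the 60-digit precision is an arbitrary choice):

```python
from fractions import Fraction
from math import isqrt

# the golden mean (1 + sqrt(5))/2 as a high-precision rational stand-in
phi = Fraction(10**60 + isqrt(5 * 10**120), 2 * 10**60)

def fib_convergents(n):
    """Convergents of [1, 1, 1, ...]: ratios of consecutive Fibonacci numbers."""
    p_prev, q_prev, p, q = 1, 0, 1, 1
    for _ in range(n):
        p_prev, q_prev, p, q = p, q, p + p_prev, q + q_prev
        yield p, q

vals = [float(q * q * abs(phi - Fraction(p, q))) for p, q in fib_convergents(20)]
# the values oscillate and settle near 1/sqrt(5), staying bounded away from 0
assert all(0.38 < v < 0.48 for v in vals)
```

For a well-approximable number the corresponding values q²·|α − p/q| would instead dip arbitrarily close to 0 along a subsequence.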
Diophantine Approximation

Algebraic numbers are close to badly approximable.

Roth's Theorem: For any algebraic irrational α and any ε > 0,

    |α − p/q| ≤ 1/q^{2+ε}

has only finitely many solutions p/q.

However, there are numbers which are very well approximable. A Liouville number is an irrational α such that for every n there exists p/q with

    |α − p/q| ≤ 1/q^n.

Example: Σ_k 10^{−k!}.
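The Liouville property of α = Σ_k 10^{−k!} is easy to verify symbolically: truncating the series at k = n gives p/q with q = 10^{n!}, and the tail is about 10^{−(n+1)!}, which beats 1/q^n since (n+1)! > n·n!. A quick check with exact fractions (the truncation depth 6 is an arbitrary stand-in for the full sum):

```python
from fractions import Fraction
from math import factorial

def liouville_partial(n):
    """Partial sum of sum_k 10**(-k!): a rational p/q with q = 10**(n!)."""
    return sum(Fraction(1, 10**factorial(k)) for k in range(1, n + 1))

alpha = liouville_partial(6)   # agrees with the true alpha to ~10**(-5040)
for n in range(1, 4):
    p_over_q = liouville_partial(n)
    q = 10**factorial(n)
    # approximation quality 1/q**n for every n: the hallmark of a Liouville number
    assert abs(alpha - p_over_q) < Fraction(1, q**n)
```

Compare Roth's theorem: no algebraic irrational admits such approximations, so every Liouville number is transcendental.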
Metric Diophantine Approximation

What is the situation typically? Are most numbers rather well or rather badly approximable?

For almost all numbers, the exponent 2 cannot be improved much.

Khintchine's Theorem: Let ψ : N → R⁺ be such that lim_{n→∞} ψ(n) = 0. Define

    W_ψ = {α : |α − p/q| < ψ(q) for infinitely many p/q}.

Then, for Lebesgue measure λ,

    λ(W_ψ) = 0 if Σ_k k·ψ(k) < ∞,   λ(W_ψ) = 1 if Σ_k k·ψ(k) = ∞.

So, well-approximable numbers are rather rare. Can we tell how rare?
Hausdorff Measures

Carathéodory-Hausdorff construction on metric spaces: let A ⊆ X, where X is a separable metric space, let h : R → R be monotone increasing, continuous on the right, with h(0) = 0, and let δ > 0.

Define a set function

    H^h_δ(A) = inf { Σ_i h(diam(U_i)) : A ⊆ ∪_i U_i, diam(U_i) ≤ δ }.

Letting δ → 0 yields an (outer) measure. The h-dimensional Hausdorff measure H^h is defined as

    H^h(A) = lim_{δ→0} H^h_δ(A).
Properties of Hausdorff Measures

H^h is Borel regular: all Borel sets are measurable, and for every Y ⊆ X there is a Borel set B ⊇ Y such that H^h(B) = H^h(Y).

An obvious choice for h is h(x) = x^s for some s ≥ 0. For such h, denote the corresponding Hausdorff measure by H^s.

For s = 1, H^1 is the usual Lebesgue measure λ on 2^N.

For 0 ≤ s < t < ∞ and Y ⊆ X:
    H^s(Y) < ∞ implies H^t(Y) = 0,
    H^t(Y) > 0 implies H^s(Y) = ∞.
Hausdorff Dimension

[Figure: H^s(A) as a function of s, jumping from ∞ to 0 at a critical value.]

The Hausdorff dimension of A is

    dim_H(A) = inf{ s ≥ 0 : H^s(A) = 0 } = sup{ t ≥ 0 : H^t(A) = ∞ }.
Hausdorff Dimension and Diophantine Approximation

Let W_δ = W_ψ for ψ(q) = 1/q^δ. So the Liouville numbers are just the ones contained in every W_n.

Jarnik-Besicovitch Theorem: For δ ≥ 2, dim_H W_δ = 2/δ.

In particular, the set of Liouville numbers has dimension zero.
Random Continued Fractions

Other questions: What does a typical continued fraction look like? Do the components reflect some approximation properties?

Interesting from a different point of view: What is the complexity of a function from natural numbers to natural numbers?

In the following, identify an initial segment [a_1, ..., a_n] of a continued fraction with the n-th convergent

    p_n/q_n = a_1 + 1/(a_2 + 1/(a_3 + ... + 1/a_n)).
Random Continued Fractions

Using the ergodic theorem, one can show, for instance, that

    (1/n) log q_n → π²/(12 log 2)

for almost every α (related to the entropy of the Gauss map x ↦ 1/x mod 1).
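This almost-sure limit (Lévy's constant, π²/(12 log 2) ≈ 1.1866 with the natural logarithm) can be checked empirically for a pseudorandom real. A sketch, with the bit precision, seed, and number of terms chosen arbitrarily:

```python
import random
from fractions import Fraction
from math import log, pi

def cf_terms(x: Fraction, n: int):
    """First n partial quotients of x."""
    a = []
    for _ in range(n):
        a0 = x.numerator // x.denominator
        a.append(a0)
        if x == a0:
            break
        x = 1 / (x - a0)
    return a

random.seed(1)
levy = pi**2 / (12 * log(2))                 # ~1.1866
# a "typical" real in [0, 1): 4000 random bits, far more than 300 CF terms need
x = Fraction(random.getrandbits(4000), 2**4000)
a = cf_terms(x, 300)                         # [0; a_1, ..., a_299]

q_prev, q = 1, 0                             # q_{-2}, q_{-1}
for ak in a:
    q_prev, q = q, ak * q + q_prev
est = log(q) / (len(a) - 1)                  # (1/n) log q_n
assert abs(est - levy) < 0.25                # loose tolerance for one sample
```

A single sample fluctuates on the order of a few percent at this depth; averaging over many seeds would tighten the estimate.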
Measure on Baire Space

Basic open cylinders: convergents of continued fractions, [a_0, ..., a_n].

The diameter of such a cylinder (as a subset of [0,1]) can be determined:

    diam[a_0, ..., a_n] = 1/(q_n(q_n + q_{n−1})).

This gives rise to a Borel measure in the usual way (extension from algebra to σ-algebra). Equivalently, Hausdorff measures.
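The diameter formula follows from the determinant identity p_n q_{n−1} − p_{n−1} q_n = ±1: the cylinder [a_0, ..., a_n] is the interval with endpoints p_n/q_n and (p_n + p_{n−1})/(q_n + q_{n−1}). A sketch verifying this in exact arithmetic:

```python
from fractions import Fraction

def cylinder_diameter(a):
    """Exact diameter of the continued fraction cylinder [a_0, ..., a_n]."""
    p_prev, q_prev, p, q = 1, 0, a[0], 1
    for ak in a[1:]:
        p_prev, q_prev, p, q = p, q, ak * p + p_prev, ak * q + q_prev
    # endpoints of the cylinder: p_n/q_n and (p_n + p_{n-1})/(q_n + q_{n-1})
    return abs(Fraction(p, q) - Fraction(p + p_prev, q + q_prev))

for a in ([0, 1, 2, 3], [0, 7, 15, 1], [0, 2, 2, 2, 2]):
    # recompute q_n, q_{n-1} to compare against 1/(q_n (q_n + q_{n-1}))
    p_prev, q_prev, p, q = 1, 0, a[0], 1
    for ak in a[1:]:
        p_prev, q_prev, p, q = p, q, ak * p + p_prev, ak * q + q_prev
    assert cylinder_diameter(a) == Fraction(1, q * (q + q_prev))
```

The agreement is exact for every cylinder, since |p_n(q_n + q_{n−1}) − (p_n + p_{n−1})q_n| = |p_n q_{n−1} − p_{n−1} q_n| = 1.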
Effective (Hausdorff) Measure

Introduce effective coverings and define the notion of effective H^s-measure 0.

Let X = 2^N or X = N^N, and let s ≥ 0 be rational. A ⊆ X is Σ⁰₁-H^s-null, written Σ⁰₁-H^s(A) = 0, if there is a recursive sequence (C_n) of enumerable sets such that for each n,

    A ⊆ ∪_{w∈C_n} [w]   and   Σ_{w∈C_n} diam[w]^s < 2^{−n}.

For s = 1, one obtains an effective version of Lebesgue measure λ.

Theorem (Schnorr): A sequence ξ ∈ 2^N is random if and only if {ξ} is not Σ⁰₁-λ-null.
Random Continued Fractions

Does a similar characterization hold for continued fractions?

Theorem (Reimann, Gács): A continued fraction α is not Σ⁰₁-λ-null if and only if

    sup_n { −log λ[a_1, ..., a_n] − K(a_1, ..., a_n) } < ∞.
Random Continued Fractions

Is the real represented by a random binary sequence (via its dyadic expansion) also a random continued fraction (in terms of Σ⁰₁-measure)?

Problem: The continued fraction expansion might code things more efficiently.
Random Continued Fractions

Surprisingly, it does not.

Theorem (Lochs): For ξ ∈ 2^N, denote by π_n(ξ) the number of partial quotients a_i of the continued fraction expansion of ξ obtained from the first n digits of ξ. Then for almost every ξ,

    lim_{n→∞} π_n(ξ)/n = 6 log²2 / π².

Lochs' theorem holds effectively, that is, for any random ξ. The proof requires some effort to avoid the use of the ergodic theorem (which is not an effective law of probability).
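Lochs' constant 6 log²2/π² ≈ 0.292 can be estimated numerically: the first n bits of ξ pin it down to a dyadic interval, and the partial quotients determined by that interval are (up to edge effects) those shared by points just inside its two endpoints, since continued fraction cylinders are intervals. A sketch; the inward nudge eps, seed, and n are arbitrary choices, so this is an approximation of π_n, not an exact computation:

```python
import random
from fractions import Fraction
from math import log, pi

def cf_terms(x: Fraction, limit: int):
    """Partial quotients of x, at most `limit` of them."""
    a = []
    while len(a) < limit:
        a0 = x.numerator // x.denominator
        a.append(a0)
        if x == a0:
            break
        x = 1 / (x - a0)
    return a

random.seed(2)
n = 4000
k = random.getrandbits(n)                # first n binary digits of xi
eps = Fraction(1, 2**(n + 60))           # nudge endpoints inward (approximation)
left = Fraction(k, 2**n) + eps
right = Fraction(k + 1, 2**n) - eps
ca, cb = cf_terms(left, 2000), cf_terms(right, 2000)
pi_n = 0
while pi_n < min(len(ca), len(cb)) and ca[pi_n] == cb[pi_n]:
    pi_n += 1
ratio = pi_n / n                         # should be near 6 log(2)**2 / pi**2
assert abs(ratio - 6 * log(2)**2 / pi**2) < 0.05
```

Roughly 0.29 continued fraction terms per binary digit: the two expansions carry information at essentially the same rate, which is why binary randomness transfers to the continued fraction.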
Random Continued Fractions

Every random continued fraction satisfies (1/n) log q_n → π²/(12 log 2).

The proof also yields that a random c.f. α = [a_1, a_2, ...] must have arbitrarily large partial quotients a_i.

Theorem: An irrational α is badly approximable if and only if its continued fraction expansion is bounded.

Hence, from our point of view, badly approximable numbers must be compressible.
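The dichotomy is visible in concrete expansions: √2 has bounded partial quotients [1; 2, 2, 2, ...], while e's expansion [2; 1, 2, 1, 1, 4, 1, 1, 6, ...] contains arbitrarily large ones. A sketch computing both from high-precision rational stand-ins (the series and precision cutoffs are arbitrary choices):

```python
from fractions import Fraction
from math import isqrt, factorial

def cf_terms(x: Fraction, n: int):
    """First n partial quotients of x."""
    a = []
    for _ in range(n):
        a0 = x.numerator // x.denominator
        a.append(a0)
        if x == a0:
            break
        x = 1 / (x - a0)
    return a

sqrt2 = Fraction(isqrt(2 * 10**120), 10**60)            # sqrt(2) to ~60 digits
e = sum(Fraction(1, factorial(k)) for k in range(60))   # e to ~80 digits

assert cf_terms(sqrt2, 12) == [1] + [2] * 11            # bounded: badly approximable
assert cf_terms(e, 12) == [2, 1, 2, 1, 1, 4, 1, 1, 6, 1, 1, 8]   # unbounded pattern
```

Neither expansion looks like that of a random continued fraction: √2's is eventually periodic and e's follows a simple arithmetic pattern, so both are highly compressible.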
Effective Hausdorff Dimension

Based on effective measure, we can also define effective Hausdorff dimension:

    dim¹ A = inf{ s ≥ 0 : Σ⁰₁-H^s(A) = 0 }.

This yields a notion of Hausdorff dimension for individual sequences.

It turns out that the effective dimension of a set is determined by the dimension of its most complex members:

    dim¹ A = sup_{ξ∈A} dim¹ ξ.

Theorem: For any sequence ξ ∈ 2^N it holds that

    dim¹ ξ = lim inf_{n→∞} K(ξ↾n)/n.
Badly Approximable Numbers

Badly approximable numbers are compressible.

Jarnik's Theorem: Let E_n = {α = [a_0, a_1, ...] ∈ [0,1] : ∀i a_i ≤ n}. Then, for n ≥ 8,

    1 − 4/(n log 2) ≤ dim_H E_n ≤ 1 − 1/(8n log n).

Later, Bumby and Hensley gave good approximations of dim_H E_n for smaller values of n, e.g. dim_H E_2 = 0.53128050...

Using the complexity-theoretic characterization of dimension, we can give a simpler, essentially combinatorial proof.