Density-one Points of $\Pi^0_1$ Classes


University of Wisconsin-Madison
April 30th, 2013
CCR, Buenos Aires

Goal
Joe's and Noam's talks gave us an account of the class of density-one points restricted to the Martin-Löf random reals. Today we will extend this picture by saying a little about how density-one points behave off the randoms.

Dyadic density-one points
We use the symbol $\mu$ to refer exclusively to the standard Lebesgue measure on Cantor space. Given $\sigma \in 2^{<\omega}$ and a measurable set $C \subseteq 2^\omega$, the shorthand $\mu_\sigma(C)$ denotes the relative measure of $C$ in the cone above $\sigma$, i.e.,
$$\mu_\sigma(C) = \frac{\mu([\sigma] \cap C)}{\mu([\sigma])}.$$

Definition. Let $C$ be a measurable set and $X$ a real. The lower dyadic density of $C$ at $X$, written $\rho_2(C \mid X)$, is
$$\liminf_{n \to \infty} \mu_{X \upharpoonright n}(C).$$

Definition. A real $X$ is a dyadic positive density point if for every $\Pi^0_1$ class $C$ containing $X$, $\rho_2(C \mid X) > 0$. It is a dyadic density-one point if for every $\Pi^0_1$ class $C$ containing $X$, $\rho_2(C \mid X) = 1$.
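As a quick computational aside (my own sketch, not part of the talk; all names are illustrative), here is how one can compute $\mu_\sigma(C)$ and the values $\mu_{X \upharpoonright n}(C)$ when $C$ is clopen, presented by finitely many cylinder generators:

    # Sketch (illustrative only): relative measure of a clopen class inside a cone.
    from fractions import Fraction

    def minimal_generators(gens):
        """Drop any generator that already lies inside another generator's cylinder."""
        gens = set(gens)
        return [g for g in gens
                if not any(h != g and g.startswith(h) for h in gens)]

    def mu_sigma(sigma, gens):
        """Relative measure, inside the cone [sigma], of the open set generated by gens."""
        total = Fraction(0)
        for g in minimal_generators(gens):
            if sigma.startswith(g):      # [sigma] is entirely inside [g]
                return Fraction(1)
            if g.startswith(sigma):      # [g] is a subcone of [sigma]
                total += Fraction(1, 2 ** (len(g) - len(sigma)))
        return total

    # mu_{X|n}(C) for n = 0, ..., 5 with C = [1] and X = 1000...:
    X = "1" + "0" * 20
    print([mu_sigma(X[:n], ["1"]) for n in range(6)])   # relative measures 1/2, 1, 1, 1, 1, 1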

Full density-one points
Our interest in density-one points originally developed in the setting of the real line. They arose in the study of effective versions of the Lebesgue Density Theorem, in the following form:

Definition. Let $C$ be a measurable subset of $\mathbb{R}$ and $x \in \mathbb{R}$. The lower (full) density of $C$ at $x$, written $\rho(C \mid x)$, is
$$\liminf_{\gamma,\delta \to 0^+} \frac{\mu((x - \gamma, x + \delta) \cap C)}{\gamma + \delta}.$$

Definition. We say $x \in [0,1]$ is a positive density point if for every $\Pi^0_1$ class $C \subseteq [0,1]$ containing $x$, $\rho(C \mid x) > 0$. It is a (full) density-one point if for every $\Pi^0_1$ class $C \subseteq [0,1]$ containing $x$, $\rho(C \mid x) = 1$.
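To see concretely how the two densities can disagree on a single class (a standard toy example of my own, not from the talk): identify $2^\omega$ with $[0,1]$ via binary expansion, let $C = [1]$, the clopen (hence $\Pi^0_1$) class of sequences beginning with 1, corresponding to $[1/2, 1]$, and let $X = 10^\omega$, i.e., $x = 1/2$. For every $n \ge 1$ we have $[X \upharpoonright n] \subseteq C$, so $\mu_{X \upharpoonright n}(C) = 1$ and $\rho_2(C \mid X) = 1$. On the other hand, $\mu((x - \gamma, x + \delta) \cap C) = \delta$, so letting $\delta \to 0$ much faster than $\gamma$ shows $\rho(C \mid x) = 0$.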

On the Martin-Löf randoms
Recall two results from Joe's talk:

Theorem (Bienvenu, Hölzl, Miller, Nies). If $X$ is Martin-Löf random, then $X$ is a positive density point if and only if it is incomplete.

Theorem (Day, Miller). There is a Martin-Löf random real that is a positive density point (hence incomplete) but not a density-one point.

Some observations
- Dyadic positive density points (and hence full positive density points) are Kurtz random.
- 1-generics are full density-one points.
- Not being a full density-one point is a $\Pi^0_2$ property. Therefore, all weak 2-random reals are full density-one points. Note that any hyperimmune-free Kurtz random is weak 2-random (Yu).
- The two halves of a dyadic density-one point are dyadic density-one. In fact, any computable sampling of a dyadic density-one point is a dyadic density-one point. Likewise for full density-one points.
- There is a Kurtz random real that is not Martin-Löf random and not a density-one point: consider $\Omega \oplus G$, where $G$ is weakly 2-generic.

The resulting picture
[Diagram on the slide summarizing the classes above; the only label preserved in this transcript is "Complete, density 0".]

Dyadic density-one vs full density-one
It's easy to exhibit a specific $C$ and an $X$ such that $\rho_2(C \mid X) \neq \rho(C \mid X)$. But is this discrepancy eliminated if we require that for every $\Pi^0_1$ class $C$ containing $X$, $\rho_2(C \mid X) = 1$? In other words, are dyadic density-one points the same as full density-one points?

On the Martin-Löf randoms, yes:

Theorem (K., Miller). Let $X$ be Martin-Löf random. Then $X$ is a dyadic density-one point if and only if it is a full density-one point.

Some amount of randomness is necessary:

Proposition (K.). There is a dyadic density-one point that is not a full density-one point.

We build a dyadic density-one point $Y$ by computable approximation, while building a $\Sigma^0_1$ class $B$ such that $\rho(\overline{B} \mid Y) < 1$.

Dyadic density-one vs full density-one (contd.)
The basic idea: put $[\sigma 0 1^j]$ into $B$. We shall be free to choose $j$ as large as we want. Note that $[\sigma]$ is the smallest dyadic cone containing $Y$ that can see $[\sigma 0 1^j]$, the hole that this creates in $\overline{B}$, and relative to $\sigma$, this hole appears small. However, on the real line, at a certain scale around $Y$, the hole is quite large.
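To quantify both claims (my own gloss, reading the picture as having $Y$ pass just to the right of the hole, say $\sigma 1 0^m \prec Y$ with $m \gg j$): let $n = |\sigma|$. Dyadically, $\mu_\sigma([\sigma 0 1^j]) = 2^{-(j+1)}$, which is as small as we like once $j$ is large. On the real line, the hole is the interval of length $2^{-(n+j+1)}$ abutting the midpoint of $I_\sigma$ from the left, and $Y$ lies within $2^{-(n+m+1)}$ of that midpoint on the right; so for $\gamma = 2^{-(n+j+1)} + 2^{-(n+m+1)}$ and small $\delta > 0$, the interval $(Y - \gamma, Y + \delta)$ contains the hole and $\mu_{(Y-\gamma,\,Y+\delta)}(B) \ge 2^{-(n+j+1)}/(\gamma + \delta)$, which is close to 1 when $m \gg j$ and $\delta$ is tiny. A drop at a single scale does not by itself force $\rho(\overline{B} \mid Y) < 1$, which is why the next slide places such holes infinitely often along $Y$.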

Dyadic density-one vs full density-one (contd.)
We want to place these holes infinitely often along $Y$, and this constitutes one type of requirement. Making $Y$ a dyadic density-one point amounts to ensuring that for each $\Sigma^0_1$ class $[W_e]$, either
(1) $Y \in [W_e]$, or
(2) the relative measure of $[W_e]$ along $Y$ goes to 0.
The basic strategy for meeting a density requirement is to reroute $Y$ to enter $[W_e]$ if its measure becomes too big above some initial segment of $Y_s$. To make this play well with our hole-placing strategy, we keep the measure of $B$ above initial segments of $Y_s$ very small. Then if $[W_e]$ becomes big enough, we can enter it while keeping $B$ very small along $Y_s$.

Dyadic density-one vs full density-one (contd.)
The Dyadic Density Drop Covering Lemma makes this intuition precise:

Lemma. Suppose $B \subseteq 2^\omega$ is open. For any $\varepsilon$ such that $\mu(B) \le \varepsilon \le 1$, let $U_\varepsilon(B)$ denote the set $\{X \in 2^\omega : \mu_\rho(B) \ge \varepsilon \text{ for some } \rho \prec X\}$. Then $U_\varepsilon(B)$ is open and $\mu(U_\varepsilon(B)) \le \mu(B)/\varepsilon$.

(Joe proved this in his talk.) The lemma tells us exactly how small we have to keep $B$ along $Y_s$ to make it possible to act for multiple density requirements. Each time we reroute $Y_s$ to enter a $\Sigma^0_1$ class, we get a little closer to $B$, but we still remain far enough away that we can act on behalf of another, higher-priority density requirement if the need arises.
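For completeness, a sketch of the standard argument (my summary; Joe's talk has the details): let $A$ be the set of strings $\rho$ with $\mu_\rho(B) \ge \varepsilon$ that are minimal with this property. Distinct elements of $A$ are incomparable, so the cylinders $[\rho]$, $\rho \in A$, are pairwise disjoint and $U_\varepsilon(B) = \bigcup_{\rho \in A} [\rho]$ is open. Then
$$\mu(B) \;\ge\; \sum_{\rho \in A} \mu(B \cap [\rho]) \;=\; \sum_{\rho \in A} \mu_\rho(B)\,\mu([\rho]) \;\ge\; \varepsilon \sum_{\rho \in A} \mu([\rho]) \;=\; \varepsilon\,\mu(U_\varepsilon(B)).$$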

Dyadic density-one vs full density-one (contd.)
Interleave hole-placing requirements with density requirements by progressively building a better and better approximation to a dyadic density-one point. Formally, to meet the requirement $D_{e,k}$ between $\sigma$ and $\sigma'$, where $\sigma \preceq \sigma' \prec Y$, is to ensure that either $[\sigma'] \subseteq [W_e]$, or the measure of $[W_e]$ between $\sigma$ and $\sigma'$ is bounded by $2^{-k}$ (i.e., for every $\tau$ with $\sigma \preceq \tau \preceq \sigma'$, $\mu_\tau([W_e]) \le 2^{-k}$).

We organize the construction as follows: $D_{e,k}$ has higher priority than $D_{e',k}$ for $e' > e$. Above $\sigma_{k,s}$, we only act for the sake of $D_{e,k}$ if we haven't acted for the sake of a higher-priority density requirement above $\sigma_{k,s}$.

In sum, we have a finite-injury priority construction in which, for each $e$, cofinitely many of the $D_{e,k}$ requirements are satisfied. There are some details to work out, but they're routine.

Computational strength
1-generics are $\mathrm{GL}_1$, and therefore incomplete. By the theorem of Bienvenu et al., Martin-Löf random density-one points are also incomplete. But in general, density-one points can be complete. In fact, every real is computable from a full density-one point:

Theorem (K.). For every $X \in 2^\omega$, there is a full density-one point $Y$ such that $X \le_T Y \le_T X \oplus 0'$.

Because dyadic density is so much easier to work with, I'll first sketch the proof of the result for dyadic density. Even though the statement of the theorem bears a superficial resemblance to the Kučera-Gács Theorem, the method is different. For one thing, there is no $\Pi^0_1$ class consisting exclusively of density-one points. Also note that we don't get a wtt-reduction as in the Kučera-Gács Theorem.

Computational strength (contd.)
Basic idea: combine the density strategy of the previous proof with coding, on a tree. By computable approximation, we build a $\Delta^0_2$ function tree $F : 2^{<\omega} \to 2^{<\omega}$ and a functional $\Gamma$ such that for every $\sigma \in 2^{<\omega}$, $\Gamma^{F(\sigma)} = \sigma$.
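To fix intuitions about what such a pair $(F, \Gamma)$ looks like, here is a toy sketch in Python (my own illustration; the names and the fixed coding positions are not from the talk, and the real construction chooses and moves its coding locations dynamically so that $F(\sigma)$ also avoids the relevant $\Sigma^0_1$ classes):

    def F(sigma: str) -> str:
        """Toy function tree: code each bit of sigma and pad it with a 0."""
        return "".join(b + "0" for b in sigma)

    def Gamma(tau: str) -> str:
        """Toy decoder: read off the bits at the even (coding) positions."""
        return tau[0::2]

    for sigma in ["", "0", "1", "0110", "10101"]:
        assert Gamma(F(sigma)) == sigma              # Gamma^{F(sigma)} = sigma
        assert F(sigma + "1").startswith(F(sigma))   # F is monotone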

Computational strength (contd.)
To set $F_s(\sigma) = \tau$ at stage $s$ is to code $\sigma$ at the string $\tau$. We need to ensure that we can always do this in a consistent manner. There are two ways this could go wrong: $\tau$ codes incorrectly (i.e., $\Gamma^\tau$ is incompatible with $\sigma$), or $\tau$ codes too much (i.e., $\Gamma^\tau$ properly extends $\sigma$). For example: we cannot route $F_s(1)$ through the current or previous values of $F_s(10)$, $F_s(11)$, and $F_s(0)$.

Computational strength (contd.)
In general, for every nonempty string $\sigma$, there is a $\Sigma^0_1$ class $B_{\sigma,s}$ that $F_s(\sigma)$ must avoid, and a threshold $\beta_{\sigma,s}$ below which we must keep the measure of $B_{\sigma,s}$ between $F_s(\sigma^-)$ and $F_s(\sigma)$, where $\sigma^-$ is the immediate predecessor of $\sigma$. The strategies must cooperate to maintain this condition. For example, if $\sigma = \alpha 0$, then the strategies controlling $F_s(\sigma 0)$, $F_s(\sigma 1)$ and $F_s(\alpha 1)$, all of which contribute measure to $B_{\sigma,s}$, must maintain the fact that $\mu(B_{\sigma,s})$ remains strictly below $\beta_{\sigma,s}$ between $F_s(\alpha)$ and $F_s(\sigma)$. All of this is completely within our control, since we can code on arbitrarily long strings. For each $X \in 2^\omega$, the construction of $\bigcup_{\sigma \prec X} F(\sigma)$ is again a finite-injury priority construction. The details are easy to work out.

Real line issues
We briefly outline some of the difficulties in transferring the coding theorem for dyadic density-one points to full density-one points. Strategies can no longer restrict their attention to dyadic cones: $[W_e]$ is very small relative to $F_s(\sigma)$, but it poses a threat to the path we're building.

Real line issues (contd.)
We build a tree $\{I_\sigma : \sigma \in 2^{<\omega}\}$ of intervals with dyadic rational endpoints. Suppose $I \subseteq I'$ are intervals in $[0, 1]$ and $C$ is a measurable set. We say that $\mu(C)$ is below $\varepsilon$ between $I$ and $I'$ if for every interval $L$ such that $I \subseteq L \subseteq I'$, $\mu_L(C) < \varepsilon$. In previous proofs, it was easy to chop up density requirements into smaller pieces such that the individual wins added up nicely. This is a little messier on the real line: here $\mu([W_e])$ is below $1/8$ between $I_0$ and $I_1$ and also between $I_1$ and $I_2$, but not between $I_0$ and $I_2$.
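A hypothetical numerical instance of this phenomenon (my own numbers, not those in the slide's picture): take $I_0 = [0.45, 0.55] \subseteq I_1 = [0.2, 0.8] \subseteq I_2 = [0, 1]$ and suppose the part of $[W_e]$ in play is $C = [0.14, 0.2)$, of measure $0.06$, lying just outside $I_1$. Every $L$ between $I_0$ and $I_1$ misses $C$, and every $L$ between $I_1$ and $I_2$ has length at least $0.6$, so $\mu_L(C) \le 0.06/0.6 = 0.1 < 1/8$ in both cases. But $L = [0.14, 0.55]$ lies between $I_0$ and $I_2$, and $\mu_L(C) = 0.06/0.41 \approx 0.146 > 1/8$.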

Real line issues (contd.)
So we watch for density drops on overlapping intervals. At every stage $s$, we maintain a proper subinterval $J_s(\sigma)$ of $I_s(\sigma)$ within which $I_s(\sigma 0)$ and $I_s(\sigma 1)$ reside. When the strategy in control of one of these intervals, say $I_s(\sigma 0)$, acts, it is allowed to move $I_s(\sigma 0)$ outside $J_s(\sigma)$, in which case we expand $J_s(\sigma)$ to an interval $J_{s+1}(\sigma)$ that contains the new interval $I_{s+1}(\sigma 0)$.

Real line issues (contd.)
There is a version of the Density Drop Covering Lemma for the real line:

Lemma (Bienvenu, Hölzl, Miller, Nies). Suppose $B \subseteq [0, 1]$ is open. For any $\varepsilon$ such that $\mu(B) \le \varepsilon \le 1$, let $U_\varepsilon(B)$ denote the set $\{X \in [0, 1] : \text{there is an interval } I \text{ with } X \in I \text{ and } \mu_I(B) \ge \varepsilon\}$. Then $\mu(U_\varepsilon(B)) \le 2\mu(B)/\varepsilon$.

We have to be slightly careful when applying this lemma in our construction. When we relativize it to an interval $L$, we obtain a bound for the measure of $U_\varepsilon(B \cap L)$ within $L$, but in general, we are also concerned about the part of $B$ that lies outside $L$. Fortunately, under the assumptions of the construction, we can obtain a bound for the measure threatened by all of $B$. We skip the details. On to the next topic...
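One standard route to the factor 2 (my sketch, not necessarily the proof from the talk): every $X \in U_\varepsilon(B)$ lies in an open interval $I$ with $\mu_I(B) \ge \varepsilon$; from such a cover one can extract a countable subfamily $\{I_j\}$ that still covers $U_\varepsilon(B)$ and in which no point lies in more than two of the $I_j$ (a classical fact about interval covers). Then
$$\varepsilon \sum_j |I_j| \;\le\; \sum_j \mu(B \cap I_j) \;\le\; 2\,\mu(B), \qquad\text{so}\qquad \mu(U_\varepsilon(B)) \;\le\; \sum_j |I_j| \;\le\; \frac{2\,\mu(B)}{\varepsilon}.$$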

The minimality question
There is a Kurtz random real of minimal degree. For example, every hyperimmune degree contains a weakly 1-generic (hence Kurtz random) set, and there is a minimal hyperimmune degree.

Question. Is there a density-one point of minimal degree?

Of course, 1-generics and 1-randoms cannot be minimal, since for any real $A \oplus B$ with either property, $A$ and $B$ are Turing incomparable. One might hope to prove the same for density-one points.

The minimality question (contd.)
However, we think we can combine the $\Delta^0_2$ construction of a density-one point with building a pair of Turing reductions between the two halves of the real:

Conjecture (K.). There is a density-one point $A \oplus B$ with $A \equiv_T B$.

One approach is to try to show that some kind of computational strength compatible with minimality suffices to compute a density-one point. Even though highness is insufficient to compute a 1-generic, perhaps highness, or even hyperimmunity, implies the ability to compute a density-one point. Note that a minimal density-one point would have to be hyperimmune.

More questions
- Every density-0 Martin-Löf random real is complete. Is there an incomplete Kurtz random real that is also density 0?
- Non-Martin-Löf-random density-one points are hyperimmune. Do such points compute 1-generics? 2-random reals compute 1-generics (Kurtz; Kautz). On the other hand, there is a DNC function relative to $0'$ of minimal degree (K.). Does every DNC function relative to $0'$ compute a density-one point?
- Can a density-one point wtt-compute $0'$?
- Are weakly 1-generics density-one points?
- Do PA degrees compute density-one points?

Thanks!