Entropy dimensions and a class of constructive examples

Sébastien Ferenczi
Institut de Mathématiques de Luminy, CNRS - UMR 6206, Case 907, 163 av. de Luminy, F-13288 Marseille Cedex 9 (France), and Fédération de Recherche des Unités de Mathématiques de Marseille, CNRS - FR 2291
ferenczi@iml.univ-mrs.fr

Kyewon Koh Park
Department of Mathematics, Ajou University, Suwon 442-729, Korea
kkpark@madang.ajou.ac.kr

December 9, 2004

Abstract. Motivated by the study of actions of $\mathbb{Z}^2$ and more general groups, and of their non-cocompact subgroup actions, we investigate entropy-type invariants for deterministic systems. In particular, we define a new isomorphism invariant, the entropy dimension, and look at its behaviour on examples. We also look at other natural notions suitable for processes.

AMS subject classification: 37A35, 37A15. Keywords: ergodic theory, entropy, examples. Abbreviated title: Entropy dimensions.

The authors were supported in part by a joint CNRS/KOSEF Cooperative Research Grant between Korea and France. The second author was also supported in part by grant BK21. The second author would like to thank the Korea Institute for Advanced Study for the pleasant stay while this work was completed.

Let $(X, \mathcal{B}, \mu, \sigma, P)$ be a process, where $\sigma$ denotes an action of a group $G$, and $P = \{P_0, \ldots, P_k\}$ denotes a (finite, measurable) partition of $X$. In the study of a general group action, subgroup actions play an important role: if a $G$-action has positive entropy, it is not hard to see that every non-cocompact subgroup action has infinite entropy (see for example [3]). In the case of a $\mathbb{Z}^2$-action generated by two commuting maps, say $T$ and $S$, if either $h(T)$ or $h(S)$ is finite, the entropy of the $\mathbb{Z}^2$-action is 0. Hence it is increasingly important to study systems of entropy zero, as they may give rise to interesting subgroup actions, and to classify them up to measure-theoretic isomorphism. One way to achieve this goal is to look at the amount of determinism in the system, in a more precise way than is given by the mere knowledge of the entropy. Several refinements of the notion of entropy have been introduced by various authors, such as the slow entropy [5], the measure-theoretic complexity [4], and the entropy convergence rates [1]; following a suggestion of J. Milnor, we propose a new notion, the entropy dimension. Though it seems most promising for actions of groups like $\mathbb{Z}^p$, for simplicity we develop here the basic definitions and examples in the case of $\mathbb{Z}$-actions.

1 Growth rates and names

A first tentative way to define an entropy dimension would be to set
$$D_H(P) = \sup\Big\{0 \le \alpha \le 1 \; ; \; \limsup_{n\to\infty} \frac{1}{n^{\alpha}}\, H\Big(\bigvee_{g=0}^{n-1} T^{-g}P\Big) > 0\Big\}.$$
This can be generalized to $\mathbb{Z}^k$ by taking a join over a suitable part $G_n$ instead of the interval $[0, n]$, and letting $\alpha$ vary from 0 to $k$. However, this does not define an isomorphism invariant, as the following proposition implies that $\sup_{\tilde P} D_H(\tilde P) = 1$:

Proposition 1 For any given $P$ with $D_H(P) < 1$ and any $\delta > 0$, there exists $\tilde P$ such that $|\tilde P - P| < \delta$ and $D_H(\tilde P) > D_H(P)$.

Let $\alpha_0 = D_H(P)$. We choose $\alpha_0 < \alpha < 1$. We build a Rokhlin stack of height $n_1$ such that $n_1^{1-\alpha} \ge 2^{L_1}$ for a very large $L_1$. We may ensure that the distribution of the columns on the base level $B_0$ of the stack is the same as the distribution of $\bigvee_{i=0}^{n_1-1} T^{-i}P$. We divide each column into $2^{n_1^{\alpha}}$ subcolumns of equal measure and change the partition $P$ into $\tilde P_1$ on the first $n_1^{\alpha}$ levels from the bottom, so that each subcolumn has a different $\tilde P_1$-$[0, n_1^{\alpha})$-name. For $x$ and $y$ in $B_0$, their $\tilde P_1$-$[0, n_1)$-names may agree only when their $P$-$[n_1^{\alpha}, n_1)$-names are the same, so the number of different $\tilde P_1$-$[0, n_1)$-names may be smaller than $2^{n_1^{\alpha}}$ times the number of columns. However, there are at least $2^{n_1^{\alpha}}$ different $\tilde P_1$-$[0, n_1)$-names, each one of measure at most $2^{-n_1^{\alpha}}$. Hence
$$\frac{1}{n_1^{\alpha}}\, H\Big(\bigvee_{i=0}^{n_1-1} T^{-i}\tilde P_1\Big) \ge \frac{1}{n_1^{\alpha}}\, 2^{n_1^{\alpha}}\, 2^{-n_1^{\alpha}} \log 2^{n_1^{\alpha}} \pm \epsilon = \log 2 \pm \epsilon,$$

where $\epsilon$ comes from the error set. Also
$$\frac{1}{n_1^{\alpha}}\, H\Big(\bigvee_{i=0}^{n_1-1} T^{-i}\tilde P_1\Big) \le \frac{1}{n_1^{\alpha}} \sum_{\lambda} 2^{n_1^{\alpha}}\, \frac{\lambda}{2^{n_1^{\alpha}}} \log \frac{2^{n_1^{\alpha}}}{\lambda} = \Big(-\frac{1}{n_1^{\alpha}} \sum_{\lambda} \lambda \log \lambda + \frac{1}{n_1^{\alpha}}\, n_1^{\alpha} \log 2\Big) \pm \epsilon = \log 2 \pm \epsilon,$$
where $\lambda$ denotes the measure of a column and we sum over all columns. We note that $|\tilde P_1 - P| < 2^{-L_1}$. Let $E_1$ denote the $n_1^{\alpha}$ levels where $P$ and $\tilde P_1$ may differ. We repeat this for Rokhlin stacks of height $n_k$, where $n_k^{1-\alpha} \ge 2^{L_k}$, for $k = 2, 3, \ldots$. In the $k$-th Rokhlin stack, we choose $n_k^{\alpha}$ many levels in each column and change $\tilde P_{k-1}$ to $\tilde P_k$ on these levels so that there are at least $2^{n_k^{\alpha}}$ many different names for each column. We choose these levels so that their union $E_k$ is disjoint from $\bigcup_{i=1}^{k-1} E_i$. Thus $|\tilde P_k - \tilde P_{k-1}| < 2^{-L_k}$, and we can define $\tilde P = \lim \tilde P_k$. And we have $D_H(\tilde P) \ge \alpha$. Note also that $|\tilde P - P| < 2^{-L_1+1}$, thus $\tilde P$ can be chosen arbitrarily close to $P$. And since each $\tilde P_k$ is measurable with respect to the $\sigma$-algebra generated by $P$, so is $\tilde P$; if $P$ generates a factor $\sigma$-algebra, we can modify it further so that it generates the whole $\sigma$-algebra.

Remark It is possible to define $D_H$ using lower instead of upper limits. Note that if $\alpha = 1$ the construction of $\tilde P$ is not possible.

For a point $x$ in $X$, the $P$-name of $x$ is the sequence $P(x)$ where $P_i(x) = l$ whenever $\sigma^i(x)$ is in $P_l$; we denote by $P_{[0,n)}(x)$ the sequence $P_0(x) \ldots P_{n-1}(x)$. Between $P_{[0,n)}(x)$ and $P_{[0,n)}(y)$ there is the natural Hamming distance, counting the ratio of different coordinates in the names: for two sequences $a = (a_1, \ldots, a_k)$ and $b = (b_1, \ldots, b_k)$ over a finite alphabet, we recall that
$$\bar d(a, b) = \frac{1}{k}\, \#\{i \; ; \; a_i \ne b_i\}.$$
We can define a complexity dimension for a process by
$$\overline{D_0}(P) = \sup\Big\{0 \le \alpha \le 1 \; ; \; \limsup_{n\to\infty} \frac{1}{n^{\alpha}} \log \#\{\text{different } P\text{-}[0,n)\text{-names}\} > 0\Big\},$$
$$\underline{D_0}(P) = \sup\Big\{0 \le \alpha \le 1 \; ; \; \liminf_{n\to\infty} \frac{1}{n^{\alpha}} \log \#\{\text{different } P\text{-}[0,n)\text{-names}\} > 0\Big\}.$$
However it is easy to see, as in the previous case, that this is not an isomorphism invariant. Hence, instead of counting names, we should use the number of $\bar d$-balls around names.

2 Entropy dimensions and subgroup actions

Definition 2 For a point $x \in X$, we define
$$B(x, n, \epsilon) = \{y \in X \; ; \; \bar d(P_{[0,n)}(x), P_{[0,n)}(y)) < \epsilon\}.$$
And let $K(n, \epsilon)$ be the smallest number $K$ such that there exists a subset of $X$ of measure at least $1 - \epsilon$ covered by at most $K$ balls $B(x, n, \epsilon)$. Then
$$\overline D(P, \epsilon) = \sup\Big\{0 \le \alpha \le 1 \; ; \; \limsup_{n\to\infty} \frac{1}{n^{\alpha}} \log K(n, \epsilon) > 0\Big\}.$$

Similarly
$$\underline D(P, \epsilon) = \sup\Big\{0 \le \alpha \le 1 \; ; \; \liminf_{n\to\infty} \frac{1}{n^{\alpha}} \log K(n, \epsilon) > 0\Big\},$$
and
$$\overline D(P) = \lim_{\epsilon \to 0} \overline D(P, \epsilon), \qquad \underline D(P) = \lim_{\epsilon \to 0} \underline D(P, \epsilon), \qquad \overline D = \sup_P \overline D(P), \qquad \underline D = \sup_P \underline D(P).$$
We call $\overline D$, resp. $\underline D$, the upper, resp. lower, entropy dimension of the system $(X, \mathcal{B}, \mu, \sigma)$. If $\overline D = \underline D$, we just call it the entropy dimension and denote it by $D$.

Note that for a $\mathbb{Z}$-action, the entropy dimension may be 1 while the entropy is 0. It is a straightforward consequence of our definition, proved by the same proof as Corollary 1 in [4], that $\overline D(P) = \overline D$ and $\underline D(P) = \underline D$ when $P$ is a generating partition.

We want to investigate the relation between the entropy dimension and the entropy of subgroup actions, particularly in the case of $\mathbb{Z}^2$: if one of the directions has positive entropy, then $K(n, \epsilon)$ grows at least at the rate of $e^{cn}$ and the lower entropy dimension is at least one. Hence, if $\underline D < 1$, then $h(v) = 0$ for every direction $v$, and, moreover, the cone entropy [2] has the property that $h_c(v) = h(v) = 0$. The converse is not true: Katok and Thouvenot [5] provide an example where the upper entropy dimension is arbitrarily close to 2 while the directional entropy is 0 for almost all directions; note that in this example the upper and lower entropy dimensions do not agree. We recall that there exists an example in [7] where $h(\sigma_{(1,0)}) > 0$ while all the remaining directional entropies (including the irrational directions) are 0; this $\mathbb{Z}^2$-action clearly has entropy dimension equal to 1. In the well-known example of Ledrappier ([6]), the entropy dimension is 1 and every directional entropy is positive. By making a direct product of countably many copies of that example, we can build a $\mathbb{Z}^2$-action whose entropy dimension is 1 and every direction has infinite entropy, because of the following lemma, which holds also for countable products:

Lemma 3 $D(\sigma \times \tau) = \max(D(\sigma), D(\tau))$.

The entropy dimension of $\sigma \times \tau$ may be computed by taking only partitions of the form $P \times Q$. But then for these partitions $B((x, y), n, 2\epsilon)$ contains $B(x, n, \epsilon) \times B(y, n, \epsilon)$ (respectively for $P$ and $Q$) and is included in $B(x, n, 2\epsilon) \times B(y, n, 2\epsilon)$, which yields the result.
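As an illustration (not part of the original paper), the following Python sketch computes the normalized Hamming distance $\bar d$ and a greedy upper bound on the number of $\epsilon$-$\bar d$-balls needed to cover a finite list of words. It is only an empirical stand-in for $K(n, \epsilon)$, which is defined through the measure $\mu$; the word families and the values of $\epsilon$ are toy choices.

```python
import itertools

def dbar(a, b):
    """Normalized Hamming distance between two names of equal length."""
    return sum(x != y for x, y in zip(a, b)) / len(a)

def greedy_cover(names, eps):
    """Greedy covering of a finite list of names by eps-dbar-balls: every name
    ends up within eps of some retained center, so len(centers) is an upper
    bound on the minimal number of balls needed for this list."""
    centers = []
    for w in names:
        if all(dbar(w, c) >= eps for c in centers):
            centers.append(w)
    return len(centers)

# Toy comparison: all binary words of length 8 versus words made of a length-4
# block repeated twice; the repeated words need far fewer balls.
n = 8
all_words = ["".join(w) for w in itertools.product("01", repeat=n)]
repeated = ["".join(w) * 2 for w in itertools.product("01", repeat=n // 2)]
for eps in (0.2, 0.4):
    print(eps, greedy_cover(all_words, eps), greedy_cover(repeated, eps))
```

Doubling a block leaves all $\bar d$-distances unchanged, so the repeated family needs no more balls than the length-4 words themselves; this preservation of distances under repetition is the mechanism exploited by the repetition stages of the construction in the next section.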

3 Examples of entropy dimensions

We define inductively a family of blocks $B_{n,i}$, $1 \le i \le b_n$, in the following way, given two sequences of positive integers $e_n$ and $r_n$: $b_0 = k$ and $B_{0,i} = i$ for $1 \le i \le k$; $b_{n+1} = (b_n)^{e_n}$ and the $B'_{n,i}$, $1 \le i \le b_{n+1}$, are all the possible concatenations of $e_n$ blocks $B_{n,j}$; for each $1 \le i \le b_{n+1}$, $B_{n+1,i}$ is a concatenation of $r_n$ copies of $B'_{n,i}$. Let $h_n$ be the length of the $B_{n,i}$, $h'_n$ be the length of the $B'_{n,i}$.

We can thus define a topological system as the shift on the set of sequences $\{x_n, n \in \mathbb{Z}\}$ such that for every $s < t$ there exist $n$ and $i$ such that $x_s \ldots x_t$ is a subword of $B_{n,i}$. We put an invariant measure on it by giving to each block $B_{n,i}$ the measure $\frac{1}{b_n}$. We denote by $P$ the natural partition into $k$ sets given by the zero coordinate.

The above construction is well known to ergodic theory specialists, and a generalization of it to $\mathbb{Z}^2$-actions is used in [5]; however, even its one-dimensional version can yield new types of counter-examples. This system will be referred to in the sequel as the standard example.
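As a concrete illustration (not from the paper), here is a minimal Python sketch of this construction, with the blocks represented as strings over the alphabet {0, 1}; the parameters e_n and r_n below are arbitrary toy values.

```python
import itertools

def standard_example_blocks(alphabet, e, r):
    """Blocks B_{n,i} of the standard example for len(e) stages; e[n] and r[n]
    are the concatenation and repetition parameters of stage n."""
    blocks = [list(alphabet)]                    # level 0: B_{0,i} = i, b_0 = k
    for n in range(len(e)):
        # independent stacking: the B'_{n,i} are all concatenations of e[n] level-n blocks
        primed = ["".join(c) for c in itertools.product(blocks[n], repeat=e[n])]
        # repetition: B_{n+1,i} is the concatenation of r[n] copies of B'_{n,i}
        blocks.append([w * r[n] for w in primed])
    return blocks

blocks = standard_example_blocks("01", e=[2, 2], r=[3, 2])
print([len(level) for level in blocks])      # b_n: [2, 4, 16]  (b_{n+1} = b_n**e[n])
print([len(level[0]) for level in blocks])   # h_n: [1, 6, 24]  (h_{n+1} = e[n]*r[n]*h_n)
```

The printed values illustrate the recursions $b_{n+1} = (b_n)^{e_n}$, $h'_n = e_n h_n$ and $h_{n+1} = r_n h'_n$ that are used repeatedly in the proofs below.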

Proposition 4 There is a choice of parameters such that the standard example satisfies $\overline D = 1$, $\underline D = 0$.

Lower limit. For any $\epsilon$, $K(h_{n+1}, \epsilon)$ is smaller than the total number of $P$-$[0, h_{n+1})$-names. The possible names of length $h_{n+1}$ are all the $W_{n+1}(a, i, j)$ where, for $0 \le a < h_{n+1}$, $1 \le i \le b_{n+1}$, $1 \le j \le b_{n+1}$, $W_{n+1}(a, i, j)$ is the suffix of length $a$ of $B_{n+1,i}$ followed by the prefix of length $h_{n+1} - a$ of $B_{n+1,j}$. Hence their number is at most $h_{n+1}\, b_{n+1}^2$, with $b_{n+1}$ as above and $h_{n+1} = e_0 \cdots e_n\, r_0 \cdots r_n$. If, $e_0, \ldots, e_n, r_0, \ldots, r_{n-1}$ being fixed, we choose $r_n$ large enough, we shall have $\log K(h_{n+1}, \epsilon) \le (h_{n+1})^{\delta_n}$ for any given sequence $\delta_n$ tending to 0.

Upper limit. Let $L'_n(\epsilon)$ be the number of $\epsilon$-$\bar d$-balls that can be made with blocks $B'_n$. Note that, on an alphabet of $k$ letters, for a given word $w$ of length $m$, the number of words $w'$ with $\bar d(w, w') < \epsilon$ is at most $\binom{m}{m\epsilon} k^{m\epsilon} \le k^{m g(\epsilon)}$ for some $g(\epsilon) \to 0$ when $\epsilon \to 0$. In the above construction, the number of different blocks $B'_n$ is $b_{n+1} = k^{e_0 \cdots e_n}$. As in every one of these blocks the repetitions occur exactly at the same places, for a given word $B'_{n,i}$, the number of words $B'_{n,j}$ with $\bar d(B'_{n,i}, B'_{n,j}) < \epsilon$ is at most $\binom{e_0 \cdots e_n}{e_0 \cdots e_n \epsilon} k^{e_0 \cdots e_n \epsilon}$. Hence $L'_n(\epsilon) \ge k^{e_0 \cdots e_n (1 - g(\epsilon))}$. As all different blocks are given the same measure, we have $K(h'_n, \epsilon) \ge (1 - \epsilon) L'_n(\epsilon)$. As $h'_n = e_0 \cdots e_n\, r_0 \cdots r_{n-1}$, if, $e_0, \ldots, e_{n-1}, r_0, \ldots, r_{n-1}$ being fixed, we choose $e_n$ large enough, we shall have $\log K(h'_n, \epsilon) \ge (h'_n)^{1 - \delta_n}$ for any given sequence $\delta_n$ tending to 0.

Proposition 5 For any $0 < \alpha < 1$, there is a choice of parameters such that the standard example satisfies $D = \alpha$.

We make the proof for $\alpha = \frac{1}{2}$. We define a sequence $l_n$ by choosing a very large $l_1$, then $l_n = l_{n-1}\,[l_{n-1}^{1/(2(n-1))}]\,[l_{n-1}^{1/(2(n-1))}]$ for $n \ge 2$; then, in the standard construction, starting from the two $0$-blocks $0$ and $1$, we put $e_0 = r_0 = [\sqrt{l_1}]$, and, for $n \ge 1$, $e_n = r_n = [l_n^{1/(2n)}]$. $[x]$ denotes the integer part of $x$, but in the following computations we shall assimilate a large enough $x$ with its integer part. Then $h_n = l_n$ for every $n \ge 1$; the lower limit is reached along the sequence $\{h_n\} = \{l_n\}$ and the upper limit along the sequence $\{h'_n\} = \{l_n^{1 + 1/(2n)}\}$.

Lower limit. As in the second part of the proof of the last proposition, $K(h_n, \epsilon)$ is at least $(1 - \epsilon)$ times the number $L_n(\epsilon)$ of $\epsilon$-$\bar d$-balls that can be made with blocks $B_n$. Because of the repetitions, and the computation in the last proposition,
$$L_n(\epsilon) \ge L'_{n-1}(\epsilon) \ge 2^{e_0 \cdots e_{n-1}(1 - g(\epsilon))}.$$
So we have only to compute
$$\lim_{n\to\infty} \frac{1}{\sqrt{l_n}} \log\big(2^{e_0 \cdots e_{n-1}}\big) = \log 2\, \lim_{n\to\infty} \frac{e_0 \cdots e_{n-1}}{\sqrt{l_n}} = \log 2,$$
since $l_n = h_n = (e_0 \cdots e_{n-1})(r_0 \cdots r_{n-1}) = (e_0 \cdots e_{n-1})^2$. Hence $\underline D \ge \frac{1}{2}$.

Upper limit. As in the first part of the proof of the last proposition, $K(h'_n, \epsilon)$ is smaller than the total number of $P$-$[0, h'_n)$-names, and this is at most $b_{n+1}^2\, h'_n$. We take some $\beta > \frac{1}{2}$; then
$$\limsup_{n\to\infty} \frac{1}{(h'_n)^{\beta}} \log\big(b_{n+1}^2\, h'_n\big) = 2 \limsup_{n\to\infty} \frac{1}{(h'_n)^{\beta}} \log b_{n+1} = 2 \log 2\, \limsup_{n\to\infty} \frac{e_0 \cdots e_n}{(h'_n)^{\beta}} = 2 \log 2\, \limsup_{n\to\infty} \frac{l_n^{\frac{1}{2} + \frac{1}{2n}}}{l_n^{\beta(1 + \frac{1}{2n})}} = 0.$$
Hence $\overline D \le \frac{1}{2}$, which gives what we claimed.

The general case (for a given $\alpha$) follows with the same proof, by taking $l_n = l_{n-1}\,[l_{n-1}^{\alpha/(n-1)}]\,[l_{n-1}^{(1-\alpha)/(n-1)}]$ for $n \ge 2$, $e_n = [l_n^{\alpha/n}]$, $r_n = [l_n^{(1-\alpha)/n}]$.
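The scaling behind Proposition 5 can be checked with a back-of-the-envelope computation (added here for illustration; these are not the paper's exact parameter sequences, which are further tuned so that the upper and lower limits agree). By the two counting arguments above, $\log K(h_n, \epsilon)$ is, up to bounded factors and lower-order terms, of the order of $e_0 \cdots e_{n-1}$, so choosing $r_m$ of the order of $e_m^{(1-\alpha)/\alpha}$ makes the observed exponent at the times $h_n$ approach $\alpha$:

```python
import math

def exponents_along_h(alpha, e):
    """Choose r_m ~ e_m**((1-alpha)/alpha); then e_0...e_{n-1} ~ (h_n)**alpha,
    so the ratio log(e_0...e_{n-1}) / log(h_n) observed at the times h_n
    should approach alpha (toy parameters, not the paper's exact sequences)."""
    log_E = 0.0   # log(e_0 ... e_m)
    log_h = 0.0   # log(h_{m+1}) = log((e_0...e_m)(r_0...r_m))
    exps = []
    for e_m in e:
        r_m = max(1, round(e_m ** ((1 - alpha) / alpha)))
        log_E += math.log(e_m)
        log_h += math.log(e_m) + math.log(r_m)
        exps.append(log_E / log_h)
    return exps

print(exponents_along_h(0.5, [10, 20, 40, 80]))    # ~[0.5, 0.5, 0.5, 0.5]
print(exponents_along_h(0.25, [10, 20, 40, 80]))   # ~[0.25, 0.25, 0.25, 0.25]
```

For $\alpha = \frac{1}{2}$ this is just the choice $e_m = r_m$ made in the proof above.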

The above examples can be generalized to $\mathbb{Z}^2$-actions; by alternating repetitions and independent stackings, we can build an example whose entropy dimension is any given $0 \le \alpha \le 2$.

In [4], where the rate of growth of $K(n, \epsilon)$ is used to define the so-called measure-theoretic complexity, it is asked whether this growth rate can be unbounded but smaller than $O(n)$ (its topological version for symbolic systems, the symbolic complexity, has to be bounded if it is smaller than $n$). Our class of examples allows us to answer this question; note that the proofs are slightly more involved, as we are dealing with sub-exponential growths:

Proposition 6 For any given function $\varphi$ growing to infinity with $n$, there is a choice of parameters such that the standard example satisfies, for every fixed $\epsilon$ small enough, $K(n, \epsilon) \to +\infty$ with $n$, but $K(n, \epsilon) \le \varphi(n)$ for all $n$.

Upper bounds. We give upper bounds for $K' \ge K$, where $K'(n, \epsilon)$ is the smallest number of $\epsilon$-$\bar d$-balls of names of length $n$ necessary to cover a proportion of the space of measure 1. We look at $K'$ at the end of its times of maximal growth, namely $K'(h'_n, \epsilon)$. The possible words of length $h'_n$ are all the $W'_n(a, i, j)$ where, for $0 \le a < h'_n$, $1 \le i \le b_{n+1}$, $1 \le j \le b_{n+1}$, $W'_n(a, i, j)$ is the suffix of length $a$ of $B'_{n,i}$ followed by the prefix of length $h'_n - a$ of $B'_{n,j}$. Each one of these words is at a $\bar d$-distance at most $\epsilon$ of some $W'_n(a_s, i_s, j_s)$, for $1 \le s \le K'(h'_n, \epsilon)$.

We look now at words of length $h_{n+1}$; they are all the $W_{n+1}(a, i, j)$ where, for $0 \le a < h_{n+1}$, $1 \le i \le b_{n+1}$, $1 \le j \le b_{n+1}$, $W_{n+1}(a, i, j)$ is the suffix of length $a$ of $B_{n+1,i}$ followed by the prefix of length $h_{n+1} - a$ of $B_{n+1,j}$. Hence for $0 \le t < r_n$, $0 \le a < h'_n$, $1 \le i \le b_{n+1}$, $1 \le j \le b_{n+1}$,
$$W_{n+1}(a + t h'_n, i, j) = W'_n(a, i, i)^{t}\, W'_n(a, i, j)\, W'_n(a, j, j)^{r_n - t - 1}.$$

Each one of these will be at a $\bar d$-distance at most $\epsilon + \frac{1}{r_n}$ of some $W'_n(a_s, i_s, i_s)^{t+1}\, W'_n(a_s, j_s, j_s)^{r_n - t - 1}$; and, for fixed $s$, $W'_n(a_s, i_s, i_s)^{t+1}\, W'_n(a_s, j_s, j_s)^{r_n - t - 1}$ and $W'_n(a_s, i_s, i_s)^{t'+1}\, W'_n(a_s, j_s, j_s)^{r_n - t' - 1}$ are at a $\bar d$-distance at most $\frac{|t - t'|}{r_n}$. Hence, for a given sequence $v_n$, we have
$$K'\Big(h_{n+1}, \epsilon\big(1 + \tfrac{1}{\epsilon r_n} + v_n\big)\Big) \le K'(h'_n, \epsilon)\, \frac{2}{\epsilon v_n}.$$
Then, during the stage of independent stacking, a straightforward computation gives that
$$K'(h'_{n+1}, \epsilon) \le K'(h_{n+1}, \epsilon)^{e_{n+1}}.$$
If we fix the sequence $e_n$, and suppose $\sum_n \frac{1}{r_n} < +\infty$, we choose any sequence $v_n$ such that $\sum_n v_n < +\infty$; then, if we choose $r_n$ large enough in terms of $K'(h'_n, \epsilon)$, $h'_n$ and $e_n$, we get that $K'(h_{n+1}, 2\epsilon)$ is smaller than $\varphi(h_{n+1})$, and this is true a fortiori for other values.

Lower bounds. We shall show that $K(n, \epsilon) \to +\infty$ with $n$. For this, let $L_n(\epsilon)$ be the number of $\epsilon$-$\bar d$-balls that can be made with blocks $B_n$, and $L'_n(\epsilon)$ be the number of $\epsilon$-$\bar d$-balls that can be made with blocks $B'_n$. During the repetition stage, we have $L_{n+1}(\epsilon) \ge L'_n(\epsilon)$. Then, during the independent stage, we start from $L = L_n(\epsilon)$ blocks which are $\epsilon$-$\bar d$-separated; we call them $B_{n,s_1}, \ldots, B_{n,s_L}$. Then, if $e_n$ is a multiple of $L$, the $2L$ blocks
$$B_{n,s_i}^{e_n}, \ 1 \le i \le L, \qquad\text{and}\qquad B_{n,s_i}^{e_n/L}\, B_{n,s_{i+1}}^{e_n/L} \cdots B_{n,s_L}^{e_n/L}\, B_{n,s_1}^{e_n/L} \cdots B_{n,s_{i-1}}^{e_n/L}, \ 1 \le i \le L,$$
are $\epsilon(1 - \frac{1}{L})$-$\bar d$-separated. Thus, whenever $e_n$ is large compared to $L_n(\epsilon)$, we have
$$L_{n+1}\Big(\epsilon\big(1 - \tfrac{1}{L_n(\epsilon)}\big)\Big) \ge 2 L_n(\epsilon),$$
and hence $L_n(\frac{\epsilon}{2})$ tends to infinity with $n$; and, because of the structure of the names and the fact that each block $B_{n,i}$ has the same measure for fixed $n$, we get that $K(h_n, \epsilon)$ tends to infinity with $n$.

Remarks. To make our examples weakly mixing, it is enough to place a spacer between two consecutive blocks at each repetition stage. It is easy to see that all our examples satisfy a form of the Shannon-McMillan-Breiman theorem (indeed, all atoms have the same measure); in a forthcoming paper, we shall give examples which do not satisfy it.

References

[1] F. BLUME: Possible rates of entropy convergence, Ergodic Theory Dynam. Systems 17 (1997), no. 1, 45-70.

[2] R. BURTON, K. K. PARK: Spatial determinism for a $\mathbb{Z}^2$-action, preprint.

[3] J.-P. CONZE: Entropie d'un groupe abélien de transformations (in French), Z. Wahrscheinlichkeitstheorie und Verw. Gebiete 25 (1972/73), 11-30.

[4] S. FERENCZI: Measure-theoretic complexity of ergodic systems, Israel J. Math. 100 (1997), 189-207.

[5] A. KATOK, J.-P. THOUVENOT: Slow entropy type invariants and smooth realization of commuting measure-preserving transformations, Ann. Inst. H. Poincaré Probab. Statist. 33 (1997), no. 3, 323-338.

[6] F. LEDRAPPIER: Un champ markovien peut être d'entropie nulle et mélangeant (in French), C. R. Acad. Sci. Paris Sér. A-B 287 (1978), no. 7, A561-A563.

[7] K. K. PARK: On directional entropy functions, Israel J. Math. 113 (1999), 243-267.