ULAM'S METHOD FOR SOME NON-UNIFORMLY EXPANDING MAPS


RUA MURRAY

Abstract. Certain dynamical systems on the interval with indifferent fixed points admit invariant probability measures which are absolutely continuous with respect to Lebesgue measure. These maps are often used as a model of intermittent dynamics, and they exhibit sub-exponential decay of correlations (due to the absence of a spectral gap in the underlying transfer operator). This paper concerns a class of these maps which are expanding (with convex branches), but admit an indifferent fixed point with tangency of O(x^{1+α}) at x = 0 (0 < α < 1). The main results show that invariant probability measures can be rigorously approximated by a finite calculation. More precisely: Ulam's method (a sequence of computable finite-rank approximations to the transfer operator) exhibits L^1 convergence, and the nth approximate invariant density is accurate to at least O(n^{-(1-α)^2}). Explicitly given non-uniform Ulam methods can improve this rate to O(n^{-(1-α)}).

Date: November 14, 2006.
2000 Mathematics Subject Classification: Primary 37M25; Secondary 28D05.
Key words and phrases: indifferent fixed point, invariant measure, approximation, non-uniformly expanding dynamical system, mixing time, polynomial decay of correlations, Ulam's method.

1. Introduction

It is well known that expanding maps with indifferent fixed points (or periodic orbits) with local tangencies of O(x^{1+α}) (0 < α < 1) can admit absolutely continuous invariant probability measures (ACIPMs); see [21, 16, 4, 14, 5, 18] (and the references therein). These maps were originally considered in the study of intermittency in turbulent flows [19], and are interesting because they exhibit polynomial, rather than exponential, decay of correlations [21, 16, 4, 14, 5, 6, 18] for suitably regular functions. Sub-exponential correlation decay is intimately connected with the absence of a spectral gap in the corresponding transfer operators (Frobenius-Perron (FP) operators), and is in sharp contrast to the situation for uniformly expanding maps [10, 1]. Indeed, there has been an explosion of interest in these maps in recent years, precisely because they are a good testing ground for ideas in non-uniformly hyperbolic dynamics. The aim of the present work is to prove that the densities of the ACIPMs associated to a class of such maps can be accessed at essentially arbitrary precision by a finite numerical computation.

Let P be the FP operator corresponding to a given map T. When the map is mixing and uniformly expanding, P exhibits a spectral gap wherein the eigenvalue 1 (whose eigenvector is the density of the ACIPM) is separated in modulus from the rest of the spectrum. One consequence is that the densities of the ACIPMs are highly amenable to numerical approximation by projection methods, like Ulam's method [20]. Indeed, the spectral gap for P (and the eigenvector at 1) persists under the small perturbations induced by suitable approximation schemes [15, 9, 8, 3, 17]. Without a spectral gap the convergence of invariant measure approximations is more delicate [14].

The general shape of the ACIPMs of maps with indifferent fixed points is well understood [21], while the exact details are not readily revealed by analytical techniques. Liverani et al. [16] made considerable progress by introducing a perturbation: first regularize the densities by averaging over ɛ-neighbourhoods, and then apply a suitably large power of P (the power increases as ɛ decreases). In that approach a spectral gap appears, and the perturbed operator is close enough to a power of P that almost optimal rates of correlation decay can be extracted. However, as the basis of a computational method for accessing the ACIPM this approach has a drawback: for increasing accuracy of approximation one must apply a high iterate of P exactly, which may be impractical in finite-precision computer arithmetic. By contrast, Ulam's method [20] alternately averages over a finite collection of ɛ-neighbourhoods and applies a single iterate of P. This results in a finite-rank approximation to P whose matrix representation has a simple formula.

We prove that the approximate invariant densities from Ulam's method converge to the (unique) T-invariant density as the rank of the approximation increases (Theorem 2). Then, with a standard condition on T, a bound on the approximation rate is established (Theorem 3). Essentially the same proof as in Theorem 3 yields Theorem 4: a much improved approximation rate for certain non-uniform Ulam methods.

The proof of Theorem 2 is reminiscent of the constructions in [16], and of Li's original proof [13] of convergence of Ulam's method for uniformly expanding maps of Lasota-Yorke type [11]. In the latter setting, the FP operator preserves a cone of non-negative BV functions in L^1, and convergence follows from the observation that the Ulam-type projections of P also preserve that cone. The method of the current paper relies on the invariance of certain relatively compact (cone-like) subsets of L^1 under the action of both P and its Ulam approximations. Similar cones were introduced in [16], and allow for power-law singularities near the indifferent fixed point. The cones impose enough regularity to establish uniform bounds on the Ulam approximations, and Theorem 2 follows easily. The bound on the rate of approximation in Theorem 3 uses a combination of the methods in [15, 16, 7], and a carefully chosen approximation (motivated by Young towers [21]). Remarks about the tower approach are given after the statement of Theorem 4, and in Section 3. Although the dynamical systems we consider have well-known polynomial rates of mixing [21, 4], regularity issues mean that the quantitative estimates in the proofs of Theorems 3 and 4 are determined by the time needed for mixing to get started (the "mixing times") rather than by the asymptotic rate of correlation decay. The optimality of the rates in Theorems 3 and 4 is not known, and is the subject of further investigation. In the final section, a comparison is made with some recent and careful numerical calculations of Lin [14].

To summarize: the results of this paper show that the invariant densities of a commonly studied class of non-uniformly expanding maps can be accessed rigorously by finite numerical calculations, with bounds on the rate of approximation. Previous results of this kind have required uniform expansion in the dynamics [13, 3, 7]. The rate of approximation can be improved considerably with a non-uniform Ulam method.

Class of maps. Let 0 < α < 1, and let T_α be the class of maps T satisfying:
(a) T(0) = 0 and, for some x_0 ∈ (0, 1), T maps [0, x_0) onto [0, 1) and [x_0, 1] onto [0, 1];
(b) each branch of T is increasing, convex and C^1 (or, in the case of the first branch, can be extended to a C^1 function on [0, x_0]); T'(x) > 1 for all x ∈ (0, x_0) ∪ (x_0, 1), and T'(0) = 1;
(c) there is a constant C ∈ (0, ∞) such that

    (1)   T(x) ≥ x + C x^{1+α}.

The intervals [a_N, b_N] such that T^N : (a_N, b_N) → (0, 1) is a C^1 diffeomorphism will be called monotonicity intervals of T^N.

Remark. The convexity condition imposes all the regularity needed for Theorems 1 and 2. The proof of Theorem 3 uses uniform distortion estimates, so a C^2 condition is added below.

Example 1. The Pomeau-Manneville map [18, 4]: T(x) = x(1 + x^α) (mod 1).

Example 2. A variant of the Pomeau-Manneville map [16, 14]:

    T(x) = x(1 + (2x)^α)   if x ∈ [0, 1/2),
    T(x) = 2x - 1          if x ∈ [1/2, 1].

Example 3. Let ϕ_t(ξ) be the solution of the differential equation dx/dt = x^{1+α}, x(0) = ξ. Pick τ such that ϕ_τ(1) = 2 and put T(x) = ϕ_τ(x) (mod 1). In this case one can readily compute τ = (1 - 2^{-α})/α and x_0 = (2 - 2^{-α})^{-1/α}. Since ϕ_t(x) = x(1 - α x^α t)^{-1/α}, it is easy to obtain precise formulas for the approach of pre-images of x_0 to the indifferent repeller at 0.

Invariant densities. The existence of the ACIPMs for T ∈ T_α is well known (see, for example, [4, 16, 18, 21]). However, we give a simple existence proof which sets the scene for the analysis of Ulam's method. Let [0, 1] be equipped with the Borel σ-algebra, and denote Lebesgue measure by λ. A Borel measure µ on [0, 1] is absolutely continuous (AC) with respect to λ if µ(A) > 0 ⟹ λ(A) > 0. A measure µ is an invariant measure if µ = µ ∘ T^{-1}. Finite AC invariant measures can be normalized to obtain ACIPMs. By the Radon-Nikodym theorem, an ACIPM has an L^1 density function f = dµ/dλ, so that µ(A) = ∫_A f dλ. Since maps T ∈ T_α are expanding, µ ∘ T^{-1} is AC whenever µ is AC, so the invariance condition can be written as

    ∫_A f dλ = µ(A) = µ(T^{-1}(A)) = ∫_{T^{-1}(A)} dµ = ∫_A d(µ ∘ T^{-1}) = ∫_A [d(µ ∘ T^{-1})/dλ] dλ.
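The transfer-operator machinery introduced next is easiest to follow with Examples 1-3 in mind as concrete computations. The following minimal sketch (Python; the function names, the value α = 0.5 and the sampling grid are illustrative assumptions, not taken from the paper) evaluates each map and estimates the constant C in the tangency condition (1) near the indifferent fixed point.

    import numpy as np

    ALPHA = 0.5  # any 0 < alpha < 1

    def T_example1(x, a=ALPHA):
        # Pomeau-Manneville map: x (1 + x^a) mod 1
        return (x * (1.0 + x**a)) % 1.0

    def T_example2(x, a=ALPHA):
        # variant of Example 2 (cf. [16, 14]): left branch x (1 + (2x)^a), right branch 2x - 1
        return np.where(x < 0.5, x * (1.0 + (2.0 * x)**a), 2.0 * x - 1.0)

    def T_example3(x, a=ALPHA):
        # time-tau map of dx/dt = x^(1+a), normalised so that phi_tau(1) = 2
        tau = (1.0 - 2.0**(-a)) / a
        return (x * (1.0 - a * tau * x**a)**(-1.0 / a)) % 1.0

    # crude estimate of the constant C in condition (1) on a grid near the indifferent fixed point
    x = np.linspace(1e-6, 1e-2, 1000)
    for T in (T_example1, T_example2, T_example3):
        print(T.__name__, "admits C of about", np.min((T(x) - x) / x**(1.0 + ALPHA)))

The printed values should come out close to 1 for Example 1 (since T(x) - x = x^{1+α} there), close to 2^α for Example 2, and close to τ for Example 3, in line with the expansion ϕ_τ(x) = x + τ x^{1+α} + O(x^{1+2α}).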

The Frobenius-Perron operator [10, 1] P : L^1[0,1] → L^1[0,1] is defined by P(dµ/dλ) = d(µ ∘ T^{-1})/dλ, so a probability measure µ is an ACIPM precisely when P(dµ/dλ) = dµ/dλ. P is a Markov operator (i.e. it is linear, monotone and preserves integrals). Moreover,

    Pf(x) = Σ_{y_i : T(y_i) = x} f(y_i)/T'(y_i).

Maps in T_α give exactly two pre-images to each x ∈ (0, 1); we adopt the convention that these are y_1 ∈ (0, x_0) and y_2 ∈ (x_0, 1). For each A > 0 define

    C_A = { f ∈ L^1 : f ≥ 0, f decreasing, ∫_0^1 f dλ = 1, and ∫_0^x f dλ ≤ A x^{1-α} for all x ∈ [0, 1] }.

The key step in proving the existence of an ACIPM µ with dµ/dλ ∈ C_A is to establish that, for large enough A, C_A is invariant under the FP operator (Proposition 1.1) and under its Ulam approximations.

Ulam approximations. Ulam's method [20] consists in replacing the FP operator by a sequence of finite-rank discretizations whose fixed points are relatively easy to compute [7, 3, 17]. For each n > 0, let ξ_n = {[i/n, (i+1)/n)}_{i=0}^{n-1} be the partition of [0, 1) into n uniform subintervals, and let E_n be the projection operator on L^1 which acts by taking expectations: E_n = E(ξ_n), where

    [E(𝒥)]f = Σ_{J ∈ 𝒥} (∫_J f dλ / λ(J)) 1_J

for 𝒥 a partition of [0, 1] into subintervals. The Ulam approximations to P are P_n = E_n P, and the nth Ulam approximations are probability densities f_n satisfying f_n = P_n f_n. This setup will be called a uniform Ulam method. A non-uniform choice of subintervals will lead to a non-uniform Ulam method.

Proposition 1.1. Let T ∈ T_α and let P be the FP operator for T. There is A* > 0 such that when A ≥ A*: (i) P : C_A → C_A; and (ii) P_n : C_A → C_A.

Theorem 1. Let T ∈ T_α and let A* be as in Proposition 1.1. T has an ACIPM whose density f ∈ C_{A*}. The measure µ = f λ is the unique ACIPM, and is equivalent to λ.

Theorem 2 (Convergence of Ulam's method). Let T ∈ T_α and let f be the density of the unique ACIPM. The finite-rank operator P_n = E_n P has a unique non-negative, normalized fixed point f_n, and ‖f - f_n‖_{L^1} → 0 as n → ∞.

The proofs of Proposition 1.1 and Theorems 1 and 2 are given in Section 2; A* is given in equation (3). To establish a rate of convergence, we impose more regularity. Let T_α' consist of those maps in T_α that have a C^2 extension on [x_0, 1] and on all intervals [δ, x_0], δ > 0.

Theorem 3 (Rate of approximation). Let T ∈ T_α' and suppose also that T''(x) ≤ c x^{α-1} for some constant c. Then, with the same notation as in Theorem 2,

    ‖f - f_n‖_{L^1} ≤ C n^{-(1-α)^2},

where the constant C is independent of n.
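Before moving on to non-uniform partitions, the uniform Ulam method just defined is straightforward to set up numerically. The sketch below (Python) builds a row-stochastic matrix whose (i, j) entry estimates λ(J_i ∩ T^{-1} J_j)/λ(J_i) for the uniform cells J_i = [i/n, (i+1)/n), and takes its leading left eigenvector as the approximate fixed density f_n. The map (Example 2 with α = 0.5), the grid sizes and the estimate-by-sampling shortcut are assumptions made for illustration only; they are not the rigorous construction analysed in this paper.

    import numpy as np

    def ulam_matrix(T, n, samples_per_cell=200):
        # P[i, j] ~ lambda(J_i intersect T^{-1} J_j) / lambda(J_i), with J_i = [i/n, (i+1)/n)
        P = np.zeros((n, n))
        for i in range(n):
            xs = (i + (np.arange(samples_per_cell) + 0.5) / samples_per_cell) / n
            js = np.minimum((T(xs) * n).astype(int), n - 1)
            for j in js:
                P[i, j] += 1.0 / samples_per_cell
        return P

    def ulam_density(T, n):
        # fixed point f_n of P_n = E_n P, returned as a piecewise-constant density on the n cells
        P = ulam_matrix(T, n)
        vals, vecs = np.linalg.eig(P.T)            # left eigenvectors of P
        v = np.abs(np.real(vecs[:, np.argmax(np.real(vals))]))
        return v / v.sum() * n                     # normalise so the step function integrates to 1

    if __name__ == "__main__":
        alpha = 0.5
        T = lambda x: np.where(x < 0.5, x * (1 + (2 * x)**alpha), 2 * x - 1)   # Example 2
        f_n = ulam_density(T, 256)
        print(f_n[:4], "...", f_n[-4:])            # large values near 0 reflect the x^(-alpha) singularity

Because the cells all have length 1/n, the invariant probability vector of the matrix is rescaled by n so that the resulting step function has unit integral, matching the normalization ∫_0^1 f_n dλ = 1 required of members of C_A.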

Using a sequence of non-uniform partitions, a faster rate of approximation is possible.

Theorem 4 (Convergence of a non-uniform Ulam method). Let T, f be as in Theorem 3 and let β > 1/(1-α). For each n, let 𝒥_n be the partition {[(i/n)^β, ((i+1)/n)^β)}_{i=0}^{n-1}. Each finite-rank operator [E(𝒥_n)]P has a unique non-negative, normalized fixed point g_n, and there is a constant C (depending on β and T but not on n) such that

    ‖f - g_n‖_{L^1} ≤ C n^{-(1-α)}.

Remarks. (1) The proofs of Theorems 3 and 4 are given in Section 3 and are almost identical. In each case, let f = Pf and g = E(𝒥)Pg. The key steps are to use the regularity of g (and f) and a mixing-time estimate to establish that, for every small ɛ,

    ‖f - g‖_{L^1} ≤ O(ɛ^{-α} ‖f - E(𝒥)f‖_{L^1} + ɛ^{1-α}).

The rate is then obtained by choosing ɛ comparable to ‖f - E(𝒥)f‖_{L^1}.

(2) If β = 1/(1-α) is used in Theorem 4 then ‖f - g_n‖_{L^1} = O(log n / n^{1-α}).

(3) The proofs in Section 3 involve direct estimates of the rate of decay of the perturbations induced by Ulam's method; that is, how does ‖P^k ϕ‖_{L^1} decay to zero, where ϕ = f - f_n? An alternative approach is to use the decay-of-correlation estimates from [21]. A suitable Young tower can be built over ∆_0 = [x_0, 1], with the levels of the tower determined by the first return time to ∆_0 (under application of T). This construction reveals existence, uniqueness (and exactness) of the invariant density f for T, as well as polynomial speed of convergence to equilibrium for Hölder continuous ϕ:

    ‖P^k ϕ - f ∫ ϕ dλ‖_{L^1} = O(k^{1-1/α}).

These bounds do not apply directly to the error analysis of Ulam's method, since the relevant ϕ are not Hölder. However, arguments similar to those in Section 3 can be used to write P^k ϕ = ϕ_1 + ϕ_2, where k is a suitable mixing time, ϕ_1 is small, and ϕ_2 has enough regularity (when embedded in the tower) to exhibit the polynomial convergence to equilibrium proved in [21]. Proving Theorems 3 and 4 via this route requires some extra technicalities, and appears to be no more efficient than the self-contained approach in Section 3.

2. Invariance of C_A and uniqueness of the ACIPM

Recall that x_0 is the boundary point of the two monotonicity intervals of a given T ∈ T_α.

Lemma 2.1. Let A > 0 be fixed, let f ∈ C_A and let T ∈ T_α. Let x ∈ (0, 1] and let T^{-1}(x) = {y_1, y_2} (where y_1 < x_0 < y_2). Then
(i) f(x) ≤ A x^{-α};
(ii) f(x) ≤ 1/x, and in particular f|_{[x_0, y_2]} ≤ 1/x_0;
(iii) y_1 ≥ x_0 x;
(iv) x^{1-α} - y_1^{1-α} ≥ (1-α) C x_0^{1+α} x, where C is the constant in (1).

Proof. (i) & (ii): Since f is decreasing, x f(x) ≤ ∫_0^x f dλ ≤ min{A x^{1-α}, 1}.
(iii): Write y_1 = ρ x_0, so ρ ∈ [0, 1]. Let T̄ be the continuous extension of T to [0, x_0]. Since T̄ is convex, T̄(0) = 0 and T̄(x_0) = 1,

    x = T(y_1) = T̄(ρ x_0) ≤ ρ T̄(x_0) = ρ · 1 = y_1/x_0,

so y_1 ≥ x_0 x.
(iv): First write

    x^{1-α} - y_1^{1-α} = x^{1-α} (1 - (y_1/x)^{1-α}),

so that (using 1 - t^{1-α} ≥ (1-α)(1 - t) for t ∈ [0, 1])

    x^{1-α} - y_1^{1-α} ≥ x^{1-α} (1-α)(x - y_1)/x = x^{-α} (1-α)(T(y_1) - y_1).

The bound now follows from (1) and part (iii).

From Lemma 2.1(i) it is immediate that when f ∈ C_A and δ > 0,

    (2)   var_{[δ,1]} f ≤ f(δ) - f(1) ≤ A δ^{-α},

where var_{[a,b]} g denotes the variation of g : [a, b] → R. We also put

    (3)   A* = ((1-α) C x_0^{2+α})^{-1}.

Proof of Proposition 1.1(i). Let A ≥ A* and let f ∈ C_A. Since P is a Markov operator, Pf ≥ 0 and ∫ Pf dλ = ∫ f dλ = 1. We need only prove that Pf is decreasing and that ∫_0^x Pf dλ ≤ A x^{1-α}. In the notation established above,

    (4)   Pf(x) = f(y_1)/T'(y_1) + f(y_2)/T'(y_2).

Now, both branches of T are increasing and, by convexity, 1/T' is decreasing. Thus, since f is decreasing, so too is Pf. To establish the main inequality, let µ = f λ. Then µ[0, z) ≤ A z^{1-α} for z ∈ [0, 1] and

    ∫_0^x Pf dλ = µ(T^{-1}[0, x)) = µ[0, y_1) + µ[x_0, y_2) ≤ A y_1^{1-α} + λ[x_0, y_2)/x_0,

where the last inequality follows from Lemma 2.1(ii). However, since T is increasing, T[x_0, y_2) = [0, x), and since T is expanding, λ[0, x) ≥ λ[x_0, y_2); hence

    λ[x_0, y_2)/x_0 ≤ x/x_0 ≤ A (x^{1-α} - y_1^{1-α})

by Lemma 2.1(iv) and the choice A ≥ A*. Thus ∫_0^x Pf dλ ≤ A x^{1-α}, and P C_A ⊂ C_A when A ≥ A*.

Proof of Proposition 1.1(ii). Since P_n = E_n P, it suffices to prove that E_n C_A ⊂ C_A. To this end, let f ∈ C_A; E_n f is clearly non-negative, decreasing and of unit integral, so only the final defining condition of C_A needs checking. Note that, for x_i = i/n,

    ∫_0^{x_i} E_n f dλ = ∫_0^{x_i} f dλ ≤ A x_i^{1-α}.

If x_{i-1} < x < x_i, write x = ρ x_{i-1} + (1-ρ) x_i for ρ ∈ (0, 1). Since E_n f is constant on the interval [x_{i-1}, x_i),

    ∫_0^x E_n f dλ = ρ ∫_0^{x_{i-1}} f dλ + (1-ρ) ∫_0^{x_i} f dλ ≤ A (ρ x_{i-1}^{1-α} + (1-ρ) x_i^{1-α}).

Since z ↦ z^{1-α} is concave, the right-hand side is bounded above by A (ρ x_{i-1} + (1-ρ) x_i)^{1-α} = A x^{1-α}, proving that E_n f ∈ C_A.

We now collect several lemmas.

Lemma 2.2. For each A > 0, C_A is convex and norm compact as a subset of L^1[0, 1].

Proof. Convexity is immediate. Relative compactness can be deduced from Theorem IV.8.20 of [2]. Alternatively, let δ_k → 0 and let H_k = {f 1_{[δ_k,1]} : f ∈ C_A}. By (2), each H_k is relatively compact by Helly's Theorem. Moreover, each f ∈ C_A can be written as f = f 1_{[δ_k,1]} + g_k where ‖g_k‖_{L^1} → 0 as k → ∞, uniformly over C_A. Let {f_n} be any sequence in C_A. For each n, k put h_n^k = f_n 1_{[δ_k,1]}. Let {h^1_{n_j^1}}_{j≥1} be a Cauchy subsequence in H_1. For each k > 1, given {h^{k-1}_{n_j^{k-1}}}_{j≥1}, choose a Cauchy subsequence {h^k_{n_j^k}}_{j≥1} of {h^k_{n_j^{k-1}}}_{j≥1} in H_k. Then the diagonal sequence {f_{n_j^j}}_{j≥1} is Cauchy in L^1, and relative compactness of C_A follows. The lemma follows since each C_A is closed.

Lemma 2.3. Let f ∈ C_A. If 𝒥 is a partition of [0, 1] into subintervals such that [0, ɛ_0) ∈ 𝒥, then

    ‖E(𝒥)f - f‖_{L^1} ≤ 3 A (max_{J ∈ 𝒥} λ(J)) ɛ_0^{-α}.

In particular, ‖E_n f - f‖_{L^1} ≤ 3 A n^{α-1}.

Proof. Let ɛ = max_{J ∈ 𝒥} λ(J). Since 0 ≤ f(x) ≤ A x^{-α}, we have

    ∫_0^{ɛ_0} |E(𝒥)f - f| dλ ≤ ∫_0^{ɛ_0} E(𝒥)f dλ + ∫_0^{ɛ_0} f dλ = 2 ∫_0^{ɛ_0} f dλ ≤ 2 A ɛ_0^{1-α} ≤ 2 A ɛ ɛ_0^{-α}.

We also have

    ∫_{ɛ_0}^1 |E(𝒥)f - f| dλ ≤ ɛ var_{[ɛ_0,1)} f ≤ A ɛ ɛ_0^{-α}

(the first inequality is a standard property of variation under E(𝒥), and the second uses (2)).

Lemma 2.4. Suppose that, for some Ã, ∫ f dλ > 0 and f / ∫ f dλ ∈ C_Ã. Then there are c_0 > 0 and K ∈ Z^+ such that P^k f ≥ c_0 ∫ f dλ for all k ≥ K (c_0 and K depend only on max{Ã, A*}, where A* is given by (3)). In particular, if also f = Pf, then µ = f λ is equivalent to λ.

Proof. Without loss of generality assume that à ≥ A*, so that P^k f / ∫ P^k f dλ ∈ C_à for all k ≥ 0. Fix δ such that à δ^{1-α} = 1/2. Then, since ∫_0^δ f dλ ≤ (1/2) ∫ f dλ, we have ∫_δ^1 f dλ ≥ (1/2) ∫ f dλ. Since f is decreasing,

    f 1_{(0,δ)} ≥ f(δ) ≥ (1/(1-δ)) ∫_δ^1 f dλ ≥ ∫ f dλ / (2(1-δ)).

By (1), the sequence x_n = T^{-1}(x_{n-1}) ∩ [0, x_0) is strictly decreasing and converges to 0. Thus there is a K such that x_K < δ. Then

    P^K f(x) = Σ_{T^K(y_i)=x} f(y_i)/(T^K)'(y_i) ≥ f(y_1)/(T^K)'(y_1) ≥ f(x_K)/(T^K)'(x_K) ≥ ∫ f dλ / (2(1-δ)(T^K)'(x_K))

(we have used the fact that [0, x_K) is the first monotonicity interval of T^K, so that its preimage y_1 of x lies in [0, x_K), and that 1/(T^K)' has a decreasing continuous extension to [0, x_K]). Then c_0 = 1/(2(1-δ)(T^K)'(x_K)) depends only on Ã. For k > K, apply the same argument to P^{k-K} f. For the last part, suppose that f = Pf and f ∈ C_Ã. Clearly µ = f λ is AC with respect to λ. The other direction of the equivalence is almost as obvious, since f = P^K f implies f ≥ c_0 ∫ f dλ > 0, and hence µ ≥ c_0 (∫ f dλ) λ.

Lemma 2.5. If 0 ≤ f = Pf and E = T^{-1}E (λ-a.e.) then f 1_E is equal λ-a.e. to a decreasing function, and is a fixed point of P.

Proof. First fix notation: for N > 0, denote the monotonicity intervals of T^N by {B_i^N} (indexed by i) and the corresponding inverse branches of T^N by T_i^{-N} (so that T_i^{-N} : (0, 1) → B_i^N). Write f_E = f 1_E. Consider the non-negative simple functions

    D_N = { f_D = Σ_i a_i 1_{B_i^N} : B_i^N is a monotonicity interval of T^N, a_i ≥ 0 }.

By an argument similar to [12], ∪_{N≥1} D_N is dense in (L^1)^+. Let f_D ∈ D_N. Then

    P^N f_D(x) = Σ_{T^N(x_j)=x} Σ_i a_i 1_{B_i^N}(x_j) / (T^N)'(x_j) = Σ_i a_i / (T^N)'(T_i^{-N}(x)),

since 1_{B_i^N}(x_j) = 1 if i = j and 0 otherwise. Since the branches of T^N are convex and increasing, P^N f_D is a decreasing function. Next, observe that 1_E = 1_{T^{-1}E} = 1_E ∘ T (λ-a.e.), so

    P f_E(x) = Σ_{T(x_i)=x} f(x_i) 1_E(x_i)/T'(x_i) = Σ_{T(x_i)=x} f(x_i) 1_E(T(x_i))/T'(x_i) = [Pf(x)] 1_E(x) = f_E(x).

We also have P^N f_E = f_E, and hence

    ‖f_E - P^N f_D‖_{L^1} = ‖P^N f_E - P^N f_D‖_{L^1} = ‖P^N (f_E - f_D)‖_{L^1} ≤ ‖f_E - f_D‖_{L^1}.

Thus f_E is an L^1 limit of decreasing functions, so is equal almost everywhere to a decreasing function.

Proof of Theorem 1. Since C_{A*} is compact and convex, P has a fixed point f ∈ C_{A*} by Proposition 1.1(i) and the Markov-Kakutani fixed point theorem [2, V.10.6]. Let µ = f λ. By Lemma 2.4 (with à = A*), µ is equivalent to λ, so the uniqueness of µ among ACIPMs will follow by establishing that µ is ergodic. Suppose that E is a measurable set such that E = T^{-1}(E) µ-a.e. and µ(E) > 0. Since µ is equivalent to λ, E = T^{-1}(E) λ-a.e. and λ(E) > 0. Now put f_E = f 1_E. Then ‖f_E‖_{L^1} = µ(E) > 0. By Lemma 2.5, f_E = P f_E and f_E is decreasing. We also have

    ∫_0^x f_E dλ = ∫_0^x f 1_E dλ ≤ ∫_0^x f dλ ≤ A* x^{1-α},

so that f_E/µ(E) ∈ C_Ã where à = A*/µ(E). By Lemma 2.4, f_E λ is equivalent to λ, which in turn is equivalent to µ. Thus µ([0, 1] \ E) = 0; that is, µ is ergodic.

Proof of Theorem 2. Let A ≥ A*. As in the proof of Theorem 1, P_n has a fixed point f_n ∈ C_A. The fixed point is unique (fλ)-a.e. (and hence λ-a.e.) because (T, fλ) is ergodic. By Lemma 2.2, every subsequence of {f_n} contains an L^1-convergent subsequence. Let f_{n_i} → h in L^1, with h ∈ C_A. Then (since f_{n_i} = E_{n_i} P f_{n_i})

    ‖h - Ph‖_{L^1} ≤ ‖h - f_{n_i}‖_{L^1} + ‖E_{n_i} P f_{n_i} - P f_{n_i}‖_{L^1} + ‖P(f_{n_i} - h)‖_{L^1}.

The first and third terms on the right converge to 0 as i → ∞ by the choice of subsequence, and the second term converges to 0 by Lemma 2.3. Thus h = Ph. Since T admits a unique ACIPM, h = f, and all subsequences of {f_n} have f as their common limit point. Thus f_n → f in L^1.

3. Rate of Convergence

Let f = Pf and f_n = P_n f_n be the normalized invariant density and the nth Ulam approximation (respectively). For each k,

    f - f_n = P^k f - P_n^k f_n = (P^k - P_n^k) f_n + P^k (f - f_n).

But (P^k - P_n^k) f_n = Σ_{m=1}^k P^{m-1} (Id - E_n) P P_n^{k-m} f_n. When A ≥ A*, f_n ∈ C_A, so Lemma 2.3 applies to P f_n (= P P_n^{k-m} f_n) and

    (5)   ‖f - f_n‖_{L^1} ≤ 3 k A n^{-(1-α)} + ‖P^k (f - f_n)‖_{L^1}.

Unfortunately, the analysis of (5) is complicated by the bad local regularity of (f - f_n) (f_n is a piecewise constant function on intervals of length 1/n), so standard mixing estimates [21, 16, 4] cannot be applied directly. The interval [0, 1/n) ∈ ξ_n illustrates the problem, since a mixing time of k = O(n^α) is needed before T^k[0, 1/n) is a large enough interval that mixing can begin (direct control of the speed of convergence to equilibrium of P^k 1_A can be obtained by using a Young tower built over the interval [x_0, 1], once supp(P^k 1_A) has grown to a size of the order of the base of the tower); comparison with (5) reveals that such a choice would not give a useful error bound for values of α > 1/2.

To get around this problem, we replace (f - f_n) with its expectation with respect to a carefully chosen partition 𝒥. The partition 𝒥 will have subintervals J of size O(ɛ) which are such that P^k 1_J ≥ γ λ(J), where γ is some fixed positive constant and k = k(ɛ) is not so big as to overwhelm the one-step approximation errors of O(n^{α-1}) in (5). Then at least a proportion γ of the mass in E(𝒥)(f - f_n) is mixed away after k iterates. Averaging (f - f_n) in this way introduces an additional term of at most O(ɛ^{1-α}) on the right-hand side of (5), yielding an overall error bound of O(k n^{α-1} + ɛ^{1-α}) (see (7) below). This can be accomplished for k = O(ɛ^{-α}), so that an overall error bound on the nth step of Ulam's method arises from a suitable choice of ɛ = ɛ(n).

Proposition 3.1. Let T be as in Theorem 3. Then there is a constant γ > 0 such that, for all small enough ɛ > 0, there is a partition 𝒥 = 𝒥(ɛ) with the properties that [0, ɛ_0) ∈ 𝒥 with ɛ_0 ≥ 2ɛ, and for all J ∈ 𝒥, λ(J) ≤ 3ɛ and P^k 1_J ≥ γ λ(J) whenever k ≥ 2 k_ɛ, where k_ɛ = min{k : T^k(ɛ) ≥ x_0} = O(ɛ^{-α}).

Proof. 𝒥 will be a fine Markov partition such that most J ∈ 𝒥 map exactly over (x_0, 1] in significantly fewer than k_ɛ applications of T. A couple of facts about stopping times and some preliminary distortion estimates are needed. Let τ_1(x) = min{j > 0 : T^j(x) ∈ (x_0, 1]} for the λ-a.e. x for which this stopping time is defined, and let T_1 : (0, 1] → (x_0, 1] be defined by T_1(x) = T^{τ_1(x)}(x). Since T''(x) ≤ c x^{α-1}, standard techniques (see [21]) can be used to establish the following.

(Backwards approaches to 0.) For each m > 0 let x_m = T^{-1}(x_{m-1}) ∩ [0, x_0). There are constants c_1, c_2 such that c_1 m^{-1/α} ≤ x_m ≤ c_2 m^{-1/α}. For the remainder of the proof we adopt the notation x_m ≍ m^{-1/α} to describe this situation, and use it in other situations too. Note that T^{k_ɛ-1}(ɛ) < x_0 ≤ T^{k_ɛ}(ɛ), so x_{k_ɛ} ≤ ɛ < x_{k_ɛ-1} and hence ɛ ≍ x_{k_ɛ} ≍ k_ɛ^{-1/α}, establishing the estimate k_ɛ ≍ ɛ^{-α}.

(Uniform expansion of the induced map.) The map T_1 has countably many branches, each one of which maps onto (x_0, 1]. Note that τ_1|_{(x_m, x_{m-1}]} = m. Moreover, there are constants Λ > 1 and 1 < D < ∞ such that T_1'(x) ≥ Λ, and (T_1^i)'(x)/(T_1^i)'(y) ≤ D whenever x, y are in the same monotonicity interval of T_1^i.

For i ≥ 1 define τ_{i+1}(x) = τ_i(x) + τ_1(T^{τ_i(x)}(x)) (for consistency of notation, τ_0 = 0); then τ_i is the ith return time to (x_0, 1].

Sublemma 3.1.1. Let J be a monotonicity interval of (T_1)^i and let τ_i|_J = k_0. Then there is a constant c_3, independent of k_0 and J, such that P^{k+k_0} 1_J ≥ c_3 λ(J) for all k ≥ 0.

Proof of Sublemma 3.1.1: First, note that (T_1)^i|_J = T^{k_0}|_J maps J onto (x_0, 1]. By the uniform distortion estimate,

    P^{k_0} 1_J ≥ (λ(J)/(D (1 - x_0))) 1_{(x_0,1]}.

Since T itself has convex branches and bounded distortion, P 1_{(x_0,1]} is decreasing, and bounded above and away from zero, so Lemma 2.4 applies for a suitable choice of Ã. Consequently, there are K, c_0 such that P^k 1_{(x_0,1]} ≥ c_0 (1 - x_0) for k > K. But for k ≤ K, P^k 1_{(x_0,1]} ≥ P^k 1_{(x_0,1]}(1) ≥ (T'(1))^{-k} ≥ (T'(1))^{-K}, so put

    c_3 = min{(T'(1))^{-K}, c_0 (1 - x_0)} / (D (1 - x_0)).

The partition 𝒥 will be constructed in several stages.

Step 1. Assume ɛ is small enough that T(ɛ) < 2ɛ and T(2ɛ) < 3ɛ (recall that T(x) - x = O(x^{1+α}) near 0). Put k_1 = max{m : x_m > 2ɛ} and ɛ_0 = x_{k_1}. With this choice x_{k_1+1} ≤ 2ɛ < x_{k_1} = T(x_{k_1+1}) ≤ T(2ɛ) < 3ɛ, so that [0, ɛ_0) is a suitable first interval for 𝒥; let 𝒥_1 = {[0, ɛ_0)}. Notice that k_1 < τ_1(x) ≤ k_ɛ + 1 for all x ∈ [x_{k_ɛ+1}, ɛ_0). With c_3 as in Sublemma 3.1.1 and k ≥ k_ɛ + 1,

    P^k 1_{[0,ɛ_0)} ≥ Σ_{j=k_1}^{k_ɛ} P^k 1_{(x_{j+1},x_j]} ≥ Σ_{j=k_1}^{k_ɛ} c_3 (x_j - x_{j+1}) = c_3 (x_{k_1} - x_{k_ɛ+1}) ≥ (c_3/3) λ[0, ɛ_0).

Step 2. Let k_2 = min{k : x_k - x_{k+1} < 3ɛ} and let 𝒥_2 consist of the intervals (x_{j+1}, x_j] for k_2 ≤ j < k_1. All of these subintervals have length bounded by 3ɛ ({x_j - x_{j+1}}_{j∈N} is a decreasing sequence since T is expanding), and when k ≥ k_1 they satisfy P^k 1_{(x_{j+1},x_j]} ≥ c_3 λ(x_{j+1}, x_j] (by Sublemma 3.1.1, since τ_1|_{(x_{j+1},x_j]} = j+1 ≤ k_1). Moreover, ɛ ≍ x_{k_2} - x_{k_2+1} ≍ x_{k_2+1}^{1+α}, so that k_2 ≍ ɛ^{-α/(1+α)}.

Step 3. The rest of 𝒥 is constructed via a stopping time. Fix l such that 3ɛ Λ^l > (1 - x_0). (Recall that Λ > 1.) Then l iterates of T_1 are sufficient for an interval of length 3ɛ to cover (x_0, 1]. With this in mind, for each x ∈ (x_{k_2}, 1] let τ*(x) = min{m : T^m(x) < x_{k_2}} and put τ(x) = min{τ_l(x), τ*(x)}.

Sublemma 3.1.2. With τ, l and k_2 as above, τ ≤ l(k_2 + 1) and, for small enough ɛ, τ ≤ k_ɛ.

Proof of Sublemma 3.1.2: Observe that if z ∈ (x_{k_2}, x_0] then τ_1(z) ≤ k_2. Thus

    τ_1(y) > k_2 + 1  ⟹  y ∈ (x_0, 1] and τ_1(T(y)) > k_2  ⟹  T(y) < x_{k_2}  ⟹  τ*(y) = 1.

In particular, τ_1(T^{τ_i(x)}(x)) > k_2 + 1 implies τ*(T^{τ_i(x)}(x)) = 1. Now, for λ-a.e. x there is a j such that τ_1(T^{τ_i(x)}(x)) ≤ k_2 + 1 for 0 ≤ i < j and τ_1(T^{τ_j(x)}(x)) > k_2 + 1. If j ≥ l then τ_l(x) ≤ l(k_2 + 1). Otherwise j < l and τ*(x) = τ_j(x) + τ*(T^{τ_j(x)}(x)) ≤ j(k_2 + 1) + 1. In either case this establishes the first bound in the sublemma. The second inequality follows because there are constants a_1, a_2, a_3 such that l ≤ a_1 log(1/ɛ), k_2 ≤ a_2 ɛ^{-α/(1+α)} and k_ɛ ≥ a_3 ɛ^{-α}.

Now subdivide (x_{k_2}, 1] into intervals of constant τ, and let the interval containing x be denoted J_τ(x). If τ(x) = τ_l(x) then J_τ(x) is a monotonicity interval of T^{τ_l} = (T_1)^l and has λ(J_τ(x)) ≤ 3ɛ (by the choice of l). Let 𝒥_3 consist of these intervals. By the two sublemmas, whenever k ≥ l(k_2 + 1), P^k 1_{J_τ(x)} ≥ c_3 λ(J_τ(x)).

Step 4. Consider now those intervals J_τ(x) on which τ(x) = τ*(x). Let J' = J_τ(x) be such an interval. For 0 < m < τ*(x), T^m(J') ⊂ (x_{k_2}, 1], and then T^{τ*(x)}(J') = (0, x_{k_2}]. Now, 𝒥_1 ∪ 𝒥_2 is a partition of [0, x_{k_2}] into intervals of length at most 3ɛ, so that (T^{τ*})^{-1}(𝒥_1 ∪ 𝒥_2) is a partition of J' into intervals of length at most 3ɛ (all iterates of T are expanding). Let J ⊂ J' be one of these subintervals and let I = T^{τ*}(J). Then I ∈ 𝒥_1 ∪ 𝒥_2 and P^{τ*} 1_J ≥ (λ(J)/(D λ(I))) 1_I. Now apply the conclusions of Steps 1 and 2 to obtain P^k 1_J ≥ (c_3/(3D)) λ(J), provided that k ≥ k_ɛ + 1 + τ*. Let 𝒥_4 be the collection of those J such that T^{τ*}(J) ∈ 𝒥_1, and let 𝒥_5 consist of those J with T^{τ*}(J) ∈ 𝒥_2.

Finally, let 𝒥 = 𝒥_1 ∪ 𝒥_2 ∪ 𝒥_3 ∪ 𝒥_4 ∪ 𝒥_5. Note that the worst intervals are those in 𝒥_4: they give γ = c_3/(3D) and require the largest values of k, namely k ≥ k_ɛ + 1 + τ*. But by Sublemma 3.1.2, τ* = τ ≤ k_ɛ for small enough ɛ, so the proposition is proved.
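The construction above leans repeatedly on the asymptotics x_m ≍ m^{-1/α} for the backward orbit of x_0 along the first branch, and hence on k_ɛ ≍ ɛ^{-α}. A quick, non-rigorous numerical check of the exponent for Example 2 is sketched below (Python; the bisection routine, the parameter values and the fitting window are illustrative assumptions, not part of the proof).

    import numpy as np

    def backward_orbit(T_branch, x_start, n_steps, bisection_iters=60):
        # x_m = the first-branch preimage of x_{m-1}, found by bisection (T_branch is increasing)
        xs = [x_start]
        for _ in range(n_steps):
            lo, hi, target = 0.0, xs[-1], xs[-1]
            for _ in range(bisection_iters):
                mid = 0.5 * (lo + hi)
                if T_branch(mid) < target:
                    lo = mid
                else:
                    hi = mid
            xs.append(0.5 * (lo + hi))
        return np.array(xs)

    alpha = 0.5
    T1 = lambda x: x * (1 + (2 * x)**alpha)        # first branch of Example 2 on [0, 1/2)
    xs = backward_orbit(T1, 0.5, 2000)
    m = np.arange(1, len(xs))
    slope = np.polyfit(np.log(m[100:]), np.log(xs[1:][100:]), 1)[0]
    print("fitted exponent:", slope, " predicted -1/alpha:", -1.0 / alpha)

With α = 0.5 the fitted exponent should come out close to -2 = -1/α, consistent with the estimates c_1 m^{-1/α} ≤ x_m ≤ c_2 m^{-1/α} used in the proof.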

The final lemma will be used to control the approximation by E(𝒥).

Lemma 3.2. Let A be large enough that Proposition 1.1 holds, let ɛ > 0 be given, and let 𝒥 be a partition of [0, 1] such that [0, ɛ_0) ∈ 𝒥 and max_{J∈𝒥} λ(J) < ɛ. Then, for every k ≥ 0,

    ‖P^k (f - f_n)‖_{L^1} ≤ 6 A ɛ ɛ_0^{-α} + ∫ |Σ_{J∈𝒥} a_J (P^k 1_J)| dλ,

where the a_J ∈ R (J ∈ 𝒥) satisfy Σ_{J∈𝒥} a_J λ(J) = 0 and Σ_{J∈𝒥} |a_J| λ(J) ≤ ‖f - f_n‖_{L^1}.

Proof. Note that f, f_n ∈ C_A and write

    (f - f_n) = (f - E(𝒥)f) + (E(𝒥)f_n - f_n) + E(𝒥)(f - f_n).

Since ‖P^k(f - E(𝒥)f)‖_{L^1} ≤ ‖f - E(𝒥)f‖_{L^1}, the first term is controlled by Lemma 2.3; the second term is similar. Finally, put a_J = ∫_J (f - f_n) dλ / λ(J). Then Σ_J a_J 1_J = E(𝒥)(f - f_n), so that Σ_J a_J λ(J) = ∫ (f - f_n) dλ = 0 and Σ_J |a_J| λ(J) = Σ_J |∫_J (f - f_n) dλ| ≤ ‖f - f_n‖_{L^1}.

Proof of Theorem 3. Let A be large enough that Proposition 1.1 holds, and let ɛ be small enough that Proposition 3.1 holds. Let 𝒥 be the partition from the conclusion of Proposition 3.1. Combining Lemma 3.2 and equation (5), there are constants c_4, c_5 (independent of n and ɛ) such that

    (6)   ‖f - f_n‖_{L^1} ≤ c_4 k n^{α-1} + c_5 ɛ^{1-α} + ∫ |Σ_{J∈𝒥} a_J (P^k 1_J)| dλ

for every k ≥ 0 (the {a_J}_{J∈𝒥} are as in Lemma 3.2). Now decompose 𝒥 = 𝒥^+ ∪ 𝒥^-, where 𝒥^+ = {J ∈ 𝒥 : a_J ≥ 0}. Then

    Σ_{J∈𝒥^+} a_J λ(J) = Σ_{J∈𝒥^-} (-a_J) λ(J) = (1/2) Σ_{J∈𝒥} |a_J| λ(J).

Let γ > 0 be the constant from Proposition 3.1, and put

    φ_+ = Σ_{J∈𝒥^+} a_J (P^k 1_J - γ λ(J) 1)   and   φ_- = Σ_{J∈𝒥^-} (-a_J) (P^k 1_J - γ λ(J) 1).

By Proposition 3.1, φ_+, φ_- ≥ 0 when k ≥ 2 k_ɛ, and

    ∫ |Σ_J a_J P^k 1_J| dλ = ∫ |φ_+ - φ_-| dλ ≤ ∫ φ_+ dλ + ∫ φ_- dλ
        = (1-γ) Σ_{J∈𝒥^+} a_J λ(J) + (1-γ) Σ_{J∈𝒥^-} (-a_J) λ(J)
        = 2 (1-γ) · (1/2) Σ_{J∈𝒥} |a_J| λ(J) ≤ (1-γ) ‖f - f_n‖_{L^1}.

Putting this estimate into (6) with k = 2 k_ɛ gives

    γ ‖f - f_n‖_{L^1} = ‖f - f_n‖_{L^1} - (1-γ) ‖f - f_n‖_{L^1} ≤ 2 c_4 k_ɛ n^{α-1} + c_5 ɛ^{1-α}.

Since there is a constant c_6 such that k_ɛ ≤ c_6 ɛ^{-α}, we have

    (7)   ‖f - f_n‖_{L^1} ≤ (2 c_4 c_6/γ) ɛ^{-α} n^{α-1} + (c_5/γ) ɛ^{1-α}.

Choosing ɛ = n^{α-1} completes the proof of Theorem 3.
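The gain in Theorem 4 comes entirely from the non-uniform partition with division points z_i = (i/n)^β, which refines aggressively near the indifferent fixed point where the invariant density behaves like x^{-α}. A small sketch of that partition follows (Python; the parameter choices are illustrative, with β taken at the borderline value 1/(1-α) discussed in Remark (2) and in Lemma 3.3 below).

    import numpy as np

    def nonuniform_partition(n, beta):
        # division points z_i = (i/n)^beta of the non-uniform partition used in Theorem 4
        return (np.arange(n + 1) / n) ** beta

    alpha = 0.5
    beta = 1.0 / (1.0 - alpha)
    z = nonuniform_partition(200, beta)
    widths = np.diff(z)
    print("cell adjacent to 0:", widths[0])        # (1/n)^beta, much finer than the uniform 1/n
    print("cell adjacent to 1:", widths[-1])       # about beta/n
    # cell i has length roughly beta * i^(beta-1) / n^beta, so the mesh is finest where the
    # invariant density has its x^(-alpha) singularity

By Lemma 3.3 below, this choice gives ‖h - E(𝒥_n)h‖_{L^1} = O(log n / n) uniformly over h ∈ C_A at the borderline value β = 1/(1-α), and O(1/n) for any β > 1/(1-α); compare with the O(n^{α-1}) bound for uniform cells in Lemma 2.3.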

To prove the faster rate in Theorem 4, the perturbation induced by E(𝒥_n) on C_A needs to be controlled.

Lemma 3.3. Let 0 < α < 1 and A > 0. Fix β ≥ 1/(1-α). For each n, let z_i = (i/n)^β and let 𝒥_n be the partition of [0, 1] with division points {z_i}_{i=0}^n. There is a constant c_β (independent of n and A) such that, whenever h ∈ C_A,

    ‖h - E(𝒥_n)h‖_{L^1} ≤ A c_β / n         if β > 1/(1-α),
    ‖h - E(𝒥_n)h‖_{L^1} ≤ A c_β log n / n   if β = 1/(1-α).

Proof. As in the proof of Lemma 2.3,

    ‖h - E(𝒥_n)h‖_{L^1} ≤ 2 A z_1^{1-α} + Σ_{i=2}^n ∫_{z_{i-1}}^{z_i} |h - E(𝒥_n)h| dλ ≤ 2 A/n + Σ_{i=2}^n (z_i - z_{i-1}) (h(z_{i-1}) - h(z_i)),

because z_1^{1-α} = n^{-β(1-α)} ≤ n^{-1} and h is decreasing. Letting y_i = h(z_i) and applying summation by parts, the last sum becomes

    (8)   -(z_n - z_{n-1}) y_n + (z_2 - z_1) y_1 + Σ_{i=2}^{n-1} y_i ((z_{i+1} - z_i) - (z_i - z_{i-1})).

By Lemma 2.1(i), y_i ≤ A z_i^{-α} = A (i/n)^{-αβ}, and by the mean value theorem z_i - z_{i-1} ≤ β i^{β-1}/n^β. Thus (with a second application of the mean value theorem for the second-order difference) there are constants c_7, c_8 such that (8) is bounded by

    0 + A c_7 (1/n)^{(1-α)β} + A c_8 n^{αβ-β} Σ_{i=2}^{n-1} i^{-αβ+β-2}.

The sum on the right-hand side is O(n^{-αβ+β-1}) if (1-α)β > 1 and is O(log n) if (1-α)β = 1; the lemma follows.

Proof of Theorem 4. Let A ≥ A*. The existence of a unique fixed point g_n ∈ C_A for [E(𝒥_n)]P follows as in the proof of Theorem 2, except that Lemma 3.3 is used in place of Lemma 2.3. Again using Lemma 3.3 in place of Lemma 2.3, equation (5) is replaced by

    ‖f - g_n‖_{L^1} ≤ k c_β A n^{-1} + ‖P^k (f - g_n)‖_{L^1}.

Using Lemma 3.2 with g_n instead of f_n, and Proposition 3.1 as in the proof of Theorem 3, one obtains

    ‖f - g_n‖_{L^1} ≤ (2 c_6 c_β A/γ) ɛ^{-α} n^{-1} + (c_5/γ) ɛ^{1-α}

instead of (7). The theorem follows by choosing ɛ = n^{-1}.

Remark. Results similar to Theorems 3 and 4 can be obtained using mixing rates within a first-return-time tower built over [x_0, 1]. Using the partition 𝒥(ɛ) from Proposition 3.1, E(𝒥)(f - f_n) can be split as ϕ_1 + ϕ_2, where ϕ_2 is supported on the intervals in 𝒥_2 ∪ 𝒥_3 ∪ 𝒥_5 (see the proof of the proposition), and ϕ_1 has a component supported on intervals from 𝒥_1 ∪ 𝒥_4 plus a correction term to ensure that ∫ ϕ_2 dλ = 0. Then ‖ϕ_1‖_{L^1} = O(ɛ^{1-α}), whereas P^k ϕ_2 can be embedded as a Hölder function within the tower when k ≥ 2 k_ɛ. Standard estimates on the speed of convergence to equilibrium [21] can then be used to control the latter term.

Comparison with the results of Lin [14]. It is not clear whether the rate O(n^{-(1-α)^2}) from Theorem 3 is sharp for uniform Ulam approximations. In view of Lemma 2.3, the error must be at least O(n^{-(1-α)}). A recent numerical study of Lin [14] provides anecdotal evidence that Theorem 3 may be close to optimal. Lin examined the convergence rates of invariant density approximations for several classes of stochastically perturbed systems as the size of the perturbation was reduced to 0; the maps we call Example 2 were amongst those considered. Let ρ be the invariant density (which we call f), and let ρ_ɛ be the invariant density of the system perturbed with an O(ɛ) amount of noise. For each of α = 0.3, 0.5, 0.7, Lin estimated an exponent γ(α) from numerical data such that ‖ρ - ρ_ɛ‖ ∼ ɛ^{γ(α)} over a range of small ɛ, obtaining

    γ(0.3) ≈ 0.53 ± 0.056,   γ(0.5) ≈ 0.31 ± 0.028,   γ(0.7) ≈ 0.17 ± 0.033.

A uniform Ulam method can be regarded as a stochastic perturbation with noise level ɛ = 1/n. Since the amount of noise in Lin's experiments did not appear to vary across [0, 1], the situation is analogous to a uniform Ulam method, and one should compare with the rate in Theorem 3 (and Lemma 2.3). In view of this, if ‖f - f_n‖ ∼ (1/n)^{γ(α)}, one expects (1-α)^2 ≤ γ(α) ≤ (1-α); this is compatible with Lin's estimates. (Indeed, the intervals [(1-α)^2, 1-α] for α = 0.3, 0.5, 0.7 are [0.49, 0.7], [0.25, 0.5] and [0.09, 0.3] respectively, and each contains the corresponding estimate of Lin.)

Acknowledgments

I thank Chris Bose and Anthony Quas for their ongoing interest and encouragement with this project, including comments on the manuscript. I also thank them for hospitality at the University of Victoria, where part of this work was done.

References

[1] A. Boyarsky and P. Góra. Laws of Chaos: Invariant Measures and Dynamical Systems in One Dimension. Birkhäuser, 1997.
[2] N. Dunford and J. Schwartz. Linear Operators, Part I: General Theory. Interscience Publ., 1964.
[3] G. Froyland. Computer-assisted bounds for the rate of decay of correlations. Comm. Math. Phys., 189(1):237-257, 1997.
[4] H. Hu. Decay of correlations for piecewise smooth maps with indifferent fixed points. Ergodic Theory Dynam. Systems, 25:495-524, 2004.
[5] S. Isola. Renewal sequences and intermittency. J. Statist. Phys., 97:263-280, 1999.
[6] S. Isola. On the rate of convergence to equilibrium for countable ergodic Markov shifts. Markov Process. Related Fields, 9:487-512, 2003.
[7] M. S. Keane, R. D. A. Murray, and L.-S. Young. Computing invariant measures for expanding circle maps. Nonlinearity, 11:27-46, 1998.
[8] G. Keller. Stochastic stability in some chaotic dynamical systems. Monatsh. Math., 94:313-333, 1982.
[9] G. Keller and C. Liverani. Stability of the spectrum of transfer operators. Ann. Scuola Norm. Sup. Pisa Cl. Sci., 28(4):141-152, 1999.
[10] A. Lasota and M. C. Mackey. Chaos, Fractals and Noise: Stochastic Aspects of Deterministic Dynamics. Springer, 2nd edition, 1994.
[11] A. Lasota and J. A. Yorke. On the existence of invariant measures for piecewise monotonic transformations. Trans. Amer. Math. Soc., 186:481-488, 1973.

[12] A. Lasota and J. A. Yorke. Exact dynamical systems and the Frobenius-Perron operator. Trans. Amer. Math. Soc., 273:375-384, 1982.
[13] T.-Y. Li. Finite approximation for the Perron-Frobenius operator. A solution to Ulam's conjecture. J. Approx. Theory, 17:177-186, 1976.
[14] K. Lin. Convergence of invariant densities in the small-noise limit. Nonlinearity, 18:659-683, 2005.
[15] C. Liverani. Rigorous numerical investigation of the statistical properties of piecewise expanding maps. A feasibility study. Nonlinearity, 14:463-490, 2001.
[16] C. Liverani, B. Saussol, and S. Vaienti. A probabilistic approach to intermittency. Ergodic Theory Dynam. Systems, 19:671-685, 1999.
[17] R. Murray. Approximation error for invariant density calculations. Discrete Contin. Dyn. Syst., 4:535-558, 1998.
[18] M. Pollicott and M. Yuri. Statistical properties of maps with indifferent periodic points. Comm. Math. Phys., 217:503-520, 2001.
[19] Y. Pomeau and P. Manneville. Intermittent transition to turbulence in dissipative dynamical systems. Comm. Math. Phys., 74:189-197, 1980.
[20] S. Ulam. A Collection of Mathematical Problems. Interscience Publ., 1960.
[21] L.-S. Young. Recurrence times and rates of mixing. Israel J. Math., 110:153-188, 1999.

Department of Mathematics, University of Waikato, Private Bag 3105, Hamilton, New Zealand. Email: r.murray@math.waikato.ac.nz