
Lecture 3: Invariant Measures

Finding the attracting set of a dynamical system tells us asymptotically what parts of the phase space are visited, but that is not the only thing of interest about the long-time behavior of a dynamical system. For instance, a typical trajectory of the system may spend a lot of time in one part of the attracting set, and very little in another part. An invariant measure on the attractor tells us how much time, on average, a trajectory spends in a given region.

Example 3.1 Consider the dynamical system defined by the one-dimensional map
$$ f(x) = \begin{cases} x + \tfrac{1}{2}, & 0 \le x \le \tfrac{1}{2}, \\ 2 - 2x, & \tfrac{1}{2} < x \le 1. \end{cases} $$
If we compute the Lyapunov exponent of the dynamical system numerically, we find that after 1000 iterations one has the following approximations to the Lyapunov exponent for three choices of initial point x_0:

x_0 = 0.3333 : γ ≈ 0.4588...
x_0 = 0.402  : γ ≈ 0.4637...
x_0 = 0.7    : γ ≈ 0.4644...

Clearly, there seems to be a well defined value for the Lyapunov exponent (in fact, we knew that must be true by Oseledec's theorem). However, it is not so clear how to compute it. There is no tendency for nearby trajectories to separate when they are in the interval between 0 and 1/2, while in the interval from 1/2 to 1 the separation between two nearby trajectories doubles with each iteration. Thus, to determine the Lyapunov exponent theoretically, we need to know how much of an orbit is spent in the interval [0, 1/2] and how much in the interval [1/2, 1]. Intuitively, this means that we want to define a measure ρ on the phase space so that

ρ(O) = {fraction of points in an orbit that lie in O}.

Note that such a measure will be concentrated on the attractor, since asymptotically an arbitrarily large fraction of the points in the orbit lie in an arbitrarily small neighborhood of the attractor.

These lecture notes are for the course MA 574, and are being prepared for the instructor's personal use. They are not for public distribution. © 1999, C.E. Wayne
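The numerical experiment above is easy to reproduce: iterate the map and average log|Df| along the orbit. The sketch below is mine, not part of the notes; the helper name `lyapunov` and the choice of mpmath with 2000 bits of precision are my own. The extra precision is a precaution: both branches of f are exact in binary arithmetic, and in fixed precision the slope-2 branch discards roughly one mantissa bit per visit, which can eventually push an orbit onto a spurious short cycle.

```python
# Minimal sketch (not from the notes): estimate the Lyapunov exponent of
#   f(x) = x + 1/2 on [0, 1/2],   f(x) = 2 - 2x on (1/2, 1]
# by averaging log|Df(x_j)| along an orbit.
from mpmath import mp, mpf, log

mp.prec = 2000  # bits of working precision; generous for ~1000 iterations


def f(x):
    """One step of the map from Example 3.1."""
    return x + mpf('0.5') if x <= mpf('0.5') else 2 - 2 * x


def lyapunov(x0, n=1000):
    """Time average of log|Df(x_j)| over the first n points of the orbit."""
    x, total = mpf(x0), mpf(0)
    for _ in range(n):
        # |Df| = 1 on [0, 1/2] and 2 on (1/2, 1], so only visits to the
        # right-hand interval contribute log 2 to the sum.
        total += log(2) if x > mpf('0.5') else mpf(0)
        x = f(x)
    return total / n


for x0 in ('0.3333', '0.402', '0.7'):
    print(x0, float(lyapunov(x0)))
```

Running this, the three estimates should come out close to the values in the table above.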

Let {x_0, x_1, x_2, ...} be the orbit of a discrete dynamical system with initial condition x_0. We define the invariant measure associated with this orbit to be
$$ \rho = \lim_{N\to\infty} \frac{1}{N} \sum_{j=0}^{N-1} \delta_{x_j} \qquad (1) $$
if this limit exists. Here δ_{x_j} is the Dirac δ measure concentrated at the point x_j. Note that if O is any (measurable) set,
$$ \rho(O) = \lim_{N\to\infty} \frac{1}{N} \sum_{j=0}^{N-1} \int_O \delta_{x_j}(\xi)\,d\xi = \lim_{N\to\infty} \frac{1}{N}\,\#\{\,0\le j\le N-1 : x_j \in O\,\}, $$
i.e. the limiting fraction of the points x_0, ..., x_{N-1} that lie in O. Thus, if the limit in (1) exists, this measure accords exactly with our intuitive idea of what the measure should be.

Above, I referred to this measure as an invariant measure. Let me now explain what I mean by that. Suppose that φ is any (measurable) function. Then, assuming as always that the limit in (1) makes sense, the integral of φ with respect to the measure ρ is defined as
$$ \int \varphi(\xi)\,d\rho(\xi) = \lim_{N\to\infty} \frac{1}{N} \sum_{j=0}^{N-1} \int \varphi(\xi)\,\delta_{x_j}(\xi)\,d\xi = \lim_{N\to\infty} \frac{1}{N} \sum_{j=0}^{N-1} \varphi(x_j). \qquad (2) $$
We say that the measure ρ is invariant for the discrete dynamical system defined by the function f if, for every continuous function φ,
$$ \int \varphi\circ f(\xi)\,d\rho(\xi) = \int \varphi(\xi)\,d\rho(\xi). \qquad (3) $$
That is, composing the integrand with the function f that defines the dynamical system doesn't change the integral.

Lemma 3.2 The measure defined by (1) is invariant.

Proof: By (1),
$$ \int \varphi\circ f(\xi)\,d\rho(\xi) = \lim_{N\to\infty}\frac{1}{N}\sum_{j=0}^{N-1}\varphi(f(x_j)) = \lim_{N\to\infty}\frac{1}{N}\sum_{j=0}^{N-1}\varphi(x_{j+1}) = \lim_{N\to\infty}\frac{1}{N}\Big\{\sum_{j=0}^{N-1}\varphi(x_j) + \varphi(x_N) - \varphi(x_0)\Big\} $$
$$ = \lim_{N\to\infty}\frac{1}{N}\sum_{j=0}^{N-1}\varphi(x_j) + \lim_{N\to\infty}\frac{1}{N}\big(\varphi(x_N) - \varphi(x_0)\big) = \lim_{N\to\infty}\frac{1}{N}\sum_{j=0}^{N-1}\varphi(x_j) = \int \varphi(\xi)\,d\rho(\xi). $$
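A quick sanity check on Lemma 3.2 (my own, not in the notes): for any finite orbit, the averages of φ∘f and φ differ only by the boundary term (φ(x_N) − φ(x_0))/N that appears in the telescoping step of the proof, so the gap shrinks like 1/N no matter which orbit is used. The test function φ(x) = x² below is an arbitrary choice.

```python
# Sketch (not from the notes): the finite-N versions of the two sides of (3)
# differ only by (phi(x_N) - phi(x_0)) / N.  Ordinary floats are fine here,
# since this identity is a property of any finite orbit, not of its
# long-time statistics.

def f(x):
    """The map from Example 3.1."""
    return x + 0.5 if x <= 0.5 else 2.0 - 2.0 * x


def phi(x):
    """Arbitrary continuous test function (my choice, not the author's)."""
    return x * x


def averages(x0, n):
    """Return (1/n) sum phi(x_j) and (1/n) sum phi(f(x_j)) for j < n."""
    orbit = [x0]
    for _ in range(n):
        orbit.append(f(orbit[-1]))
    avg_phi = sum(phi(x) for x in orbit[:n]) / n
    avg_phi_of_f = sum(phi(x) for x in orbit[1:n + 1]) / n
    return avg_phi, avg_phi_of_f


for n in (100, 1000, 10000):
    a, b = averages(0.3333, n)
    # The difference equals (phi(x_n) - phi(x_0)) / n, so it decays like 1/n.
    print(n, abs(a - b))
```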

Remark 3.3 It is common to define an invariant measure in terms of the measure of a set O rather than the integral of a function φ. In this approach, one says that ρ is invariant if, for every measurable set O,
$$ \rho(O) = \rho(f^{-1}(O)). \qquad (4) $$
Although the appearance of the inverse of f rather than f itself may be surprising, this definition is equivalent to the definition in (3). To see why, take the function φ in (3) to be the characteristic function of the set O. (Strictly speaking, we defined (3) to hold only for continuous functions, and the characteristic function of O is not continuous; however, it can be arbitrarily well approximated by continuous functions, and this suffices to show that (3) also holds for such functions.) Then
$$ \int \varphi(\xi)\,d\rho(\xi) = \int_O d\rho(\xi) = \rho(O). $$
On the other hand,
$$ \int \varphi\circ f(\xi)\,d\rho(\xi) = \int_{f(\xi)\in O} d\rho(\xi) = \int_{\xi\in f^{-1}(O)} d\rho(\xi) = \rho(f^{-1}(O)). $$
Since these two expressions are equal by (3), we see that (4) follows. By a similar argument, one can show that (3) follows from (4).

Remark 3.4 Note that the measure defined by (1) has the property that ∫ dρ(ξ) = 1. We will require this of all the measures we consider, and hence refer to them as invariant probability measures. This normalization is quite natural in light of our interpretation of these measures as the fraction of the total time that an orbit spends in some part of the phase space.

Thus far, we have begged the question of whether such invariant probability measures actually exist; we have simply assumed that the limit in (1) was well defined. We now show that there are, in fact, lots of such measures in a typical dynamical system.

Example 3.5 Let {x_0, x_1, ..., x_{k-1}} be a periodic orbit of length k for a dynamical system defined by the function f. Define
$$ \rho = \frac{1}{k}\sum_{j=0}^{k-1}\delta_{x_j}. $$
Note that since there is no limit here, the measure is well defined. Furthermore, it is clearly normalized so that ∫ dρ = 1. All that remains is to check that it is invariant. Given any set O,
$$ \rho(O) = \frac{1}{k}\sum_{j=0}^{k-1}\int_O \delta_{x_j}(\xi)\,d\xi = \frac{1}{k}\,\#\big(\{x_0, x_1, \dots, x_{k-1}\}\cap O\big), \qquad (5) $$
while
$$ \rho(f^{-1}(O)) = \frac{1}{k}\sum_{j=0}^{k-1}\int_{f(\xi)\in O} \delta_{x_j}(\xi)\,d\xi = \frac{1}{k}\,\#\big(\{f(x_0), f(x_1), \dots, f(x_{k-1})\}\cap O\big). \qquad (6) $$
But {f(x_0), f(x_1), ..., f(x_{k-1})} = {x_1, x_2, ..., x_{k-1}, x_0}, because {x_0, x_1, ..., x_{k-1}} is a periodic orbit, so (5) and (6) are equal, and the measure is invariant.

Thus, we see that at the very least, every dynamical system which has a periodic orbit has an invariant probability measure. In fact, one has in general:
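To make Example 3.5 concrete for the map of Example 3.1 (this illustration is mine, not the author's): one can check that f(1/3) = 1/3 + 1/2 = 5/6 and f(5/6) = 2 − 5/3 = 1/3, so {1/3, 5/6} is a periodic orbit of length k = 2, and ρ = (1/2)(δ_{1/3} + δ_{5/6}) should be invariant. The sketch below verifies the counting formulas (5) and (6) on a few sample intervals O, using exact rational arithmetic.

```python
# Sketch (not from the notes): check that the period-2 orbit {1/3, 5/6} of the
# map in Example 3.1 carries an invariant measure, by comparing the counting
# formulas (5) and (6) on a few intervals O = [a, b).
from fractions import Fraction as Fr


def f(x):
    return x + Fr(1, 2) if x <= Fr(1, 2) else 2 - 2 * x


orbit = [Fr(1, 3), Fr(5, 6)]
assert [f(x) for x in orbit] == [Fr(5, 6), Fr(1, 3)]  # it really is a 2-cycle

k = len(orbit)
image = [f(x) for x in orbit]  # {f(x_0), f(x_1)}: the same set of points

for a, b in [(0, Fr(1, 2)), (Fr(1, 2), 1), (Fr(1, 4), Fr(7, 8)), (Fr(3, 4), 1)]:
    rho_O = Fr(sum(a <= x < b for x in orbit), k)        # right side of (5)
    rho_f_inv_O = Fr(sum(a <= y < b for y in image), k)  # right side of (6)
    assert rho_O == rho_f_inv_O
    print(f"O = [{a}, {b}): rho(O) = rho(f^-1(O)) = {rho_O}")
```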

Theorem 3.6 If the function f defining the dynamical system maps some closed, bounded subset of R^n to itself, then f has at least one invariant probability measure.

Indeed, the problem in general is not that there are too few invariant measures, but rather too many! Example 3.5 showed that there is an invariant probability measure for every periodic orbit in the system. The tent map, for example, has infinitely many periodic orbits and hence infinitely many invariant measures. This is typical of chaotic attractors. What's worse (or better, depending on your point of view) is that given any two distinct invariant probability measures ρ_1 and ρ_2, one can combine them to get a third by taking ρ = (1/2)(ρ_1 + ρ_2). In order to restrict the class of invariant measures we must study, we will rule out combinations of this type by focusing on ergodic measures. These are measures that are indecomposable.

Definition 3.7 An invariant probability measure ρ is ergodic if it cannot be written as ρ = (1/2)(ρ_1 + ρ_2) for ρ_1 and ρ_2 two distinct invariant probability measures.

Remark 3.8 Restricting our attention to ergodic measures is not such a great restriction, since (subject to mild technical conditions) any invariant probability measure can be decomposed into a sum (or integral) of ergodic, invariant probability measures. As an exercise, show that if {x_0, x_1, ..., x_{k-1}} is a periodic orbit, the measure ρ = (1/k) Σ_{j=0}^{k-1} δ_{x_j} is an ergodic measure.

An alternative (but equivalent) definition of ergodicity which is frequently useful is that ρ is an ergodic invariant probability measure for the dynamical system defined by f if all invariant sets of f have measure either zero or one. Symbolically, we mean that if f^{-1}(O) = O, then ρ(O) is either zero or one. It is a very good exercise to show why this definition is equivalent to our previous one.

Aside from being the basic building blocks for invariant probability measures, ergodic measures have another extremely important property: they relate averages over time to averages over space. Many of the quantities that one measures in a dynamical system require one to evaluate something at each point along an orbit of the dynamical system, and then average the result over time. This is difficult for (at least) two reasons. First, it may take a long time to collect enough data to ensure a good estimate of the average. Secondly, one would have to wonder whether or not one would get the same answer if one averaged over another orbit with different initial conditions. For a dynamical system with an ergodic, invariant probability measure, such questions are answered by the ergodic theorem.

Theorem 3.9 (Ergodic Theorem) Suppose that ρ is an ergodic, invariant probability measure for x_{m+1} = f(x_m). Let φ be an integrable function. Then
$$ \lim_{N\to\infty}\frac{1}{N}\sum_{j=0}^{N-1}\varphi(x_j) = \int \varphi(\xi)\,d\rho(\xi) \qquad (7) $$
for almost every choice of initial condition x_0.

Remark 3.10 The qualification "almost every x_0" means that if B = {x_0 : (7) fails to hold}, then ρ(B) = 0.
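The practical content of (7) is that, for ρ-typical initial conditions, the time average on the left does not depend on which orbit you use. A minimal sketch of that (again mine, not the author's, with an arbitrary test function φ(x) = x² and the same extended-precision iteration as before):

```python
# Sketch (not from the notes): time averages of a test function phi along
# orbits started from different initial points should agree, as the ergodic
# theorem predicts for rho-typical initial conditions.
from mpmath import mp, mpf

mp.prec = 6000  # enough bits that rounding never dominates an orbit this long


def f(x):
    return x + mpf('0.5') if x <= mpf('0.5') else 2 - 2 * x


def phi(x):
    """Arbitrary continuous test function (my choice)."""
    return x * x


def time_average(x0, n=5000):
    x, total = mpf(x0), mpf(0)
    for _ in range(n):
        total += phi(x)
        x = f(x)
    return total / n


for x0 in ('0.3333', '0.402', '0.7'):
    print(x0, float(time_average(x0)))
```

The three estimates should nearly coincide, up to the fluctuations expected of a finite orbit, even though the three orbits themselves are completely different.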

Remark 3.11 Note that two things are being asserted here: first, that the time average of φ exists for almost every orbit, and secondly that this time average equals the space average, i.e. the integral of φ with respect to ρ.

Remark 3.12 In fact, the ergodic theorem is considerably more general than I have stated here. It actually shows that for any invariant measure ρ, the time averages exist on almost all orbits. However, for non-ergodic measures we can't expect that these time averages will equal the spatial averages.

Proof: The hard part of this theorem is showing that the time averages actually exist. However, this seems intuitively reasonable (I think!), so I will assume they exist and show that they equal the spatial average given on the right-hand side of (7). Let
$$ T(x_0) = \lim_{N\to\infty}\frac{1}{N}\sum_{j=0}^{N-1}\varphi(x_j), $$
and assume that this limit exists, at least for almost every x_0. Define M = {x_0 : (7) holds}. That is, M is the set of points at which the space and time averages are equal. Note that because ρ is an invariant measure, M is also invariant, i.e. M = f^{-1}(M). Therefore, since ρ is ergodic, ρ(M) is either zero or one. If M has measure one, the theorem is true, so we assume that ρ(M) = 0 and show that this leads to a contradiction. Define M_> = {x_0 : T(x_0) > ∫φ dρ} and M_< = {x_0 : T(x_0) < ∫φ dρ}. Both M_> and M_< are invariant sets, and hence each has measure either zero or one. If the measure of M is zero, either M_> or M_< must have measure one, since the time average exists almost everywhere and hence M ∪ M_> ∪ M_< has full measure. Assume that ρ(M_>) = 1. Then
$$ \int T\,d\rho = \int_{M_>} T(\xi)\,d\rho(\xi) > \int \varphi\,d\rho. $$
But
$$ \int T\,d\rho = \lim_{N\to\infty}\frac{1}{N}\sum_{j=0}^{N-1}\int \varphi\big(f^{(j)}(\xi)\big)\,d\rho(\xi) = \int \varphi\,d\rho, $$
using the fact that ρ is an invariant measure. This is a contradiction, and hence ρ(M) = 1.

Example 3.13 Let us now return to Example 3.1, with which we started this lecture. Note that for a one-dimensional map, the Lyapunov exponent is
$$ \gamma = \lim_{N\to\infty}\frac{1}{N}\Big(\log|Df(x_{N-1})| + \log|Df(x_{N-2})| + \dots + \log|Df(x_0)|\Big). $$
Thus, it is the time average of log|Df(x)|. We will try to compute it by finding the physically relevant invariant measure. There will be lots of invariant measures concentrated on periodic orbits, but those are atypical, because typical orbits for this dynamical system don't fall on a periodic orbit but wander around all over the interval [0, 1]. Since the function has constant slope on the interval [0, 1/2], we make the guess that the invariant measure has a constant density w_1 on the interval [0, 1/2], and similarly that it has some other constant density w_2 on the interval [1/2, 1]. To figure out what w_1 and w_2 are, we use the fact that the measure is invariant, so that ρ(I) = ρ(f^{-1}(I)) for any interval I. If I is a subinterval of [0, 1/2], its preimage is an interval of half the length in [1/2, 1]. Thus, we expect that w_1 = (1/2) w_2. On the other hand, since this is a probability measure, the total measure of the interval [0, 1] must be 1. Thus, (1/2) w_1 + (1/2) w_2 = 1. These two equations together imply that w_1 = 2/3 and w_2 = 4/3. I leave it as an exercise to show that this measure is ergodic, but assuming that it is, we see by the ergodic theorem that the Lyapunov exponent (for almost all choices of the initial condition x_0) should be
$$ \gamma = \int \log|Df(x)|\,d\rho(x) = \tfrac{1}{2}\,w_1 \log(1) + \tfrac{1}{2}\,w_2 \log(2) = \tfrac{2}{3}\log 2 = 0.462\ldots, $$
which is in good agreement with our numerical calculations.
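As a final numerical check (my own, not part of the notes): the density w_1 = 2/3, w_2 = 4/3 predicts that a typical orbit spends a fraction (1/2)w_1 = 1/3 of its time in [0, 1/2] and (1/2)w_2 = 2/3 in (1/2, 1], and the Lyapunov exponent is just the second fraction times log 2. The sketch below measures that fraction along a long, high-precision orbit; the starting point 0.402 is an arbitrary choice.

```python
# Sketch (not from the notes): the fraction of time a typical orbit spends in
# (1/2, 1] should approach rho((1/2, 1]) = (1/2) * w_2 = 2/3, giving
# gamma = (2/3) * log 2 = 0.4620...
from mpmath import mp, mpf, log

# Roughly one mantissa bit is spent per visit to the slope-2 branch, so this
# precision comfortably covers 10000 iterations.
mp.prec = 12000


def f(x):
    return x + mpf('0.5') if x <= mpf('0.5') else 2 - 2 * x


n = 10000
x = mpf('0.402')  # any "typical" starting point should do
right_visits = 0
for _ in range(n):
    if x > mpf('0.5'):
        right_visits += 1
    x = f(x)

frac_right = right_visits / n
print("fraction of time in (1/2, 1] :", frac_right)                 # expect ~ 2/3
print("implied Lyapunov exponent    :", frac_right * float(log(2)))
print("theoretical value (2/3)log 2 :", float(2 * log(2) / 3))
```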