Estimating Lyapunov Exponents from Time Series
Gerrit Ansmann
Example: Lorenz System

Two identical Lorenz systems with the same initial conditions; one is slightly perturbed (by $10^{-14}$) at $t = 30$:

[Figure: time series of $x_2^{(1)}$ and $x_1^{(1)}$ and of the distance $\|x^{(1)} - x^{(2)}\|$ over $0 \le t \le 90$]
The Largest Lyapunov Exponent

Consider the evolution of two close trajectories $x$ and $y$. Their distance grows or shrinks exponentially:
$$\|x(t+\tau) - y(t+\tau)\| = \|x(t) - y(t)\| \, e^{\lambda_1 \tau}$$
This holds for:
- infinitesimally close trajectories ($\|x(t) - y(t)\| \to 0$)
- infinite time evolution ($\tau \to \infty$)

Note: In this entire lecture, $\tau$ is not the embedding delay.
The Largest Lyapunov Exponent — Definition

Solve $\|x(t+\tau) - y(t+\tau)\| = \|x(t) - y(t)\| \, e^{\lambda_1 \tau}$ for $\lambda_1$ and implement the limits:

First Lyapunov exponent. Let $x$ and $y$ be two trajectories of the dynamics. Then
$$\lambda_1 = \lim_{\tau \to \infty} \; \lim_{\|x(t)-y(t)\| \to 0} \; \frac{1}{\tau} \ln\!\left( \frac{\|x(t+\tau) - y(t+\tau)\|}{\|x(t) - y(t)\|} \right)$$

Also called: largest Lyapunov exponent or just Lyapunov exponent.
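The definition can be probed numerically on a system whose exponent is known analytically. The following sketch (assumptions: the logistic map at $r = 4$, for which $\lambda_1 = \ln 2$; the perturbation size, iteration count, and sample count are arbitrary illustrative choices) approximates the double limit by a small initial distance and a moderate evolution time, averaged over many initial conditions:

```python
import math
import random

def logistic(x):
    # logistic map at r = 4, whose Lyapunov exponent is known: ln 2
    return 4.0 * x * (1.0 - x)

def finite_time_exponent(x0, delta0=1e-12, n=20):
    # evolve two trajectories starting delta0 apart and measure
    # the exponential rate of their separation after n steps
    x, y = x0, x0 + delta0
    for _ in range(n):
        x, y = logistic(x), logistic(y)
    return math.log(abs(x - y) / delta0) / n

random.seed(42)
# average the finite-time estimates over many initial conditions
estimates = [finite_time_exponent(random.uniform(0.1, 0.9))
             for _ in range(1000)]
lambda1 = sum(estimates) / len(estimates)
print(lambda1)  # should be close to ln 2 ≈ 0.693
```

Note how both limits are only approximated: too large a `delta0` or too many steps `n` lets the separation leave the linear regime, while too few steps leaves strong fluctuations in the individual estimates.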
Typical Evolution of a Trajectory Distance

Two identical Lorenz systems with the same initial conditions; one is slightly perturbed (by $10^{-14}$) at $t = 30$:

[Figure: time series of $x_2^{(1)}$ and $x_1^{(1)}$ and of the distance $\|x^{(1)} - x^{(2)}\|$ over $0 \le t \le 90$]
Comparison with Evolution of Infinitesimal Distance

Two identical Lorenz systems with the same initial conditions; one is slightly perturbed (by $10^{-14}$) at $t = 30$:

[Figure: as before, but the distance $\|x^{(1)} - x^{(2)}\|$ is compared with the evolution of an infinitesimal perturbation $v$]
Typical Evolution of a Trajectory Distance

Regimes of the average distance $D$:

1. Alignment to the direction of largest growth:
   $$D(\tau) \approx \sum_{i=1}^{d} c_i(\tau) \exp(\lambda_i \tau)$$
   Asymptotically: $c_i(\tau) \to 0$ for $i > 1$; $c_1(\tau) \to \text{const}$.
2. Exponential growth: $D(\tau) \propto \exp(\lambda_1 \tau)$.
3. Constancy on the scale of the attractor: $D(\tau) \approx \operatorname{diam}(A)$.

[Figure: $D(\tau)$ with the three regimes marked]
Translation to Time Series

continuous trajectories → discrete trajectories
actual phase space → reconstruction
evolution of arbitrary states → available trajectories

And of course: finite data, noise, …
Wolf Algorithm — Acquisition of Instantaneous Lyapunov Exponents

1. Let $x(0)$ be the first reconstructed state.
2. Find a state $x(t_1)$ such that $\|x(t_1) - x(0)\| < \varepsilon$.
3. Approximate the instantaneous largest Lyapunov exponent:
   $$\lambda_1(0) = \frac{1}{\tau} \ln\!\left( \frac{\|x(\tau) - x(t_1+\tau)\|}{\|x(0) - x(t_1)\|} \right)$$
4. Find a state $x(t_2)$ such that $\|x(t_2) - x(\tau)\| < \varepsilon$ and $x(t_2) - x(\tau)$ is nearly parallel to $x(t_1+\tau) - x(\tau)$.
5. Approximate the instantaneous largest Lyapunov exponent:
   $$\lambda_1(\tau) = \frac{1}{\tau} \ln\!\left( \frac{\|x(2\tau) - x(t_2+\tau)\|}{\|x(\tau) - x(t_2)\|} \right)$$
6. Continue analogously from $x(2\tau)$, $x(3\tau)$, …

[Figure: sketch of the procedure showing $x(0)$, its $\varepsilon$-neighbour $x(t_1)$, the evolved states $x(\tau)$ and $x(t_1+\tau)$, and the replacement neighbour $x(t_2)$]
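The steps above can be sketched in code. This is a crude illustration (not Wolf et al.'s reference implementation): at every rescaling time it simply searches for a fresh neighbour within $\varepsilon$ and logs the growth of the distance over $\tau$; step 4's requirement that the replacement be nearly parallel to the old difference vector is omitted for brevity, and all names, parameters, and default values are assumptions.

```python
import numpy as np

def wolf_sketch(states, tau, eps, theiler=10):
    """Crude Wolf-style estimate of lambda_1.

    states:  array of shape (N, m) of reconstructed states
    tau:     rescaling time in samples
    eps:     neighbour search radius
    theiler: temporal exclusion window around the reference state
    """
    N = len(states)
    local = []
    i = 0
    while i + tau < N:
        # steps 2/4: find a neighbour of x(i) within eps
        d = np.linalg.norm(states - states[i], axis=1)
        d[max(0, i - theiler): i + theiler + 1] = np.inf  # exclude temporal neighbours
        d[N - tau:] = np.inf  # the neighbour must be followable for tau more steps
        j = int(np.argmin(d))
        if d[j] < eps:
            # steps 3/5: instantaneous exponent from the distance growth over tau
            d0 = d[j]
            d1 = np.linalg.norm(states[i + tau] - states[j + tau])
            local.append(np.log(d1 / d0) / tau)
        i += tau  # continue from the evolved state
    return float(np.mean(local))
```

Applied to a logistic-map time series ($m = 1$, $\tau = 1$), this yields values near the known $\ln 2 \approx 0.693$; the final averaging over the instantaneous exponents is the subject of the next slide.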
Wolf Algorithm — Averaging

After acquiring the instantaneous Lyapunov exponents, estimate:
$$\lambda_1 = \frac{1}{I - r} \sum_{i=r}^{I} \lambda_1(i\tau)$$
The offset $r$ ensures that the distances are aligned to the direction of largest growth.
Wolf Algorithm — Parameters

Initial distance $\varepsilon$:
- too small → impact of noise too high
- too large → small region of exponential growth

Rescaling time $\tau$:
- too high → distance reaches size of attractor
- too small → small region of exponential growth
Wolf Algorithm — Problems

- Parameters have to be chosen a priori.
- Problems may be obfuscated:
  - no exponential growth due to noise
  - embedding dimension $m$ too small
- Sensitivity to noise.
- Difficult to find a neighbouring trajectory segment with the required properties.

Wanted: a different way to ensure alignment to the direction of largest growth.
Rosenstein–Kantz Algorithm

1. For a given reference state $x(t)$, find all states $x(t_1), \ldots, x(t_u)$ for which $\|x(t) - x(t_j)\| < \varepsilon$.
2. For a given $\tau$, define the average distance of the respective trajectory segments from the initial one:
   $$s(t,\tau) = \frac{1}{u} \sum_{j=1}^{u} \|x(t+\tau) - x(t_j+\tau)\|$$
3. Average over all states as reference states:
   $$S(\tau) = \frac{1}{N} \sum_{t=1}^{N} s(t,\tau)$$
4. Obtain $\lambda_1$ from the region of exponential growth of $S(\tau)$.

[Figure: reference state $x(t)$ with its $\varepsilon$-neighbourhood containing $x(t_1)$ and $x(t_2)$]
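The averaging steps can be sketched as follows; this is a simplified illustration following the formulas above, not the authors' code, and the function name, Theiler-window handling, and parameter values are assumptions:

```python
import numpy as np

def stretching_factor(states, taus, eps, theiler=10):
    """S(tau) of the Rosenstein-Kantz approach, for each tau in taus."""
    N = len(states)
    horizon = max(taus)
    S = np.zeros(len(taus))
    used = 0
    for t in range(N - horizon):
        # step 1: all neighbours of the reference state x(t)
        d = np.linalg.norm(states[:N - horizon] - states[t], axis=1)
        d[max(0, t - theiler): t + theiler + 1] = np.inf  # exclude temporal neighbours
        neigh = np.flatnonzero(d < eps)
        if neigh.size == 0:
            continue
        used += 1
        for k, tau in enumerate(taus):
            # step 2: average distance s(t, tau) of the followed segments
            S[k] += np.mean(np.linalg.norm(states[neigh + tau] - states[t + tau], axis=1))
    return S / used  # step 3: average over all reference states
```

Step 4 then amounts to fitting the slope of $\ln S(\tau)$ in its linear region, e.g. with `np.polyfit(taus, np.log(S), 1)[0]`. Note that Kantz's original formulation averages $\ln s(t,\tau)$ over the reference states rather than $s(t,\tau)$ itself, which is less biased when the local growth rates fluctuate strongly.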
Rosenstein–Kantz Algorithm — Example

[Figure: estimated stretching curves; different lines correspond to different $\varepsilon$ and $m$.]

Adapted from: H. Kantz, A robust method to estimate the maximal Lyapunov exponent of a time series, Phys. Lett. A 185 (1994).
Rosenstein–Kantz Algorithm — Mind How You Average

1. Average over the neighbourhood of a reference state → $s(t,\tau)$.
2. Average $s(t,\tau)$ over all reference states → $S(\tau)$.
3. Obtain $\lambda_1$ from the slope of $\ln S(\tau)$.

The density of states in a region of the attractor affects:
- the reference states
- the states in the neighbourhood of a given reference state

Separating the averaging in steps 1 and 2 (instead of averaging over all pairs closer than $\varepsilon$) ensures that density is accounted for only once (and not twice).
Rosenstein–Kantz Algorithm — Advantages and Problems

- Region of exponential growth can be determined a posteriori. Beware of wishful thinking though.
- Absence of exponential growth is usually detectable (but only usually).
- Region of strong noise influence can be detected and excluded.
- Can only determine the largest Lyapunov exponent.
Extensions and Alternatives

Tangent-space methods:
- require an estimate of the Jacobian
- yield further Lyapunov exponents
- require a lot of data
Lyapunov Spectrum and Types of Dynamics

For bounded, continuous-time dynamical systems:

signs of the Lyapunov exponents | dynamics
−, −−, −−−, … | fixed point
+, ++, +++, …, +0, ++0, … (no negative exponent) | not possible (unbounded)
0, 00, 000, … | no dynamics ($f = 0$)
0−, 0−−, 0−−−, … | periodic (limit cycle)
00−, 00−−, 00−−−, … | quasiperiodic (torus)
000−, 0000−, …, 000−−, … | quasiperiodic (hypertorus)
+0−, +0−−, +0−−−, … | chaos
++0−, +++0−, …, ++0−−, … | hyperchaos, noise
Interpretation

Stability and type of the dynamics:
- $\lambda_1 > 0$ → chaos, unstable dynamics
- $\lambda_1 = 0$ → regular dynamics
- $\lambda_1 < 0$ → fixed-point dynamics

Quantification of the loss of information.

Prediction horizon:
$$\tau_p \approx \frac{-\ln(\rho)}{\sum_{i:\,\lambda_i > 0} \lambda_i}$$
- $\rho$: accuracy of the measurement of the initial state
- $\sum_{i:\,\lambda_i > 0} \lambda_i$: sum of the positive Lyapunov exponents
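A quick numerical illustration of the prediction horizon. The exponent value $\lambda_1 \approx 0.906$ is the commonly quoted largest (and only positive) Lyapunov exponent of the classical Lorenz system; the measurement accuracy $\rho$ is an arbitrary assumed value:

```python
import math

positive_exponents = [0.906]  # Lorenz system: single positive exponent (assumed value)
rho = 1e-6                    # assumed relative accuracy of the initial state

tau_p = -math.log(rho) / sum(positive_exponents)
print(tau_p)  # ≈ 15.2 time units, after which the initial error has grown to order one
```

Note the logarithm: each additional decimal digit of measurement accuracy buys only $\ln(10)/0.906 \approx 2.5$ extra time units of predictability, which is the hallmark of chaotic dynamics.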