Nonlinear Dynamics & Chaos MAT37 2

D J B Lloyd
Department of Mathematics, University of Surrey

March 2, 2

Contents

1 Introduction
2 Mathematical Models
3 Key types of Solutions
  3.1 Steady states
  3.2 Periodic orbits
  3.3 Chaotic dynamics
4 Phase portraits and topological equivalence: flows
  4.1 1D Phase portraits
  4.2 2D Phase portraits
  4.3 Stability and Eigenvalues
  4.4 Stable and Unstable Manifolds
5 Bifurcations: flows
  5.1 Saddle-node bifurcation
  5.2 Transcritical bifurcation
  5.3 Pitchfork bifurcation
  5.4 Hopf bifurcation
6 Global bifurcations
  6.1 Homoclinic bifurcation
  6.2 Period doubling route to chaos
7 One-dimensional maps
  7.1 Cobwebs
  7.2 Topological conjugacy and linearisation
  7.3 Period doubling bifurcation
  7.4 Periodic windows and intermittency
  7.5 Period 3 implies chaos
  7.6 Lyapunov exponents
  7.7 Universality and renormalisation
8 Chaotic 1D maps
  8.1 The Doubling map
  8.2 The Tent map
  8.3 Symbolic dynamics
9 Fractals
  9.1 Self-similarity
  9.2 Fractal dimension
10 Strange Attractors and Repellers
  10.1 Attractors
  10.2 1D maps
11 Application: Secret communication with chaos
Matlab codes

1 Introduction

The study of chaotic systems and fractals is a relatively new mathematics topic. A long, long time ago, we had no means to predict the future, understand the laws of motion, or explain what led to certain natural events. Over the centuries, physicists, chemists, biologists and mathematicians started to unravel the laws of nature and develop techniques that allowed us to predict the future. In particular, a certain Isaac Newton developed calculus and started applying it to all sorts of problems, from the motion of the planets to tides and mechanics, and was able for the first time to predict accurately what would happen. Over time, science found that nature could be predicted, analysed, recorded and exploited for our benefit. By the 18th century, scientists had been so successful in understanding nature that many thought there was nothing left to discover. The laws of nature were written down as dynamical systems describing the motion of every particle in the universe, exactly and forever. The goal of the scientist was to examine the implications of these laws for any particular phenomenon of interest. As Ian Stewart (Professor of Mathematics at the University of Warwick) wrote in his excellent book Does God play dice? [4], chaos gave way to a clockwork world. However, it turns out that predicting the future is not that easy...

Course text book: Both the lectures and notes are strongly based on the excellent course notes of Prof. Bernd Krauskopf, Dr. Hinke Osinga [2] and Prof. John Hogan [] at the University of Bristol, and on the excellent text book by Steven Strogatz, Nonlinear Dynamics and Chaos, published by Westview [5]. Exercises and questions are frequently taken from the text book, where some hints and answers can be found at the back. I strongly recommend getting a copy of this book; though not essential, it will help (I don't get any royalties :-).

Matlab codes and Movies: All the computations in these notes are carried out using Matlab.
The codes for all the computations can be found on the Nonlinear Dynamics & Chaos website. It is hoped that students will use these codes to explore the phenomena discussed in lectures. Both Matlab codes and movies demonstrating chaotic phenomena will also be presented in lectures. At the end of these notes is a listing of the Matlab codes and how to use them.

Hyper-linked notes: Considerable effort has gone into hyper-linking these pdf notes. This means that if you have an internet connection, clicking on the web-links will take your web browser there. Furthermore, sections, equations, figures, examples, theorems etc. are also hyper-linked, allowing you to easily move through the pdf document.

I hope you enjoy this course!

Outline of the lectures:

1. Outline of module, examples of dynamical systems and observations of chaos: chaotic waterwheel and chaotic double pendulum.
2. Definition of dynamical systems and topological equivalence.
3. Phase portraits, linearisation and classification of equilibria.
4. Linearisation of nonlinear flows, stable and unstable manifolds.
5. Examples class going over exercise sheet 1.
6. Introduction to bifurcations, saddle-node bifurcation.
7. Pitchfork bifurcation and Hopf bifurcation.
8. Hopf bifurcation analysis and chemical oscillations.
9. Global bifurcations: homoclinic bifurcation.
10. Global bifurcations: period doubling route to chaos. The Rössler system.
11. Poincaré sections, period doubling bifurcation and Lorenz maps.
12. Examples class going over exercise sheet 2.
13. Introduction to 1D maps.
14. Linearisation, phase portraits, cobwebs, topological conjugacy.
15. Saddle-node and transcritical bifurcation in maps.
16. Period doubling bifurcation in maps.
17. Periodic windows and intermittency.
18. Lyapunov exponents.
19. Introduction to renormalisation.
20. Renormalisation.
21. Examples class going over exercise sheet 3.
22. Introduction to chaotic 1D maps. Symbolic dynamics, shift map.
23. Introduction to Fractals, Cantor set and Koch curve.
24. Similarity and box dimension.
25. Strange attractors, strange saddles.
26. Examples class going over exercise sheet 4.
27. Application: Secret communication with chaos. Movie.
28. Examples class going over worksheet 5.

Some examples of dynamical systems:

Example 1: Newton's cooling. One of the first things Newton described using calculus was the cooling of a liquid. Experimentally, he found that the rate of cooling was directly proportional to the temperature difference between the object and the surrounding background. In terms of a dynamical system, Newton's cooling law can be written as

dx/dt = -α(x - x_s),   (1)

where x = x(t) is the temperature of the liquid, α > 0 is the constant of proportionality and describes the material properties of the liquid (water will cool down at a different rate to, say, honey), and x_s is the background air temperature (e.g., 20 Celsius). If we know the initial temperature of the liquid, then (1) has a unique solution for all time! Let us take the initial temperature x(0) = x_0; solving the ODE (1) yields

x(t) = (x_0 - x_s)e^{-αt} + x_s = Φ^t(x_0),

where Φ^t is known as the evolution operator. Given any x_0, x_s and a value of t, we can find a unique solution. The evolution operator defines a flow starting from x_0. Note that the equilibrium (steady) state, given by dx/dt = 0, is x = x_s. The solution x(t) → x_s as t → ∞ since e^{-αt} → 0. If we take x_0 = 50, x_s = 25 and α = 0.5 we get the solution shown in Figure 1.

Figure 1: Solution x(t) = Φ^t(x_0) of Newton's cooling (1) for x_0 = 50, x_s = 25 and α = 0.5. Note the solution is unique! I.e., there is only one curve emanating from x(0) = 50.

Example 2: Population dynamics. There have been many laboratory studies of the population dynamics of single species, e.g., flies, worms, ants etc. Thomas Malthus, in 1798, was the first to write down some rules for the evolution of the size of populations. A natural law to write down for the rate of change of the population is

dN/dt = births - deaths + migration,   (2)

where N(t) is the size of the population at time t. This equation is called the conservation equation for the population.
The simplest possible model has no migration (for example in a closed lab) and the births and deaths are proportional to the current population size N(t), e.g.,

dN/dt = bN - dN,   so that   N(t) = N_0 e^{(b-d)t} = Φ^t(N_0),   (3)

where b, d are positive constants and the initial population is N(0) = N_0. Again we can write down the evolution operator, Φ^t = e^{(b-d)t}, that describes the evolution (flow) from N_0 to N(t) for some given t.

From (3), we find that the population grows exponentially if b > d and dies out otherwise. This is not particularly realistic, since one would expect there to be some sort of self-limiting process stopping the exponential growth when the population becomes too large. A more realistic model is one where the number of deaths increases if the population is too large, e.g., d = fN(t). Now equation (2) becomes

dN/dt = bN - fN^2 = rN(1 - N/K),   (4)

where now the per capita birth rate is r(1 - N/K) and K is the carrying capacity of the environment, determined by the available sustainable resources, with r, K > 0. We have also set b = r and f = r/K. As for Newton's cooling, we can calculate the steady states by setting dN/dt = 0 to find N = N_s = 0 (i.e., no population) and N = N_s = K (the population has reached the carrying capacity of the environment).

What happens if we have a population whose initial size is close to either N(0) = 0 or N(0) = K? To find out, we look at the evolution of a small perturbation Ñ(t), with |Ñ(t)| very much less than 1, where

N(t) = N_s + Ñ(t),   (5)

and N_s = 0 or N_s = K are the steady states. Now if we start off with Ñ(0) small, then if Ñ(t) → 0 the population size N(t) → N_s and we call N_s a stable steady state. On the other hand, if Ñ(t) becomes large, then the population size N(t) does not stay close to N_s and we call N_s an unstable steady state. We can find out what happens to Ñ(t) by substituting (5) into equation (4):

d[N_s + Ñ(t)]/dt = r[N_s + Ñ(t)](1 - [N_s + Ñ(t)]/K),
dN_s/dt + dÑ/dt = rN_s + rÑ - (r/K)(N_s^2 + 2N_sÑ + Ñ^2)
                = rN_s(1 - N_s/K) + rÑ(1 - 2N_s/K - Ñ/K).

Now, we know that dN_s/dt = 0 since N_s is an equilibrium (by definition), and also rN_s(1 - N_s/K) = 0 since N_s makes the right hand side of equation (4) equal to zero. So we have

dÑ/dt = rÑ(1 - 2N_s/K - Ñ/K).

But if Ñ is very small then Ñ^2 is even smaller, and we can ignore the Ñ^2 term to get

dÑ/dt ≈ rÑ(1 - 2N_s/K).

If we set N_s = 0 in this equation, we find Ñ(t) ≈ C e^{rt}. Since r > 0, e^{rt} → ∞ as t → ∞ and hence Ñ(t) becomes very large.
Therefore N_s = 0 is an unstable steady state. On the other hand, if N_s = K, we find Ñ(t) ≈ C e^{-rt} → 0 as t → ∞. Hence Ñ(t) → 0 and so N_s = K is a stable steady state. In other words, if we start the population near the carrying capacity of the environment, then the population size will tend towards the carrying capacity, whereas if we start with a small population then it will not remain small.

We can actually say more than this for equation (4). Using the method of separation of variables, we can solve (4) explicitly for any initial starting population size N(0) = N_0:

N(t) = N_0 K e^{rt} / [K + N_0(e^{rt} - 1)] = Φ^t(N_0),

where again we have defined the evolution operator Φ^t that acts on a given starting population size to yield the population size at a given value of t.
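The explicit solution above makes the evolution-operator viewpoint concrete. The following sketch (the notes use Matlab; this is a Python equivalent written for illustration, with r = 1 and K = 100 as assumed example values) implements Φ^t for the logistic equation (4) and checks both the flow property Φ^{t+s}(N_0) = Φ^t(Φ^s(N_0)) and the convergence N(t) → K:

```python
import math

def logistic_flow(N0, t, r=1.0, K=100.0):
    """Evolution operator Phi^t for dN/dt = r*N*(1 - N/K),
    via the explicit solution N(t) = N0*K*e^{rt} / (K + N0*(e^{rt} - 1))."""
    ert = math.exp(r * t)
    return N0 * K * ert / (K + N0 * (ert - 1.0))

# Flow (determinism) property: evolving for time 3 equals
# evolving for time 1.2 and then for a further time 1.8.
a = logistic_flow(5.0, 3.0)
b = logistic_flow(logistic_flow(5.0, 1.2), 1.8)
print(abs(a - b) < 1e-9)                    # True

# Long-time behaviour: any N0 > 0 tends to the carrying capacity K.
print(round(logistic_flow(5.0, 50.0), 6))   # 100.0
```

The flow property checked here is exactly the "determinism" condition in the definition of a dynamical system given later in these notes.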

Figure 2: Logistic population growth for equation (4). We plot the evolution N(t) = Φ^t(N_0) for several different initial population sizes N_0. You should check that N(t) → K as t → ∞!

For the interested student, the book by Murray, Mathematical Biology [3], is an excellent introduction to the subject of mathematical biology.

Summary so far: From Calculus we know how to solve systems of linear ODEs and even some nonlinear ODEs like equation (4). For any given initial condition (x(0) or N(0)), the solutions to these equations are unique, smooth and well behaved. However, the logistic equation (4) is one of the very few nonlinear equations that can be solved explicitly. Most of the time this is not possible! What happens when we can't solve explicitly?

Example 3: Discrete Population Dynamics. OK, so we know how to solve the logistic equation (4) and everything is nice, but the population size isn't continuously changing. What happens when we make the change in population size discrete? To do this, we will re-do Example 2, but replace dN/dt by the finite difference N_{n+1} - N_n (similar to making a numerical approximation of dN/dt). Again, a natural law to write down for the rate of change of the population is

N_{n+1} - N_n = births - deaths + migration,   (6)

where N_n is the size of the population at time/step n. This equation is called the discrete conservation equation for the population. The simplest possible model has no migration (for example in a closed lab) and the births and deaths are proportional to the current population size N_n, e.g.,

N_{n+1} - N_n = bN_n - dN_n,   so that   N_n = N_0(1 + b - d)^n = Φ^n(N_0),   (7)

where b, d are positive constants and the initial population is N_0. Again we can write down the evolution operator, Φ^n = (1 + b - d)^n, that describes the evolution (map) from N_0 to N_n for some given n. From (7), we find that the population grows geometrically if 1 + b - d > 1, and otherwise it dies out.
As before, this is not particularly realistic, since one would expect there to be some sort of self-limiting process stopping the exponential growth when the population becomes too large. A more realistic model is one where the number of deaths increases if the population is too large, e.g., d = fN_n. Now equation (6) becomes

N_{n+1} - N_n = bN_n - fN_n^2,   i.e.,   N_{n+1} = N_n[1 + r - (r/K)N_n],   (8)

where b = r and f = r/K. We may do a little rescaling with N_n = [(1 + r)/r] K x_n and, setting 1 + r = r̂, we get the famous logistic map

x_{n+1} = r̂ x_n(1 - x_n).   (9)

Now, unlike the differential equation versions (4) and (3), for (9) or (8) there is no explicit general solution! So what happens? Surely the dynamics are the same as those for (4), where all the solutions converged to the steady state N = K? Well, let's run a Matlab program:

    r = 4;
    x(1) = 0.2;
    for n = 2:50
        x(n) = r*x(n-1)*(1-x(n-1));
    end
    plot(x);

Running this yields the diagram in Figure 3.

Figure 3: Chaotic rabbits? Plot of the Matlab computation of x_{n+1} = 4x_n(1 - x_n) with x_1 = 0.2. Note that, unlike the logistic differential equation (4), there is no convergence to any steady state! Matlab code for the figure can be found here: [Chaotic Rabbits]

So by making the population dynamics discrete instead of continuous, we have got some very complicated dynamics, sometimes called chaos. Is this typical? Does it only occur for maps? How can we analyse it?
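For readers not using Matlab, here is an equivalent Python sketch of the same computation (plotting aside), which confirms that the orbit wanders in [0, 1] without settling down:

```python
def logistic_orbit(x1, r, n):
    """Iterate the logistic map x_{n+1} = r*x_n*(1 - x_n), returning n iterates."""
    xs = [x1]
    for _ in range(n - 1):
        x = xs[-1]
        xs.append(r * x * (1.0 - x))
    return xs

orbit = logistic_orbit(0.2, 4.0, 50)
# For r = 4 the orbit stays in [0, 1] but never repeats or converges.
print(all(0.0 <= x <= 1.0 for x in orbit))    # True
print(len(set(round(x, 6) for x in orbit)))   # many distinct values: no steady state
```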

2 Mathematical Models

Mathematics becomes really powerful when we try to predict the future. For this entire course, we will be solely interested in mathematical models that describe how one or more (continuous) quantities evolve with time via a set of rules (e.g., Newton's rule for the rate of liquid cooling (1), or the rate of change of a population (2)). These types of mathematical models are called Dynamical Systems.

Definition 1 (Dynamical System) A dynamical system is a system whose behaviour can be described by an evolution operator Φ^t : X → X, defined on a space X for all t ∈ T. The space X is called the state space or phase space. The space T, or time, can be R, i.e., time is continuous, or T = Z, in which case time is discrete. The evolution operator must be such that the following two conditions hold for any initial condition x_0 ∈ X and any t, s ∈ T:

1. Φ^0(x_0) = x_0 ("no time, no evolution"),
2. Φ^{t+s}(x_0) = Φ^t(Φ^s(x_0)) ("determinism").

If T = R, then the dynamical system is a continuous time system, which is usually given as a system of differential equations (e.g. liquid cooling (1))

ẋ = f(x),   (10)

where f is a function defined on X. The evolution operator Φ^t describes the flow of the vector field. If T = Z, then the dynamical system is a discrete system and is defined by a map (e.g., population dynamics (6))

x ↦ g(x),   (11)

where g is a function defined on X. The evolution operator is the map g itself, and Φ^t(x) = Φ^n(x) = g^n(x).

We can find out what happens to the initial point x_0 as time evolves by applying the evolution operator Φ^t. As we increase time, we could follow x_0 under the flow of (10) or compute the iterates of x_0 by applying the function g in (11). If f in (10) is sufficiently smooth, then we can go back in time and follow the flow backward to see where x_0 came from. The entire future and history of a point, {Φ^t(x_0) | t ∈ T}, is called the orbit of x_0. For discrete time systems, g must be invertible in order to look at the entire orbit.
If g is not invertible, we will only consider the forward orbit {Φ^t(x_0) | t ∈ T, t ≥ 0}.

Remark 1 Throughout this course we will always assume that the function f in (10) is sufficiently smooth so that both the past and future are uniquely determined by the initial condition (see the 2nd year Ordinary Differential Equations course).

Example 4: The discrete linear population model (7), N_{n+1} = (1 + b - d)N_n, is a discrete dynamical system for any b, d ∈ R. We may write this as a map N ↦ (1 + b - d)N = g(N) with state space X = R and evolution operator Φ^t(N) = Φ^n(N) = g^n(N). Orbits are given by iterates of the map. Note this map is invertible, since for any given N_{n+1} we can find only one N_n point.
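Invertibility here just means the single multiplication can be undone by a single division, so every state has exactly one predecessor and the full (forward and backward) orbit exists. A small Python sketch (the values b = 0.3 and d = 0.1 are illustrative):

```python
def g(N, b=0.3, d=0.1):
    """Linear population map (7): N_{n+1} = (1 + b - d) * N_n."""
    return (1.0 + b - d) * N

def g_inv(N, b=0.3, d=0.1):
    """Inverse map: each N_{n+1} has exactly one preimage N_n."""
    return N / (1.0 + b - d)

N0 = 50.0
print(abs(g_inv(g(N0)) - N0) < 1e-9)            # True: the map is invertible
print(abs(g(g(g(N0))) - N0 * 1.2**3) < 1e-9)    # True: Phi^3 = g^3
```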

Figure 4: Figure showing that the discrete linear population model (7) is invertible, i.e., for a given N_{n+1} iterate, there is only one N_n point that it came from.

Example 5: The logistic map x_{n+1} = r x_n(1 - x_n) is a discrete dynamical system for any r ∈ R. We may rewrite this as a map x ↦ r x(1 - x) = g(x), with state space X = R and evolution operator given by Φ^t(x) = Φ^n(x) = g^n(x). Orbits are given by iterates of the map. Note this map is not invertible, since for a given x_{n+1} iterate there may be two x_n iterates that it could have come from.

Figure 5: Figure showing the non-invertible nature of the logistic map. For a given x_{n+1} = 0.7 there may be two x_n's that it could have come from.

Example 6: Newton's cooling equation ẋ = -α(x - x_s) is a continuous time system for any α, x_s ∈ R. We may write it as a vector field ẋ = f(x), where f(x) = -α(x - x_s), with phase space X = R. Note that f is smooth (in fact infinitely differentiable) and so solutions are uniquely determined in both forward and backward time. The evolution operator is given by Φ^t(x_0) = (x_0 - x_s)e^{-αt} + x_s.
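The two preimages in Figure 5 can be computed explicitly by solving the quadratic r x(1 - x) = y, which has two roots whenever y < r/4. A Python sketch (taking r = 4 for concreteness; Figure 5 itself need not use this value):

```python
import math

def logistic_preimages(y, r=4.0):
    """Solve r*x*(1 - x) = y: two preimages x when y < r/4, none when y > r/4."""
    disc = 1.0 - 4.0 * y / r
    if disc < 0:
        return []
    s = math.sqrt(disc)
    return [(1.0 - s) / 2.0, (1.0 + s) / 2.0]

pre = logistic_preimages(0.7)
print(len(pre))                                        # 2
print([round(4.0 * x * (1.0 - x), 12) for x in pre])   # [0.7, 0.7]
```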

Example 7: The planar pendulum, governed by the equation

M l θ̈ + c θ̇ + M g sin θ = 0,

where M is the mass, l is the length of the pendulum, g is the gravitational constant and c the damping coefficient. Rescaling time t → √(l/g) t, defining x = θ and y = √(l/g) θ̇, and letting D = c/(M √(gl)), yields the equations of motion

ẋ = y,
ẏ = -Dy - sin x.

Figure 6: Diagram of the planar pendulum.

Since x = θ is the angular displacement from the vertical down position, this variable is periodic with period 2π, implying that the phase space X is a cylinder. Alternatively, we can take X = R^2, and we will see the periodicity in x in the solutions. Given initial conditions on the initial angular position θ(0) and initial angular velocity θ̇(0), solutions to the pendulum equations are uniquely determined for all time.

Example 8: The Lorenz system

ẋ = σ(y - x),   (12)
ẏ = ρx - y - xz,   (13)
ż = -βz + xy,   (14)

is a three-dimensional continuous dynamical system. Even though we cannot explicitly solve the system of ODEs, the solutions are uniquely determined in both forward and backward time by the initial condition (x_0, y_0, z_0) ∈ R^3 in the phase space X = R^3.

Example 9: The heat equation. Partial differential equations may also be considered as dynamical systems, e.g., the heat equation

u_t = ∂^2 u/∂x^2,   u(x, 0) = g(x).   (15)

The heat equation can be thought of as a continuous dynamical system, T = R, where the phase space X is a function (Banach) space, e.g., X = C^2(R). In this case, the phase space also includes any boundary conditions. One can also define an evolution operator which describes how the initial condition u(x, 0) = g(x) evolves. Everything that can happen in ODEs can also happen in PDEs, but PDEs have far more degrees of freedom and so there is the possibility of far more complicated dynamics! For those interested in dynamical systems methods applied to PDEs, see Nonlinear Patterns (MAT322).
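Even without an explicit solution, the Lorenz flow can be followed numerically. The sketch below uses a self-contained fourth-order Runge-Kutta step (rather than any particular ODE library) at the classic parameter values σ = 10, ρ = 28, β = 8/3, chosen here for illustration, and checks that the orbit stays in a bounded region, as an orbit on the butterfly attractor should:

```python
def lorenz(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Right-hand side of the Lorenz system (12)-(14)."""
    x, y, z = s
    return (sigma * (y - x), rho * x - y - x * z, x * y - beta * z)

def rk4_step(f, s, h):
    """One classical 4th-order Runge-Kutta step of size h."""
    k1 = f(s)
    k2 = f(tuple(si + 0.5 * h * ki for si, ki in zip(s, k1)))
    k3 = f(tuple(si + 0.5 * h * ki for si, ki in zip(s, k2)))
    k4 = f(tuple(si + h * ki for si, ki in zip(s, k3)))
    return tuple(si + h * (a + 2 * b + 2 * c + d) / 6.0
                 for si, a, b, c, d in zip(s, k1, k2, k3, k4))

state = (1.0, 1.0, 1.0)
for _ in range(5000):              # integrate to t = 50 with step h = 0.01
    state = rk4_step(lorenz, state, 0.01)

# The orbit neither settles nor escapes: it stays on the bounded attractor.
print(all(abs(c) < 60.0 for c in state))   # True
```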

3 Key types of Solutions

From our early investigations into liquid cooling and population dynamics, it is clear that there is an initial transient part of the dynamics, and then the dynamics settles down to, usually, one of three key types of solutions:

1. steady states,
2. periodic orbits, and
3. chaotic dynamics.

When dealing with nonlinear dynamical systems, we are less interested in the initial transient phase and more in the long-time dynamics, as these govern how the system will evolve, i.e., will you evolve to a steady state or a periodic orbit? One doesn't often really care how you got there. It is also extremely rare to be able to solve a nonlinear dynamical system analytically. Hence, we are less interested in precisely what the solution is doing and more in what the solution qualitatively (topologically) looks like. The three types of solutions define three qualitative types of possible dynamics (the list above is by no means exhaustive!). We then classify regions of parameter space where the solutions look qualitatively similar, i.e., steady states, periodic orbits, or chaotic dynamics. The first part of this course is concerned with how one can go from one type of qualitative behaviour to another.

3.1 Steady states

For ODEs, we define steady states (equilibria) to be where ẋ = 0, i.e., the state does not change/evolve over time. An equilibrium x* can be found by solving the nonlinear problem

f(x*) = 0,   (16)

where f is the right-hand side of equation (10).

For maps, we define steady states (fixed points) to be where x_{n+1} = x_n, i.e., the state does not change/evolve over time. A fixed point x* can be found by solving the nonlinear problem

x* = g(x*),   (17)

where g is the right-hand side of equation (11).

In both cases, one has to solve a nonlinear algebraic problem. While solving either (16) or (17) may seem to be significantly easier than finding a solution for the general dynamics of (10) or (11), even this may be impossible to do!
Solution strategies for solving (16) or (17):

1. Evolve the dynamical system and hope you converge to a steady state. Problem: there might not be a steady state, or you might converge to something that isn't a steady state...
2. If the phase space is one-dimensional, i.e., X = R, then one can look for steady states by graphical inspection: plot f(x) and look for zeros of f in the case of ODEs, or plot g(x) and look for crossings with the diagonal g(x) = x in the case of maps. See the example sheet for examples of this.
3. If the phase space is higher-dimensional, one can guess a steady state and use a (globalised) Newton's method to solve (16) or (17). Matlab's fsolve routine in the optimisation toolbox will do this. Problem: no guarantee of convergence to a steady state...

We will discuss steady states in more detail in the following sections, starting by understanding steady states in ODEs.
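As a sketch of the Newton's method option (without the globalisation that fsolve adds), the following bare-bones Python routine finds equilibria of the 1D vector field ẋ = sin x, whose exact steady states are x = kπ:

```python
import math

def newton(f, df, x0, tol=1e-12, max_iter=50):
    """Bare-bones Newton iteration for f(x) = 0."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Different initial guesses converge to different equilibria of xdot = sin(x).
print(abs(newton(math.sin, math.cos, 0.5)) < 1e-9)            # True  (x* = 0)
print(abs(newton(math.sin, math.cos, 3.0) - math.pi) < 1e-9)  # True  (x* = pi)
```

Note the caveat from the list above: with a poor initial guess, the iteration can diverge or land on an unexpected equilibrium.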

3.2 Periodic orbits

Periodic orbits of dynamical systems are the simplest non-trivial evolving solutions.

For ODEs (10), we define a periodic orbit as follows: for a point x_0 ∈ X there exists τ ∈ R with τ > 0 such that Φ^τ(x_0) = x_0. The periodic orbit is defined as the closed curve {Φ^t(x_0) | 0 ≤ t ≤ τ_0}, where τ_0 > 0 is the smallest number τ such that Φ^τ(x_0) = x_0. τ_0 is called the period of the periodic orbit.

For maps (11), we define a periodic orbit as follows: for a point x_0 ∈ X there exists n ∈ Z with n > 0 such that g^n(x_0) = x_0, while g^i(x_0) ≠ x_0 for all 0 < i < n. The periodic orbit is defined as the set of n points {g^i(x_0) | 0 ≤ i < n}, and n is called the period of the periodic orbit.

Finding periodic orbits is even harder than finding steady states, and so one usually has to resort to numerical approximations.

3.3 Chaotic dynamics

Now this is where things start getting really interesting! Mathematicians cannot even agree on what exactly chaos is. However, we are going to go with the following (vague) definition.

Definition 2 (Chaos) Apparent stochastic (random) behaviour occurring in a deterministic system.

Figure 7: Examples of chaotic orbits in the logistic map and the Lorenz equations. In both systems, we have started the system off with an initial condition and let it run. Then we did the same computation, but this time with a very small perturbation to the initial condition (of size 10^{-6}). In both cases we see that the slightly perturbed initial condition evolves forward in time, initially staying close to the unperturbed evolution. Eventually, though, the orbits become very different! Matlab code: [Sensitive dependence on initial conditions] and [lorenz.m]

A dynamical system is chaotic if (a subset of the) orbits are confined to a bounded region, but behave unpredictably (randomly). We have already seen one example of a chaotic system in the form of the logistic map x_{n+1} = 4x_n(1 - x_n).
Another example is the Lorenz equations (12)-(14) with

σ = 10, ρ = 28, β = 8/3; see Figure 7. In this case, arbitrary orbits seem to accumulate on an object called the butterfly attractor. To demonstrate this unpredictability, we can start the chaotic dynamical system from an initial condition, evolve it for some long time and record the solution (orbit). Now, if the system were predictable, we could re-start the dynamical system from the same initial condition plus a very small perturbation, and we would find that the new solution (orbit) would follow roughly the old orbit. But in chaotic systems we observe the following. The two orbits behave similarly in a loose sense: in the case of the logistic map the orbits stay bounded in [0, 1], and in the Lorenz equations the orbits trace out the butterfly attractor. On the other hand, one would expect two nearby initial conditions to trace out similar paths for all time, or at least for a very long time; e.g., the number of turns in one of the butterfly wings to be followed by the same number of turns in the other wing. For chaotic systems this is not true: the orbits very quickly behave differently, and there is no memory of the fact that the initial conditions were once very close together. This property is known as sensitive dependence on initial conditions. In other words, the precision of your initial condition, i.e., the number of decimal places, matters!

However, the logistic map (and the Lorenz equations) aren't always chaotic. For example, if we change the parameter r to r = 3.1 and evolve the logistic map x_{n+1} = r x_n(1 - x_n), then we find a periodic orbit; see Figure 8.

Figure 8: Evolution of the logistic map x_{n+1} = 3.1 x_n(1 - x_n) starting with x_1 = 0.2. The orbit converges to a periodic orbit with iterates ..., 0.765, 0.558, 0.765, 0.558, 0.765, 0.558, ... Matlab code: [Periodic orbit of Logistic map]

So we have just changed a parameter from 4 to 3.1 (not a big change) and we have found very predictable, regular behaviour. How do we know if a system is going to be regular or unpredictable?
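Sensitive dependence on initial conditions is easy to reproduce numerically: iterate the logistic map at r = 4 from two initial conditions that differ by 10^{-6} and track their separation. A Python sketch:

```python
def logistic(x, r=4.0):
    """Logistic map x_{n+1} = r*x_n*(1 - x_n)."""
    return r * x * (1.0 - x)

x, y = 0.2, 0.2 + 1e-6          # two initial conditions 1e-6 apart
sep = []
for _ in range(60):
    sep.append(abs(x - y))
    x, y = logistic(x), logistic(y)

print(sep[0] < 1e-5)            # True: the orbits start essentially identical
print(max(sep) > 0.1)           # True: they decorrelate to order-one separation
```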
Necessary conditions for chaos:

1. either a one- (or more) dimensional non-invertible iterated map (e.g. the logistic map), or a two- (or more) dimensional invertible iterated map (e.g. the Hénon map), or a three- (or more) dimensional system of (first order) differential equations (e.g. the Lorenz equations);
2. nonlinearity!

So, for example, the planar pendulum cannot be chaotic, since it is only a system of two first-order ODEs, even though it is nonlinear. Some examples of differential equations that do have chaotic dynamics are:

Example 10: Double Pendulum. An extension of the planar pendulum is the double pendulum, where we connect two planar pendulums together. Ignoring friction, we can write down the equations of motion for this problem using Lagrangian dynamics. This yields the system of equations

(M_1 + M_2) l_1 θ̈_1 + M_2 l_2 θ̈_2 cos(θ_1 - θ_2) + M_2 l_2 θ̇_2^2 sin(θ_1 - θ_2) + g(M_1 + M_2) sin θ_1 = 0,
M_2 l_2 θ̈_2 + M_2 l_1 θ̈_1 cos(θ_1 - θ_2) - M_2 l_1 θ̇_1^2 sin(θ_1 - θ_2) + M_2 g sin θ_2 = 0.

This system can be re-written as a first order ODE system of the form

θ̇_1 = φ_1,   φ̇_1 = f_1(θ_1, θ_2, φ_1, φ_2),
θ̇_2 = φ_2,   φ̇_2 = f_2(θ_1, θ_2, φ_1, φ_2).

Hence we have enough dimensions, and the system is highly nonlinear. The dynamics of the double pendulum can be seen in this movie on the Chaos & Fractals website: [Double Pendulum].

Example 11: Chaotic Waterwheel. The chaotic waterwheel shown in the movie on the Chaos & Fractals website, [Chaotic Waterwheel], is modelled by the Lorenz equations (12)-(14); see also Chapter 9.1 of Nonlinear Dynamics and Chaos by Steven Strogatz. In the experiment, coloured water is pumped into chambers (with holes in them) in a cylinder that rotates. If the brake is set correctly, the waterwheel will spin left and right chaotically.

4 Phase portraits and topological equivalence: flows

Before we can try to understand chaotic dynamics, we will start by understanding predictable dynamics. We just want to know roughly what the orbits of dynamical systems look like, i.e., to draw sketches. In particular, we are interested in what happens to the solutions as t → ∞. For example, the sketch of an orbit of a dynamical system going to an equilibrium should look very different to the sketch of one escaping to infinity. However, we want the sketches of solutions going to the same equilibrium from different initial conditions to look the same. To do this, we will use the idea of topological equivalence. You will have come across the idea of phase portraits in your 2nd year Ordinary Differential Equations course.

4.1 1D Phase portraits

Consider the linear one-dimensional ODE

ẋ = αx   (18)

with x ∈ R and a parameter α ∈ R. Solutions of (18) are simply x(t) = e^{αt} x(0) and are uniquely determined by the choice of initial condition. In particular, we have the following qualitatively similar solutions:

- If α > 0, x(t) → ±∞ as t → ∞ (x(t) escapes to +∞ if x(0) > 0 and to -∞ if x(0) < 0);
- If α = 0, x(t) = x(0) for all t;
- If α < 0, x(t) → 0 as t → ∞.

The case α = 0 is unusual and we will set it aside here. x = 0 is an equilibrium solution of equation (18). The phase portraits for (18) are shown in Figure 9.

Figure 9: Phase portraits of (18). The equilibrium x = 0 is an attractor for α < 0 (i.e., all initial conditions converge to zero) and x = 0 is a repeller for α > 0 (i.e., all initial conditions except x(0) = 0 escape to ±∞).

For all α < 0, the phase portraits look the same, and similarly for all α > 0. In general, for 1D ODEs of the form ẋ = f(x), one can quickly plot the phase portrait by plotting f(x): wherever f(x) > 0 the derivative ẋ > 0 and hence the flow moves to the right, while wherever f(x) < 0 the derivative ẋ < 0 and the flow moves to the left.
Example 12: The phase portrait for ẋ = sin x is sketched below. Note that the phase portrait near x = 0 and x = 2π looks qualitatively similar to that for α > 0 in Figure 9. Similarly, near x = -π and x = π the phase portraits look qualitatively similar to that for α < 0 in Figure 9. We call these two systems topologically equivalent in these respective neighbourhoods.
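The comparison with Figure 9 is exactly linearised stability: near an equilibrium x*, the flow ẋ = f(x) behaves like (18) with α = f'(x*). This one-line classification can be written in Python and applied to ẋ = sin x, whose equilibria are x = kπ with f'(x) = cos x:

```python
import math

def classify_1d(df_at_eq):
    """Classify a 1D equilibrium from f'(x*), by analogy with (18)."""
    if df_at_eq > 0:
        return "repeller"        # like alpha > 0
    if df_at_eq < 0:
        return "attractor"       # like alpha < 0
    return "non-hyperbolic"      # linearisation is inconclusive

print(classify_1d(math.cos(0.0)))        # repeller   (x* = 0, 2*pi, ...)
print(classify_1d(math.cos(math.pi)))    # attractor  (x* = -pi, pi, ...)
```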

[Figure: plot of f(x) = sin x between x = -π and x = 2π, with the sign of f(x) marked in each interval and arrows indicating the direction of the flow between the equilibria x = kπ.]

Definition 3 (Topological Equivalence) Suppose we have two vector fields

ẋ = f(x),   (19)
ẏ = g(y).   (20)

Then (19) on a domain U is topologically equivalent to (20) on a domain V if we can find a continuous and invertible map (i.e., a homeomorphism) h : U → V that maps orbits of (19) to orbits of (20), respecting the direction of time.

So in their respective domains U and V, the vector fields have the same number of equilibria (one in our example) and identical phase portraits.

4.2 2D Phase portraits

We consider general autonomous two-dimensional dynamical systems of the form

ẋ = f(x, y),
ẏ = g(x, y).   (21)

Recall that an autonomous ordinary differential system is one where the right-hand side does not depend explicitly on time.

Example 13: Planar pendulum system

ẋ = y =: f(x, y),
ẏ = -Dy - sin x =: g(x, y),

where x = θ (angular position) and D = c/(M √(gl)).

Example 14: Competing species model. Population dynamics for two competing species x and y (that define fractions of an expected population size) competing for the same limited food resource, such that they inhibit each other's growth:

ẋ = x(1 - x - αy) =: f(x, y),   (22)
ẏ = λy(1 - y - βx) =: g(x, y).   (23)

So the first species would be considered healthy if x = 1, and similarly for the second species y = 1 is a healthy population size. If either species is extinct, then the other species is governed by the logistic equation (4).

Example 15: Harmonic oscillator with a friction term. By defining y = ẋ we obtain the two-dimensional autonomous dynamical system

ẋ = y,
ẏ = -cy - ω^2 x.

[Figure: mass M on a spring, obeying ẍ + cẋ + ω^2 x = 0.]

The phase curves or phase trajectories of (21) are solutions of

dy/dx = g(x, y)/f(x, y).   (24)

Through any point (x_0, y_0) there is a unique curve, except at equilibria (x_s, y_s) where f(x_s, y_s) = g(x_s, y_s) = 0. Transforming x → x - x_s, y → y - y_s, the point (0, 0) is an equilibrium of the transformed equation. Thus, without loss of generality, we now consider (24) at the equilibrium (0, 0); that is, f(x, y) = g(x, y) = 0 at x = 0, y = 0. We expand both f and g in a Taylor series (using the fact that we have assumed enough smoothness, analyticity to be precise) and, retaining only the linear terms, we get

dy/dx = (cx + dy)/(ax + by),   A = ( a  b ; c  d ) = ( ∂f/∂x  ∂f/∂y ; ∂g/∂x  ∂g/∂y ) |_{(0,0)},

which defines the matrix A and the constants a, b, c and d. The linear form is equivalent to the system

ẋ = ax + by,   ẏ = cx + dy,   (25)

whose solutions give parametric forms of the phase curves, with t as the parameter. Let λ_+ and λ_- be the eigenvalues of A:

det( a - λ   b ; c   d - λ ) = 0,   so   λ_± = (1/2)( a + d ± [(a + d)^2 - 4 det A]^{1/2} ).

Solutions of (25) are then

(x, y)^T = c_1 v_+ e^{λ_+ t} + c_2 v_- e^{λ_- t},   (26)

where c_1 and c_2 are arbitrary constants and v_+ and v_- are the eigenvectors of A corresponding to λ_+ and λ_- respectively. The form (26) is for distinct eigenvalues; if the eigenvalues are equal, the solutions are proportional to (c_1 + c_2 t)e^{λt}.

We can separate the phase portraits into three classes.

Attractors: here both eigenvalues have negative real part. If both eigenvalues are real and negative, then the phase portraits look qualitatively similar to panel (a). [Panel (a): both eigenvalues on the negative real axis; phase portrait of a stable node.]

If the eigenvalues are complex with negative real part then the phase portrait is a spiral:
[Figure (b): complex-conjugate eigenvalues with negative real part; phase portrait of a stable spiral.]
To work out which way the spiral goes for a given system, calculate ẋ and ẏ at a point just away from the origin. In the phase portrait above, the signs of ẋ and ẏ were calculated at the marked point, and following this initial direction the spiral can be traced out.

Repellers: here both eigenvalues have positive real part. If both eigenvalues are real and positive then the phase portrait looks qualitatively like
[Figure (c): two real positive eigenvalues; phase portrait of an unstable node.]
If the eigenvalues are complex with positive real part then the phase portrait is an unwinding spiral:
[Figure (d): complex-conjugate eigenvalues with positive real part; phase portrait of an unstable spiral.]

Saddles: here one eigenvalue is positive and the other is negative. The phase portrait qualitatively looks like
[Figure (e): one positive and one negative real eigenvalue; phase portrait of a saddle.]

If there are no eigenvalues with zero real part, then the two phase portraits for the attractors are qualitatively the same (and similarly for the repellers).

Theorem 4.1 (Topological equivalence for linear flows) Consider the two linear vector fields
ẋ = Ax, (27)
ẋ = Bx, (28)

where A and B are n × n matrices. If n₀(A) = n₀(B) = 0, that is, the origin is a hyperbolic equilibrium for both systems, then (27) and (28) are topologically equivalent if and only if n₊(A) = n₊(B) (and consequently also n₋(A) = n₋(B)). This theorem is true for general linear systems of size n.

4.3 Stability and Eigenvalues

In general most vector fields will be high dimensional,
ẋ = f(x), x ∈ Rⁿ, f : Rⁿ → Rⁿ, (29)
for some n with f nonlinear. Suppose we know an equilibrium x_s, that is,
f(x_s) = 0. (30)
We wish to find out whether, if we start near the equilibrium state, the solution x(t) converges to x_s as t → ∞, i.e., whether x_s is a stable steady state. To do this, we consider the evolution of
x(t) = x_s + ξ(t), (31)
where ξ(t) is assumed to be a small perturbation of the steady state x_s. Substituting (31) into the ODE (29) yields
d[x_s + ξ(t)]/dt = f(x_s + ξ(t)),
dx_s/dt + dξ/dt = f(x_s + ξ(t)),
dξ/dt = f(x_s + ξ(t)), (32)
since x_s is a steady state with ẋ_s = 0. Now we expand f(x_s + ξ(t)) in a Taylor series, remembering that ξ(t) is small:
f(x_s + ξ) = f(x_s) + J(x_s)ξ(t) + ⋯, (33)
where J(x) is the Jacobian n × n matrix of f defined as
J(x_s) = ( ∂f₁/∂x₁ ⋯ ∂f₁/∂xₙ ; ⋮ ⋱ ⋮ ; ∂fₙ/∂x₁ ⋯ ∂fₙ/∂xₙ )|_{x = x_s},  where x = (x₁, …, xₙ)ᵀ and f(x) = (f₁(x), …, fₙ(x))ᵀ.
Substituting the Taylor series (33) for the right-hand side of (32) yields
dξ/dt = f(x_s) + J(x_s)ξ(t) + ⋯
      = J(x_s)ξ(t) + ⋯, since f(x_s) = 0,
      ≈ J(x_s)ξ(t), ignoring higher-order terms in ξ(t).
This process of ignoring higher-order (nonlinear) terms in ξ(t) is called linearisation. The stability of the equilibrium x_s is governed by the linear ODE system
dξ/dt = J(x_s)ξ(t), (34)
whose solution is ξ(t) = e^{J(x_s)t}ξ(0) (see first year Linear Algebra). The eigenvalues of the matrix J(x_s) are important for determining the stability properties of the equilibrium x_s. We define the following three numbers:

n₀ = number of eigenvalues of J(x_s) with zero real part,
n₊ = number of eigenvalues of J(x_s) with positive real part,
n₋ = number of eigenvalues of J(x_s) with negative real part.

Definition 4 An equilibrium x_s is called hyperbolic if n₀ = 0. We call a hyperbolic equilibrium x_s an attractor if n₋ = n and n₊ = 0, a repeller if n₊ = n and n₋ = 0, and a saddle point if both n₊ > 0 and n₋ > 0.

Attractors have the property ξ(t) → 0 as t → ∞, i.e., x(t) = x_s + ξ(t) → x_s as t → ∞, so all initial starting points near x_s converge to x_s and the equilibrium is said to be stable. On the other hand, repellers have |ξ(t)| → ∞ as t → ∞, so x(t) = x_s + ξ(t) diverges and the steady state x_s is said to be unstable: all initial starting points near x_s diverge from x_s. Of course, if ξ(t) becomes large then this violates our Taylor series expansion, requiring us to take higher-order terms into account far from x_s.

When does linearising about x_s tell you the stability properties of x_s? The Hartman–Grobman theorem tells you when the linearised system governs the stability of the equilibrium.

Theorem 4.2 (Hartman–Grobman) If the vector field
ẋ = f(x), (35)
has a hyperbolic equilibrium x_s, then there exists a neighbourhood U of x_s such that (35) on U is topologically equivalent to the linearised system
ξ̇ = J(x_s)ξ
on an (arbitrary) neighbourhood V of the origin.

Example 16: The system ẋ = sin x near x = π is topologically equivalent to the system
ẋ = ( d/dx sin x |_{x=π} ) x = (cos π)x = −x.

Example 17: Planar pendulum
ẋ = y, ẏ = −Dy − sin x.
Equilibria are given by
y = 0 and sin x = 0 ⟹ x = kπ, k ∈ Z,
and the Jacobian matrix is
J(x, y) = ( 0 1 ; −cos x −D ).
Hence, for the equilibria (x_s, y_s) = (2kπ, 0) the matrix becomes
J( 2kπ ; 0 ) = ( 0 1 ; −1 −D ),
with eigenvalues λ± = −D/2 ± ½√(D² − 4). Now

if D = 0, λ± = ±i, and so the equilibrium is not hyperbolic, i.e., n₀ = 2, n₊ = 0, n₋ = 0;
if 0 < D < 2, λ± are complex conjugate with negative real parts and the equilibrium is hyperbolic and an attractor, i.e., n₀ = 0, n₊ = 0, n₋ = 2;
if −2 < D < 0, λ± are complex conjugate with positive real parts and the equilibrium is hyperbolic and a repeller, i.e., n₀ = 0, n₊ = 2, n₋ = 0.

For the equilibria (x_s, y_s) = (2kπ + π, 0) the matrix becomes
J( 2kπ + π ; 0 ) = ( 0 1 ; 1 −D ),
with eigenvalues λ± = −D/2 ± ½√(D² + 4). Hence, one eigenvalue is always positive and the other always negative: these equilibria are saddle points.

4.4 Stable and Unstable Manifolds

Linearisation tells us what happens to the dynamics locally, near each hyperbolic equilibrium. However, we would like to know what happens globally, e.g., how do we travel from near one equilibrium to another equilibrium? An important global object that will allow us to paste together the local information near each equilibrium to obtain the entire phase portrait is the following.

Definition 5 Let x_s be a saddle point of the vector field ẋ = f(x). The set of all points ending up at x_s under the flow Φ_t of the vector field,
W^s(x_s) = { x ∈ Rⁿ | Φ_t(x) → x_s as t → ∞ },
is called the stable manifold of x_s. Similarly, the set of all points coming from x_s,
W^u(x_s) = { x ∈ Rⁿ | Φ_t(x) → x_s as t → −∞ },
is called the unstable manifold of x_s.

The stable and unstable manifolds of saddle points in two-dimensional vector fields are one-dimensional curves. In general, the dimension of the unstable manifold equals n₊ and the dimension of the stable manifold equals n₋. It is not easy to find stable and unstable manifolds; in general it is not possible to compute them analytically. However, we do know the following theorem.

Theorem 4.3 Let x_s be a saddle point of ẋ = f(x). Then x_s is also a saddle point of the linearised system ẋ = J(x_s)(x − x_s). Let E^s(x_s) and E^u(x_s) denote the stable and unstable eigenspaces of the linearised system.
In a small enough neighbourhood U of x_s, there exists a local piece of W^s(x_s) that is a smooth manifold, given as the graph of some function h : E^s(x_s) → E^u(x_s). Furthermore, W^s_loc(x_s) is tangent to E^s(x_s) at x_s. The same is true for the local unstable manifold W^u_loc(x_s), which is defined in the same way. Pictorially this looks like Figure 10.

Example 18: To illustrate global (un)stable manifolds and how to obtain the entire phase portrait, let us look again at the planar pendulum
ẋ = y, ẏ = −Dy − sin x.
Remember that x = θ is the angular displacement from the vertical down position and so is periodic with period 2π. The parameter D is proportional to the amount of friction in the system: no friction D = 0, friction D > 0. We have already found the equilibria (x_s, y_s) = (kπ, 0) and their stability:

[Figure 10: Diagram of the stable and unstable manifolds W^s_loc(x_s) and W^u_loc(x_s) near a saddle point, tangent to the eigenspaces E^s(x_s) and E^u(x_s).]

if (x_s, y_s) = (2kπ, 0): the eigenvalues are ±i for D = 0 (no friction), and complex conjugate with negative real part for D small (small friction). These equilibria are spiral attractors.
if (x_s, y_s) = (2kπ + π, 0): one eigenvalue is always positive and the other always negative; these are saddle points, with eigenvectors [1, 1]ᵀ and [1, −1]ᵀ if D = 0.

The theory does not allow us to say anything about the equilibria (x_s, y_s) = (2kπ, 0) for D = 0, since they are non-hyperbolic. However, we know that a pendulum without friction will have periodic motion about the vertical down position. Hence, near the equilibria, the dynamics become

[Figure 11: Phase portraits near the equilibria of the pendulum equation, without friction (D = 0) and with friction (D > 0).]

If D = 0, then the periodic motion where the pendulum swings back and forth is separated from the periodic motion where the pendulum goes over the top by the stable and unstable manifolds. If the pendulum starts (almost) exactly up (the unstable situation) it will fall down and again end up (almost) exactly up. From this, the stable and unstable manifolds connect up as shown in Figure 12. Note that there is a major change between the phase portraits for D = 0 (no friction) and D > 0 (friction), but all friction phase portraits look qualitatively (topologically) the same for D > 0. This major change in the phase portrait structure is called a bifurcation.
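The classification of the pendulum equilibria can be automated: build the Jacobian by finite differences and count eigenvalue signs, exactly as in the n₀, n₊, n₋ bookkeeping above. A Python sketch (the notes use Matlab; the function names and tolerances here are my own):

```python
import cmath
import math

def rhs(x, y, D):
    # planar pendulum: xdot = y, ydot = -D*y - sin(x)
    return (y, -D * y - math.sin(x))

def jacobian(x, y, D, h=1e-6):
    # forward-difference Jacobian of the right-hand side at (x, y)
    f0 = rhs(x, y, D)
    fx = rhs(x + h, y, D)
    fy = rhs(x, y + h, D)
    return [[(fx[0] - f0[0]) / h, (fy[0] - f0[0]) / h],
            [(fx[1] - f0[1]) / h, (fy[1] - f0[1]) / h]]

def classify(J):
    (a, b), (c, d) = J
    tr, det = a + d, a * d - b * c
    disc = cmath.sqrt(tr * tr - 4 * det)
    eigs = [(tr + disc) / 2, (tr - disc) / 2]
    n_plus = sum(e.real > 1e-6 for e in eigs)
    n_minus = sum(e.real < -1e-6 for e in eigs)
    if n_minus == 2:
        return "attractor"
    if n_plus == 2:
        return "repeller"
    if n_plus == 1 and n_minus == 1:
        return "saddle"
    return "non-hyperbolic"

kind_down = classify(jacobian(0.0, 0.0, 1.0))      # vertical-down equilibrium, D = 1
kind_up = classify(jacobian(math.pi, 0.0, 1.0))    # vertical-up equilibrium, D = 1
```

With friction D = 1 this reproduces the analysis above: the down position is an attractor and the up position a saddle.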

[Figure 12: Global pendulum phase portrait with and without friction. Without friction there are three types of orbits: (a) the pendulum goes over the top counter-clockwise, (b) the pendulum swings back and forth without going over the top, (c) the pendulum goes over the top clockwise. With friction, practically all initial conditions end up in the rest position; the exception is formed by the stable manifolds, consisting of points that end up in the vertical up position. Matlab code: [pplane7.m]]

5 Bifurcations: flows

Most models of dynamical systems contain parameters that we can either vary in an experiment, e.g., the carrying capacity K in the single species population model, or that account for some uncertainty in the modelling. For instance, we do not know the exact value of the friction in the planar pendulum model, but we know that friction exists. Hence dynamical systems models are of the general form
ẋ = f(x, λ), x ∈ Rⁿ, λ ∈ R^m.
The value of λ may lie in some interval or region. Without knowing exactly what λ is, what can we say about the dynamical system? We would like to separate out dynamical systems that behave qualitatively (topologically) the same. E.g., in the competing species model
ẋ = x(1 − x − αy) =: f(x, y),
ẏ = λy(1 − y − βx) =: g(x, y),
for what parameters (α, β) does one species become extinct, and for what parameters do the species live happily together?

The first thing to note is that the equilibria of the vector field
ẋ = f(x, λ), x ∈ Rⁿ, λ ∈ R^m,

will typically change as λ is changed, i.e., x_s(λ) is a function of λ.

Example 19: In the single species population model
dN/dt = rN(1 − N/K) = f(N, r, K),
the parameters are r and the carrying capacity K of the environment. The equilibria are found by solving f(N_s, r, K) = 0 for N_s, where we find N_s = 0 and N_s = K. The second of these equilibria, N_s = N_s(K) = K, changes as K changes, i.e., it is a function of K.

Example 20: Consider the predator–prey model for sharks and fish
Ḟ = p₁F(1 − F) − FS − p₂(1 − e^{−p₃F}), (36)
Ṡ = −S + p₄FS, (37)
where F stands for the fraction of fish (between 0 and 1) and S is the fraction of sharks (also between 0 and 1). The p₁F(1 − F) term is the growth of fish, similar to that in the single species population model. The FS term is sharks hunting fish, and the p₂(1 − e^{−p₃F}) term is people fishing. We have a fishing quota parameter p₂ which we shall vary (we would not want the fish to become extinct), and the exponential term describes the fact that it is very hard to catch fish if the population is very small. The growth term p₄FS in the second equation describes that the number of sharks is limited by the number of fish. Finally, the −S term is the death of sharks. We have the parameters:
p₁ — growth/birth rate of fish
p₂ — fish quota for people
p₃ — unknown parameter describing how hard it is to catch fish when the population is small
p₄ — how the sharks are limited by the number of fish
Let us find an equilibrium of this model, i.e., we solve
0 = p₁F(1 − F) − FS − p₂(1 − e^{−p₃F}),
0 = −S + p₄FS,
for F and S. For one equilibrium we get
F = 1/p₄, S = p₁(1 − 1/p₄) − p₂p₄(1 − e^{−p₃/p₄}).
So the equilibria depend on the parameters p₁, p₂, p₃ and p₄.

Definition 6 (Bifurcation) Consider the vector field
ẋ = f(x, λ), x ∈ Rⁿ, λ ∈ R^m.
A bifurcation occurs at a parameter value λ = λ_b if there are parameter values λ arbitrarily close to λ_b for which the phase portraits of the system are not topologically equivalent to that at λ = λ_b.
There are two main classes of bifurcations: local bifurcations and global bifurcations.

Definition 7 (Local Bifurcation) Consider the vector field
ẋ = f(x, λ), x ∈ Rⁿ, λ ∈ R^m.
A local bifurcation occurs at (x_b, λ_b) if (x_b, λ_b) is an equilibrium, i.e., f(x_b, λ_b) = 0, and the Jacobian J(x_b, λ_b) has at least one eigenvalue with zero real part. The phase portrait of the system is qualitatively (topologically) different for λ < λ_b and λ > λ_b.
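The equilibrium formula derived in Example 20 can be sanity-checked by substituting it back into the right-hand side of (36)–(37). A small sketch with made-up parameter values (the notes use Matlab; these pᵢ are assumptions chosen only so that the equilibrium is positive):

```python
import math

def rhs(F, S, p1, p2, p3, p4):
    # shark-fish model (36)-(37)
    Fdot = p1 * F * (1 - F) - F * S - p2 * (1 - math.exp(-p3 * F))
    Sdot = -S + p4 * F * S
    return Fdot, Sdot

# equilibrium from Example 20: F = 1/p4, S = p1(1 - 1/p4) - p2*p4*(1 - e^{-p3/p4})
p1, p2, p3, p4 = 1.0, 0.05, 10.0, 2.0   # illustrative parameter values (assumed)
F_eq = 1 / p4
S_eq = p1 * (1 - 1 / p4) - p2 * p4 * (1 - math.exp(-p3 / p4))
Fdot, Sdot = rhs(F_eq, S_eq, p1, p2, p3, p4)
```

Both components of the right-hand side vanish at (F_eq, S_eq), confirming it is an equilibrium for this parameter choice.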

There are many different types of bifurcations. We will begin by classifying a few different types of bifurcation that involve equilibria. These bifurcations are classified by the eigenvalues of the Jacobian matrix associated with the parameter-dependent equilibrium. At the bifurcation point λ = λ_b, n₀ > 0 and the equilibrium is no longer hyperbolic. However, just before and after the bifurcation point the equilibrium is hyperbolic and we can use the Hartman–Grobman theorem. Global bifurcations occur when larger invariant sets (e.g., periodic orbits) collide with equilibria; the change in the topology of the phase portraits is then not confined to a local neighbourhood, as it is for local bifurcations. We will discuss examples of global bifurcations later.

5.1 Saddle-Node Bifurcation

In a saddle-node bifurcation the system has two equilibria on one side of the bifurcation and no equilibria on the other side as a single parameter is varied. At the bifurcation point, the two equilibria come together and collide, annihilating each other. The bifurcation is sometimes also called a fold bifurcation, limit point bifurcation, or turning point bifurcation.

Example 21: Consider the vector field
ẋ = f(x, λ) = λ − x²,
where x, λ ∈ R. Solving f(x, λ) = 0, we find two equilibria x± = ±√λ. These two equilibria only exist in R if λ > 0. The Jacobian matrix is
J(x, λ) = −2x,
and so we have:
at x₊ = +√λ, the eigenvalue of the Jacobian matrix is −2√λ < 0, so this equilibrium is an attractor;
at x₋ = −√λ, the eigenvalue of the Jacobian matrix is +2√λ > 0, so this equilibrium is a repeller.

There are several ways to visualise this bifurcation. One way would be to draw all the topologically different phase portraits:
[Figure: phase portraits of ẋ = λ − x² for λ < 0 (no equilibria), λ = 0 (one half-stable equilibrium) and λ > 0 (an attractor and a repeller).]
However, we can combine all these pictures together to create one diagram, called a bifurcation diagram, by plotting a graph with the parameter on the horizontal axis and the phase space on the vertical axis.
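The annihilation of the two equilibria as λ crosses 0 can be tabulated directly. A tiny sketch (Python rather than the notes' Matlab; the helper name is mine):

```python
import math

def equilibria(lam):
    # equilibria of xdot = lam - x^2, each paired with the eigenvalue -2x
    if lam < 0:
        return []
    if lam == 0:
        return [(0.0, 0.0)]            # single non-hyperbolic equilibrium
    r = math.sqrt(lam)
    return [(r, -2 * r), (-r, 2 * r)]  # (equilibrium, eigenvalue)

before, at, after = equilibria(-0.5), equilibria(0.0), equilibria(0.5)
```

Sweeping λ from negative to positive shows the count jump 0 → 1 → 2, with the pair consisting of one attractor and one repeller.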
Taking vertical slices of the bifurcation diagram and rotating the picture by 90 degrees yields the phase portraits shown above. Similar to topologically equivalent phase portraits, we would like to identify qualitatively similar bifurcations.

Definition 8 (Normal forms) The (topological) normal form ẏ = g(y, λ) of a vector field ẋ = f(x, λ) is the simplest differential equation that captures the essential features of the system near a bifurcation point, i.e., in the neighbourhood of some point (x_b, λ_b). The bifurcation diagrams of both dynamical systems are topologically the same.

[Figure: bifurcation diagram of the saddle-node normal form, with the branch of attractors x = +√λ and the branch of repellers x = −√λ meeting at the fold point.]

So the phase portraits of the normal form are topologically equivalent to those of the original dynamical system near the bifurcation point.

Theorem 5.1 (Saddle-node bifurcation) Let
ẋ = f(x, λ), (38)
with x, λ ∈ R. If the following conditions hold:
(B1) f(x_b, λ_b) = 0 — equilibrium at x_b for λ = λ_b,
(B2) J(x_b, λ_b) = 0 — zero eigenvalue at x_b for λ = λ_b,
(G1) d²f/dx²(x_b, λ_b) ≠ 0 — the second order term of f does not vanish at the equilibrium,
(G2) df/dλ(x_b, λ_b) ≠ 0 — positive speed in λ,
then (38) has the topological normal form
ẏ = λ ± y²
in a neighbourhood of (x_b, λ_b). To make things more simple, the bifurcation point (x_b, λ_b) is shifted to the origin (y, λ) = (0, 0). The conditions (B1) and (B2) are called bifurcation conditions and must be satisfied for the bifurcation to occur. (G1) and (G2) are called genericity conditions and are usually satisfied (but must be checked as well!).

Example 22: Consider the vector field
ẋ = α − sin x = f(x, α),
where x, α ∈ R. Equilibria are found by solving f(x, α) = 0, i.e., α = sin x: graphically these can be found by looking at intersections of the curves y = α and y = sin x. A possible candidate for a saddle-node bifurcation is α = α_b = 1 and x = x_b = π/2. Check the conditions:
(B1) f(x_b, α_b) = 1 − sin(π/2) = 0, yep,
(B2) J(x_b, α_b) = f_x(x_b, α_b) = −cos x_b = −cos(π/2) = 0, yep,
(G1) d²f/dx²(x_b, α_b) = sin x_b = sin(π/2) = +1 ≠ 0, yep,
(G2) df/dα(x_b, α_b) = +1 ≠ 0, yep.
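The four conditions just checked by hand can also be verified numerically with finite differences, which is how one would test a candidate bifurcation point in a model without closed-form derivatives. A Python sketch (the notes use Matlab; step size and tolerances are my own):

```python
import math

def f(x, a):
    # Example 22: xdot = alpha - sin(x)
    return a - math.sin(x)

xb, ab, h = math.pi / 2, 1.0, 1e-4
B1 = f(xb, ab)                                               # f itself: should be 0
B2 = (f(xb + h, ab) - f(xb - h, ab)) / (2 * h)               # f_x: should be 0
G1 = (f(xb + h, ab) - 2 * f(xb, ab) + f(xb - h, ab)) / h**2  # f_xx: should be +1
G2 = (f(xb, ab + h) - f(xb, ab - h)) / (2 * h)               # f_alpha: should be +1
```

The two bifurcation conditions come out (numerically) zero and the two genericity conditions come out nonzero, confirming the saddle-node at (x_b, α_b) = (π/2, 1).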

[Figure 13: Graphical picture of the equilibria α = sin x. The dashed lines correspond to α = +1 and α = −1; varying α corresponds to moving these lines up or down. Equilibria correspond to where the dashed line intersects the sine curve; e.g., for α = 0 the line is just the x-axis and the equilibria are shown as gold circles. Note that at α = +1 and α = −1 we have infinitely many simultaneous collisions of pairs of equilibria, and for α > 1 we have no equilibria; i.e., at α = 1 we have a saddle-node bifurcation.]

[Figure 14: Bifurcation diagram for ẋ = α − sin x near the bifurcation point (α_b, x_b) = (1, π/2).]

So at (α_b, x_b) = (1, π/2) we satisfy the conditions of the saddle-node theorem and hence have a saddle-node bifurcation at this point; the bifurcation diagram is shown in Figure 14. This picture looks the same as the bifurcation diagram for ẏ = λ + ½y² near its bifurcation point. Another way to see this is by first shifting x + π/2 to the origin and using the Taylor series expansion sin(x + π/2) = cos x ≈ 1 − x²/2 about x = 0:
ẋ = α − cos x ≈ (α − 1) + x²/2.
Now letting α − 1 = λ and x = y, we see that near the bifurcation point (x_b, α_b) = (π/2, 1) this is locally the same as ẏ = λ + ½y².

5.2 Transcritical bifurcation

In a transcritical bifurcation, two equilibria pass through each other and exchange their stability properties at the point where they collide. This bifurcation only occurs in systems where the equilibria exist for all parameter values.

Theorem 5.2 (Transcritical bifurcation) Let
ẋ = f(x, λ), (39)
with x, λ ∈ R. If the following conditions hold:
(B1) f(x_b, λ_b) = 0 for all λ_b ∈ R,

[Figure 15: Bifurcation diagram for the transcritical normal form ẏ = λy − y².]

(B2) J(x_b, λ_b) = 0 — zero eigenvalue at x_b for λ = λ_b,
(G1) d²f/dx²(x_b, λ_b) ≠ 0 — the second order term of f does not vanish at the equilibrium,
(G2) d/dλ f_x(x_b, λ_b) ≠ 0 — positive speed in λ,
then (39) has the topological normal form
ẏ = λy ± y²
in a neighbourhood of (x_b, λ_b).

Example 23: In the single species population model
dN/dt = rN(1 − N/K) = αN − βN²,
for α, β, N ∈ R, we see that this is topologically equivalent to the transcritical bifurcation normal form if β ≠ 0, and there is a transcritical bifurcation at (N_b, α_b) = (0, 0).

5.3 Pitchfork bifurcation

If the dynamical system has a reflectional symmetry, then the typical bifurcation that will occur is a pitchfork bifurcation, with normal form
ẋ = λx ± x³.
This system has reflectional symmetry about {x = 0}, i.e., if you replace x with −x then you get the same equation as before.

Theorem 5.3 (Pitchfork bifurcation) Let
ẋ = f(x, λ),
with the reflectional symmetry f(−x, λ) = −f(x, λ) and x, λ ∈ R. If the following conditions are met:
(B1) f(x_b, λ_b) = 0,
(B2) J(x_b, λ_b) = 0,
(G1) d³f/dx³(x_b, λ_b) ≠ 0 — the third order term of f does not vanish at the equilibrium,
(G2) d/dλ f_x(x_b, λ_b) ≠ 0 — positive speed in λ,
then the system has the topological normal form
ẏ = λy ± y³
in a neighbourhood of (x_b, λ_b). When the cubic term is −x³, we call the pitchfork bifurcation supercritical; when the cubic term is +x³, the bifurcation is called subcritical.
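The supercritical pitchfork can be tabulated the same way as the saddle-node: list the equilibria of ẋ = λx − x³ with their eigenvalues f'(x) = λ − 3x² on each side of λ = 0. A Python sketch (function name and sample λ values are my own):

```python
import math

def branches(lam):
    # equilibria of the supercritical pitchfork xdot = lam*x - x^3,
    # each paired with the eigenvalue f'(x) = lam - 3*x^2
    eq = [0.0]
    if lam > 0:
        r = math.sqrt(lam)
        eq += [r, -r]
    return [(x, lam - 3 * x * x) for x in eq]

below = branches(-1.0)   # only x = 0, which is stable
above = branches(1.0)    # x = 0 now unstable, x = +/-1 stable
```

For λ < 0 there is one stable equilibrium; for λ > 0 the origin loses stability and two symmetric stable branches appear — the "pitchfork" shape of the diagram.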

[Figure: bifurcation diagrams of the supercritical pitchfork ẋ = λx − x³, with stable branches x = ±√λ for λ > 0, and the subcritical pitchfork ẋ = λx + x³, with unstable branches x = ±√(−λ) for λ < 0.]

5.4 Hopf bifurcation

The creation of small amplitude periodic orbits from an equilibrium is called a Hopf bifurcation. This occurs when a complex conjugate pair of eigenvalues passes through the imaginary axis. The normal form for this bifurcation is
ż = (λ + iω)z + α|z|²z (40) (complex notation).
We may rewrite this equation in real variables x, y ∈ R by letting z = x + iy:
( ẋ ; ẏ ) = ( λ −ω ; ω λ )( x ; y ) + α(x² + y²)( x ; y ) (41) (real notation),
or in polar coordinates z = re^{iφ}:
ṙ = r(λ + αr²), φ̇ = ω. (42)
From (41) we see that if we linearise about (x, y) = (0, 0) then we have the system
( ẋ ; ẏ ) = ( λ −ω ; ω λ )( x ; y ) = A( x ; y ).
The eigenvalues of A are λ ± iω, so we require ω ≠ 0 for the eigenvalues to be complex conjugate. They both pass through the imaginary axis at λ = 0.

It is easiest to analyse what happens in this system in the polar form (42), since the two equations are uncoupled. Note that the phase space for (42) is the positive half cylinder r ≥ 0, −π ≤ φ ≤ π. Also, this system does not have any equilibria, since φ varies continuously with constant speed ω, i.e., φ(t) = ωt + φ(0); everything flows around the cylinder.

Let us first consider the case α = −1, ω > 0. The equation ṙ = r(λ − r²) has three equilibrium solutions, namely r = 0 and r = ±√λ. From this we obtain the following phase portraits. For λ > 0 we see the emergence of a periodic orbit at r = √λ. At the bifurcation point the amplitude of the periodic orbit is 0, and it grows like √λ. The frequency of the periodic orbit at the bifurcation point is equal to ω, the absolute value of the imaginary part of the eigenvalues of the equilibrium at bifurcation. The equilibrium r = 0 still exists for λ > 0 but is unstable, while the periodic orbit is stable. Note that if we ignore φ, then for the system in polar coordinates (42) the equation ṙ = λr − r³ is the equation for a pitchfork bifurcation, with the corresponding diagram.

[Figure: phase portraits of the Hopf normal form on the (r, φ) half-cylinder and in real (x, y) space for λ < 0, λ = 0 ("slowly attracting") and λ > 0, together with the bifurcation diagram in the (x, y, λ)- and (r, λ)-planes, where the branch r = √λ of periodic orbits emerges from r = 0.]
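Because the radial equation is uncoupled from φ, the emergence of the periodic orbit can be watched by integrating ṙ = r(λ − r²) alone. A minimal Python sketch (the notes use Matlab; λ = 1 is an assumed illustrative value, so the orbit sits at r = √λ = 1):

```python
# Euler integration of the radial Hopf normal-form equation rdot = r*(lam - r^2):
# for lam > 0, orbits starting inside and outside both settle on r = sqrt(lam).
lam, dt = 1.0, 0.001

def settle(r, steps=20000):
    for _ in range(steps):
        r += dt * r * (lam - r * r)
    return r

r_inner = settle(0.1)   # starts inside the periodic orbit
r_outer = settle(2.0)   # starts outside
```

Both initial conditions converge to the same radius, showing the periodic orbit is attracting, as expected for the supercritical case α = −1.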
In the full parameter/phase space R × R² of system (41), the bifurcation diagram is a three-dimensional picture.

Theorem 5.4 (Hopf bifurcation) Let
ẋ = f(x, λ), x ∈ R², λ ∈ R. (43)

Suppose (43) has an equilibrium x₀(λ) for λ near λ_b, with eigenvalues of the Jacobian matrix J(x₀(λ), λ) given by
η± = α(λ) ± iβ(λ).
If the following conditions hold:
(B1) f(x_b, λ_b) = 0 (that is, x₀(λ_b) = x_b),
(B2) J(x_b, λ_b) has eigenvalues ±iω, i.e., a pair of imaginary eigenvalues (α(λ_b) = 0, β(λ_b) = ω ≠ 0),
(G1) l₁(x_b, λ_b) ≠ 0, where l₁ is the first Lyapunov quantity and is computed from f (if you need it you should look it up in a book!),
(G2) d/dλ α(λ_b) ≠ 0,
then (43) has the topological normal form
ż = (λ + iω)z + l₁(x_b, λ_b)|z|²z
in a neighbourhood of (x_b, λ_b).

The sign of the first Lyapunov quantity determines the stability properties of the periodic orbit that appears in the bifurcation. We have the following overview.

[Figure: phase portraits near the Hopf bifurcation for l₁(x_b, λ_b) < 0 ("supercritical"), l₁(x_b, λ_b) = 0 ("harmonic oscillator") and l₁(x_b, λ_b) > 0 ("subcritical"), each shown for λ < 0, λ = 0 and λ > 0.]

Example 24: Brusselator. The Brusselator models an autocatalytic, oscillating chemical reaction. An autocatalytic reaction is one in which a chemical (c.f. species) acts to increase the rate of the reaction producing it. In many autocatalytic systems complex dynamics are seen, including multiple

steady-states and periodic orbits; see the Chemical Oscillations movie on the Chaos & Fractals website [Chemical Oscillations]. The Brusselator system is given by the following system of equations
ẋ = A + x²y − (B + 1)x, ẏ = Bx − x²y, (44)
where (x, y) ∈ R² are the concentrations of two chemicals and A, B ∈ R are real parameters controlling the rates of the reactions. Let us fix A = 1. Equilibria correspond to the amounts of the two chemicals not changing. They are found by solving
1 + x²y − (B + 1)x = 0, Bx − x²y = 0 ⟹ x = 1, y = B.
We find the Jacobian matrix is
J(x, y) = ( 2xy − (B + 1) x² ; B − 2xy −x² ),
and evaluating the Jacobian matrix at the equilibrium (x, y) = (1, B) yields
J(1, B) = ( B − 1 1 ; −B −1 ),
with eigenvalues
η± = ½(B − 2) ± ½√(B(B − 4)).
Hence, at B = 2 the eigenvalues are ±i and we have a candidate for a Hopf bifurcation. We have already checked conditions (B1) and (B2). Condition (G2) is fulfilled since d/dB ½(B − 2) = ½ ≠ 0. The hard condition to check is (G1), and this requires transforming the system into the normal form (40). However, if we numerically integrate (44) for B slightly above 2, we find a stable periodic orbit, implying that the first Lyapunov quantity satisfies l₁ < 0.

5.5 Global bifurcations

From a Hopf bifurcation, small amplitude oscillations (periodic orbits) emerge from an equilibrium — but what happens to large amplitude periodic orbits? Many things are possible: a saddle-node bifurcation of periodic orbits, a periodic orbit colliding with a saddle point (homoclinic bifurcation), or a period-doubling bifurcation.
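Returning to the Brusselator example above: the numerical check that a stable periodic orbit exists past the Hopf point can be sketched as follows. The notes use Matlab; this Python version and the choice B = 3 (comfortably above the Hopf value B = 2, so the orbit is large) are my own assumptions:

```python
# RK4 integration of the Brusselator (44) with A = 1 and B above the Hopf value.
A, B, dt = 1.0, 3.0, 0.01

def f(u):
    x, y = u
    return (A + x * x * y - (B + 1) * x, B * x - x * x * y)

def rk4(u, h):
    k1 = f(u)
    k2 = f((u[0] + h / 2 * k1[0], u[1] + h / 2 * k1[1]))
    k3 = f((u[0] + h / 2 * k2[0], u[1] + h / 2 * k2[1]))
    k4 = f((u[0] + h * k3[0], u[1] + h * k3[1]))
    return (u[0] + h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
            u[1] + h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))

u = (1.1, 3.0)                 # start near the unstable equilibrium (1, B)
xs = []
for i in range(20000):         # integrate up to t = 200
    u = rk4(u, dt)
    if i >= 10000:             # discard the transient
        xs.append(u[0])
swing = max(xs) - min(xs)      # sustained oscillation keeps this large
```

A perturbation of the unstable equilibrium does not decay but settles onto a bounded oscillation of substantial amplitude — the stable periodic orbit of Figure 16.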

[Figure 16: Convergence to the periodic orbit of the Brusselator. Matlab code: [Brusselator] and [brusselator.m]]

5.5.1 Homoclinic bifurcation

A homoclinic bifurcation occurs when a periodic orbit collides with a saddle point. The bifurcation is easiest to understand in an example.

Example 25: Homoclinic bifurcation. Consider the planar vector field
ẋ = y, ẏ = λy + x − x² + xy. (45)
The homoclinic bifurcation in this system is found numerically to occur at λ = λ_h ≈ −0.8645. For λ < λ_h we have a stable periodic orbit; note that the stable and unstable manifolds of the saddle point do not coincide with each other. As λ → λ_h from below, the periodic orbit collides with the saddle point, with the orbit spending more and more time near the saddle point. The period of the orbit scales like T ∼ −ln(λ_h − λ), which tends to ∞ as λ → λ_h. At λ = λ_h a homoclinic orbit is formed; this homoclinic orbit is formed by the stable and unstable manifolds of the saddle point coinciding. For λ > λ_h, the periodic orbit no longer exists. The bifurcation is described by the figure below.

Definition 9 (Homoclinic and Heteroclinic bifurcation) A homoclinic bifurcation is the creation and destruction of a homoclinic orbit (i.e., an orbit that connects the unstable manifold of a saddle point back to the stable manifold of the same saddle point) as a parameter is varied. A heteroclinic bifurcation is the creation and destruction of a heteroclinic orbit (i.e., an orbit that connects the unstable manifold of a saddle point to the stable manifold of another, different saddle point) as a parameter is varied.

Example 26: Heteroclinic bifurcation. In the pendulum equations with no friction, c = 0, we find a heteroclinic orbit that connects the equilibrium (x, y) = (−π, 0) to the saddle point (x, y) = (π, 0). Note that there are in fact two heteroclinic orbits: the second orbit goes from (x, y) = (π, 0) back to (x, y) = (−π, 0). Making the friction c non-zero destroys the heteroclinic orbits (i.e., a heteroclinic bifurcation).
For vector fields in 3+ dimensions, the mere crossing of the stable and unstable manifolds can imply chaotic dynamics. 34

[Figure 17: Homoclinic bifurcation. Panel (1): before the homoclinic bifurcation, at λ = −0.92, there is a periodic orbit; the stable and unstable manifolds W^s and W^u of the saddle point do not coincide. Panel (2): just before the homoclinic bifurcation, at λ = −0.88, the periodic orbit starts to collide with the saddle point. Panel (3): the homoclinic orbit at λ = λ_h, where the stable and unstable manifolds coincide and form a loop from the saddle point back to itself. Panel (4): the destruction of the periodic orbit for λ > λ_h. Matlab code: [pplane7.m]]

A classic homoclinic bifurcation that creates chaotic dynamics is the destruction of a homoclinic orbit to a 3D saddle. In this 3D situation, the destruction of a homoclinic orbit can lead to the creation of ∞-many periodic orbits! Consider the general system
ẋ = μ₁x + f₁(x, y, z, λ),
ẏ = μ₂y + f₂(x, y, z, λ),
ż = μ₃z + f₃(x, y, z, λ),
where μᵢ ∈ R. We assume μ₁ < μ₂ < 0 < μ₃, so the origin is a saddle point. The bifurcation parameter for the system is λ: at λ = 0 we have a homoclinic orbit to the saddle point. We now define the saddle quantity δ = −μ₂/μ₃. If we change the parameter λ, we destroy the homoclinic orbit. Depending on δ at the homoclinic bifurcation, chaos may be created:
1. If δ > 1, a stable symmetric periodic orbit exists for λ < 0 but two stable non-symmetric orbits exist for λ > 0.
2. If δ < 1, no periodic orbits exist for λ < 0 and an unstable strange attractor (chaos) exists for λ > 0. This is called transient chaos, and the system can have a normal attractor elsewhere to settle on.

Example 27: Homoclinic bifurcation in the Lorenz equations. The chaotic dynamics in the Lorenz equations (the chaotic waterwheel) is created by a homoclinic bifurcation. Recall that the system is defined by the following vector field:
ẋ = σ(y − x), ẏ = ρx − y − xz, ż = xy − βz.

[Figure 18: Heteroclinic orbits of the frictionless pendulum, connecting the saddle points (−π, 0) and (π, 0) along W^s(x_s) and W^u(x_s). Matlab code: [pplane7.m]]

Traditionally, we fix β = 8/3, σ = 10 and vary ρ. The equilibria of the system are given by
x₁ = (x₁, y₁, z₁) = (0, 0, 0),
x₂,₃ = (x₂,₃, y₂,₃, z₂,₃) = (±√(β(ρ − 1)), ±√(β(ρ − 1)), ρ − 1).
The second two equilibria emerge at ρ = 1 in a supercritical pitchfork bifurcation off the origin. For ρ > 1, the origin is a saddle point. The equilibria x₂,₃ are stable for 1 < ρ < ρ_H = 24.74. At ρ_H, the fixed points undergo a subcritical Hopf bifurcation. If we now follow the unstable periodic orbits emerging from the Hopf bifurcation and decrease ρ, we find the periodic orbits get close to the origin equilibrium, and at ρ ≈ 13.926 we have a homoclinic bifurcation.

[Figure 19: Bifurcation diagram of the Lorenz system (homoclinic bifurcation, unstable periodic orbits, subcritical Hopf bifurcation at ρ_H = 24.74, transient chaos, strange attractor) and an example of transient chaos. Matlab code: [Transient Chaos] and [lorenz.m]]

At the homoclinic bifurcation ρ = 13.926, the eigenvalues of the origin equilibrium are
μ₁ = −18.13, μ₂ = −2.67, μ₃ = 7.13,
and δ = 2.67/7.13 = 0.37 < 1. Hence we have the emergence of transient chaos. Starting from certain initial conditions yields chaotic dynamics that eventually decay to one of the stable equilibria x₂,₃. In this region there is sensitive dependence on initial conditions: depending sensitively on the initial condition, one ends up either at x₂ or at x₃. It is important to note that even though there may be stable equilibria, the dynamics of the system can still be very complicated!
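The eigenvalues and saddle quantity quoted above can be recomputed by hand: the Jacobian of the Lorenz system at the origin is block-triangular, so the z-direction decouples and only a quadratic needs solving. A Python sketch (the notes use Matlab):

```python
import math

# Lorenz Jacobian at the origin: the z-equation decouples (eigenvalue -beta)
# and the (x, y) block gives lam^2 + (sigma + 1)*lam - sigma*(rho - 1) = 0.
sigma, beta, rho = 10.0, 8.0 / 3.0, 13.926
disc = math.sqrt((sigma + 1) ** 2 + 4 * sigma * (rho - 1))
mu3 = (-(sigma + 1) + disc) / 2   # unstable eigenvalue, approx  7.13
mu1 = (-(sigma + 1) - disc) / 2   # strong stable eigenvalue, approx -18.13
mu2 = -beta                       # weak stable eigenvalue, -8/3
delta = -mu2 / mu3                # saddle quantity
```

The computed δ ≈ 0.37 < 1 places the Lorenz homoclinic bifurcation in the chaos-creating case 2 above.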

37 This region of transient chaos lies in the parameter region < ρ < 24.6, where the strange (chaotic) attractor stabilises at ρ = Note there is co-eistence of the stable chaotic attractor with the stable fied points for 24.6 < ρ < Period doubling route to chaos To understand the period doubling bifurcation, we will look at a specific eample. Eample 28: The Rössler system ẋ = y z, ẏ = + ay, ż = b + z( c), (46) with (, y, z) R 3 and a, b, c R, is one of the simplest models that possesses period doubling bifurcations and chaotic dynamics. Notice that it is almost linear in that the only nonlinear term (z) occurs in the third equation. Otto Rössler wrote down this system are being inspired by the saltwater taffy machine that ehibits stretching and folding miing of taffy; see the Taffy machine movie on the Chaos & Fractals website [Taffy machine]. We set a = b =. and vary c (which we call the bifurcation parameter). We use Matlab to numerically integrate these equations. Starting from a period, periodic orbit at c = 4, we increase c in steps. At c = 6, we see the period of the orbit has doubled (i.e., it goes around twice before coming back to the start). At c = 8.5 and c = 8.7, we see further period doubling of the orbit. Eventually, at c = 9 we see chaotic dynamics. c=4:period c=6: period y y z y y z c=8.5: period 4 c=8.7: period y y z y z y y c=9: Chaos z y Schematic of the chaotic attractor Figure 2: Period doubling route to chaos in the Rössler system. The system undergoes an infinite sequence of period doubling bifurcations as c is increased, eventually leading to chaos at c 9. The schematic picture of the chaotic attractor is a Möbius strip. Matlab code: [Rössler simulations] and [rossler.m] In order to analyse the period doubling bifurcation we will introduction a Poincaré section. We choose a two-dimensional plane P, which is the Poincaré section, defined by y = say, with, z >. This section must be transverse to the flow Φ t. 
Suppose that we now take an initial condition in the plane Σ given by (x(0), y(0), z(0)) = x_0.

Using this initial condition, we follow the trajectory of the differential equation under the flow Φ_t until it again intersects Σ, when t = t_1 say. Then we define the map P : Σ → Σ, x_1 = P(x_0). Iterating the map is equivalent to considering consecutive intersections of the orbit with Σ, i.e.,

{P^n(x_0) | n ∈ Z} = {Φ_t(x_0) | t ∈ R} ∩ Σ,

where P^n(x_0) means applying P n times, i.e., P^n(x_0) = P(P(... P(x_0))). Points jump on Σ under iteration of P.

Figure 21: A diagram showing the Poincaré section Σ and the flow Φ_t intersecting it at the points x_0, P(x_0) and P(P(x_0)) = P²(x_0). Note the flow through the Poincaré section is effectively one-dimensional.

Instead of considering the orbits of the vector field, we can now study the orbits of points in Σ under the iteration of P. This is one of the most important techniques in dynamical systems theory.

Period Doubling Bifurcation

Since solutions of the Rössler system are unique (see 2nd year Ordinary Differential Equations course), a trajectory cannot intersect itself. In 2D, we cannot have a period doubling bifurcation, since the orbits would self-intersect. The period doubling in the Rössler system is due to a stretch and fold mechanism: stretch the band, fold it over along the horizontal line, and then glue the ends together, identifying the corresponding edge points. Trajectories in the Rössler system are attracted very quickly to this object, after which they follow it in the direction of the arrow. Since this attraction is very strong, we can approximate the system's behaviour by only considering this object and taking a Poincaré section on it. This process results in a one-dimensional one-humped map. To see this a bit better, we look in the chaotic region of parameter space, c = 9. We can explore this stretching and folding mechanism even further by using a cunning idea... let's look at successive local maxima of x(t) of the chaotic solution and plot these against each other.
The result is an almost one-dimensional map that looks rather similar to the logistic map! This one-dimensional map is an excellent approximation of the dynamics on the Rössler attractor. Hence, we need to understand the dynamics of one-dimensional humped maps.

Figure 22: Bifurcation diagram for a period doubling bifurcation of the Poincaré map (cases λ < 1, λ = 1, λ > 1). Matlab code: [Poincaré map of Rössler system], [rossler-events-poincare.m] and [rossler.m]

Figure 23: The next-maximum return map, x_max(N+1) plotted against x_max(N): an approximate one-dimensional map of the chaotic Rössler system with a = b = 0.1, c = 9. Matlab code: [Rössler system 1D map], [rossler-events.-lorenz.m] and [rossler.m]

Example 29: An explicit Poincaré map

We consider the first-order ODE ẋ + x = cos(t). This is an example of a non-autonomous dynamical system, which we've not dealt with before. However, we can re-write this ODE as a planar vector field by introducing an auxiliary time coordinate τ, where x = x(τ) and t = t(τ), and we have

x_τ = −x + cos(t),  t_τ = 1,

with initial conditions x(0) = x_0 and t(0) = 0. This system is equivalent since t_τ = 1 implies t = τ, so t and τ are the same. This is now a two-dimensional vector field and we can have periodic orbits in this system. The phase space of the system is a cylinder (due to the cos(t) periodic forcing).

We define the Poincaré section Σ to be Σ = {(x, t) : t = 0 mod 2π}. Hence, the time of flight between successive intersections of Σ is T = 2π. The general solution of the ODE can be found via an integrating factor:

(e^t x)_t = e^t cos(t),  so  x(t) = e^{−t} ∫_0^t e^s cos(s) ds + c e^{−t}.

Now from integration by parts we have

∫_0^t e^s cos(s) ds = [e^s sin(s)]_0^t − ∫_0^t e^s sin(s) ds,

and also from integration by parts, we have

∫_0^t e^s sin(s) ds = −[e^s cos(s)]_0^t + ∫_0^t e^s cos(s) ds.

Hence, we have

2 ∫_0^t e^s cos(s) ds = e^t (sin(t) + cos(t)) − 1,

and the general solution (using the initial condition x(0) = x_0) is given by

x(t) = ½ (cos(t) + sin(t)) + (x_0 − ½) e^{−t}.

Therefore, the Poincaré map P : Σ → Σ is given by

P(x_0) = x(2π) = e^{−2π} x_0 + ½ (1 − e^{−2π}).

(Note, we can replace x by another variable, say, y.) The cobweb of the map is plotted below, where we see that there is a unique, stable fixed point of the map that corresponds to a stable periodic orbit of the ODE.

To prove that P has a globally stable fixed point, we use the contraction mapping theorem (see Introduction to Function Spaces). The theorem states that if P : R → R and |P(x) − P(y)| ≤ k|x − y|, where k < 1, then P has only one fixed point and the iteration x_{n+1} = P(x_n) converges to it, i.e., the fixed point is globally stable. So we need to compute

|P(x) − P(y)| = |e^{−2π}(x − y)| = e^{−2π} |x − y|,

and we note that e^{−2π} < 1. Hence, we can use the contraction mapping theorem to prove the existence of a unique, globally stable fixed point that corresponds to a unique, stable periodic orbit of the ODE system.
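Since P is given in closed form, its fixed point and contraction rate can be checked directly; a short Python sketch (the starting value 10 is an arbitrary illustration):

```python
import math

def P(x0):
    # explicit Poincaré map of ẋ + x = cos(t) over one forcing period T = 2π
    return math.exp(-2 * math.pi) * x0 + 0.5 * (1 - math.exp(-2 * math.pi))

# contraction factor k = e^(-2π) ≈ 0.00187 < 1
print(abs(P(1.0) - P(0.0)))

x = 10.0
for _ in range(5):
    x = P(x)
print(x)   # already indistinguishable from the fixed point x* = 1/2
```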

6 One-dimensional maps

From the previous section, we saw that the dynamics of the Rössler system are well approximated by a one-dimensional humped (parabolic) map, a bit like the logistic map x_{n+1} = rx_n(1 − x_n). Let us start by trying to understand linear maps.

Example 30: Exponential decay

Consider the map x_{n+1} = x_n/2 = g(x_n) with phase space X = R and evolution operator given by Φ^t = Φ^n(x) = g^n(x). Let's take x_0 = 2; then we find the following orbit

x_0 = 2, x_1 = 1, x_2 = 1/2, x_3 = 1/4, x_4 = 1/8, ..., x_n = 2 (1/2)^n.

We see that the iterates are getting smaller; in fact x_n → 0 as n → ∞. This is true regardless of the initial condition taken. The point x = 0 is a fixed point of the map. Fixed points are the map equivalent of equilibria in vector fields.

Definition (Fixed points) Fixed points are found by solving g(x) = x, i.e., if we apply the flow operator Φ^t to x then we stay where we are.

Solving this equation for g(x) = x/2 = x, we find that x = 0 is a fixed point of the linear map. Similar to stable equilibria in vector fields, we call x = 0 an attractor. We can visualise this by drawing a phase portrait similar to that for one-dimensional ODEs.

Figure 24: (a) Phase portrait for the linear map x_{n+1} = x_n/2, x_0 = 2, converging to the fixed point x = 0. (b) Cobweb diagram showing the iterates. Starting at x_0, we follow the gold arrow till we hit the blue curve (x_{n+1} = x_n/2) to find the next iterate x_1, reading its value on the vertical axis by following the black arrow. To find the next iterate x_2, we then follow the second gold arrow and read off the value on the horizontal axis. Iterating the map follows the gold and black arrows. Fixed points of maps (not just linear maps!) may be graphically found at points where the curve x_{n+1} = f(x_n) (blue curve) intersects the x_{n+1} = x_n (gray) curve.

Note that the linear map x_{n+1} = λx_n, with −1 < λ < 1, always has the fixed point x = 0, which is an attractor. The corresponding phase portraits are qualitatively similar to Figure 24.

Example 31: Exponential growth

Consider the map x_{n+1} = 2x_n = g(x_n) with phase space X = R and evolution operator given by Φ^t = Φ^n(x) = g^n(x). Let's take x_0 = 1/4; then we find the following orbit

x_0 = 1/4, x_1 = 1/2, x_2 = 1, x_3 = 2, x_4 = 4, ..., x_n = 2^n/4.

We see that the iterates are getting larger; in fact x_n → ∞ as n → ∞. This is true regardless of the initial condition except x_0 = 0. Again x = 0 is a fixed point of the map, but in this case it is a repeller. We can visualise this by drawing a phase portrait similar to that for one-dimensional ODEs.

Figure 25: (a) Phase portrait for the linear map x_{n+1} = 2x_n, x_0 = 1/4. We see that all initial conditions close to x = 0 escape to infinity. (b) Cobweb diagram showing the iterates. Starting at x_0, we follow the gold arrow till we hit the blue curve (x_{n+1} = 2x_n) to find the next iterate x_1, reading its value on the vertical axis by following the black arrow. To find the next iterate x_2, we then follow the second gold arrow and read off the value on the horizontal axis. Iterating the map follows the gold and black arrows.

Note that the linear map x_{n+1} = λx_n, with |λ| > 1, i.e., λ < −1 or λ > 1, always has the fixed point x = 0, which is a repeller. The corresponding phase portraits are qualitatively similar to Figure 25.

6.1 Cobwebs

A useful tool in analysing 1D maps x_{n+1} = f(x_n) are cobwebs. The iteration x_{n+1} = f(x_n) can be written in two stages as

y_n = f(x_n),  x_{n+1} = y_n.

By drawing the two curves y = f(x) and y = x, the iteration can be represented graphically as follows. Given an initial condition x_0, draw a vertical line until it intersects the graph y = f(x); that height is the output y_0. Now draw a horizontal line till it intersects the diagonal line y = x; this point is now x_1 = y_0. To compute x_2, now move vertically to the curve again. Repeat the process n times to generate the first n points. Cobwebs are useful since they allow us to see global behaviour at a glance, allowing us to piece together the linearised dynamics, similar to phase portraits for vector fields.
The figures below show the iterates and cobweb for the logistic map x_{n+1} = 2x_n(1 − x_n) with x_0 = 0.1. These figures are made with the Matlab code [cobweb.m].
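The same iteration is only a couple of lines in any language; here is a Python sketch of the iterates (the cobweb picture itself is just these numbers plotted against the curves y = f(x) and y = x):

```python
def f(x, r=2.0):
    # logistic map with r = 2, as in the cobweb figure
    return r * x * (1 - x)

x = 0.1
orbit = [x]
for _ in range(30):
    x = f(x)
    orbit.append(x)

print(orbit[:4])   # 0.1, 0.18, 0.2952, ... climbing towards the fixed point
print(orbit[-1])   # the orbit settles on the fixed point x = 1 - 1/r = 0.5
```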

(Figures: the iterates x_{n+1} against n, and the corresponding cobweb of y = f(x) against y = x, for the logistic map.)

6.2 Topological conjugacy and linearisation

Similar to vector fields, we can define topological equivalence for maps. For maps, however, a weaker property that is useful is topological conjugacy.

Definition (Topological Conjugacy: maps) Suppose we have two maps

x_{n+1} = f(x_n),   (47)
y_{n+1} = g(y_n).   (48)

Then (47) on a domain U is topologically conjugate to (48) on a domain V if we can find a continuous and invertible map (i.e., a homeomorphism) h : U → V that maps orbits of (47) to orbits of (48), i.e., h(f(x)) = g(h(x)).

There is a slightly weaker property than topological conjugacy called topological semi-conjugacy, in which the map h is only continuous and onto. Topological equivalence is when the maps are topologically conjugate and the direction of time is respected between the orbits from the maps f and g.

Suppose we now have a general nonlinear map x_{n+1} = f(x_n), x ∈ R, f : R → R, and we can find a fixed point f(x_f) = x_f. We wish to know if this fixed point is attracting or repelling. To do this we will consider the evolution of a small perturbation of the fixed point

x_n = x_f + ε_n,

where |ε_n| ≪ 1 (i.e., very much less than 1). Now substituting this into our map yields

x_f + ε_{n+1} = x_{n+1} = f(x_n) = f(x_f + ε_n)
             ≈ f(x_f) + f'(x_f) ε_n,  by Taylor series expansion of f(x_f + ε_n),
             = x_f + f'(x_f) ε_n,  since x_f = f(x_f) is a fixed point.

This leads to the simple linear map

ε_{n+1} = f'(x_f) ε_n = µ ε_n,

which is a good approximation to f near the fixed point x_f provided that |ε_n| is small. µ is called a Floquet multiplier and is similar to an eigenvalue of the Jacobian for ODEs. Now we know that if

|µ| < 1, then ε_n → 0 as n → ∞. Hence x_n → x_f as n → ∞ and x_f is called an attractor.

|µ| > 1, then |ε_n| → ∞ as n → ∞. Hence x_n becomes large and diverges from x_f. We call x_f a repeller.

When is linearisation of maps OK? Similar to differential equations, we have the following version of the Hartman-Grobman theorem to tell us.

Theorem 6.1 (Hartman & Grobman: maps) If the map

x_{n+1} = f(x_n),   (49)

has a hyperbolic fixed point x_f, that is |f'(x_f)| ≠ 1, then there exists a neighbourhood U of x_f such that (49) on U is topologically equivalent to the linearised system

ε_{n+1} = f'(x_f) ε_n,

on an (arbitrary) neighbourhood V of the origin.

At µ = 1 or µ = −1, we have a bifurcation.

Definition 2 (Local bifurcation: maps) Consider the map x_{n+1} = f(x_n, λ), x, λ ∈ R. A local bifurcation occurs at (x_b, λ_b), where (x_b, λ_b) is a fixed point, i.e., f(x_b, λ_b) = x_b, with |f_x(x_b, λ_b)| = 1. The phase portraits of the map are qualitatively (topologically) different for λ < λ_b and λ > λ_b.

As with vector fields, we also have a saddle-node bifurcation (the creation of two new equilibria) and a transcritical bifurcation (the old equilibrium remains and a new equilibrium is created).

Theorem 6.2 (Saddle-node bifurcation: maps) Let

x_{n+1} = f(x_n, λ),   (50)

with x, λ ∈ R. If the following conditions hold:

(B1) f(x_b, λ_b) = x_b, x_b is a fixed point for λ = λ_b,

(B2) f_x(x_b, λ_b) = 1, unit eigenvalue at x_b for λ = λ_b,

(G1) f_xx(x_b, λ_b) ≠ 0, second order term of f does not vanish at the fixed point,

(G2) f_λ(x_b, λ_b) ≠ 0, positive speed in λ,

then (50) has the topological normal form

y_{n+1} = λ + y_n ± y_n²,

in a neighbourhood of (x_b, λ_b).

Theorem 6.3 (Transcritical bifurcation: maps) Let

x_{n+1} = f(x_n, λ),   (51)

with x, λ ∈ R. If the following conditions hold:

(B1) f(x_b, λ) = x_b for all λ ∈ R, x_b is a fixed point for all λ,

(B2) f_x(x_b, λ_b) = 1, unit eigenvalue at x_b for λ = λ_b,

(G1) f_xx(x_b, λ_b) ≠ 0, second order term of f does not vanish at the fixed point,

(G2) f_xλ(x_b, λ_b) ≠ 0,

then (51) has the topological normal form

y_{n+1} = (1 + λ)y_n ± y_n²,

in a neighbourhood of (x_b, λ_b).

Example 32: Saddle-node bifurcation

Consider the map

x_{n+1} = f(x_n, λ) = λ e^{x_n},  x_n ∈ R, λ ∈ R.

Fixed points of this map can be found graphically by plotting the graphs y = x and y = λe^x and looking for intersections of the graphs. For λ > e^{−1}, the graphs do not intersect. At λ = e^{−1}, there is one intersection (a fixed point), and for λ < e^{−1} there are two intersections (two fixed points). This indicates that there is a saddle-node bifurcation at (x, λ) = (1, e^{−1}), where two fixed points are either created (as λ is decreased) or destroyed (as λ is increased). We can check the bifurcation conditions at (x, λ) = (1, e^{−1}):

(B1) f(1, e^{−1}) = 1,

(B2) f_x(1, e^{−1}) = 1,

(G1) f_xx(1, e^{−1}) = 1 ≠ 0,

(G2) f_λ(1, e^{−1}) = e ≠ 0.

Example 33: Logistic map: transcritical bifurcation

Consider the logistic map

x_{n+1} = r x_n (1 − x_n) =: f(x_n),  x_n ∈ [0, 1], r ∈ [0, 4].

We find two fixed points f(x_f) = x_f,

x_f = 0 or x_f = 1 − 1/r,

only the first of which is independent of the parameter r. Hence we satisfy (B1). Note that the second fixed point only exists for r > 1 and at r = 1 both fixed points are identical, indicating a transcritical bifurcation. We find the stability of these fixed points by computing f'(x) = r(1 − 2x); evaluating at the fixed points we find

f'(0) = r, so x_f = 0 is stable provided r < 1,
f'(1 − 1/r) = 2 − r, so x_f = 1 − 1/r is stable provided 1 < r < 3.

So at r = 1, we have a bifurcation of both points. For the fixed point x = 0, we have f_x(0, 1) = 1, satisfying (B2). The genericity conditions are also satisfied:

(G1) f_xx(0, 1) = −2 ≠ 0,

(G2) f_xr(0, 1) = 1 ≠ 0.

Hence we have a transcritical bifurcation. The new emerging fixed point is in fact just x = 1 − 1/r.

Figure 26: (a) Cobweb diagram showing the emergence of a new fixed point for r > 1. (b) Bifurcation diagram of the logistic map.

Top tip: You can quickly tell the difference between a saddle-node bifurcation and a transcritical bifurcation (without checking the bifurcation conditions) by drawing the graphs y = x and y = f(x, λ) and looking for intersections (corresponding to fixed points). If you see two fixed points being created/destroyed as λ is varied, it is a saddle-node bifurcation. If you only see one new fixed point being created, then it is a transcritical bifurcation.
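The transcritical bifurcation of the logistic map is easy to see numerically: below r = 1 all orbits decay to x = 0, while just above it they settle on the new fixed point x = 1 − 1/r. A small Python sketch (the sample values r = 0.8 and r = 2.5 are arbitrary illustrations):

```python
def f(x, r):
    # logistic map
    return r * x * (1 - x)

def iterate(x, r, n=2000):
    # iterate the logistic map n times from x
    for _ in range(n):
        x = f(x, r)
    return x

print(iterate(0.3, 0.8))   # decays towards the fixed point x = 0 (stable for r < 1)
print(iterate(0.3, 2.5))   # converges to x = 1 - 1/r = 0.6 (stable for 1 < r < 3)
```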

6.3 Period doubling bifurcation

Definition 3 (Periodic orbits: maps) We define a periodic orbit of a map x_{n+1} = f(x_n) to be a sequence of iterates x_0, x_1, ..., x_{n−1}, where each iterate x_i is not equal to any other iterate in the sequence and f^n(x_0) = x_0. We say x_0 is a period-n point, i.e., x_n = x_0.

Note that we may consider periodic orbits to be fixed points of the map x_{n+1} = f^n(x_n). How does one go from a fixed point to a periodic orbit? Via a period doubling bifurcation (sometimes called a flip bifurcation).

Theorem 6.4 (Period doubling bifurcation) Let

x_{n+1} = f(x_n, λ),   (52)

with x, λ ∈ R. If the following conditions hold:

(B1) f(x_b, λ_b) = x_b,

(B2) f_x(x_b, λ_b) = −1,

(G1) 2f_xxx(x_b, λ_b) + 3(f_xx(x_b, λ_b))² ≠ 0,

(G2) c = [f_λ f_xx + 2f_xλ](x_b, λ_b) ≠ 0,

then (52) has the topological normal form

y_{n+1} = −(1 + λ)y_n ± y_n³,   (53)

in a neighbourhood of (x_b, λ_b). If c > 0 then the bifurcation is said to be subcritical, with the normal form y_{n+1} = −(1 + λ)y_n − y_n³, while the bifurcation is supercritical if c < 0, with the normal form y_{n+1} = −(1 + λ)y_n + y_n³.

Example 34: Logistic map

Let's examine the logistic map again:

x_{n+1} = f(x_n, r) = r x_n (1 − x_n)

has the fixed point x = 1 − 1/r that is stable for 1 < r < 3. At r = r_b = 3, the fixed point is x_b = 2/3 and we find f_x(x_b, r_b) = f_x(2/3, 3) = −1. Hence we satisfy conditions (B1) and (B2). For the genericity conditions we find

(G1) 2f_xxx(x_b, r_b) + 3(f_xx(x_b, r_b))² = 108 ≠ 0,

(G2) c = f_r(x_b, r_b) f_xx(x_b, r_b) + 2f_xr(x_b, r_b) = −2 < 0,

hence the bifurcation is supercritical. The new emerging periodic orbit is found by finding fixed points of the map f²(x, r) = f(f(x, r), r), i.e., solving f(f(x, r), r) = x,

r[rx(1 − x)](1 − [rx(1 − x)]) = x,
r³x⁴ − 2r³x³ + (r³ + r²)x² + (1 − r²)x = 0.

Now in general, it is very hard to solve quartic polynomials. However, there is a trick that will help us... any fixed point of f(x, r) is also a fixed point of f²(x, r), since f(f(x, r), r) = f(x, r) = x. Hence the fixed points of the logistic map are roots of the above quartic polynomial, and we can factorise to find

r³x⁴ − 2r³x³ + (r³ + r²)x² + (1 − r²)x = P(x, r)[f(x, r) − x],

where P(x, r) = r²x² − (r² + r)x + (r + 1). Finding roots of P(x, r) yields

x_{p±} = (r + 1)/(2r) ± √((r + 1)(r − 3))/(2r).

These two solutions form the new periodic orbit. It can be shown that this periodic orbit is stable for r ∈ (3, 1 + √6) by considering the map x_{n+1} = f²(x_n, r) and linearising this map about the above roots of P(x, r).

In fact, the logistic map has an infinite sequence of period doubling bifurcations, eventually resulting in chaotic dynamics. This is one of the most common ways in which both ODEs (for instance the Rössler system (46) and the Lorenz system) and maps go from order to chaos, known as the period doubling route to chaos.

Figure 27: Bifurcation diagram of the logistic map showing the infinite sequence of period doubling bifurcations accumulating at r = r_∞ ≈ 3.5699, corresponding to the onset of chaotic dynamics, and a periodic window. Matlab code: [Logistic map bifurcation diagram]

The values of the period doubling bifurcations for a 2^n-periodic orbit are

r_1 = 3, period-2 orbit is born,
r_2 = 1 + √6 = 3.4495...,
r_3 = 3.5441...,
r_4 = 3.5644...,
r_5 = 3.5688...,
...
r_∞ = 3.5699...
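The explicit formula for the period-2 cycle is easy to check numerically: f should swap the two roots of P(x, r), and their multiplier f'(x_+)f'(x_−) should have modulus below 1 for r between 3 and 1 + √6. A Python sketch (r = 3.2 is an arbitrary value in that range):

```python
import math

def f(x, r):
    # logistic map
    return r * x * (1 - x)

r = 3.2   # any r in (3, 1 + sqrt(6)) will do
disc = math.sqrt((r + 1) * (r - 3)) / (2 * r)
xp = (r + 1) / (2 * r) + disc
xm = (r + 1) / (2 * r) - disc

# f swaps the two points, so together they form a genuine 2-cycle
print(f(xp, r), xm)
print(f(xm, r), xp)

# stability: the cycle's multiplier is the product of f' along the cycle
mult = r * (1 - 2 * xp) * r * (1 - 2 * xm)
print(mult)   # modulus below 1, so the 2-cycle is stable here
```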

We see that successive bifurcations occur closer and closer together, converging to a limiting value r_∞. If we calculate the distance between successive period doubling bifurcations, we find the distance shrinks by a constant factor (called Feigenbaum's constant)

δ = lim_{n→∞} (r_n − r_{n−1})/(r_{n+1} − r_n) = 4.669...

Amazingly, we see this factor and period doubling route in many other one-hump maps, e.g., x_{n+1} = r sin(πx_n). Beyond r_∞, chaotic dynamics are observed, but not for all values r > r_∞. For example, at approximately r = 3.83, we see a periodic window containing a stable period-3 orbit.

6.4 Periodic windows and Intermittency

Let us examine in more detail the periodic window containing a stable period-3 orbit at approximately r = 3.83. This is the largest of the periodic windows, and they are all created in much the same way. Hence, we will concentrate on the biggest periodic window. Looking at the bifurcation diagram, Figure 27, we see that as r is increased (in the periodic window) there is another period doubling cascade to chaos. This period doubling sequence is qualitatively similar to the one we have seen before. This qualitative similarity in the bifurcation is known as self-similarity and we will explore this further later. Also, as r is decreased, at r ≈ 3.8284 we see an immediate change to chaos rather than a period doubling cascade.

(Figure: graphs of y = x and y = f³(x) at r = 3.8 and r = 3.84, showing the two fixed points and the period-3 points.)

So how is a stable period-3 orbit created? A period-3 orbit satisfies the polynomial equation

x = f³(x) = f(f(f(x))),

where f(x) = rx(1 − x). Trying to analytically solve this polynomial problem is a bit tricky, since the polynomial is of degree 8. Instead, we will use the graphical method of looking for intersections of the graphs y = x and y = f³(x). At r = 3.84, we see 8 intersections of the graphs y = x and y = f³(x): two of these are fixed points, the other six are period-3 points (3 stable, 3 unstable). Decreasing r to r = 3.8, we see the 6 period-3 points have disappeared, so there must have been a saddle-node bifurcation. It can be shown the saddle-node bifurcation occurs at r = 1 + √8 = 3.8284... Just below this value of r, the logistic map exhibits intermittency: the chaotic dynamics look a bit like a stable period-3 orbit for a short time. However, the period-3 orbit doesn't exist! We are seeing a ghost of the period-3 orbit...

Figure 28: Chaotic orbit x_n of the logistic map for r just below 1 + √8, showing intermittency: chaos interspersed with nearly period-3 behaviour. Matlab code: [Intermittency]

Graphically, we can understand this intermittency by looking at the graphs y = x and y = f³(x) for r just below the saddle-node value and plotting a cobweb trajectory. We see there are three narrow channels between the graph y = f³(x) and y = x. These channels get bigger as r is further decreased away from the saddle-node bifurcation. Zooming in near one of these channels, we see an orbit may take many iterations to pass through the small channel. In this channel, x_n ≈ f³(x_n), and so the orbit resembles the period-3 orbit. Eventually, the trajectory passes through the channel and becomes chaotic until it comes back to one of the small channels.

Figure 29: Cobweb diagram of the cubic composition of the logistic map, i.e., x_{n+1} = f³(x_n, r), for r just below 1 + √8. Matlab code: [Cobweb Intermittency]

This intermittent chaos is observed in many systems where the transition from periodic to chaotic behaviour occurs due to a saddle-node bifurcation of periodic orbits.
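Both sides of the saddle-node at r = 1 + √8 can be explored numerically: just above it, iteration locks onto the stable period-3 cycle. A Python sketch (r = 3.84 as in the discussion above; the transient length is an arbitrary choice):

```python
def f(x, r):
    # logistic map
    return r * x * (1 - x)

def f3(x, r):
    # cubic composition f³
    return f(f(f(x, r), r), r)

r = 3.84                 # inside the period-3 window
x = 0.5
for _ in range(3000):    # discard the transient; the orbit settles on the stable 3-cycle
    x = f(x, r)

cycle = [x, f(x, r), f(f(x, r), r)]
print(cycle)             # the three points of the stable period-3 orbit
```

Lowering r just below 1 + √8 ≈ 3.8284 and plotting the iterates instead reproduces the intermittent time series of Figure 28.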
Intermittency is observed in experimental systems where nearly periodic motion is interspersed with irregular bursts of aperiodic, chaotic motion. As the experimental control parameter is moved away from the saddle-node bifurcation, the bursts become more frequent until the system becomes fully chaotic. This is known as the intermittency route to chaos.

This intermittent behaviour is typical of systems near a saddle-node bifurcation. In order to analyse this a bit, we first look at the normal form for a saddle-node bifurcation in 1D,

ẋ = r − x².

Taking r < 0, we have no equilibria. We can solve this ODE with x(0) = x_0 to find

x(t) = √(−r) tan( −√(−r) t + arctan(x_0/√(−r)) ).

Hence, we find

t = (1/√(−r)) [ arctan(x_0/√(−r)) − arctan(x(t)/√(−r)) ].

Now, we can use this to find the time it takes x(t) to go from x = 1 to x = −1, i.e.,

t_1 = (2/√(−r)) arctan(1/√(−r)),

and as r → 0 from below, we find t_1 → π/√(−r). Hence, as |r| gets smaller and smaller, the length of time spent in the interval [−1, 1] grows.

So what does this calculation have to do with the intermittency seen in a map? We can approximate the continuous derivative with a discrete approximation,

ẋ ≈ (x_{n+1} − x_n)/Δt.

Hence, substituting this into ẋ = r − x², we find

x_{n+1} = x_n + Δt (r − x_n²),

and we would expect the dynamics of this map to closely match those of the continuous ODE provided Δt is small. If we employ the rescaling x_n → x_n/Δt (and rescale r accordingly), we find this map reduces to the normal form for a saddle-node bifurcation in maps,

x_{n+1} = r + x_n − x_n².

The bifurcation diagram for this map is the same as for the ODE, and we would then expect the time spent in the interval [−1, 1] to also approximately scale like 1/√(−r).
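The passage-time estimate can be checked against a direct simulation of ẋ = r − x²; a Python sketch comparing the exact time from x = 1 to x = −1, the limiting estimate π/√(−r), and a forward-Euler simulation (the values of r and Δt are illustrative):

```python
import math

r = -1e-4                # just below the saddle-node: no equilibria, slow channel near x = 0
s = math.sqrt(-r)

t_exact = (2.0 / s) * math.atan(1.0 / s)   # exact passage time from x = 1 to x = -1
t_approx = math.pi / s                     # limiting estimate as r -> 0 from below
print(t_exact, t_approx)

# forward-Euler simulation; this is exactly the map x_{n+1} = x_n + Δt(r - x_n²)
x, t, dt = 1.0, 0.0, 1e-3
while x > -1.0:
    x += dt * (r - x * x)
    t += dt
print(t)   # close to t_exact; most of this time is spent in the channel near x = 0
```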

6.5 Period 3 Implies Chaos

The existence of a period-3 orbit in a one-dimensional continuous map allows us to say far more about the existence of other periodic orbits!

Theorem 6.5 Consider the one-dimensional map x_{n+1} = f(x_n). If f is a continuous function and the map has a period-3 orbit, then the map also has periodic orbits of all other periods.

This theorem says nothing about the stability of all the other periodic orbits (hence, we don't necessarily see them in the period-3 window). It can also be shown that the existence of a period-3 orbit implies the existence of infinitely many orbits of the map that are not periodic. Before proving this theorem, we shall first prove two other lemmata.

Lemma 1 Let I = [a, b] denote an interval and f be a continuous map. If f(I) ⊇ I, then f has a fixed point in I.

Proof. Let f(I) = [c, d] denote the interval that f maps I to. Since f(I) ⊇ I, we have c ≤ a < b ≤ d. Furthermore, there exist α, β ∈ I such that f(α) = c, f(β) = d. To show that there exists a solution of the fixed point problem f(x) = x, we consider the function g(x) = f(x) − x and we wish to show that there exists an x* such that g(x*) = 0. To do this we note

g(α) = f(α) − α = c − α ≤ c − a ≤ 0,

and

g(β) = f(β) − β = d − β ≥ d − b ≥ 0.

Now, by the Intermediate Value Theorem (see Real Analysis 2), there exists an x* between α and β such that g(x*) = f(x*) − x* = 0, and this x* is a fixed point of f.

Corollary 1 If f^n(I) ⊇ I, then f^n has a fixed point in I, and hence f has a period-n point in I.

Lemma 2 Let f be a continuous map. If I and J are two closed intervals such that f(I) ⊇ J, then there exists a closed subinterval I' ⊆ I such that f(I') = J.

Proof. Let I = [a, b] and J = [c, d]. Now c ∈ J ⊆ f(I), and so there exists a' ∈ I such that f(a') = c. Similarly, d ∈ f(I), and so there exists b' ∈ I such that f(b') = d. There may in fact be several values a', b' satisfying this condition. If b' ≥ a', then we define I' = [a', b'], otherwise we define I' = [b', a'].
Now we may choose a' and b' such that there does not exist a γ between a' and b' with f(γ) = c or d; otherwise we could re-define a' = γ if f(γ) = c, or b' = γ if f(γ) = d. Hence, f(I') = J.

We are now in a position to prove Theorem 6.5.

Proof of Theorem 6.5. We start by considering the period-3 orbit of the map f and we denote x_0, x_1 and x_2 to be the period-3 points. We may assume without loss of generality that x_0 < x_1 < x_2 (otherwise the points have the ordering x_0 < x_2 < x_1, and in what follows we may re-define I_0 = [x_2, x_1] and I_1 = [x_0, x_2], and the rest of the proof remains unchanged). We define the two intervals I_0 = [x_0, x_1] and I_1 = [x_1, x_2]. Since f(x_0) = x_1, f(x_1) = x_2 and f is continuous, then

f(I_0) ⊇ I_1.   (54)

Similarly, since f(x_1) = x_2, f(x_2) = x_0 and f is continuous, then f(I_1) ⊇ I_0 ∪ I_1. Hence, it follows that

f(I_1) ⊇ I_0,   (55)
f(I_1) ⊇ I_1.   (56)

We now construct periodic orbits of all periods.

Period 1: From (56) and Lemma 1, we can conclude that f has a fixed point in I_1.

Period 2: Applying Lemma 2 to (55), we can infer the existence of an interval I' ⊆ I_1 such that f(I') = I_0. Now from (54), we have f(f(I')) = f(I_0) ⊇ I_1 ⊇ I', and now we can apply Lemma 1 to find that f² has a fixed point in I' (and hence a period-2 orbit of f). Since f(I') = I_0, the other point of the period-2 orbit is in I_0, and so there is a non-trivial period-2 orbit that oscillates between the intervals I_0 and I_1.

Period n > 3: The idea is to use Lemma 2 n − 2 times to generate a nested set of n − 2 subintervals of I_1, all of which contain a fixed point of f. In order to complete the construction, we need two more intervals, A_{n−1} and A_n; see the figure below.

(Figure: the nested intervals A_1 ⊇ A_2 ⊇ ... ⊇ A_{n−2} inside I_1, together with A_{n−1} ⊆ I_0 and A_n ⊆ I_1.)

Using Lemma 2 with (56), we can infer the existence of an interval A_1 ⊆ I_1 such that f(A_1) = I_1. Since A_1 ⊆ I_1 and f(A_1) = I_1 ⊇ A_1, we can use Lemma 2 again to find another interval A_2 ⊆ A_1 such that f(A_2) = A_1. Continuing this process n − 2 times, we generate a collection of intervals A_i, i = 1, 2, ..., n − 2, satisfying

A_{n−2} ⊆ A_{n−3} ⊆ ... ⊆ A_2 ⊆ A_1 ⊆ I_1,

with f(A_i) = A_{i−1}, i = 2, ..., n − 2. It can be shown that all these intervals contain a fixed point of f. Furthermore, we have f^{n−2}(A_{n−2}) = I_1. Now we need two more intervals, A_{n−1} and A_n, to complete the construction. From (54) we have f(I_0) ⊇ I_1, and we also know that I_1 ⊇ A_{n−2}. Hence, by Lemma 2, there exists an interval A_{n−1} ⊆ I_0 such that f(A_{n−1}) = A_{n−2}. Finally, since f(I_1) ⊇ I_0 ⊇ A_{n−1}, there is an interval A_n ⊆ I_1 such that f(A_n) = A_{n−1}. We now have n subintervals such that f(A_i) = A_{i−1}, i = 2, ..., n, and f(A_1) = I_1. Thus,

f^n(A_n) = I_1 ⊇ A_n,

and so by Corollary 1, f has a period-n point. It can be shown that the minimal period of this period-n point is n.

6.6 Lyapunov Exponents

The sensitive dependence on initial conditions property of chaotic systems describes how (arbitrarily) small differences in initial conditions get amplified as the iteration proceeds. A Lyapunov exponent characterises the average rate of growth of small differences. It can also be used to calculate stability of periodic orbits. Lyapunov exponents can be defined for both ODEs and maps; we will just concentrate on Lyapunov exponents for maps.

The key idea is this: we consider an initial condition x_0 and its orbit x_n. We then consider a small perturbation x_0 + ε_0 and its orbit x_n + ε_n. Here, ε_n measures the separation between the two orbits x_n and x_n + ε_n. If |ε_n| ≈ |ε_0| e^{nλ}, then λ is called the Lyapunov exponent. A negative λ implies that ε_n → 0 and the orbits converge. On the other hand, a positive Lyapunov exponent tells us the orbits diverge and is a signature of chaotic dynamics.
To derive a formula for computing λ for a map x_{n+1} = f(x_n), where f is a continuous and differentiable function, we start with the relation |ε_n| ≈ |ε_0| e^{nλ}; rearranging for λ, taking logs of both sides, we arrive at

λ ≈ (1/n) ln |ε_n/ε_0|.

Using the fact that the separation between the two orbits is ε_n = f^n(x_0 + ε_0) − f^n(x_0), we find

λ ≈ (1/n) ln |ε_n/ε_0|
  = (1/n) ln |(f^n(x_0 + ε_0) − f^n(x_0))/ε_0|
  = (1/n) ln |d/dx (f^n(x_0))|,  (taking the limit as ε_0 → 0),
  = (1/n) ln ∏_{k=0}^{n−1} |f'(x_k)|,  (using the chain rule on f^n(x_0)),
  = (1/n) ∑_{k=0}^{n−1} ln |f'(x_k)|,  (replacing the log of a product with the sum of logs).

Definition 4 (Lyapunov Exponent) The Lyapunov exponent of a 1D map x_{n+1} = f(x_n) is defined as

λ = lim_{n→∞} (1/n) ∑_{k=0}^{n−1} ln |f'(x_k)|,   (57)

if the limit exists.

We can use the Lyapunov exponent to tell us about stability of periodic orbits as well as sensitive dependence on initial conditions.

Example 35: Condition for stable periodic orbits

Suppose that x_0 is a period-m point, so that x_m = x_0. Then the Lyapunov exponent becomes

λ = (1/m) ∑_{k=0}^{m−1} ln |f'(x_k)|.

The periodic orbit is stable if λ < 0, i.e., |ε_n| = |ε_0| e^{nλ} → 0. Hence a periodic orbit is stable if

∏_{k=0}^{m−1} |f'(x_k)| < 1.

Example 36: Lyapunov exponent of the logistic map

In general, one needs to numerically calculate Lyapunov exponents. We do this for the logistic map x_{n+1} = rx_n(1 − x_n). We fix r and compute, for a random initial condition, the first 300 iterates of the logistic map (note we need only store the current iterate). We then compute the next 10,000 iterates of the logistic map, and for each iterate we compute ln |r − 2rx_n| and add this to a running sum of logs. Finally, we divide the total sum of logs by 10,000. Carrying out this procedure for a range of values of r yields Figure 30. We see the Lyapunov exponent is negative for the stable periodic orbits. At each period-doubling bifurcation, the Lyapunov exponent becomes zero, indicating a bifurcation. In the chaotic region, we see the Lyapunov exponent is positive, with interspersed dips in the periodic windows. The deepest dips occur when the periodic orbit of the logistic map has an iterate x_k with f'(x_k) = 0, i.e., the Lyapunov exponent is ln(0) = −∞. When this occurs, the orbit is said to be super-stable. Super-stable orbits have one of their iterates at x_k = 1/2, so that f(1/2) is at the maximum of the map.
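The numerical recipe just described is only a few lines; here is a Python sketch (the iterate counts 300 and 10,000 follow the text, the initial condition is an arbitrary illustration):

```python
import math

def lyapunov(r, x0=0.2, n_transient=300, n_sum=10000):
    # discard a transient, then average ln|f'(x_k)| = ln|r - 2 r x_k| along the orbit
    x = x0
    for _ in range(n_transient):
        x = r * x * (1 - x)
    total = 0.0
    for _ in range(n_sum):
        total += math.log(abs(r - 2 * r * x))
        x = r * x * (1 - x)
    return total / n_sum

print(lyapunov(3.2))   # stable period-2 regime: clearly negative
print(lyapunov(4.0))   # fully chaotic: close to ln 2 ≈ 0.693
```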

Figure 30: Plot of the bifurcation diagram and the Lyapunov exponent of the logistic map as r is varied, showing the period doubling bifurcations, the periodic windows and chaos. Matlab code: [Lyapunov exponent Logistic map]

6.7 Universality and Re-normalisation

Qualitatively, the same period-doubling sequence is observed in a variety of maps and differential equations, e.g., the sine map

x_{n+1} = r sin(πx_n),  0 ≤ r ≤ 1, x_n ∈ R,

and the Rössler system

ẋ = −y − z,  ẏ = x + ay,  ż = b + z(x − c),  (x, y, z) ∈ R³, a, b, c ∈ R.

The bifurcation diagrams for both the Rössler system and the sine map are shown in the figures below. For the sine map, we see the bifurcation diagram looks amazingly similar to that of the logistic map shown in Figure 27, except the bifurcation parameter now runs from 0 to 1 rather than 0 to 4. The scaling of the bifurcation diagram by a factor of 4 is, in part, due to the sine map having a maximum of r (at x = 1/2) whereas the logistic map has a maximum of r/4 (also at x = 1/2). However, the similarity is only qualitative! In the sine map, the period doubling bifurcations occur earlier and the periodic windows are narrower.

Figure 31: Bifurcation diagrams for the Rössler system (x_max(N) against c, with period doubling and the period-3 window) and the sine map x_{n+1} = r sin(πx_n) (with period doubling and the period-3 window as r varies). Matlab codes for generating the bifurcation diagrams can be found here: [Bifurcation diagram: Rössler] and [Bifurcation diagram: Sine map]

The bifurcation diagram for the Rössler system is also amazingly similar to that of the logistic map, with period-doubling bifurcations and periodic windows (in particular the famous period-3 window also seen in the logistic map and sine map!). What is making the Rössler system, sine map and logistic map behave similarly? They are either one-humped, unimodal maps (in the case of the sine and logistic maps) or the dynamics can be reduced to approximately a unimodal map (in the case of the Rössler system; see Figure 23).

Definition 5 (Unimodal maps) Unimodal maps have a single maximum, and are smooth and concave down. In particular, if a unimodal map f satisfies the following conditions, then the map x_{n+1} = rf(x_n) will undergo the period-doubling route to chaos:

1. f(0) = f(1) = 0,

2. f is a smooth function which has a quadratic maximum at x_m, i.e., f''(x_m) ≠ 0,

3. f is monotonic in the intervals [0, x_m) and (x_m, 1],

4. f has a negative Schwarzian derivative, i.e.,

f'''(x)/f'(x) − (3/2) (f''(x)/f'(x))² < 0.

For the discussion that follows, we consider a unimodal map f(x, r) that undergoes a period-doubling route to chaos as r is increased, with the maximum value of f occurring at x_m (in the logistic map and sine map, x_m = 1/2). Furthermore, we denote by r_n the value of r where a 2^n-cycle is born, and by R_n the r value where the 2^n-cycle is super-stable, i.e., where the multiplier ∏_i f'(x_i) of the periodic orbit vanishes.

Example 37: Super-stable orbits in the logistic map

Fixed points (period-1 orbits) of the logistic map x_{n+1} = rx_n(1 − x_n) are given by x = 0 and x = 1 − 1/r.
Looking at the Floquet multipliers of the fixed points shows that f'(0, r) = 0 at r = 0 and f'(1 − 1/r, r) = 0 at r = 2. Hence, the fixed point x = 0 is super-stable when r = 0, and the fixed point x = 1 − 1/r is super-stable at r = 2, i.e., R_0 = 2. At r = 3, we found that the fixed point x = 1 − 1/r underwent a period-doubling bifurcation, where a period-2 cycle was born,

x_± = (r + 1)/(2r) ± √((r − 3)(r + 1))/(2r).

In the section on Lyapunov exponents, we saw that a period-m orbit is stable if |Π_{k=0}^{m−1} f'(x_k)| < 1. Hence, the period-2 cycle is super-stable if

f'(x_+) f'(x_−) = 0,

i.e.,

f'(x_+) f'(x_−) = r(1 − 2x_+) r(1 − 2x_−)
              = r²[1 − 2(x_+ + x_−) + 4 x_+ x_−]
              = r²[1 − 2(r + 1)/r + 4(r + 1)/r²]
              = 4 + 2r − r² = 0,

when r = 1 + √5. Hence, the period-2 cycle is super-stable when r = 1 + √5, i.e., R_1 = 1 + √5. Note that the period-2 cycle undergoes a period-doubling bifurcation at r = 1 + √6.

General rule for unimodal maps: a period-2^n cycle is born in a period-doubling bifurcation, becomes super-stable (hence the dips in the plot of the Lyapunov exponent), and then becomes unstable in a period-doubling bifurcation, after which the process is repeated for the now stable period-2^{n+1} cycle. The super-stable cycle always contains x_m as one of its points. Graphically, the locations of the super-stable 2^n-cycles can be found by looking at where the line x = x_m intersects the bifurcation curves in the fig-tree diagram, with r_1, R_1, r_2, R_2, r_3, ... marked along the r-axis. If successive r_n (the period-doubling bifurcation points) are shrinking by the universal factor δ ≈ 4.669, then so do the R_n (the super-stable points). The key idea for showing that the ratio between successive R_n,

δ = lim_{n→∞} (R_n − R_{n−1})/(R_{n+1} − R_n),

is a universal constant is the self-similarity of the fig-tree diagram: the branches look the same as those higher up the tree, just scaled in both r and x. It is this similarity that leads to the self-similar period-doubling sequence in the periodic windows described in 6.4.

To understand this self-similarity better, let's look at the graphs of y = f(x, r) at r = R_0 and y = f²(x, r) at r = R_1. We then want to transform (re-normalize) one map into the other by a scaling in r and x. This seems like a good idea since the fixed points x_m of both maps are super-stable. We take the Logistic map as a specific example just for the moment and plot f(x, R_0) and f²(x, R_1). We see that around x_m the graphs look qualitatively the same if we employ a flip of the

f²(x, R_1)-graph near x_m. Our first step in the analysis of this transformation is to shift x_m to the origin by letting x = x' + x_m, hence

x_{n+1} = f(x_n, r)  →  x'_{n+1} = g(x'_n, r).

Now, the function g(x', r) := f(x' + x_m, r) − x_m has a maximum at x' = 0, where g'(0, r) = 0. We now apply the same transformation shown in the figures above to g. The middle figure, g²(x', R_1), can be made to look like the first, g(x', R_0), if we flip the figure (i.e., (x, y) → (−x, −y)) and then blow it up by a factor |α| > 1. These two transformations can be done in one go if we choose the scale factor α to be negative. So to renormalize g, we take its second iterate, rescale x' → x'/α, and shift r to the next super-stable parameter value, i.e.,

g(x', R_0) ≈ α g²(x'/α, R_1).

For each super-stable parameter value, we can carry out the same renormalization procedure n times to find

g(x', R_0) ≈ α^n g^(2^n)(x'/α^n, R_n).

If α is chosen correctly, then in the limit as n → ∞ it is found that

lim_{n→∞} α^n g^(2^n)(x'/α^n, R_n) = h_0(x'),

where h_0 is a universal function with a super-stable fixed point at the origin x' = 0. The limit only exists if α is chosen correctly. The special thing about universal functions is that they only see the (local) maximum of g and do not depend on the global information in g. This occurs since x'/α^n → 0 as n → ∞, and so h_0 only depends on g through its behaviour near x' = 0. However, h_0 does depend on the order of the maximum of g; that is, h_0 is the same (universal) for all quadratic-maximum maps (maps with f''(x_m) ≠ 0). A different universal h_0 is found for all quartic-maximum maps, i.e., maps with f''(x_m) = 0 and f''''(x_m) ≠ 0, though this is not generic. We can find other universal functions h_i by starting with g(x', R_i) instead of g(x', R_0), to find

h_i(x') = lim_{n→∞} α^n g^(2^n)(x'/α^n, R_{n+i}),

where h_i is the universal function with a super-stable 2^i-cycle.
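The super-stable parameter values R_0 = 2 and R_1 = 1 + √5 computed in Example 37 are easy to check numerically: the multiplier of the cycle, the product of f'(x_i) over one period, should vanish. A short Python sketch (an addition; the notes' own codes are Matlab, and the settle time is an illustrative choice):

```python
import math

def logistic(r, x):
    return r * x * (1 - x)

def cycle_multiplier(r, x0, period, settle=1000):
    """Multiplier prod f'(x_i) = prod r(1 - 2 x_i) around a cycle,
    after letting the orbit settle onto the attracting cycle."""
    x = x0
    for _ in range(settle):
        x = logistic(r, x)
    prod = 1.0
    for _ in range(period):
        prod *= r * (1 - 2 * x)
        x = logistic(r, x)
    return prod

# R0 = 2: the fixed point x = 1 - 1/r = 1/2 has f'(1/2) = 0
print(cycle_multiplier(2.0, 0.3, 1))
# R1 = 1 + sqrt(5): the period-2 cycle contains x = 1/2, so its multiplier vanishes
print(cycle_multiplier(1 + math.sqrt(5), 0.3, 2))
```

Both multipliers come out essentially zero, confirming that the super-stable cycle passes through the maximum x_m = 1/2.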

The really important case is the universal function h_∞ with R_∞, since

g(x', R_∞) ≈ α g²(x'/α, R_∞),

and we see that for once we don't have to shift R when we renormalise! The limiting universal function h_∞(x'), usually called h(x'), satisfies the functional equation

h(x') = α h(h(x'/α)).  (58)

Solving this functional equation requires finding the universal scale factor α and the universal function h(x'). This is a hard equation to solve! Functional equations are like differential equations, and to solve them we need to define some boundary conditions. Since our shifted map g(x') has a maximum at x' = 0, we require h'(0) = 0. Also, we can set h(0) = 1 without loss of generality (this just defines the scale for x'; if h(x') is a solution of (58) then so is β h(x'/β), with the same α). At x' = 0 the functional equation yields h(0) = α h(h(0)); substituting in the boundary condition h(0) = 1, we find

α = 1/h(1).

Hence, α is determined by h at x' = 1. Now the hard part... finding a closed-form solution for h; so far no one has managed it. Feigenbaum resorted to the power series approximation

h(x') = 1 + a_2 x'² + a_4 x'⁴ + ...,

which assumes the maximum of g is quadratic. The coefficients a_2, a_4, ... are found by substituting the power series into (58) and equating coefficients of like powers of x'. Taking seven terms in the power series approximation, we find a_2 ≈ −1.5276 and a_4 ≈ 0.1048. Evaluating 1/h(1) using these coefficients, we find the scale factor α ≈ −2.5029.

This renormalisation theory can also give us the Feigenbaum constant δ = 4.669..., but this requires a lot of Functional Analysis; see the third-year course Introduction to Function Spaces (MAT34)! Instead, we will follow the algebraic calculations done in chapter 10.7 of Strogatz's book, Nonlinear Dynamics & Chaos [5]. We start with a unimodal map f(x, r) that undergoes an infinite sequence of period-doubling bifurcations. From Theorem 6.4 we know that near each period-doubling bifurcation the unimodal map is topologically equivalent to the normal form (53).
However, we wish to consider a slightly different normal form,

y_{n+1} = f(y_n, λ) := −(1 + λ) y_n + y_n²,

where we have ignored the positive cubic term y_n³ in (53). Algebraically, all maps near a period-doubling bifurcation have this form. Now, we know that for λ > 0 there exists a period-2 cycle, y = p and y = q, such that f(p, λ) = q and f(q, λ) = p, i.e., f²(p, λ) = f(f(p)) = p and f²(q, λ) = f(f(q)) = q. We can solve for p and q to find

p = (λ + √(λ² + 4λ))/2,  q = (λ − √(λ² + 4λ))/2.

Note that p is a fixed point of f².

We now shift the origin to p and look at the local dynamics, similar to the first renormalisation step of shifting the location of the super-stable points R_i. To do this, we expand

p + η_{n+1} = f²(p + η_n, λ)

in powers of the small perturbation η_n. After a bit of algebra and neglecting higher-order terms, we find

η_{n+1} = (1 − 4λ − λ²) η_n + C η_n² + ...  (59)

where

C = 4λ + λ² − 3√(λ² + 4λ).  (60)

Algebraically, the map (59) has the same form as the period-doubling normal form. To turn it into exactly the period-doubling normal form, we need to carry out the second renormalisation step: rescaling η. To do this, let ỹ_n = C η_n; then (59) becomes

ỹ_{n+1} = (1 − 4λ − λ²) ỹ_n + ỹ_n² + ...,

and if we define a new parameter λ̃ such that −(1 + λ̃) = 1 − 4λ − λ², then we get

ỹ_{n+1} = −(1 + λ̃) ỹ_n + ỹ_n²,  (61)

which is exactly the same as the period-doubling normal form. Note that C plays the role of the rescaling factor α in the renormalisation theory. Now, when λ̃ = 0, the renormalised map (61) undergoes a period-doubling bifurcation, corresponding to the birth of a period-4 cycle in the original unimodal map f. Solving λ̃ = λ² + 4λ − 2 = 0, we find λ = −2 + √6 ≈ 0.449. This value predicts the creation of the period-4 orbit in the Logistic map: since r = r_1 = 3 (the first period-doubling bifurcation) corresponds to λ = 0, we get r_2 = 3 + (−2 + √6) = 1 + √6, the location of the second period-doubling bifurcation in the Logistic map!

Since (61) has the same form as the period-doubling normal form, we can carry out the same renormalisation transformation over and over again, until the onset of chaos! Let λ_k be the bifurcation value where the original map creates a period-2^k orbit. So far we have found λ_1 = 0 and λ_2 = −2 + √6. In general, the λ_k satisfy

λ_{k−1} = λ_k² + 4λ_k − 2.

Solving for λ_k, we get

λ_k = −2 + √(6 + λ_{k−1}).

This is a 1D map for the period-doubling bifurcation points, with λ_1 = 0. This map has a stable fixed point λ* given by

λ* = (−3 + √17)/2 ≈ 0.56.

For the Logistic map this gives r_∞ ≈ 3 + λ* ≈ 3.56, whereas the actual value is r_∞ ≈ 3.57! Not bad considering the massive approximations being made. Recall that the Feigenbaum constant δ is

δ = lim_{k→∞} (r_k − r_{k−1})/(r_{k+1} − r_k) = lim_{k→∞} (λ_k − λ_{k−1})/(λ_{k+1} − λ_k).

Since this ratio tends to 0/0 as k → ∞, we may use L'Hôpital's rule to find

δ ≈ dλ_{k−1}/dλ_k |_{λ=λ*} = 2λ* + 4 = 1 + √17 ≈ 5.12.

This approximation is within 10% of the true value δ = 4.669... Substituting λ* into C, we find C ≈ −2.24, which is also within 10% of the true value α ≈ −2.50.
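The algebra above can be checked numerically: iterating λ_k = −2 + √(6 + λ_{k−1}) from λ_1 = 0 converges rapidly to λ* = (−3 + √17)/2, and evaluating 2λ* + 4 and C(λ*) reproduces the crude estimates of δ and α. A short Python sketch (an addition to the notes):

```python
import math

# Iterate the 1D map for the period-doubling parameter values,
# starting from lambda_1 = 0 (the first period doubling, r = 3).
lam = 0.0
for _ in range(50):
    lam = -2 + math.sqrt(6 + lam)

lam_star = (-3 + math.sqrt(17)) / 2      # the stable fixed point
print(lam, lam_star)                     # both approximately 0.5616

# Crude renormalisation estimates of the universal constants:
delta_est = 2 * lam_star + 4             # = 1 + sqrt(17), about 5.12
C_est = (4 * lam_star + lam_star**2
         - 3 * math.sqrt(lam_star**2 + 4 * lam_star))   # about -2.24
print(delta_est, C_est)
```

Both estimates land within roughly 10% of the true values δ ≈ 4.669 and α ≈ −2.50, as claimed.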

7 Chaotic 1D maps

The Logistic map isn't the only 1D map that exhibits chaotic dynamics. Two simpler examples are the Doubling map and the Tent map.

7.1 The Doubling map

The Doubling map is defined as

x_{n+1} = D(x_n) = { 2x_n if x_n ∈ [0, 1/2),  2x_n − 1 if x_n ∈ [1/2, 1] }.  (62)

Here the interval [0, 1] gets stretched to twice its length, cut in half, and then each half is mapped onto the interval [0, 1]. This is a discontinuous version of the stretch-and-fold mechanism for chaos. It is similar to rolling out pastry, cutting it in half, and overlaying the two halves.

7.2 The Tent map

The Tent map is defined as

x_{n+1} = T(x_n) = { 2x_n if x_n ∈ [0, 1/2),  2 − 2x_n if x_n ∈ [1/2, 1] }.  (63)

Here the interval [0, 1] gets stretched to twice its length, folded in half, and then each half is mapped onto the interval [0, 1]. This map is continuous but not differentiable at x = 1/2. It is called the Tent map since its graph looks like a tent.

7.3 Symbolic Dynamics

Let us start by looking at the Doubling map. We wish to understand the orbits of the Doubling map (some orbits are boring, e.g., x_0 = 0; we will ignore these). Our aim is to somehow link the orbits of the Doubling map to a simpler map that we can analyse.
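Before doing so, note that both maps (62) and (63) are one-liners to implement and easy to experiment with. A sketch in Python (an addition; the notes' own codes are Matlab):

```python
def doubling(x):
    """Doubling map (62): stretch by 2, cut, reassemble, i.e. 2x mod 1."""
    return 2 * x if x < 0.5 else 2 * x - 1

def tent(x):
    """Tent map (63): stretch by 2 and fold at x = 1/2."""
    return 2 * x if x < 0.5 else 2 - 2 * x

# A few iterates of each map, starting from x0 = 0.1:
x = y = 0.1
for _ in range(5):
    x, y = doubling(x), tent(y)
    print(round(x, 6), round(y, 6))
```

The Doubling orbit of 0.1 visits 0.2, 0.4, 0.8, 0.6, 0.2, ..., which is used again in the itinerary example below.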

To do this, we will write down a 0 every time an iterate x_i lands in the interval [0, 1/2) and a 1 every time an iterate lands in the interval [1/2, 1]. Now we can write down a sequence of 0s and 1s that represents a trajectory as

s_D(x_0) = .a_0 a_1 a_2 ...,  a_i ∈ {0, 1},

where

a_i = { 0 if D^i(x_0) ∈ [0, 1/2),  1 if D^i(x_0) ∈ [1/2, 1] }.

We call the sequence s_D(x_0) the itinerary of x_0. For example, if we consider the iterates of x_0 = 0.1 under the Doubling map, we get

0.1, 0.2, 0.4, 0.8, 0.6, 0.2, 0.4, ...

Then its itinerary is

s_D(0.1) = .000110011...

since x_0 = 0.1 ∈ [0, 1/2), hence a_0 = 0; similarly 0.4 ∈ [0, 1/2) gives a_2 = 0 and 0.8 ∈ [1/2, 1] gives a_3 = 1, etc.

The space of one-sided sequences (i.e., sequences that extend infinitely far in one direction) is called the sequence space. We define the sequence space on two symbols (0 and 1) as follows:

Σ_2 = {s = .a_0 a_1 a_2 ... | a_i = 0 or 1} = {0, 1}^N.

We have specified a method to transform an orbit of the Doubling map into an itinerary in the space Σ_2. This can be defined as the map

s_D : [0, 1] → Σ_2,  x ↦ s_D(x).

This map is invertible, i.e., for each x ∈ [0, 1] there is exactly one itinerary s_D(x) ∈ Σ_2. For the Doubling map, we can write down the inverse map s_D^{−1} explicitly:

s_D^{−1}(.a_0 a_1 a_2 ...) = Σ_{i=0}^∞ a_i / 2^{i+1} ∈ [0, 1].

Hence, s_D associates to x ∈ [0, 1] its binary expansion. Another way to see this is via the fact that the Doubling map is actually the map f(x) = 2x (mod 1).

We wish the space of itineraries Σ_2 to act like the dynamics of the Doubling map. To do this we define the shift map σ : Σ_2 → Σ_2, where

σ(.a_0 a_1 a_2 a_3 ...) = .a_1 a_2 a_3 ...,

so σ forgets the first symbol a_0 of the sequence. Since a_0 may be either 0 or 1, σ is a two-to-one map of Σ_2. The Doubling map D behaves on [0, 1] exactly like the shift map σ on Σ_2, i.e., D = s_D^{−1} ∘ σ ∘ s_D, as represented by the commuting diagram

  [0, 1] --D--> [0, 1]
    | s_D         | s_D
    v             v
   Σ_2  --σ-->   Σ_2

So to understand the dynamics of the Doubling map, we now just need to understand the dynamics on Σ_2, which is easier. First, though, we need some preliminary results. The space Σ_2 is a metric space with distance

d(s, t) = Σ_{i=0}^∞ |s_i − t_i| / 2^i

between sequences s = .s_0 s_1 s_2 ... and t = .t_0 t_1 t_2 ... Since |s_i − t_i| is either 0 or 1, this series is dominated by the geometric series

Σ_{i=0}^∞ 1/2^i = 2,

and therefore it converges (reassuring to know!).

Example 38: If s = .000... and t = .111..., then d(s, t) = 2. If r = .101010..., then

d(s, r) = Σ_{j even} 1/2^j = Σ_{i=0}^∞ 1/2^{2i} = Σ_{i=0}^∞ 1/4^i = 4/3,

where j = 2i.

The metric d allows us to decide which subsets of Σ_2 are open and which are closed, as well as which sequences are close to each other.

Lemma 3 Let s, t ∈ Σ_2 and suppose s_i = t_i for each i = 0, 1, ..., n. Then d(s, t) ≤ 1/2^n. Conversely, if d(s, t) < 1/2^n, then s_i = t_i for i ≤ n.

Proof. If s_i = t_i for i ≤ n, then

d(s, t) = Σ_{i=n+1}^∞ |s_i − t_i| / 2^i ≤ Σ_{i=n+1}^∞ 1/2^i = 1/2^n.

On the other hand, if s_j ≠ t_j for some j ≤ n, then we must have d(s, t) ≥ 1/2^j ≥ 1/2^n. Consequently, if d(s, t) < 1/2^n, then s_i = t_i for i ≤ n. □

This result allows us to ascertain whether or not two sequences are close to each other. In other words, two sequences are close to each other in Σ_2 provided their first few entries agree.

Theorem 7.1 (Chaotic dynamics of σ) The dynamical system on Σ_2 defined by a ↦ σ(a) satisfies the following properties:

1. σ has infinitely many periodic orbits,
2. σ has infinitely many non-periodic orbits,
3. σ has dense periodic points,
4. σ has a dense orbit, that is, an orbit that comes arbitrarily close to any point in Σ_2.

Proof. 1. Any sequence of the form s = .(a_0 a_1 a_2 ... a_k) repeated is periodic.

2. Consider the non-periodic orbit s* = .010010001..., made up of blocks of size n = 1, 2, ..., each containing n zeros followed by a 1. Also, σ^i(s*) is non-periodic for any i.

3. Dense periodic points: let s = .a_0 a_1 a_2 ... be any point. Now choose t = .(a_0 a_1 a_2 ... a_k) repeated, for some k. Then by Lemma 3, s and t differ by at most 1/2^k, and t is periodic. Thus we can find a periodic point as close as we like to s by taking longer and longer periods.

4. We enumerate, for each length n, all 2^n possible combinations of 0 and 1, i.e.,

n = 1: 0, 1;  n = 2: 00, 01, 10, 11;  n = 3: 000, 001, ...;  ...

A dense orbit is constructed by concatenating all of these blocks,

s_d = .0 1 00 01 10 11 000 001 ...,

and applying σ^i to s_d makes it hop around almost everywhere in Σ_2: if you choose a point s ∈ Σ_2 and you wish d(s, σ^i(s_d)) < 1/2^N = ε, then there is an i such that σ^i(s_d) agrees with s in its first N entries. □

As for the Doubling map, we can relate the dynamics of the Tent map to the dynamics of the shift map σ on the space Σ_2. Hence the Tent map is also chaotic. In fact, we can relate most one-hump maps to the dynamics of the shift map on Σ_2, including the Logistic map x_{n+1} = 4x_n(1 − x_n).

We note the shift map on Σ_2 has two other important properties:

1. Sensitive dependence on initial conditions. A map has sensitive dependence on initial conditions if, given any initial point x_0, there is a point y_0 arbitrarily close to x_0 such that the iterates starting from x_0 and y_0 eventually differ by an amount δ > 0, i.e., for some n, |x_n − y_n| ≥ δ. Choose two orbits s, r ∈ Σ_2 such that d(s, r) ≤ 1/2^n, so that the first n symbols are identical and the first unequal symbol occurs at the (n+1)th term. We then have

d(σ^{n+1}(s), σ^{n+1}(r)) = d(.s_{n+1} s_{n+2} ..., .r_{n+1} r_{n+2} ...) = Σ_{i=0}^∞ |s_{n+1+i} − r_{n+1+i}| / 2^i ≥ 1,

so arbitrarily close sequences are eventually separated by the shift map.

2. Mixing. Given any two arbitrarily small intervals I and J, a map is mixing if there is some initial point x_0 ∈ I whose orbit enters the interval J after a number of iterations. It is this property that allows the Taffy machine to mix the saltwater taffy.
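The conjugacy D = s_D^{−1} ∘ σ ∘ s_D can be illustrated directly: truncate an itinerary to finitely many symbols and compare σ acting on s_D(x) with s_D acting on D(x). A sketch in Python (an addition; the truncation lengths are illustrative choices):

```python
def doubling(x):
    return 2 * x % 1.0

def itinerary(x, n):
    """First n symbols of the itinerary s_D(x): 0 if the current iterate
    lies in [0, 1/2), 1 if it lies in [1/2, 1]."""
    seq = []
    for _ in range(n):
        seq.append(0 if x < 0.5 else 1)
        x = doubling(x)
    return seq

def shift(seq):
    """Finite version of the shift map sigma: forget the first symbol."""
    return seq[1:]

x0 = 0.1
print(itinerary(x0, 10))                 # starts .000110011...
# The diagram commutes: shifting the itinerary of x0 equals
# the itinerary of the image point D(x0).
print(shift(itinerary(x0, 12)) == itinerary(doubling(x0), 11))
```

This matches the itinerary s_D(0.1) = .000110011... worked out above.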

8 Fractals

Roughly speaking, a fractal is a complex geometric object which, if you zoom in at arbitrarily small scales, looks self-similar to the whole. There has been a lot of interest in fractals because of their beauty, complexity and endless structure. Fractals look approximately like natural objects, from snowflakes to blood-vessel networks to lightning across the sky. In order to understand fractals, we need to move away from classical geometry and develop a theory for highly irregular objects. For an introduction to fractals, the Chaos & Fractals website has a link to a movie explaining some of the key properties of fractals; [Fractals].

Let us begin with an example of a fractal.

Example 39: Cantor set. In 1883, Georg Cantor published a paper on the construction of a simple set that had many peculiar properties. To construct it, Cantor started with the unit interval C_0 = [0, 1] and then removed the open interval (1/3, 2/3), i.e., the middle third of the interval. This leaves the two intervals of length 1/3, namely [0, 1/3] and [2/3, 1]. The remaining set is the union C_1 = [0, 1/3] ∪ [2/3, 1]. Repeating this procedure of removing the middle third of each remaining interval infinitely many times yields the Cantor set C_∞.

The Cantor set has the following fractal properties:

1. C_∞ has structure at arbitrarily small scales. No matter how much we zoom in and look at C_∞, we see that it has points separated by gaps. This is most unlike usual structures (e.g., a square), where as you zoom in the picture becomes more featureless (e.g., a line).

2. C_∞ has self-similar structure. As we zoom in, we see exact copies of the whole. Note, most fractals don't have this exact self-similarity.

3. C_∞ has non-integer dimension. In fact, as we shall see, it has dimension d = log 2 / log 3 ≈ 0.63.

The Cantor set also has some other interesting (non-fractal) properties:

C_∞ has zero length. To see this, we note that the length of C_0 is 1, the length of C_1 is 2/3, the length of C_2 is 4/9, etc.
Hence, the length of C_k is (2/3)^k. Taking the limit as k → ∞ gives

length of C_∞ = lim_{k→∞} (2/3)^k = 0.

C_∞ has infinitely many points. See exercise sheet.

So C_∞ seems to belong somewhere in between a set of points and a line.

Example 40: Koch curve. The Koch curve was first described in 1904. It is constructed by starting with a line of length 1. The middle third of this line is then replaced by two sides of an equilateral triangle. This gives four line segments. Repeating this procedure infinitely many times leads to the Koch curve. Some interesting properties of the Koch curve:

Figure 32: Koch curve and Koch snowflake, showing steps 0 to 3 of the construction. Matlab code: [Koch snowflake] and [kochstep.m]

1. The Koch curve has infinite length. To see this, we note the initial line has length 1, and at the first step a line of length 1/3 is replaced by two line segments with combined length 2/3. Thus the length of the curve at the first step is 4/3. Similarly, at each step the length increases by a factor of 4/3, so that the length at step k is (4/3)^k. Taking the limit as k → ∞ yields

length of the Koch curve = lim_{k→∞} (4/3)^k = ∞.

So the Koch curve has infinite length but is contained within a finite area.

2. The Koch curve is self-similar. For example, taking the left-hand quarter of the curve and magnifying it by 3 gives the original Koch curve again.

3. The Koch curve is continuous but nowhere differentiable.

Thus, the Koch curve seems to fit somewhere between a line and a two-dimensional area.

8.1 Self-similarity

For the purpose of this course, we will restrict our discussion of fractals to those that reside in the plane R². One-dimensional fractals are obvious subsets of R², while the discussion can be generalised to fractals in R^n. Let us first recall some basic definitions for sets in R². We call a set in R² bounded if it can be enclosed by a suitably large circle, and closed if it contains all of its boundary points. Two sets in R² are congruent if they can be made to coincide exactly by translating and rotating them appropriately within R². If T : R² → R² is the linear operator (matrix) that scales by a factor s, and if S is a set in R², then the set T(S) (the set of images of points in S under T) is called a dilation of the set S if s > 1 and a contraction of S if 0 < s < 1.
Definition 6 (Self-similarity) A closed and bounded subset S of the plane R² is said to be self-similar if it can be expressed in the form

S = S_1 ∪ S_2 ∪ S_3 ∪ ... ∪ S_k,  (64)

where S_1, S_2, S_3, ..., S_k are non-overlapping sets, each of which is congruent to S scaled by the same factor s (0 < s < 1).

[Figure: a bounded set with an enclosing circle; two congruent sets; and the contraction T(x, y) = (sx, sy) mapping S to T(S).]

Example 41: A square can be expressed as the union of four non-overlapping congruent squares. (In the picture the four squares are separated slightly so that they can be seen more easily.) Each of the four smaller squares is congruent to the original square scaled by a factor of 1/2, i.e., a square is a self-similar set with k = 4 and s = 1/2.

8.2 Fractal Dimension

Intuitively, we may define the dimension of a subspace of a vector space to be the number of vectors in a basis of the subspace. This definition is a special case of a more general concept called the topological dimension, d_T(S). We won't give a precise definition here, but informally we say a point in R² has topological dimension zero, a curve in R² has topological dimension one, and a region in R² has topological dimension two.

An alternative definition for the dimension of an arbitrary set in R^n was given by the German mathematician Felix Hausdorff in 1919. His definition is quite complicated, but for self-similar sets it reduces to something simpler.

Definition 7 (Similarity dimension) The similarity dimension of a self-similar set S of the form (64) is denoted by d_sim(S) and is defined by

d_sim(S) = ln k / ln(1/s).  (65)
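Formula (65) is easy to evaluate; a quick Python check against the examples that follow (an addition to the notes; the Cantor set has k = 2, s = 1/3, the Koch curve k = 4, s = 1/3, and the square of Example 41 has k = 4, s = 1/2):

```python
import math

def d_sim(k, s):
    """Similarity dimension (65): ln k / ln(1/s) for a set made of k
    non-overlapping copies of itself, each scaled by the factor s."""
    return math.log(k) / math.log(1 / s)

print(d_sim(2, 1/3))   # Cantor set: ln 2 / ln 3, about 0.63
print(d_sim(4, 1/3))   # Koch curve: ln 4 / ln 3, about 1.26
print(d_sim(4, 1/2))   # square: exactly 2, an ordinary 2D region
```

Note that only the square, a non-fractal, gives an integer dimension.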

There are a couple of things to note about the similarity dimension:

- The topological dimension and similarity dimension of a set need not be the same.
- The similarity dimension of a set need not be an integer.
- The topological dimension of a set is less than or equal to the similarity dimension, i.e., d_T(S) ≤ d_sim(S).

Example 42: Cantor set. Starting from the line segment (interval) C_0 = [0, 1], the first step of the construction of the Cantor set, C_1, can be thought of as scaling C_0 by a scale factor 1/3 to get S_1 = [0, 1/3] and adding a copy of S_1 shifted by 2/3, i.e., S_2 = S_1 + 2/3 = [2/3, 1]. Therefore C_1 = S_1 ∪ S_2 and s = 1/3, k = 2. We can now compute the similarity dimension of the Cantor set:

d_sim(C) = ln 2 / ln 3 ≈ 0.63.

Example 43: Koch curve. The construction of the Koch curve can be thought of as the composition of four line segments, each scaled down by a factor of 3 from the original interval, i.e., s = 1/3, k = 4. Hence, the similarity dimension of the Koch curve is

d_sim(K) = ln 4 / ln 3 ≈ 1.26,

confirming our first impression that the Koch curve lies somewhere between a line and a two-dimensional area.

We note there are other ways of measuring the dimension of objects in Euclidean space, the most popular being the box dimension, the pointwise dimension and the correlation dimension. These methods allow us to cope with fractals that are not strictly self-similar. We will only deal with the box dimension. The box dimension is based on covering an object with boxes which have sides of length h, ignoring features that occur at scales less than h, and then counting the number of boxes N(h) as h → 0.

As h is reduced, N(h) increases, and we assume a power-law relationship between N(h) and 1/h:

N(h) = c (1/h)^d,

where c is a fixed constant. Taking logs and rearranging, we find

d = ln(N(h))/ln(1/h) − ln(c)/ln(1/h).

Now as h → 0, the last term vanishes and we are left with the box dimension.

Definition 8 (Box dimension) The box dimension of a set S is denoted by d_B and is defined by

d_B = lim_{h→0} ln(N(h)) / ln(1/h),

if the limit exists.

Example 44: Box dimension of a non-self-similar fractal. We construct a non-self-similar fractal as follows. A square region S_0 is divided into nine squares, and then two of the small squares are selected at random and discarded. Then the process is repeated on each of the remaining seven small squares, and so on. We pick the side length of the original square to be equal to one. Then S_1 is covered by N = 7 squares of side h = 1/3. Similarly, S_2 is covered by N = 7² squares of side h = (1/3)². In general, N = 7^n when h = (1/3)^n. Hence,

d_B = lim_{h→0} ln(N(h))/ln(1/h) = lim_{n→∞} ln(7^n)/ln(3^n) = lim_{n→∞} (n ln 7)/(n ln 3) = ln 7 / ln 3 ≈ 1.77.

Definition 9 (Fractal) A fractal is a subset of a Euclidean space whose (box/similarity) dimension is non-integer.

Example 45: Cantor-like set in the period-doubling route to chaos. Consider the Logistic map x_{n+1} = r x_n(1 − x_n) at r = r_∞ = 3.5699..., corresponding to the onset of chaos. We can visualise the attractor by building it up recursively. At each period-doubling bifurcation r_n, we have a 2^n-periodic orbit. The dots in the bifurcation diagram on the left correspond to stable 2^n-periodic orbits, with the right panel showing the corresponding x values of the dots. As n → ∞, this set of points approaches a Cantor-like set. However, this set isn't strictly a Cantor set, since it isn't self-similar: the gaps scale by different factors depending on their location. The resulting set is called a topological Cantor set.

Definition 20 (Topological Cantor set) A closed set S is called a topological Cantor set if it satisfies the following properties:

1. S is totally disconnected, i.e., S contains no connected subsets (other than single points). Informally, it means all points are separated.

2. S contains no isolated points, i.e., every point in S has a neighbour arbitrarily close by.

We note that these properties contrast with each other, since the first property says points in S are spread apart while the second property says they are all packed together. Furthermore, a topological Cantor set is not required to be self-similar or to have non-integer dimension. The box dimension of the topological Cantor set at the onset of chaos in the Logistic map has been found to be roughly d_B ≈ 0.538. This is an example of a strange attractor.
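The box-counting recipe of Definition 8 can also be carried out literally on a computer. The sketch below (an addition to the notes) box-counts a finite-level approximation of the middle-thirds Cantor set at scales h = 3^{−n}; the level-10 approximation and the chosen scales are illustrative:

```python
import math

def cantor_endpoints(level):
    """Left endpoints of the 2**level intervals of the level-`level`
    Cantor construction, in integer units of 3**(-level)."""
    pts = [0]
    for k in range(level):
        step = 2 * 3 ** (level - 1 - k)   # shift of the right-hand copy
        pts = pts + [p + step for p in pts]
    return pts

level = 10
pts = cantor_endpoints(level)
# Count boxes of side h = 3**(-n) that contain points of the set,
# then form the ratio ln N(h) / ln(1/h):
for n in (2, 4, 6):
    N = len({p // 3 ** (level - n) for p in pts})
    print(n, N, math.log(N) / (n * math.log(3)))
```

The counts are N = 2^n, so the ratio is ln 2 / ln 3 ≈ 0.63 at every scale, in agreement with the similarity dimension of Example 42.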

9 Strange Attractors and Repellers

9.1 Attractors

So far we have seen three examples of a chaotic attractor: in the Logistic map, the sine map, and the Rössler system. In all of these cases, orbits of the dynamical system converge to an attractor.

Definition 21 (Attractor) An attractor is a closed set A with the following properties:

1. A is an invariant set: any trajectory x(t) or x_n that starts in A stays in A for all time.

2. A attracts an open set of initial conditions: there is an open set U containing A such that if x(0) ∈ U or x_0 ∈ U, then the distance from x(t) or x_n to A tends to zero as t → ∞. The largest such U is called the basin of attraction of A.

3. A is minimal: there is no proper subset of A that satisfies conditions 1 and 2.

The key thing to note here is that an attractor is bounded, i.e., there are no trajectories that skip off to infinity. However, chaotic attractors exhibit sensitive dependence on initial conditions, where two slightly different initial conditions diverge exponentially fast, yet they remain bounded. It is the stretching-and-folding mechanism described in 7 that allows for both the sensitive dependence on initial conditions and the boundedness of trajectories.

Example 46: Rössler attractor. If we take a Poincaré section of the chaotic attractor of the Rössler system, we slice through all the folding of the attractor. Below, we sketch the Poincaré section of the attractor.

[Figure: the flatten-and-stretch, fold, repeat mechanism of the Rössler attractor, with the Poincaré and Lorenz sections marked.]

If we take a further one-dimensional slice (known as a Lorenz section) through the Poincaré section, we see an infinite set of intersections (points) with the Lorenz section. These points are separated by gaps of various sizes. This set of points is a topological Cantor set. Hence, the Rössler attractor has a non-integer (fractal) dimension.

Definition 22 (Strange Attractor) A strange attractor is an attractor of a dynamical system with a non-integer (fractal) dimension.
Example 47: The Taffy machine. A physical example of a strange attractor is the Taffy machine; see the movie on the Chaos & Fractals website [Taffy machine]. The topology of this machine is very complicated. The website [ halbert/road-show/start-here.html] describes the topology of the Taffy machine using animations, without which understanding the action of the Taffy machine would be almost impossible!

The action of the Taffy machine can be described (approximately) by the 1D Taffy map T : [0, 1] → [0, 1],

x_{n+1} = T(x_n) =
  a x_n        for 0 ≤ x_n < x_1,
  2 − a x_n    for x_1 ≤ x_n < x_2,
  a x_n − 2    for x_2 ≤ x_n < x_3,      (66)
  4 − a x_n    for x_3 ≤ x_n < 1/2,
  T(1 − x_n)   for 1/2 ≤ x_n ≤ 1,

where a is the constant stretch rate of the Taffy machine and x_1 = 3 − 2√2, x_2 = √2/2 and x_3 are the fold points. It can be shown that this map has infinitely many periodic and non-periodic orbits and that the periodic points are dense on the unit interval, similar to Theorem 7.1. There is also a 2D map that describes the Taffy machine; see the website above. This map can be shown to have a strange attractor.

9.2 2D maps

Example 48: The Baker map. The Baker map of the unit square 0 ≤ x, y ≤ 1 to itself is defined as

(x_{n+1}, y_{n+1}) = B(x_n, y_n) := { (2x_n, a y_n) for 0 ≤ x_n < 1/2,  (2x_n − 1, a y_n + 1/2) for 1/2 ≤ x_n ≤ 1 },  (67)

where a ∈ (0, 1/2] is a parameter. This map is similar to the 1D Doubling map in the x-direction (shown in 7). If we start with the University of Surrey logo on the unit square, the Baker map carries out the following steps:

1. The logo is stretched in the x-direction to [0, 2] and shrunk in the y-direction by a factor of a.
2. Then the rectangle is cut in half, yielding two 1 × a rectangles, which are stacked on top of each other.

For a < 1/2, the Baker map has a fractal attractor A that attracts all orbits. To see what this attractor looks like, we start with the unit square S. This contains all possible initial conditions. Applying the Baker map repeatedly to the unit square yields the following figure. We see that the set B^n(S) consists of 2^n horizontal strips of height a^n. The limiting set A = B^∞(S) is a fractal. Topologically, the fractal attractor is a Cantor set of line segments.

We can calculate the box dimension of this fractal attractor by looking at a covering of B^n(S) with square boxes of side h = a^n. Since the strips have unit length, we need a^{−n} boxes to cover each strip of B^n(S). There are 2^n strips, implying the number of boxes needed to cover B^n(S) is N ≈ a^{−n} 2^n = (2/a)^n.
Plugging this into the box-dimension formula yields

d_B = lim_{h→0} ln N(h) / ln(1/h) = lim_{n→∞} ln[(2/a)^n] / ln(a^{−n}) = (ln 2 − ln a)/(−ln a) = 1 + ln 2 / ln(1/a).
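The formula d_B = 1 + ln 2 / ln(1/a) is easy to explore numerically alongside the map itself; a sketch in Python (an addition; the notes' codes are Matlab, and the sample values of a are illustrative):

```python
import math

def baker(x, y, a):
    """Baker map (67) on the unit square."""
    if x < 0.5:
        return 2 * x, a * y
    return 2 * x - 1, a * y + 0.5

def d_box_baker(a):
    """Box dimension of the Baker attractor: 1 + ln 2 / ln(1/a)."""
    return 1 + math.log(2) / math.log(1 / a)

print(d_box_baker(0.25))   # a = 1/4: dimension 1.5, strictly between 1 and 2
print(d_box_baker(0.5))    # a = 1/2: dimension 2 (the map is then area-preserving)

# After n steps, every orbit lies in one of the 2**n strips of height a**n:
x, y = 0.2, 0.7
for _ in range(3):
    x, y = baker(x, y, 0.25)
print(x, y)
```

As a → 0 the strips become very thin and d_box_baker(a) → 1, consistent with the attractor degenerating towards a set of line segments.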

[Figure: the unit square S and its images B(S), B²(S), B³(S) under the stretch-and-flatten, cut-and-stack steps of the Baker map.]

For a < 1/2, the dimension of the Baker attractor is non-integer and hence it is a strange attractor. One can carry out a symbolic-dynamics analysis of the Baker map and prove a theorem similar to Theorem 7.1.

Example 49: Smale's Horseshoe map. Smale's Horseshoe map H crops up in the analysis of transient chaos. The Smale Horseshoe map does not have a strange attractor but a strange saddle: a set that remains the same under the action of the map. The map is constructed as follows. We start with a unit square, which we stretch and flatten. Taking this flattened and stretched square, we fold it into a horseshoe, overlay it on the unit square and slice off the overhanging parts. The process is repeated with the new square. Most points in the square are eventually removed.

In the context of transient chaos in the Lorenz equations, initial points starting in the square are stretched and folded, creating chaotic dynamics. However, almost all orbits get mapped out of the square (they end up in the overhangs) and the orbits escape to some distant part of phase space; for instance, a stable equilibrium in the Lorenz equations. The set of points that remain in the square (called an invariant set) under forward iteration of the Horseshoe map,

Λ⁺ = {x ∈ S | H^k(x) ∈ S for k = 0, 1, 2, ...},

can be shown to be a vertical Cantor set. Similarly, the invariant set under backward iteration of the Horseshoe map,

Λ⁻ = {x ∈ S | H^{−k}(x) ∈ S for k = 1, 2, 3, ...},

can be shown to be a horizontal Cantor set. The dynamics on the Horseshoe invariant set

Λ = Λ⁺ ∩ Λ⁻

can be described using symbolic dynamics. In particular, any point x(0) in Λ can be described by a doubly-infinite sequence of 0s and 1s,

S_H(x(0)) = (... a_{−2} a_{−1} . a_0 a_1 a_2 ...).

The dynamics on the Horseshoe set is topologically conjugate to the shift map on the space of two symbols. We have seen in Theorem 7.1 that this map has an infinite number of periodic and aperiodic points and correspondingly complicated (chaotic) dynamics. One can show that any non-degenerate crossing of stable and unstable manifolds in any dynamical system leads to a horseshoe map and correspondingly to chaos.

Application: Secret communication with chaos

Chaos in deterministic systems means that the system behaves unpredictably. In other words, even though we know the rules by which the system evolves, we cannot predict the outcome. What use is this knowledge? One application that has emerged uses the idea of chaos to mask secret communications. Chaos sounds a lot like noise, and so a transmission masked by chaos would also sound like noise, remaining undetected by an outside listener. The idea is as follows:

1. A chaotic signal u(t) is generated from an electronic realisation of a chaotic system (in this case the Lorenz equations) and added to a message m(t), which is then transmitted. The chaotic signal is very much larger than the message, so as to make the transmission sound as much as possible like noise, i.e., m ≪ u.

2. The sent signal is received and fed into the same electronic chaotic system. Amazingly, this chaotic system then synchronises with the original chaotic system (at the transmitter end), producing a chaotic signal that can be subtracted from the received transmission to yield the original message!

[Figure: the chaotic-synchronisation communication scheme — the transmitter adds a component u(t) of the signal from a chaotic circuit to the message m(t) (with m ≪ u) and transmits u(t) + m(t); at the receiver this signal drives an identical circuit, producing u_r(t), and the recovered message is u(t) + m(t) − u_r(t) ≈ m(t), since u(t) ≈ u_r(t).]

On the Chaos & Fractals website we have a movie demonstrating this idea; see [Sending Secret communications with Chaos]. Steven Strogatz's book Nonlinear Dynamics and Chaos provides further explanation.

So the key step in this method of communication is the synchronisation of two chaotic systems. Intuitively, you might think this is impossible for chaotic systems, since they are highly sensitive to slight changes in the initial conditions, and so any errors between the transmitter and receiver would grow exponentially. However, there is a way around this...
The transmitter is an electronic circuit in which three voltages at certain points in the circuit satisfy, via Kirchhoff's laws,

u̇ = σ(v − u),
v̇ = ru − v − 20uw,      (68)
ẇ = 5uv − bw,

where σ, r, b are circuit parameters. These equations are just the Lorenz equations (4) under the rescalings u = x/10, v = y/10, w = z/20. At the receiver, the received signal u(t) replaces the natural value of u_r in the receiver equations

u̇_r = σ(v_r − u_r),
v̇_r = ru(t) − v_r − 20u(t)w_r,      (69)
ẇ_r = 5u(t)v_r − bw_r,

where we have written u(t) to emphasise that the receiver circuit is being driven by the chaotic signal coming from the transmitter. The parameters in both (68) and (69) are the same and are chosen such that (u, v, w) behaves chaotically. The amazing result is that the receiver circuit quickly synchronises perfectly with the transmitter, starting from any initial conditions! To be clear, if we let t = (u, v, w) and r = (u_r, v_r, w_r), then the error between the transmitter and receiver, e = t − r, tends to zero, i.e., e → 0 as t → ∞. This happens despite the receiver only knowing part of t (it turns out synchronisation does not work for every choice of drive component...).

To show this synchronisation we need something called a Lyapunov function of e.

Definition 23 (Lyapunov function) Let

ẋ = f(x)

be a dynamical system with an equilibrium at x*. A Lyapunov function L(x) is a continuously differentiable, real-valued function with the following properties:

1. L(x) > 0 for all x ≠ x*, and L(x*) = 0.

2. L̇(x) < 0 for all x ≠ x* (i.e., all trajectories flow downhill towards x*).

Then x(t) → x* as t → ∞, and x* is said to be globally asymptotically stable. Intuitively, L(x) looks like a bowl down which all trajectories flow towards x*. Note that x* is then globally stable, not just linearly stable.

This is the magic formula needed to show that the error e between the transmitter and the receiver tends to zero!

Problem: How do we find a Lyapunov function? There is no systematic method for finding Lyapunov functions (usually divine inspiration, guesswork, etc.). Thankfully, one does exist for our synchronisation problem...
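Before constructing a Lyapunov function for the error system, here is a warm-up showing why the definition is useful: for ẋ = −x³ the linearisation at x* = 0 is useless (the eigenvalue is zero), but L(x) = x² is a Lyapunov function, since L̇ = 2xẋ = −2x⁴ < 0 for x ≠ 0, so the origin is globally asymptotically stable. A quick numerical check (a Python sketch, not part of the course's Matlab codes):

```python
def lyapunov_values(x0, dt=0.001, steps=5000):
    # Euler-integrate xdot = -x**3 and record L(x) = x**2 along the way;
    # the Lyapunov property says these values must decrease monotonically
    x = x0
    values = [x * x]
    for _ in range(steps):
        x += dt * (-x ** 3)
        values.append(x * x)
    return values

vals = lyapunov_values(2.0)
assert all(a > b for a, b in zip(vals, vals[1:]))   # strictly downhill
print(vals[0], vals[-1])
```

The values of L decrease at every step of the trajectory, exactly as the "bowl" picture suggests, even though linear stability analysis says nothing here.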

To find a Lyapunov function L(e), we proceed as follows. Subtract (69) from (68) to see that e = t − r satisfies

ė1 = σ(e2 − e1),      (70)
ė2 = −e2 − 20u(t)e3,      (71)
ė3 = 5u(t)e2 − be3,      (72)

where e1 = u − u_r, etc. The chaotic signal u(t) still appears here; we get rid of it by doing the following: take e2 × (71) + 4e3 × (72) to get

e2 ė2 + 4e3 ė3 = −e2² − 4be3².      (73)

Now take the Lyapunov function to be (a guess)

L(e) = (1/2) ( e1²/σ + e2² + 4e3² ).      (74)

Observe that L(e) > 0 for all e ≠ 0 (due to all the square terms) and L(e) = 0 if e = 0. Furthermore, we have

L̇ = (1/σ) e1 ė1 + [ e2 ė2 + 4e3 ė3 ]
  = (e1 e2 − e1²) − e2² − 4be3²,   using equations (70) and (73),
  = −(e1 − e2/2)² − (3/4) e2² − 4be3²
  ≤ 0, provided b > 0.

Hence, L(e) in (74) is a Lyapunov function, so e → 0 as t → ∞, implying r → t (the receiver synchronises with the transmitter). In fact, it does this synchronisation exponentially fast! This is important, as rapid synchronisation is necessary for the desired application of transmitting messages.

Summary: We have shown that the receiver circuit will synchronise with the transmitter circuit if the drive signal is u(t). However, for the signal-masking application the drive signal is actually u(t) + m(t), where m(t) is the message and u(t) ≫ m(t) is the mask. There exists no proof that the receiver will regenerate u(t) precisely (due to the mask and noise); hence the received message sounds a little fuzzy.

To test out this idea of sending messages using chaotic synchronisation, we choose the message to be a sine wave m(t) = A sin(ωt). The figure below shows the recovery of the sine wave message for A = 0.1, ω = 1. We see the error of the recovered message decays like e^(−0.857t) and then levels out.

[Figure 33: Recovery of the sine wave message m(t) = 0.1 sin(t) — left: log(error) decaying like e^(−0.857t); right: original and recovered messages against t.]

Matlab codes: [mask-lorenz.m] and [Masking].
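The whole scheme is easy to reproduce numerically. The sketch below is in Python rather than the course's Matlab code [mask-lorenz.m], and it assumes the classic Lorenz parameter values σ = 10, r = 28, b = 8/3 and arbitrary initial conditions for illustration (the actual circuit values may differ). It Euler-integrates the transmitter (68), masks a sine message, drives the receiver (69) with u(t) + m(t), and recovers the message as the difference between the received signal and u_r(t):

```python
import math

# assumed illustrative parameter values (chaotic regime)
SIGMA, R, B = 10.0, 28.0, 8.0 / 3.0
DT = 0.001

def transmitter_step(u, v, w):
    # one Euler step of equations (68)
    du = SIGMA * (v - u)
    dv = R * u - v - 20.0 * u * w
    dw = 5.0 * u * v - B * w
    return u + DT * du, v + DT * dv, w + DT * dw

def receiver_step(ur, vr, wr, s):
    # one Euler step of equations (69): the drive s(t) replaces u_r
    # on the right-hand sides of the v_r and w_r equations
    dur = SIGMA * (vr - ur)
    dvr = R * s - vr - 20.0 * s * wr
    dwr = 5.0 * s * vr - B * wr
    return ur + DT * dur, vr + DT * dvr, wr + DT * dwr

def run(amplitude, steps=30_000):
    u, v, w = 0.1, 0.0, 0.05        # transmitter initial condition
    ur, vr, wr = -0.5, 0.3, 0.9     # receiver starts somewhere else entirely
    errors = []                     # |recovered message - true message|
    for k in range(steps):
        m = amplitude * math.sin(DT * k)   # message m(t) = A sin(t)
        s = u + m                          # masked transmission u(t) + m(t)
        errors.append(abs((s - ur) - m))   # recovery error (= |u - u_r|)
        u, v, w = transmitter_step(u, v, w)
        ur, vr, wr = receiver_step(ur, vr, wr, s)
    return errors

sync = run(amplitude=0.0)   # no message: perfect synchronisation, e -> 0
mask = run(amplitude=0.1)   # masked sine message with A = 0.1
print(sync[-1], max(mask[-5000:]))
```

With no message the error collapses towards zero, confirming the Lyapunov argument. With the mask present the error no longer decays to zero: it levels out at a small fraction of the carrier amplitude, which is exactly the "fuzziness" of the recovered message discussed in the summary above.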

Matlab codes

Matlab will be used throughout this course and will build on the courses [MAT25E] (Experiment) and Numerical & Computational methods [MAT2]. For a crash course in Matlab the [Intro] pdf will help. Several other Matlab introductions can be found here: [

The following PDFs will provide help on:

[Iterated Maps] (covering cobwebs, the Logistic map and Newton's method)
[Differential Equations 1] (covering first-order differential equations)
[Differential Equations 2] (covering phase planes)

Matlab codes for the following Figures can be found on the Nonlinear Dynamics & Chaos website:

Figure 3 [Chaotic Rabbits]
Figure 7 [Sensitive dependence]
Figure 8 [Periodic orbit of Logistic map]
Figure 2 [pplane7.m]
Figure 6 [Brusselator]
Figure 7 [pplane7.m]
Figure 8 [pplane7.m]
Figure 9 [Transient Chaos]
Figure 2 [Rössler simulations]
Figure 22 [Poincaré map of Rössler system]
Figure 23 [Rössler system 1D map]
Figure 27 [Bifurcation diagram: Logistic map]
Figure 28 [Intermittency]
Figure 29 [Cobweb Intermittency]
Figure 3 [Lyapunov exponent Logistic map]
Figure 3(a) [Bifurcation diagram: Rössler]
Figure 3(b) [Bifurcation diagram: Sine map]
Figure 32 [Koch snowflake]
Figure 33 [Masking]

You will also need to download the following to run some of the above codes:

Lorenz system [lorenz.m]
Brusselator system [brusselator.m]
Rössler system [rossler.m]
Poincaré map [rossler-events-poincare.m]
Lorenz map [rossler-events.-lorenz.m]
Koch step [kochstep.m]
Masking system [mask-lorenz.m]
Cobweb [cobweb.m]

Figures 27, 3 were actually created using the following Matlab codes and imported into iPhoto to enhance the image:

Bifurcation diagram: Logistic map [Logistic-diagram.m]
Bifurcation diagram: Rössler system [Rossler-diagram.m]
Bifurcation diagram: Sine map [Sine-diagram.m]

Warning: These codes take a very long time to finish!

To run these files, go to the Chaos & Fractals website [ and right-click on the files to save them all in a directory (say Figures). Now load up Matlab and change Matlab's current directory to Figures by clicking on the button circled below. Then just type the name of the Figure you wish to re-create!

References

[1] S. J. Hogan, Lecture notes for advanced nonlinear dynamics and chaos course, University of Bristol, Department of Engineering Mathematics, 22.

[2] B. Krauskopf and H. Osinga, Lecture notes for nonlinear dynamics and chaos course, University of Bristol, Department of Engineering Mathematics, 2009.

[3] J. D. Murray, Mathematical biology. I: An introduction, vol. 17 of Interdisciplinary Applied Mathematics, Springer-Verlag, New York, third ed., 2002.


More information

EE222 - Spring 16 - Lecture 2 Notes 1

EE222 - Spring 16 - Lecture 2 Notes 1 EE222 - Spring 16 - Lecture 2 Notes 1 Murat Arcak January 21 2016 1 Licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License. Essentially Nonlinear Phenomena Continued

More information

Hello everyone, Best, Josh

Hello everyone, Best, Josh Hello everyone, As promised, the chart mentioned in class about what kind of critical points you get with different types of eigenvalues are included on the following pages (The pages are an ecerpt from

More information

THE ROSENZWEIG-MACARTHUR PREDATOR-PREY MODEL

THE ROSENZWEIG-MACARTHUR PREDATOR-PREY MODEL THE ROSENZWEIG-MACARTHUR PREDATOR-PREY MODEL HAL L. SMITH* SCHOOL OF MATHEMATICAL AND STATISTICAL SCIENCES ARIZONA STATE UNIVERSITY TEMPE, AZ, USA 8587 Abstract. This is intended as lecture notes for nd

More information

An Undergraduate s Guide to the Hartman-Grobman and Poincaré-Bendixon Theorems

An Undergraduate s Guide to the Hartman-Grobman and Poincaré-Bendixon Theorems An Undergraduate s Guide to the Hartman-Grobman and Poincaré-Bendixon Theorems Scott Zimmerman MATH181HM: Dynamical Systems Spring 2008 1 Introduction The Hartman-Grobman and Poincaré-Bendixon Theorems

More information

Computational Methods in Dynamical Systems and Advanced Examples

Computational Methods in Dynamical Systems and Advanced Examples and Advanced Examples Obverse and reverse of the same coin [head and tails] Jorge Galán Vioque and Emilio Freire Macías Universidad de Sevilla July 2015 Outline Lecture 1. Simulation vs Continuation. How

More information

Lecture 6. Lorenz equations and Malkus' waterwheel Some properties of the Lorenz Eq.'s Lorenz Map Towards definitions of:

Lecture 6. Lorenz equations and Malkus' waterwheel Some properties of the Lorenz Eq.'s Lorenz Map Towards definitions of: Lecture 6 Chaos Lorenz equations and Malkus' waterwheel Some properties of the Lorenz Eq.'s Lorenz Map Towards definitions of: Chaos, Attractors and strange attractors Transient chaos Lorenz Equations

More information

Chapter #4 EEE8086-EEE8115. Robust and Adaptive Control Systems

Chapter #4 EEE8086-EEE8115. Robust and Adaptive Control Systems Chapter #4 Robust and Adaptive Control Systems Nonlinear Dynamics.... Linear Combination.... Equilibrium points... 3 3. Linearisation... 5 4. Limit cycles... 3 5. Bifurcations... 4 6. Stability... 6 7.

More information

Delay Coordinate Embedding

Delay Coordinate Embedding Chapter 7 Delay Coordinate Embedding Up to this point, we have known our state space explicitly. But what if we do not know it? How can we then study the dynamics is phase space? A typical case is when

More information

MAS212 Assignment #2: The damped driven pendulum

MAS212 Assignment #2: The damped driven pendulum MAS Assignment #: The damped driven pendulum Sam Dolan (January 8 Introduction In this assignment we study the motion of a rigid pendulum of length l and mass m, shown in Fig., using both analytical and

More information

Modeling Prey and Predator Populations

Modeling Prey and Predator Populations Modeling Prey and Predator Populations Alison Pool and Lydia Silva December 15, 2006 Abstract In this document, we will explore the modeling of two populations based on their relation to one another. Specifically

More information

Unit Ten Summary Introduction to Dynamical Systems and Chaos

Unit Ten Summary Introduction to Dynamical Systems and Chaos Unit Ten Summary Introduction to Dynamical Systems Dynamical Systems A dynamical system is a system that evolves in time according to a well-defined, unchanging rule. The study of dynamical systems is

More information

DIFFERENTIAL EQUATIONS

DIFFERENTIAL EQUATIONS DIFFERENTIAL EQUATIONS Basic Concepts Paul Dawkins Table of Contents Preface... Basic Concepts... 1 Introduction... 1 Definitions... Direction Fields... 8 Final Thoughts...19 007 Paul Dawkins i http://tutorial.math.lamar.edu/terms.aspx

More information

Systems of Linear ODEs

Systems of Linear ODEs P a g e 1 Systems of Linear ODEs Systems of ordinary differential equations can be solved in much the same way as discrete dynamical systems if the differential equations are linear. We will focus here

More information

Modelling in Biology

Modelling in Biology Modelling in Biology Dr Guy-Bart Stan Department of Bioengineering 17th October 2017 Dr Guy-Bart Stan (Dept. of Bioeng.) Modelling in Biology 17th October 2017 1 / 77 1 Introduction 2 Linear models of

More information

B5.6 Nonlinear Systems

B5.6 Nonlinear Systems B5.6 Nonlinear Systems 4. Bifurcations Alain Goriely 2018 Mathematical Institute, University of Oxford Table of contents 1. Local bifurcations for vector fields 1.1 The problem 1.2 The extended centre

More information

NBA Lecture 1. Simplest bifurcations in n-dimensional ODEs. Yu.A. Kuznetsov (Utrecht University, NL) March 14, 2011

NBA Lecture 1. Simplest bifurcations in n-dimensional ODEs. Yu.A. Kuznetsov (Utrecht University, NL) March 14, 2011 NBA Lecture 1 Simplest bifurcations in n-dimensional ODEs Yu.A. Kuznetsov (Utrecht University, NL) March 14, 2011 Contents 1. Solutions and orbits: equilibria cycles connecting orbits other invariant sets

More information

Nonlinear Dynamics. Moreno Marzolla Dip. di Informatica Scienza e Ingegneria (DISI) Università di Bologna.

Nonlinear Dynamics. Moreno Marzolla Dip. di Informatica Scienza e Ingegneria (DISI) Università di Bologna. Nonlinear Dynamics Moreno Marzolla Dip. di Informatica Scienza e Ingegneria (DISI) Università di Bologna http://www.moreno.marzolla.name/ 2 Introduction: Dynamics of Simple Maps 3 Dynamical systems A dynamical

More information

The Existence of Chaos in the Lorenz System

The Existence of Chaos in the Lorenz System The Existence of Chaos in the Lorenz System Sheldon E. Newhouse Mathematics Department Michigan State University E. Lansing, MI 48864 joint with M. Berz, K. Makino, A. Wittig Physics, MSU Y. Zou, Math,

More information

Basic methods to solve equations

Basic methods to solve equations Roberto s Notes on Prerequisites for Calculus Chapter 1: Algebra Section 1 Basic methods to solve equations What you need to know already: How to factor an algebraic epression. What you can learn here:

More information

Chapter 6 - Ordinary Differential Equations

Chapter 6 - Ordinary Differential Equations Chapter 6 - Ordinary Differential Equations 7.1 Solving Initial-Value Problems In this chapter, we will be interested in the solution of ordinary differential equations. Ordinary differential equations

More information

Physics: spring-mass system, planet motion, pendulum. Biology: ecology problem, neural conduction, epidemics

Physics: spring-mass system, planet motion, pendulum. Biology: ecology problem, neural conduction, epidemics Applications of nonlinear ODE systems: Physics: spring-mass system, planet motion, pendulum Chemistry: mixing problems, chemical reactions Biology: ecology problem, neural conduction, epidemics Economy:

More information

Dynamical Systems. Dennis Pixton

Dynamical Systems. Dennis Pixton Dynamical Systems Version 0.2 Dennis Pixton E-mail address: dennis@math.binghamton.edu Department of Mathematical Sciences Binghamton University Copyright 2009 2010 by the author. All rights reserved.

More information

Various lecture notes for

Various lecture notes for Various lecture notes for 18385. R. R. Rosales. Department of Mathematics, Massachusetts Institute of Technology, Cambridge, Massachusetts, MA 02139. September 17, 2012 Abstract Notes, both complete and/or

More information

Applied Dynamical Systems

Applied Dynamical Systems Applied Dynamical Systems Recommended Reading: (1) Morris W. Hirsch, Stephen Smale, and Robert L. Devaney. Differential equations, dynamical systems, and an introduction to chaos. Elsevier/Academic Press,

More information