Ordinary Differential Equations: Initial Value Problems (IVP)

Many engineering applications can be modeled as differential equations (DE). In this book, our emphasis is on how to use computers to solve practical problems. No doubt it is very important for scientists and engineers to know how to solve differential equations, either ordinary differential equations (ODE) or partial differential equations (PDE), using numerical methods that can be implemented on computers. We start this chapter with ordinary differential equations (single equations or systems) of initial value problems (IVP).

1 Examples of differential equations

The Newton cooling model:

    du/dt = c(u_sur - u), t > 0, u(0) = u_0 is given. (1)

This is an ordinary differential equation (ODE) whose initial value u(0) is given. It is also called an initial value problem (IVP).

Heat diffusion and heat equations:

    u_t = (β u_x)_x, a < x < b, t > 0, (2)
        = β u_xx, if β is a constant, (3)
    u(a, t) = g_1(t), u(b, t) = g_2(t), the boundary condition, (4)
    u(x, 0) = u_0(x), the initial condition. (5)

The equation is a partial differential equation (PDE) with initial and boundary conditions. It is also called an initial and boundary value problem (IBVP).
Z Li

The following equation is called the one dimensional linear parabolic equation with a source:

    u_t = (β u_x)_x + α(x, t) u_x + γ(x, t) u + f(x, t), a < x < b, t > 0, (6)

with a boundary condition at the two ends and an initial condition.

The one way wave equation (first order PDE):

    u_t + c u_x = 0, a < x < b, t > 0, (7)
    u(a, t) = g_1(t), if c > 0, the boundary condition, (8)
    u(x, 0) = u_0(x), the initial condition, (9)

where c is called the wave speed. Note that there is only one boundary condition.

The following equation is called a one dimensional first order linear PDE:

    u_t = c u_x + α(x, t) u + f(x, t), a < x < b, t > 0, (10)

with a boundary condition at one end and an initial condition.

One-dimensional elliptic equations (the steady state solution of the parabolic differential equation):

    (β u_x)_x + α(x) u_x + γ(x) u = f(x), a < x < b, (11)
    u(a) = u_a, u(b) = u_b, the boundary condition. (12)

It is an ODE and is called a two-point boundary value problem (BVP). It is an elliptic type differential equation.

General linear second order partial differential equations:

    a(x, y) u_xx + 2b(x, y) u_xy + c(x, y) u_yy + d(x, y) u_x + e(x, y) u_y + g(x, y) u = f(x, y), (13)

where (x, y) ∈ Ω, a domain in the x-y coordinates. The PDE above can be classified as one of the following.

Elliptic equation if b² - ac < 0 for all (x, y) ∈ Ω. Examples include the Laplace equation

    u_xx + u_yy = 0, (14)

and the Poisson equation

    u_xx + u_yy = f(x, y). (15)
Computational Mathematics

Parabolic equation if b² - ac = 0 for all (x, y) ∈ Ω. One example is the heat equation with/without a source

    u_t = β u_xx + f(x, t). (16)

An example in two space dimensions is

    u_t = β(u_xx + u_yy) + f(x, y, t). (17)

Hyperbolic equation if b² - ac > 0 for all (x, y) ∈ Ω. An example is the two way wave equation

    u_tt = c² u_xx. (18)

Mixed type if b² - ac changes sign in Ω.

A modeling example: the simple pendulum

Figure 1: A diagram of a pendulum of length L, with the angle θ measured from the vertical (θ = 0) and gravity force Mg.

Consider the motion of a pendulum with mass M at the end of a rigid bar of length L. Let θ(t) be the angle of the pendulum from the vertical at time t, as illustrated in Figure 1. Ignoring the mass of the bar, the friction forces, and air resistance, the differential equation for the pendulum motion becomes Ma = -Mg sin θ, where g is the gravitation constant. Since S = Lθ, where S is the arc length, we have v = Lθ' and a = Lθ'', where v denotes the velocity and a the acceleration. Consequently, we have the following non-linear differential equation:

    θ'' = -(g/L) sin θ = -K sin θ, K = g/L, (1)

with two conditions to be specified.
If we know both the initial position and the initial velocity, say θ(0) = α (e.g. α = π/4) and θ'(0) = β, we have an initial value problem (IVP).

If we know the initial position, say θ(0) = α (e.g. α = π/4), and the position θ(T) = β observed at some later time T > 0, we have a two-point boundary value problem (BVP).

Remark 1. If θ is very small, then we can use the approximation sin θ = θ + O(θ³) to get a linear differential equation

    θ'' + Kθ = 0, θ(0) = α, θ'(0) = β. (2)

Remark 2. We can change the second order ODE-IVP to a first order system of ODE-IVP. Let y_1(t) = θ(t), y_2(t) = θ'(t); then we have

    y_1'(t) = y_2, (3)
    y_2'(t) = -K sin y_1, y_1(0) = α, y_2(0) = β. (4)

Predator-prey population model: the Lotka-Volterra model

Consider a system in which there are only two species: the number of predators P(t), such as sharks, at time t, and the number of its prey N(t). The rate of change of the prey population is given as

    dN(t)/dt = aN(t) - bN(t)P(t), (5)

where a is the birth rate of the prey. The second term means that the losses due to predation are proportional to both the number of predators and the number of prey. For the predator, the corresponding rate is

    dP(t)/dt = -dP(t) + cN(t)P(t), (6)

where -dP(t) is the effect of mortality and cN(t)P(t) is the growth from predation. We have a first order non-linear ODE system, called the Lotka-Volterra model. One question is: can the two species co-exist? There are plenty of research papers on the model. Can we use numerical simulations to answer this question?

Note that the right hand sides of the ODE system do not depend on the independent variable t explicitly. Such a system is called autonomous. If we plot P(t) versus N(t), or vice versa, these plots are called phase plots.

The steady state solution of the model: If a solution to the system does not depend on time, the solution is called a steady state solution. For the population model, a steady state should satisfy dN(t)/dt = 0 and dP(t)/dt = 0. Thus we
get an algebraic system of equations (typical for ODEs):

    0 = aN - bNP, (7)
    0 = -dP + cNP. (8)

We can immediately spot two solutions. One is (0, 0), which is called the trivial solution. The other one is (N, P) = (d/c, a/b). It is likely that the long term solution will approach one of the steady state solutions. In the first case, both species die out, while in the second case the two species co-exist. Note that we should non-dimensionalize the system to reduce the number of parameters (later).

General first order ODE-IVP

The problem in vector form can be written as

    dy(t)/dt = f(t, y(t)), t > 0, (9)
    y(t_0) = y_0, (10)

where y, f, y_0 can be vectors; f and y_0 are known, while y is unknown. In general it is a first order system of non-linear ordinary differential equations. In component form, it can be written as

    dy_1(t)/dt = f_1(t, y_1(t), y_2(t), ..., y_m(t)),
    dy_2(t)/dt = f_2(t, y_1(t), y_2(t), ..., y_m(t)),
    ...
    dy_m(t)/dt = f_m(t, y_1(t), y_2(t), ..., y_m(t)),
    y_1(t_0) = α_1, y_2(t_0) = α_2, ..., y_m(t_0) = α_m.

High order ODEs can be transformed to a first order system of ODEs. Consider

    y^(n) = f(t, y(t), y'(t), ..., y^(n-1)(t)),
    y(t_0) = α_1, y'(t_0) = α_2, ..., y^(n-1)(t_0) = α_n.

If we set y_1 = y, y_2 = y', ..., y_n = y^(n-1), then we get

    dy_1(t)/dt = y_2,
    dy_2(t)/dt = y_3,
    ...
    dy_n(t)/dt = f(t, y_1(t), y_2(t), ..., y_n(t)),
    y_1(t_0) = α_1, y_2(t_0) = α_2, ..., y_n(t_0) = α_n.
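As a concrete illustration, the pendulum system (3)-(4) can be coded as a right hand side function. The sketch below is in Python (the book's own examples use Matlab), and the value K = 9.8 is only illustrative.

```python
import math

K = 9.8  # illustrative value of K = g/L; any positive constant works


def pendulum_rhs(t, y):
    """Right hand side of the first order system y1' = y2, y2' = -K*sin(y1),
    obtained from the pendulum equation theta'' = -K*sin(theta)."""
    y1, y2 = y
    return [y2, -K * math.sin(y1)]


# At theta(0) = pi/4 with zero initial angular velocity, theta is momentarily
# stationary while the angular velocity starts to decrease.
rhs = pendulum_rhs(0.0, [math.pi / 4, 0.0])
```

A numerical ODE solver only ever needs this function: given (t, y) it returns dy/dt.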
2 Theoretical background

For a first order ODE system, we need to know: Is there a solution? Is the solution unique? Is the solution insensitive to perturbations of the data? If the answer to all three questions is yes, then the problem is called well-posed. If not, then the problem is called ill-posed. Ill-posed problems need special treatment and will typically not be discussed here.

Theorem: If f(t, y) is Lipschitz continuous in a neighborhood of (t_0, y(t_0)), then the initial value problem admits a unique solution in that neighborhood.

2.3 The Euler's method for the Newton cooling model

The differential equation is

    dy/dt = c(y_sur - y), t > 0, y(0) = y_0 is given.

Choose a value h for the size of every step and set t_n = nh. Now, one step of the Euler method from t_n to t_{n+1} = t_n + h (the finite difference equation) is

    (y^{n+1} - y^n)/h = c(y_sur - y^n), y^0 = y_0, (1)

where y^n is an approximation to the solution y(t_n) = y(nh). The forward finite difference approximation to y'(t) gives the forward Euler method. It is also called an explicit time marching method.

3 Finite difference methods

We discuss how to use finite difference methods to solve ODEs and PDEs. While there are other approaches, finite difference methods are probably the simplest to use and implement, and the easiest to analyze. Finite difference methods are also the basis for other numerical methods. A finite difference method typically involves the following steps:

1. Generate a grid, for example (x_i, t^(k)), on which we want to find an approximate solution.
2. Substitute the derivatives in an ODE/PDE or an ODE/PDE system with finite difference schemes. The ODE/PDE then becomes a linear/non-linear system of algebraic equations.
3. Solve the system of algebraic equations.
4. Implement and debug the computer code.
5. Do the error analysis, both analytically and numerically.

3.1 The Euler's method for ODE-IVP

Given an initial value problem, (9)-(10),

    dy(t)/dt = f(t, y(t)), t > 0, y(t_0) = y_0.

If we know an approximate solution y^n ≈ y(t_n) at a time t_n, then with a given time step size h, the one-step Euler's method is

    (y^{n+1} - y^n)/h = f(t_n, y^n); or y^{n+1} = y^n + h f(t_n, y^n). (1)

If we repeat this process, we can get an approximate solution at any time. This is called the Euler's method. In this method we do not need to solve any equations, so it is called an explicit finite difference method. Note that we use y(t_n) for the true solution to the ODE-IVP and y^n for the approximate solution. Usually the error |y(t_n) - y^n| ≤ Ch is not zero but proportional to the step size h, so the method is called first order accurate.

Note that besides finite difference methods, there are other methods that can be used to solve ODEs/PDEs, such as finite element methods, spectral methods, etc. Generally, finite difference methods are simple to use for problems defined on regular geometries, such as an interval in one dimension, a rectangular domain in two space dimensions, and a cube in three space dimensions.

3.2 Some commonly used finite difference formulas

Below we list three commonly used finite difference formulas to approximate the first order derivative of a function u(x) using function values only.

The forward finite difference:

    D+ u(x) = (u(x + h) - u(x))/h = u'(x) + (h/2) u''(ξ). (2)

Therefore the error (absolute error) of the forward finite difference is proportional to h, and the approximation is referred to as a first order approximation.

The backward finite difference:

    D- u(x) = (u(x) - u(x - h))/h = u'(x) - (h/2) u''(ξ). (3)

The approximation again is a first order approximation.
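The first order behavior of D+ and D- is easy to check numerically. Below is a small sketch in Python (the book's own codes are in Matlab) using u(x) = sin x at x = 1, where u'(1) = cos 1; halving h roughly halves the error of D+.

```python
import math


def d_plus(u, x, h):
    """Forward difference D+ u(x) = (u(x+h) - u(x)) / h."""
    return (u(x + h) - u(x)) / h


def d_minus(u, x, h):
    """Backward difference D- u(x) = (u(x) - u(x-h)) / h."""
    return (u(x) - u(x - h)) / h


x = 1.0
exact = math.cos(x)                            # u'(x) for u = sin
e1 = abs(d_plus(math.sin, x, 0.01) - exact)
e2 = abs(d_plus(math.sin, x, 0.005) - exact)
ratio = e1 / e2                                # close to 2 for a first order formula
```

The same experiment with d_minus gives an error of the same size but opposite sign, matching the formulas (2) and (3).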
The central finite difference:

    D0 u(x) = (u(x + h) - u(x - h))/(2h) = u'(x) + (h²/6) u'''(ξ). (4)

The error is proportional to h², and the approximation is referred to as a second order approximation. Note that

    D0 u(x) = (D+ u(x) + D- u(x))/2. (5)

3.3 Approximation to higher order derivatives

Usually finite difference schemes for higher order derivatives can be obtained from the formulas for lower order derivatives. The central finite difference scheme for the second order derivative u_xx is

    D0² u(x) = D+ D- u(x) = (u(x - h) - 2u(x) + u(x + h))/h² = u_xx(x) + (h²/12) u⁽⁴⁾(ξ). (6)

So it is a second order approximation. The third order derivative u_xxx can be approximated, for example, by

    D+ D0² u(x) = (u(x + 2h) - 3u(x + h) + 3u(x) - u(x - h))/h³ = u_xxx(x) + (h/2) u⁽⁴⁾(ξ). (7)

There are other approaches to construct finite difference schemes, for example the method of undetermined coefficients, polynomial interpolation, and others.

4 Mathematical and practical issues of finite difference methods

We know that we can use the Euler method to solve ordinary differential equations of initial value problems (IVP). However, we need to answer the following questions.

How accurate is the Euler method? Or more precisely, does the method converge, and under what kind of conditions? Does lim_{h→0} |y(t_n) - y^n| = 0?

What is the best choice of the step size? If it is too small, then it takes too long to compute. If it is too large, the computation may blow up, or lose accuracy.

Are there other, better methods?
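Before answering these questions analytically, a quick numerical experiment is instructive. The sketch below (Python; the book's codes are in Matlab) applies the Euler method to y' = -y, y(0) = 1, whose exact solution is e^{-t}; halving the step size roughly halves the error at t = 1, consistent with first order accuracy.

```python
import math


def euler(f, t0, y0, h, nsteps):
    """Forward Euler: y_{n+1} = y_n + h*f(t_n, y_n)."""
    t, y = t0, y0
    for _ in range(nsteps):
        y = y + h * f(t, y)
        t = t + h
    return y


f = lambda t, y: -y
exact = math.exp(-1.0)
err_h = abs(euler(f, 0.0, 1.0, 0.1, 10) - exact)     # h = 0.1
err_h2 = abs(euler(f, 0.0, 1.0, 0.05, 20) - exact)   # h = 0.05
ratio = err_h / err_h2                               # close to 2: first order
```

The error ratio near 2 is exactly the p = 1 behavior derived in the next subsection.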
4.1 The local truncation error

In the Euler's method, we use the forward finite difference approximation to approximate the derivative, which introduces an error. Such an error is called the local truncation error, defined as

    T^n = (y(t_{n+1}) - y(t_n))/h - f(t_n, y(t_n)). (1)

The local truncation error can be obtained as follows. First, rewrite the finite difference method so that it mimics the original differential equation, with all terms moved to one side; second, replace the approximate solution with the true solution.

Consistency of a finite difference method: If lim_{h→0} T^n = 0, then the finite difference method is called consistent.

Usually the local truncation error is not zero, but we wish to express its relation with the step size h and write

    |T^n| ≤ C h^p, if h is small enough, (2)

for some constant C and integer p. If so, we call the discretization p-th order accurate.

Let us find the order of the discretization of the Euler's method for a scalar ODE y'(t) = f(t, y), y(0) = y_0. We have

    T^n = (y(t_{n+1}) - y(t_n))/h - f(t_n, y(t_n))
        = (y(t_n + h) - y(t_n))/h - f(t_n, y(t_n))
        = (y(t_n) + h y'(t_n) + (h²/2) y''(t_n) + ... - y(t_n))/h - f(t_n, y(t_n))
        = (h/2) y''(t_n) + O(h²).

Note that we have used the fact y'(t_n) = f(t_n, y(t_n)) from the ODE. Thus the discretization of the Euler's method is first order.

4.2 The stability condition

From numerical experiments, we already know that if we take the step size h too large, then the numerical solution blows up quickly even though the Euler's method is consistent. Why do we have this behavior? Consider the Euler's method for the special case y'(t) = λy, y(0) = 1, where λ is a constant. Then the Euler method is

    y^{n+1} = y^n + hλ y^n = (1 + hλ) y^n = (1 + hλ)² y^{n-1} = ... = (1 + hλ)^{n+1} y^0 = (1 + hλ)^{n+1}.

Now we can see that if |1 + hλ| > 1, then y^n blows up. If we let λ be an arbitrary complex number and assume that Re(λ) < 0, then the region of hλ in the complex plane where |1 + hλ| ≤ 1 is called the
stability region. Note that the solution to the ODE is y(t) = e^{λt}. In order for y(t) to remain bounded, we should have Re(λ) < 0. Such a requirement is called dynamic stability, and it is independent of numerical methods. For general problems, λ ~ ∂f/∂y(t, y). If λ is a negative real number, then the step size h should be chosen so that

    |1 + hλ| = |1 - h|λ|| ≤ 1, that is, 0 ≤ h|λ| ≤ 2, or h ≤ 2/|λ|.

The condition is often referred to as a CFL condition (after Courant, Friedrichs, and Lewy). Roughly speaking, we call a numerical method stable if |y^n| ≤ C, where C is a constant; that is, the magnitude of the solution does not grow arbitrarily. For the Euler's method, if we have h|λ| ≤ 2, then it is stable. Because of this, the Euler's method is called conditionally stable.

Theorem: A stable and consistent method is convergent, that is, lim_{h→0} |y(t_n) - y^n| = 0.

4.3 The backward Euler's method

If we use the backward finite difference formula for y'(t), we get

    (y^{n+1} - y^n)/h = f(t_{n+1}, y^{n+1}); or y^{n+1} = y^n + h f(t_{n+1}, y^{n+1}). (3)

Can you spot the differences from the Euler's method? Note that if f(t, y) is a non-linear function, then we need to solve a non-linear equation (or system) to get y^{n+1}.

Example: If f(t, y) = -y², then the backward Euler's method is

    y^{n+1} = y^n - h (y^{n+1})².

We need to solve a quadratic equation to get y^{n+1}. How about f(t, y) = -sin y? We have

    y^{n+1} = y^n - h sin y^{n+1}.

We need to use a non-linear solver! Thus the backward Euler's method is called an implicit method.

The local truncation error of the backward Euler's method is

    T^n = (y(t_{n+1}) - y(t_n))/h - f(t_{n+1}, y(t_{n+1})). (4)

If we expand

    y(t_n) = y(t_{n+1} - h) = y(t_{n+1}) - h y'(t_{n+1}) + (h²/2) y''(t_{n+1}) - ...,

we get

    T^n = -(h/2) y''(t_{n+1}) + O(h²).
Thus the discretization of the backward Euler's method is still first order. So is there any advantage of this method compared with the forward Euler's method? Let us check the stability of the backward Euler's method by setting f(t, y) = λy. Then the backward Euler method is

    y^{n+1} = y^n + hλ y^{n+1}, that is, y^{n+1} = y^n / (1 - hλ).

The stability condition is |1/(1 - hλ)| ≤ 1. For dynamically stable ODEs we have Re(λ) < 0, so the stability condition is always satisfied. Thus the method is called unconditionally stable. This is the most significant advantage of the backward Euler method.

Figure 2: A diagram of the stability regions. Left: the forward Euler's method, whose stability region is the inside of the unit circle centered at -1 (|1 + hλ| ≤ 1). Center: the backward Euler's method, whose stability region is the outside of the unit circle centered at +1 (|1 - hλ| ≥ 1). Right: the Crank-Nicolson method, whose stability region includes the entire left half plane. The backward Euler's method has the largest stability region.

4.4 The Crank-Nicolson method

Can we have a finite difference method that is second order and unconditionally stable? The answer is the following Crank-Nicolson method:

    (y^{n+1} - y^n)/h = (f(t_n, y^n) + f(t_{n+1}, y^{n+1}))/2; (5)

or

    y^{n+1} = y^n + (h/2) (f(t_n, y^n) + f(t_{n+1}, y^{n+1})). (6)

In the Crank-Nicolson method, we use the average of f at t_n and t_{n+1}. The local truncation error is

    T^n = (y(t_{n+1}) - y(t_n))/h - (f(t_n, y(t_n)) + f(t_{n+1}, y(t_{n+1})))/2. (7)

If we expand the expressions above at t_{n+1/2} = t_n + h/2, we can get

    T^n = -(h²/12) y'''(t_{n+1/2}) + O(h⁴).

Thus the discretization of the Crank-Nicolson method is second order. It is also unconditionally stable. For non-linear problems, its stability is better than that of the forward Euler's method, but not as good as that of the backward Euler's method. The Crank-Nicolson method is an implicit method, since in general we need to solve a non-linear system of equations. A modified version is the following predictor-corrector method:

    ȳ^{n+1} = y^n + h f(t_n, y^n), (predictor) (8)
    y^{n+1} = y^n + (h/2) (f(t_n, y^n) + f(t_{n+1}, ȳ^{n+1})). (corrector) (9)

Now the method is an explicit method which is conditionally stable (with better stability than the forward Euler's method), and it is a second order method. Such a method is one of the Runge-Kutta methods.
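The predictor-corrector pair above (often called the improved Euler or Heun method) is easy to implement. The sketch below in Python (the book's codes are in Matlab) applies it to y' = -y and checks that halving h reduces the error at t = 1 by about a factor of four, as expected for a second order method.

```python
import math


def heun(f, t0, y0, h, nsteps):
    """Predictor-corrector (improved Euler) method."""
    t, y = t0, y0
    for _ in range(nsteps):
        ybar = y + h * f(t, y)                         # predictor (Euler)
        y = y + 0.5 * h * (f(t, y) + f(t + h, ybar))   # corrector (trapezoid)
        t = t + h
    return y


f = lambda t, y: -y
exact = math.exp(-1.0)
e1 = abs(heun(f, 0.0, 1.0, 0.1, 10) - exact)    # h = 0.1
e2 = abs(heun(f, 0.0, 1.0, 0.05, 20) - exact)   # h = 0.05
ratio = e1 / e2                                 # close to 4: second order
```

Compare with the forward Euler experiment earlier, where the same halving of h only halved the error.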
5 Runge-Kutta methods

The Runge-Kutta methods are an important family of implicit and explicit iterative methods for the approximation of solutions of ordinary differential equations. These techniques were developed around 1900 by the German mathematicians C. Runge and M. W. Kutta. The Runge-Kutta methods are one-step, multi-stage methods designed to have high order accuracy and to allow adaptive time step sizes; they are a common choice for both non-stiff and stiff systems of ODEs.

The general algorithm of an RK-k method can be written as

    y^{i+1} = y^i + h (c_0 f_0 + c_1 f_1 + ... + c_k f_k), (1)

where the stages are

    f_0 = f(t_i, y^i),
    f_1 = f(t_i + α_1 h, y^i + h β_10 f_0),
    ...
    f_k = f(t_i + α_k h, y^i + h (β_k0 f_0 + β_k1 f_1 + ... + β_{k,k-1} f_{k-1})).

The coefficients are recorded in a table as in Table 1. The forward Euler, backward Euler, and Crank-Nicolson schemes are all special cases of the RK methods.

Example: The improved Euler's method is one of the RK methods:

    f_0 = f(t_i, y^i), f_1 = f(t_{i+1}, y^i + h f_0), y^{i+1} = y^i + (h/2) (f_0 + f_1).

Its coefficients are listed in Table 2.
    i    α_i    β_ij                 c_i
    0    0                           c_0
    1    α_1    β_10                 c_1
    ...
    k    α_k    β_k0 ... β_{k,k-1}   c_k

Table 1: Coefficients of the RK methods.

    i    α_i    β_ij    c_i
    0    0              1/2
    1    1      1       1/2

Table 2: The coefficients of the improved Euler's method.

    i    α_i    β_ij         c_i
    0    0                   1/6
    1    1/2    1/2          1/3
    2    1/2    0   1/2      1/3
    3    1      0   0   1    1/6

Table 3: The coefficients of a RK-4 method.
Below is one of the RK4 methods, equivalent to the coefficients in Table 3:

    f_0 = f(t_i, y^i),
    f_1 = f(t_i + h/2, y^i + (h/2) f_0),
    f_2 = f(t_i + h/2, y^i + (h/2) f_1), (2)
    f_3 = f(t_i + h, y^i + h f_2),
    y^{i+1} = y^i + (h/6) (f_0 + 2 f_1 + 2 f_2 + f_3).

6 Fehlberg's RKF4(5) method

Below are two embedded Runge-Kutta methods, an RK4 and an RK5, that share their stage evaluations. The purpose of the pair is to choose a suitable time step.

Table 4: The coefficients (α_i, β_ij, c_i, ĉ_i) of an embedded RK4-RK5 (Fehlberg) pair with nodes α = (0, 2/9, 1/3, 3/4, 1, 5/6).

RK4: ȳ^{i+1} = y^i + h (c_0 f_0 + c_1 f_1 + c_2 f_2 + c_3 f_3 + c_4 f_4).

RK5: y^{i+1} = y^i + h (ĉ_0 f_0 + ĉ_1 f_1 + ĉ_2 f_2 + ĉ_3 f_3 + ĉ_4 f_4 + ĉ_5 f_5).

A typical stage has the form

    f_4 = f(t_i + α_4 h, y^i + h (β_40 f_0 + β_41 f_1 + β_42 f_2 + β_43 f_3)).

We can show that

    ȳ^{i+1} - y^{i+1} ≈ A h⁵ (1)

and use this to determine an adaptive time step size h:

    if |ȳ^{i+1} - y^{i+1}| > tol, use a smaller time step;
    if |ȳ^{i+1} - y^{i+1}| < tol, use a larger time step.
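The classical RK4 scheme above translates directly into code. Here is a sketch in Python (the book's codes are in Matlab), again tested on y' = -y, where the fourth order accuracy makes the error at t = 1 tiny even with h = 0.1.

```python
import math


def rk4_step(f, t, y, h):
    """One step of the classical fourth order Runge-Kutta method."""
    f0 = f(t, y)
    f1 = f(t + h / 2, y + (h / 2) * f0)
    f2 = f(t + h / 2, y + (h / 2) * f1)
    f3 = f(t + h, y + h * f2)
    return y + (h / 6) * (f0 + 2 * f1 + 2 * f2 + f3)


f = lambda t, y: -y
t, y, h = 0.0, 1.0, 0.1
for _ in range(10):                  # integrate to t = 1
    y = rk4_step(f, t, y, h)
    t += h
err = abs(y - math.exp(-1.0))        # fourth order: error is near 1e-7
```

Four function evaluations per step buy two extra orders of accuracy over the improved Euler method.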
7 Using the Matlab ODE Suite

The MATLAB ODE suite is a collection of Matlab codes (M-files) developed by Lawrence F. Shampine and Mark W. Reichelt for solving ordinary differential equations. The ODE suite is based on explicit and implicit methods (Runge-Kutta and linear multistep) with adaptive time step sizes.

Non-stiff solvers:

    ode45: solve non-stiff differential equations, medium order method.
    ode23: solve non-stiff differential equations, low order method.
    ode113: solve non-stiff differential equations, variable order method.

Stiff solvers:

    ode15s: solve stiff differential equations, variable order method.
    ode23s: solve stiff differential equations, low order method.

Options-handling functions:

    odeset: build or change an options argument for an ODE suite integrator.
    odeget: extract options from an options argument created with odeset.

A graphical demo:

    odedemo: demo for the ODE suite integrators.

7.1 Matlab ODE Suite usage

We can use one of the ODE solvers of the ODE Suite to solve an ODE-IVP

    dy(t)/dt = f(t, y(t)), t > 0, y(t_0) = y_0,

using (in the simplest way)

    [t, y] = ode45('yfun', [t0, tfinal], y0);

where tfinal > t0 is the final time, y0 = y_0 is the initial condition, yfun is the Matlab function that defines the system of ODEs, and [t, y] is the solution output. Below is a sample example (the Lotka-Volterra equations).
    function yp = lotka(t,y)
    global a b c d
    k = length(y);
    yp = zeros(k,1);
    yp(1) = a*y(1) - b*y(1)*y(2);
    yp(2) = -c*y(2) + d*y(1)*y(2);

Figure 3: Plot of the prey-predator model. Left: the solutions y_1(t) and y_2(t) versus time t. Right: the phase plot of y_2(t) versus y_1(t).

Below is a sample script file to solve the system:

    close all; clear all;       % start everything new
    global a b c d
    a=1; b=1; c=0.5; d=0.5;     % illustrative parameter values
    tfinal=50; y0=[2, 1]';
    [t, y] = ode23s('lotka',[0, tfinal],y0);
    y1=y(:,1); y2=y(:,2);       % extract the components
    subplot(211)                % put two plots on the same figure (2 rows, 1 column)
    plot(t,y1)
    subplot(212)
    plot(t,y2)
    figure(2)
    plot(y1,y2)                 % get the phase plot
    title('The phase plot')
    xlabel('y_1(t)')
    ylabel('y_2(t)')            % Try also gtext('...')

After running the program, you can see that the two species can co-exist, since the phase plane plot shows the so-called limit cycle; see Figure 3.
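For readers without Matlab, the same computation can be sketched in pure Python with an RK4 step. Along exact orbits the Lotka-Volterra model conserves E(N, P) = dN - c ln N + bP - a ln P, so the drift in E is a convenient accuracy check. The parameter values below are illustrative, mirroring the script above.

```python
import math

a, b, c, d = 1.0, 1.0, 0.5, 0.5   # illustrative Lotka-Volterra parameters


def f(state):
    """Right hand side: N' = aN - bNP, P' = -cP + dNP."""
    n, p = state
    return (a * n - b * n * p, -c * p + d * n * p)


def rk4_step(state, h):
    """Classical RK4 step for the autonomous system."""
    k1 = f(state)
    k2 = f((state[0] + h / 2 * k1[0], state[1] + h / 2 * k1[1]))
    k3 = f((state[0] + h / 2 * k2[0], state[1] + h / 2 * k2[1]))
    k4 = f((state[0] + h * k3[0], state[1] + h * k3[1]))
    return (state[0] + h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
            state[1] + h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))


def invariant(state):
    """E = d*N - c*ln N + b*P - a*ln P is constant along exact orbits."""
    n, p = state
    return d * n - c * math.log(n) + b * p - a * math.log(p)


state = (2.0, 1.0)
e0 = invariant(state)
h = 0.01
for _ in range(5000):             # integrate to t = 50
    state = rk4_step(state, h)
drift = abs(invariant(state) - e0)
```

A small drift over a long time interval is good evidence that the closed orbits in the phase plot are a property of the model, not a numerical artifact.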
The output data structure is

    t                 y1           y2           y3          ...
    t(1) = t0         y1(t0)       y2(t0)       y3(t0)
    t(2) = t0 + h1    y1(t(2))     y2(t(2))     y3(t(2))
    ...
    t(n+1) = tfinal   y1(t(n+1))   y2(t(n+1))   y3(t(n+1))

If we wish to extract the first variable y_1(t), we can use y1 = y(:,1). Similarly, y3 = y(:,3) would be y_3(t) at the recorded times.

7.2 Steady state solutions (equilibria), long term behavior, and stability

Consider the following first order autonomous system of ordinary differential equations:

    dy/dt = f(y), t > 0, y(t_0) = y_0. (1)

The system is called autonomous because the independent variable t does not appear explicitly in the equations, although the solution y(t) is still a function of time t. The previous examples, including the Newton's law of cooling and the prey-predator model, are all examples of autonomous systems.

If there are solutions that are independent of t, that is, dy(t)/dt = 0, then such solutions are called steady state solutions. Note that a steady state solution is independent of the initial conditions. For a first order ODE system, a steady state solution is a constant (a number) or a constant vector. The steady state solutions are also called equilibria in physical applications, particularly in mechanics.

Example 1: The steady state solution of y' = -y is y* = 0. It is the limit of the solution y(t) = y(0) e^{-t} as t → ∞.

To get the steady state solutions of (1), we need to solve the algebraic system of equations

    f(y) = 0. (2)
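We can also observe this long term behavior numerically. The sketch below (Python) marches y' = -y with the forward Euler method and watches the solution approach the steady state y* = 0.

```python
import math

# Forward Euler on y' = -y, y(0) = 5: the numerical solution decays toward
# the steady state y* = 0, tracking the exact solution y(t) = 5*exp(-t).
y, h = 5.0, 0.01
for _ in range(1000):        # integrate to t = 10
    y = y + h * (-y)
residual = abs(y - 0.0)      # distance to the steady state
exact = 5.0 * math.exp(-10.0)
```

By t = 10 the numerical solution is within about 1e-3 of the steady state, and close to the exact value 5e^{-10}.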
Example 2: The steady state solution of y' = c(y_sur - y) is y* = y_sur. It is the limit of the solution y(t) = (y(0) - y_sur) e^{-ct} + y_sur as t → ∞.

Example 3: Find the steady state solutions of the non-dimensionalized prey-predator model

    du/dt = u(1 - v),
    dv/dt = αv(u - 1). (3)

The steady state solutions are (0, 0) and (1, 1). In the first case both species are dead, while in the second case the two species can co-exist.

When a dynamic system (a system that can change with time) stabilizes, the solution often has a pattern: it either reaches one of its equilibria or stays in a neighborhood of one. How can we find the pattern? We can find the pattern through stability analysis.

7.3 Classification of the steady state solutions of scalar ODEs

Consider an autonomous system of one equation:

    dy/dt = f(y). (4)

Let y* be a steady state solution, that is, f(y*) = 0. In a neighborhood of y*, we can use the Taylor expansion to get

    dy/dt = f(y*) + f'(y*)(y - y*) + O((y - y*)²) ≈ f'(y*)(y - y*). (5)

Thus in a neighborhood of y*, that is, while |y - y*| is small, the solution behaves like

    y(t) ≈ y* + C e^{f'(y*) t}. (6)

Thus we have the following stability conclusions:

If f'(y*) < 0, then y(t) approaches y* exponentially. In this case, y* is called a stable equilibrium.

If f'(y*) > 0, then y(t) runs away from y* exponentially. In this case, y* is called an unstable equilibrium.

In the example y' = y, y* = 0 is an unstable equilibrium since f'(y*) = 1 > 0. In the example y' = -y, y* = 0 is a stable equilibrium since f'(y*) = -1 < 0.

Example 4: The steady state solution of y' = c(y_sur - y) is y* = y_sur. We have f'(y_sur) = -c < 0, therefore y* = y_sur is a stable equilibrium, as we already knew.
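The sign test on f'(y*) is easy to automate. Below is a sketch (Python) that estimates f'(y*) with a central difference and applies the classification to the examples above; the values c = 2 and y_sur = 20 are illustrative.

```python
def classify(f, ystar, h=1e-6):
    """Classify a steady state y* of y' = f(y) by the sign of f'(y*),
    estimated with a central difference."""
    fprime = (f(ystar + h) - f(ystar - h)) / (2 * h)
    return "stable" if fprime < 0 else "unstable"


c, y_sur = 2.0, 20.0   # illustrative cooling-model constants
labels = (classify(lambda y: y, 0.0),                   # y' = y   at y* = 0
          classify(lambda y: -y, 0.0),                  # y' = -y  at y* = 0
          classify(lambda y: c * (y_sur - y), y_sur))   # cooling  at y* = y_sur
```

This matches the analytic conclusions: unstable, stable, stable.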
7.4 Classification of the steady state solutions of first order systems of ODEs with two variables

Consider a first order system of ODEs with two variables:

    y_1' = f_1(y_1, y_2), (7)
    y_2' = f_2(y_1, y_2). (8)

An equilibrium (y_1*, y_2*) is a pair of numbers that satisfies f_1(y_1*, y_2*) = 0 and f_2(y_1*, y_2*) = 0. The two dimensional Taylor expansion at (y_1*, y_2*) is

    f_1(y_1, y_2) = f_1(y_1*, y_2*) + (∂f_1/∂y_1)(y_1 - y_1*) + (∂f_1/∂y_2)(y_2 - y_2*) + ...
                  = (∂f_1/∂y_1)(y_1 - y_1*) + (∂f_1/∂y_2)(y_2 - y_2*) + ...,
    f_2(y_1, y_2) = f_2(y_1*, y_2*) + (∂f_2/∂y_1)(y_1 - y_1*) + (∂f_2/∂y_2)(y_2 - y_2*) + ...
                  = (∂f_2/∂y_1)(y_1 - y_1*) + (∂f_2/∂y_2)(y_2 - y_2*) + ... .

The stability is determined by the eigenvalues of the Jacobi matrix (also called the Jacobian)

    J(y_1*, y_2*) = [ ∂f_1/∂y_1   ∂f_1/∂y_2 ]
                    [ ∂f_2/∂y_1   ∂f_2/∂y_2 ]   evaluated at (y_1*, y_2*). (9)

The eigenvalues are the solutions of det(λI - J(y_1*, y_2*)) = 0. Let λ_1 and λ_2 be the eigenvalues of J(y_1*, y_2*); then the stability of the equilibrium falls into the following cases.

Both λ_1 and λ_2 are real and positive, that is, λ_1 > 0 and λ_2 > 0: the equilibrium is unstable.

Both λ_1 and λ_2 are real and negative: the equilibrium is stable.

Both λ_1 and λ_2 are real, but of opposite signs: the equilibrium is unstable and is called a saddle point.

If the eigenvalues are complex, then for real coefficients they come in a conjugate pair, λ_1 = α + βi and λ_2 = α - βi. Typically, the solution oscillates in the neighborhood of the equilibrium.

If α > 0, the equilibrium is unstable. The solution spirals away from the equilibrium.

If α < 0, the equilibrium is stable. The solution spirals into the equilibrium.
If α = 0, the equilibrium is neutrally stable and is called a center. The trajectories of the solution are closed curves near the equilibrium. For the prey-predator model this indicates that the two species can co-exist forever!

Figure 4: A diagram of the solution behaviors near an equilibrium.

Example 5: Consider the non-dimensionalized prey-predator population model

    du/dt = u(1 - v), dv/dt = αv(u - 1), α > 0.

The equilibria are (0, 0) and (1, 1). The Jacobi matrix is

    J(u, v) = [ 1 - v    -u       ]      so      J(0, 0) = [ 1    0 ]
              [ αv       α(u - 1) ]                        [ 0   -α ].

The eigenvalues of J(0, 0) are λ_1 = 1 > 0 and λ_2 = -α < 0. Thus the equilibrium (0, 0) is an unstable saddle point. At the equilibrium (1, 1), the Jacobi matrix is

    J(1, 1) = [ 0   -1 ]
              [ α    0 ].

The eigenvalues are the solutions of λ² + α = 0, that is, λ_1 = i√α and λ_2 = -i√α. The equilibrium is a center, as we expected.
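The 2x2 eigenvalue computation can be scripted. The sketch below (Python) evaluates the two Jacobians of Example 5, with α = 0.5 as an illustrative value, and confirms the saddle and the center.

```python
import cmath


def eig2(a, b, c, d):
    """Eigenvalues of the 2x2 matrix [[a, b], [c, d]], solving
    det(lambda*I - J) = lambda^2 - tr*lambda + det = 0."""
    tr, det = a + d, a * d - b * c
    disc = cmath.sqrt(tr * tr - 4 * det)
    return (tr + disc) / 2, (tr - disc) / 2


alpha = 0.5
l1, l2 = eig2(1.0, 0.0, 0.0, -alpha)   # J(0,0): real, opposite signs -> saddle
m1, m2 = eig2(0.0, -1.0, alpha, 0.0)   # J(1,1): +-i*sqrt(alpha)      -> center
```

The purely imaginary pair at (1, 1) is the center that produces the closed orbits seen in the phase plot.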
Figure 5: A diagram of the solution behaviors in the phase plane near an equilibrium.

7.5 Some famous examples

It is interesting to try some famous examples such as the logistic model, the prey-predator model, the chemical reaction model (the Oregonator), and Lorenz's model.

Example 6: The logistic model is

    y' = r y (1 - y/K), (10)

where r and K are two positive constants. The solution may exhibit bifurcation or chaos for different values of r. The discrete logistic model is

    y^{n+1} = r y^n (1 - y^n). (11)

Figure 6 is a plot of the logistic map that shows the bifurcations of the discrete logistic model.

Example 7: The Lorenz model is the following:

    x' = σ(y - x), (12)
    y' = x(ρ - z) - y, (13)
    z' = xy - βz, (14)

where σ, ρ, β are positive constants.
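The discrete logistic model (11) can be iterated directly. The sketch below (Python) shows the first bifurcation visible in Figure 6: for r = 2.5 the orbit settles to the fixed point 1 - 1/r, while for r = 3.2 it settles to a period-two cycle.

```python
def logistic_orbit(r, y0, n):
    """Iterate y_{k+1} = r*y_k*(1 - y_k) n times starting from y0."""
    y = y0
    for _ in range(n):
        y = r * y * (1 - y)
    return y


y_fixed = logistic_orbit(2.5, 0.3, 1000)   # converges to 1 - 1/2.5 = 0.6
p = logistic_orbit(3.2, 0.3, 1000)         # one point of the 2-cycle
q = logistic_orbit(3.2, 0.3, 1001)         # the other point of the 2-cycle
p_again = logistic_orbit(3.2, 0.3, 1002)   # back to the first point
```

Increasing r further produces period 4, period 8, and eventually chaos, which is the cascade plotted in Figure 6.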
Figure 6: A logistic map produced by Zhilin Li.

Figure 7: A phase plot of the Lorenz equations.
Index

autonomous
backward Euler's method
consistency
convergence
Crank-Nicolson method
equilibrium
Euler's method
explicit finite difference method
initial condition
local truncation error
Lotka-Volterra model
Matlab ODE-Suite
Newton cooling model
pendulum model
Runge-Kutta methods
stability condition
steady state solution