Matrices, Linearization, and the Jacobi matrix


    dx/dt = f(x, y),  dy/dt = g(x, y)

    J = ( ∂x f  ∂y f ; ∂x g  ∂y g )

    λ1,2 = (tr ± √(tr² − 4 det)) / 2

Alexander V. Panfilov, Kirsten H.W.J. ten Tusscher & Rob J. de Boer
Theoretical Biology, Utrecht University, 2019

© Utrecht University, 2019. Ebook publicly available at: http://tbb.bio.uu.nl/rdb/books/math.pdf


Contents

1 Preface
2 Vectors and Matrices
  2.1 Scalars, Vectors and Matrices
  2.2 Matrices and systems of equations
  2.3 Forest succession
  2.4 Eigenvalues and eigenvectors
  2.5 Exercises
3 Systems of two differential equations
  3.1 Solutions of Linear 2D Systems
  3.2 Exercises
4 Linear approximation of non-linear 2D systems
  4.1 Partial derivatives
  4.2 Linearization of a system: Jacobian
  4.3 Exercises
5 Efficient analysis of 2D ODEs
  5.1 Determinant-trace method
  5.2 Graphical Jacobian
  5.3 Plan of qualitative analysis
  5.4 Exercises
6 Lotka Volterra model
  6.1 Linearization of non-linear ODEs
  6.2 Exercises
7 Complex Numbers
  7.1 Complex numbers
  7.2 Complex valued eigenvalues of ODEs
  7.3 Example: the Lotka Volterra model
  7.4 Exercises
8 Answers for exercises
  Exercises Chapter 2
  Exercises Chapter 3
  Exercises Chapter 4
  Exercises Chapter 5
  Exercises Chapter 6
  Exercises Chapter 7


Chapter 1 Preface

This reader provides an introduction to the analysis of systems of non-linear differential equations. It starts by introducing the concept of a matrix and its eigenvalues and eigenvectors. Next we show that a linear system of differential equations can be written in matrix notation, and that its solution can be written as a linear combination of the eigenvalues and eigenvectors of that matrix. Subsequently, we show that systems of non-linear ordinary differential equations (ODEs) can be linearized around steady states by taking partial derivatives. Writing these in the form of a matrix, the eigenvalues can be used to determine the stability of these equilibrium points. Finally, we explain an approach to determine these partial derivatives graphically from the vector field in the phase portrait, and provide an introduction to complex numbers (as we regularly encounter complex eigenvalues in systems of ODEs).

This reader was largely compiled from an earlier and much more extensive reader called Qualitative analysis of differential equations that was written by Alexander Panfilov (2010, https://arxiv.org/abs/1803.05291) at the time he was teaching mathematical biology to biology students at Utrecht University. That reader was later adapted by Kirsten ten Tusscher and by Levien van Zon. I have shortened and simplified the text, made the notation consistent with the reader Biological Modeling of Populations, added a few examples, and the full linearization of the stable spiral point of the Lotka Volterra model (Chapters 6 and 7).

February 18, 2019, Rob J. de Boer, Theoretical Biology, Utrecht University.


Chapter 2 Vectors and Matrices

2.1 Scalars, Vectors and Matrices

Variables that have only one property can be described by a single number, also called a scalar. Examples are the number of individuals in a population, N, or the concentration level of a chemical compound in a vessel, c. If a variable has several properties, one uses a vector to describe it (Fig. 2.1). Vectors can be used to describe the forces acting on an object, or the speed and direction with which an object moves, or the number of predators and prey in some area. Mathematically, vectors are written either as a row, or as a column, of numbers between brackets, and are referred to as row or column vectors (below we write column vectors as (x; y), with a semicolon separating the rows). For example, consider a two-dimensional plane in which the force acting on an object has an x-component Vx = 2 and a y-component Vy = 1,

Figure 2.1: Vectors in a 2-dimensional plane: the scaling of a 2D vector, the addition of two 2D vectors, and finally the rotation of a 2D vector by multiplying it with a transformation matrix T.

this force can be represented as:

    V = (2 1)  or  V = (2; 1).

The length of the force vector is given by √(2² + 1²) = √5, whereas the direction of the force vector is 2 steps (in a certain unit) in the positive x-direction (to the right) and 1 step in the positive y-direction (upward).

The simplest operation that can be performed on a vector is multiplication by a scalar. As the word scalar implies, this simply results in the scaling of the size of the vector, without changing its direction (Fig. 2.1):

    0.5V = 0.5 (2; 1) = (1; 0.5).

An example would be a car that keeps driving in the same direction, but halves its speed. Another operation is adding two or more vectors, for example to determine the net resultant force from the sum of all forces acting on an object. Vector addition is achieved by simply adding up the corresponding elements of the different vectors (Fig. 2.1). Addition can only be performed on vectors that are of the same size (same number of elements):

    V + W = (2; 1) + (1; 3) = (2 + 1; 1 + 3) = (3; 4).

A more complex operation is the rotation of a vector. Such a rotation can be obtained by multiplying the vector by a so-called matrix: AV = W, where V is the original vector, W is the new resulting vector, and A is the matrix performing the rotation (Fig. 2.1). Before explaining this in more detail, let us first introduce the concept of a matrix. Mathematically speaking, matrices are written as a block of n rows and m columns of numbers, all between brackets:

    A = (1 4 5; 2 6 10),

where we list a matrix row by row, again separating the rows by semicolons. This particular matrix A has two rows and three columns, i.e., it has a size of 2 × 3. Matrices can be used to store and represent data sets. An example would be an experiment in which the expression of a large set of genes is measured over a range of different conditions. By using the rows to represent the different genes and the columns to represent the different conditions, each matrix element would reflect the expression level of a single gene under a particular condition.

As for vectors, the simplest operation that can be performed on a matrix is multiplication by a scalar. This is done by multiplying each individual element of the matrix with the scalar. For a general matrix this can be written as:

    λ (a b; c d) = (λa λb; λc λd).

A simple example would be the rescaling of experimentally measured fluorescence. A matrix with fluorescence values should then be multiplied by a factor that translates all fluorescence levels into gene expression levels. Like vectors, two matrices A and B can be added up into a new matrix C only if they are of the same size. Both the number of rows and the number of columns should be equal. Matrix addition can then be performed by adding up the corresponding elements of the two matrices:

    (1 4 5; 2 6 10) + (2 1 4; 1 3 5) = (1+2 4+1 5+4; 2+1 6+3 10+5) = (3 5 9; 3 9 15).

Figure 2.2: Scaling and shearing by matrix transformations of vectors. The left panel shows the scaling of a vector v = (a; b) by a diagonal matrix; the right panel shows the shearing of a vector parallel to the x-axis by T = (1 0.2; 0 1), which maps v = (a; b) to w = T v = (a + 0.2b; b).

For two general matrices, this can be written as:

    (a b; c d) + (x y; z w) = (a+x b+y; c+z d+w).

For the elements in a matrix C = A + B, this can be written as C_ij = A_ij + B_ij, where C_ij is the value in matrix C at row i and column j. Finally, one can multiply a matrix A with a matrix B to obtain a new matrix C (if the number of columns in the first matrix is equal to the number of rows in the second matrix). Matrix multiplication is defined as the products of the rows of the first matrix with the columns of the second matrix. Thus, to find the element in row i and column j of the final matrix one needs to multiply the i-th row of the first matrix by the j-th column of the second matrix:

    C_ij = Σ_{k=1}^{n} A_ik B_kj.    (2.1)

For a product of two matrices this gives:

    (a b; c d) (x y; z w) = (ax + bz  ay + bw; cx + dz  cy + dw),    (2.2)

and from this it follows that multiplication of a matrix by a column vector (simply a matrix with only one column) is given by:

    (a b; c d) (x; y) = (ax + by; cx + dy).    (2.3)

Note that in Eq. (2.2) multiplication with the first matrix A produces a transformation of the second matrix it is multiplied with. In other words, the first matrix is the transformation matrix, and the second matrix or vector is the one being transformed. Matrix multiplications are not commutative, i.e., in general A B ≠ B A. This means that if transformations are applied in a different order, a different outcome will be produced.

A vector can be rotated and/or scaled by a matrix. Scaling a vector by a different amount in the x and y directions can also be performed by a transformation matrix (see Fig. 2.2):

    (w_x; w_y) = (s_x 0; 0 s_y) (x; y).

The notation in Eq. (2.1) may require some explanation. The sum-sign basically says: add up a number of expressions, whereby k will be 1 in the first expression, 2 in the second, etcetera, up to k = n. In this case, n is the number of columns in the first matrix, and rows in the second. To obtain the value in matrix C, row i, column j, one multiplies what is in matrix A, row i, column 1 with the value in matrix B, row 1, column j. One then adds to that the product of what is in matrix A, row i, column 2 with the value in matrix B, row 2, column j, and so forth, until one has added the values for all columns and rows.
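Since this reader uses R for its scripts, the operations above are easy to try out. A minimal R session (base R only; all numbers are taken from the examples above):

    # Vector scaling and addition (Fig. 2.1)
    V <- c(2, 1)                                # the vector V = (2; 1)
    0.5 * V                                     # scaling: (1; 0.5)
    V + c(1, 3)                                 # addition: (3; 4)

    # Matrix addition (R fills matrices column by column)
    A <- matrix(c(1, 2, 4, 6, 5, 10), nrow = 2) # the 2 x 3 matrix (1 4 5; 2 6 10)
    B <- matrix(c(2, 1, 1, 3, 4, 5), nrow = 2)  # the 2 x 3 matrix (2 1 4; 1 3 5)
    A + B                                       # gives (3 5 9; 3 9 15)

    # Matrix multiplication uses %*% and is not commutative
    S <- diag(c(2, 1))                          # scale x by 2, leave y alone
    K <- matrix(c(1, 0, 0.2, 1), nrow = 2)      # the shear matrix (1 0.2; 0 1)
    K %*% V                                     # the sheared vector (2.2; 1)
    S %*% K                                     # differs from K %*% S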

Figure 2.3: On the left the original pictures from On Growth and Form by D'Arcy Wentworth Thompson (1942) (which was first published in 1917). On the right we depict pictures from a computer program of the School of Mathematics and Statistics of the University of St. Andrews in Scotland, performing similar shape transformations (http://www-history.mcs.st-andrews.ac.uk/history/miscellaneous/darcy.html). The transformation matrices used apply rotation, scaling and shearing transformations.

Similarly, a matrix can be used to apply shearing to a vector. A shear force could stretch a vector in one direction, e.g., parallel to the x-axis (see Fig. 2.2):

    (w_x; w_y) = (1 k; 0 1) (x; y),

which results in w_x = x + ky and w_y = y. Note that as matrices can be multiplied with one another to obtain a new matrix, the multiplication of rotation, scaling and shearing matrices can result in a single matrix performing a complex transformation in one go. Also note that any point (x, y) on an object can be considered as a vector v = (x; y). Thus, one can apply complex transformation matrices to objects (basically a collection of points), to change them into objects with different orientations, shapes and sizes. This can for example be applied to simulate or deduce the changes in shape and size that occur during development, evolution or growth in animals and plants. Indeed, the famous mathematician and biologist D'Arcy Wentworth Thompson used transformation matrices to show how one could go from the shape of one fish species to that of another fish species, or from the shape of a human skull to the shape of a chimpanzee skull. He called this the theory of transformation, which he described in his 1917 book On Growth and Form (see Fig. 2.3 and the re-edited version of the book (Thompson, 1942)).

2.2 Matrices and systems of equations

In this course we will use matrices to write down systems of linear equations. For instance, consider

    x − 2y = −5,
    2x + y = 10,    (2.4)

and write the coefficients in front of x and y in the left hand side as a square matrix:

    A = (1 −2; 2 1).

We also have two numbers in the right hand side which one can write as a vector, i.e., V = (−5; 10). Now if one writes x and y as a vector X = (x; y), one can represent the system of Eq. (2.4), using the definition of matrix multiplication in Eq. (2.3), as A X = V, or

    (1 −2; 2 1) (x; y) = (−5; 10).

To find the solution of this system one solves x − 2y = −5 to obtain x = 2y − 5. Substituting this into 2x + y = 10 gives 2(2y − 5) + y = 10, 4y − 10 + y = 10, 5y = 20, and finally y = 4. Substituting this into x = 2y − 5 gives x = 2 · 4 − 5 = 3. Thus, the solution is (x y) = (3 4).

Let us define the trace and the determinant as two important properties of square matrices. For the 2 × 2 matrices A = (a b; c d) that we consider in this course these are defined as:

    det[A] = ad − bc  and  tr[A] = a + d.    (2.5)

Determinants were invented to study whether a system of linear equations can be solved. It can be shown that solutions, like the one obtained above, are only possible when det[A] ≠ 0. To see why this is the case consider the following general linear system

    ax + by = p,
    cx + dy = q.

Start by solving the first equation: ax + by = p, i.e., x = p/a − by/a. Use this solution to solve the second equation, giving cp/a − cby/a + dy = q, and hence

    (d − cb/a) y = q − cp/a  or  ((da − cb)/a) y = (qa − cp)/a  or  y = (qa − cp)/(da − cb),

which only has a finite solution if da − cb is not equal to zero. This indeed corresponds to the determinant ad − cb of the matrix defining this linear system, i.e., if the determinant equals zero, there is no solution. Note that one can also calculate the determinant of larger square matrices, but their treatment lies outside the scope of this course.
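In R the system of Eq. (2.4) can be solved, and the determinant checked, with the built-in functions solve() and det(). A minimal sketch:

    A <- matrix(c(1, 2, -2, 1), nrow = 2)   # the matrix (1 -2; 2 1), filled column-wise
    det(A)                                  # 5, non-zero, so a unique solution exists
    solve(A, c(-5, 10))                     # (3, 4), the solution found above
    sum(diag(A))                            # the trace, tr[A] = 2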

2.3 Forest succession

A simple example of a matrix model in ecology is the model developed by Henry Horn, who studied ecological succession in forests in the USA (Horn, 1975). He recorded the different species of saplings that were present under each species of tree, and assumed that each tree would be replaced by another tree proportional to these sapling densities. Taking time steps of 50 years he also estimated the specific rate of survival of each tree species. This resulted in the following table:

                Gray Birch  Blackgum  Red Maple  Beech
    Gray Birch     0.05       0.01       0         0
    Blackgum       0.36       0.57       0.14      0.01
    Red Maple      0.50       0.25       0.55      0.03
    Beech          0.09       0.17       0.31      0.96

with columns summing up to one. Each diagonal element gives the probability that after 50 years a tree is replaced by a tree of the same species (which is the sum of its survival rate (still standing), and the rate of replacement by itself). Each off-diagonal element in this table gives the probability that a particular species is replaced by another species. For example, the fraction of Red Maple trees after 50 years would be 0.50 times the fraction of Gray Birch trees, plus 0.25 times the fraction of Blackgum trees, plus 0.55 times the fraction of Red Maples, plus 0.03 times the fraction of Beech trees. He actually measured several more species, but we give the major four species here for simplicity. This data can be written as a square matrix:

    A = (0.05 0.01 0 0; 0.36 0.57 0.14 0.01; 0.50 0.25 0.55 0.03; 0.09 0.17 0.31 0.96),

and the current state of the forest as a column vector, e.g., V0 = (1; 0; 0; 0), which would be a monoculture of just Gray Birch trees. After 50 years the next state of the forest is defined by the multiplication of the initial vector by the matrix:

    V50 = A V0 = (0.05; 0.36; 0.50; 0.09),    (2.6)

which is a forest with 5% Gray Birch, 36% Blackgum, 50% Red Maple, and 9% Beech trees. Check for yourself that we indeed obey the normal rule of matrix multiplication, and that we obtain the first column of the matrix A because we start with a forest that is just composed of Birch trees. The next state of the forest is

    V100 = A V50 = (0.0061; 0.2941; 0.3927; 0.3071),    (2.7)

and so on. Fig. 2.4a shows a time course that is obtained by applying this transformation 25 times, which reveals that this model describes the succession of these types of forest quite realistically (Horn, 1975). What would we now predict for the ultimate state (climax state) of this forest, and would that depend on the initial state? Actually, we can already see from the last two equations that after 5000 years, i.e., 100 intervals of 50 years, the state of the forest is given by V5000 = A^100 V0, where

    A^100 = (0.0005 0.0005 0.0005 0.0005; 0.048 0.048 0.048 0.048; 0.085 0.085 0.085 0.085; 0.866 0.866 0.866 0.866).

Now consider an arbitrary vector V = (x; y; z; w), where w = 1 − x − y − z, and notice that

    A^100 V = (0.0005(x+y+z+w); 0.048(x+y+z+w); 0.085(x+y+z+w); 0.866(x+y+z+w)) = (0.0005; 0.048; 0.085; 0.866),

meaning that the succession converges to this climax state. Since all columns of this matrix are identical, we obtain that the state of the forest after 5000 years is independent of the initial vector. Next we will show that this climax vector is an eigenvector of the matrix A (i.e., the eigenvector associated with the dominant eigenvalue of A).
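The time course of Fig. 2.4a can be reproduced by repeated matrix multiplication. A minimal sketch in the spirit of the horn.r script mentioned below (the full script is provided on the webpage):

    A <- matrix(c(0.05, 0.36, 0.50, 0.09,   # first column: fate of Gray Birch
                  0.01, 0.57, 0.25, 0.17,   # second column: fate of Blackgum
                  0,    0.14, 0.55, 0.31,   # third column: fate of Red Maple
                  0,    0.01, 0.03, 0.96),  # fourth column: fate of Beech
                nrow = 4)
    V <- c(1, 0, 0, 0)                      # a monoculture of Gray Birch
    for (i in 1:25) V <- A %*% V            # 25 steps of 50 years = 1250 years
    round(t(V), 4)                          # close to the climax (0.0005 0.048 0.085 0.866)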

Figure 2.4: Repeated transformation of a vector by a matrix produces a vector with a direction closer and closer to the dominant eigenvector of the matrix, i.e., the eigenvector corresponding to the largest eigenvalue. Panel (a) depicts the vectors V representing the frequencies of the four trees in a forest as a time plot. Starting with V0 = (1; 0; 0; 0) the climax V = (0.0005; 0.048; 0.085; 0.866) is approached after about 800 years. This climax corresponds to the dominant eigenvector of the transition matrix observed by Horn (1975). Panel (b) depicts a few iterations with the transformation matrix A = (1 2; 2 1) with eigenvalues λ1 = 3 and λ2 = −1, and corresponding eigenvectors v1 = (1; 1) and v2 = (−1; 1). Starting with the red vector v0 = (0; 1) we obtain the green v1 = A v0 = (2; 1). Applying the same transformation again, we obtain the blue v2 = A v1 = (4; 5), and next the orange v3 = A v2 = (14; 13). Note that the direction of these vectors approaches the dominant eigenvector v1 = (1; 1). Panel (a) was made with the R script horn.r and Panel (b) with eigen.r. The latter script allows one to do more than just three iterations and rescales the length of the vectors to one.

2.4 Eigenvalues and eigenvectors

We have learned above that using matrices one can transform vectors, and change both their length and their direction. It turns out that for each particular matrix there exists a set of special vectors called its eigenvectors. Applying the matrix to an eigenvector will only change its length, and not its direction. Note that this is the same effect as multiplying the vector with a scalar, i.e., the transformation matrix in fact behaves as a scalar when applied to an eigenvector. The factor by which the eigenvector changes size when the matrix is applied to it, i.e., the scaling factor, is called the corresponding eigenvalue. Each eigenvector will have its own eigenvalue, and an n-dimensional matrix can maximally have n eigenvectors and eigenvalues. Formally, one can write this as follows:

    A v = λ v,    (2.8)

which says that for a certain vector v, application of the transformation matrix A results only in the scaling of this vector by an amount λ. Thus, v is an eigenvector and λ is the corresponding eigenvalue of transformation matrix A. Note that eigenvectors are not unique, in the sense that

one can always multiply them by an arbitrary constant k to obtain another eigenvector, i.e.,

    k A v = k λ v  or  A (k v) = λ (k v).    (2.9)

Therefore, one can say that k v is also an eigenvector of Eq. (2.8), corresponding to eigenvalue λ.

What is the use of knowing eigenvalues and eigenvectors of a matrix and hence a system of equations? We have seen in the forest succession example that eigenvectors give the principal directions of change imposed by a matrix. Eigenvalues give the amount of change in each of these directions. Knowing the eigenvector with the largest eigenvalue, one can to a large extent predict the effect of a matrix on a vector, and hence predict the behavior of the system, as it will be rotated into the direction of this dominant eigenvector (see Fig. 2.4). Finding eigenvalues and eigenvectors is one of the most important problems in applied mathematics. It arises in many biological applications, such as population dynamics, biostatistics, bioinformatics, image processing and many other fields. In this course we will only use it for the solution of systems of differential equations, which is considered in Chapter 3.

Let us now explain the general approach to solve the eigenvalues λ and eigenvectors v of an arbitrary 2 × 2 matrix. Thus, consider the problem

    A v = (a b; c d) (x; y) = λ (x; y).    (2.10)

First rewrite this into a system of two equations with three unknowns λ, x, y:

    ax + by = λx,
    cx + dy = λy,    (2.11)

which can be further rewritten as:

    (a − λ)x + by = 0,
    cx + (d − λ)y = 0,
  or in matrix form
    (a − λ  b; c  d − λ) (x; y) = (0; 0).    (2.12)

This system always has the solution x = y = 0. However, eigenvectors are defined to be non-zero vectors, and the (0; 0) solution does not correspond to an eigenvector. In order to find non-zero solutions multiply the first equation by d − λ, the second equation by b, and then subtract them. Multiplication gives:

    (d − λ)[(a − λ)x + by] = 0,
    b[cx + (d − λ)y] = 0,
  or
    (d − λ)(a − λ)x + (d − λ)by = 0,
    bcx + b(d − λ)y = 0.

Subtracting the second equation from the first then gives:

    [(d − λ)(a − λ) − bc] x = 0,

and given that x ≠ 0 one obtains

    (d − λ)(a − λ) − bc = λ² − (a + d)λ + (ad − cb) = λ² − tr λ + det = 0,    (2.13)

which is a quadratic equation with two possible solutions λ1 and λ2, found using the classical abc-formula:

    λ1,2 = (tr ± √(tr² − 4 det)) / 2.    (2.14)

Eq. (2.13), or the equivalent Eq. (2.14), is called the characteristic equation. In general, for an n × n matrix there is a maximum of n solutions for λ.

For example, use this approach to find the eigenvalues of the following matrix:

    A = (1 2; 2 1),

with tr = 2 and det = 1 − 4 = −3. Using the characteristic equation one writes

    λ1,2 = (2 ± √(4 + 12)) / 2 = (2 ± √16) / 2,

giving λ1 = 3 and λ2 = −1. As a next step we have to find the eigenvectors belonging to these two eigenvalues. One can do this by substituting the eigenvalues into the original equations and solving the equations for x and y. For the eigenvector corresponding to the eigenvalue λ1 = 3 one obtains:

    (1 − 3)x + 2y = 0,          −2x + 2y = 0,          x = y,
    2x + (1 − 3)y = 0,    or    2x − 2y = 0,     or    x = y.    (2.15)

The two equations give us the same solution: x = y. This means that v1 = (1; 1) is an eigenvector corresponding to the eigenvalue λ1 = 3. However, we can use any other value for x and hence y, as long as x = y is satisfied, which we can write as

    (x; y) = k (1; 1),    (2.16)

where k is an arbitrary number. Eq. (2.16) thus gives us all possible solutions of Eq. (2.15). It also illustrates a general property of eigenvectors which we have already proven in Eq. (2.9), namely that if we multiply an eigenvector by an arbitrary number k, we will get another eigenvector of our matrix. Using this matrix to repeatedly transform an initial vector indeed turns these vectors into the direction of the dominant eigenvector (Fig. 2.4b).

Similarly we can find the eigenvector corresponding to the other eigenvalue λ2 = −1:

    (1 + 1)x + 2y = 0,          2x + 2y = 0,          x = −y,
    2x + (1 + 1)y = 0,    or    2x + 2y = 0,    or    x = −y.    (2.17)

Hence the relation between x and y obeys x = −y, and for the eigenvector we could use v2 = (−1; 1).

Note that in both cases we could have used just the first equation to find the eigenvectors. In both cases the second equation did not provide any new information. Therefore we introduce a simpler method for finding the eigenvectors of a general system (see Eq. (2.11)). Assume we found eigenvalues λ1 and λ2 from the characteristic equation, Eq. (2.13). To find the corresponding eigenvectors, one needs to substitute these eigenvalues into the matrix and solve the following system of linear equations (see Eq. (2.12)):

    (a − λ1)x + by = 0,
    cx + (d − λ1)y = 0.    (2.18)

It is easy to check that the values x = −b and y = a − λ1 give a solution of the first equation,

    (a − λ1)x + by = (a − λ1)(−b) + b(a − λ1) = 0,

and substituting these expressions into the second equation provides

    cx + (d − λ1)y = −cb + (d − λ1)(a − λ1) = 0,

which is zero because (d − λ1)(a − λ1) − cb = 0, in accordance with the characteristic equation, Eq. (2.13). Therefore x = −b and y = a − λ1 give a solution of Eq. (2.18), which is an eigenvector corresponding to the eigenvalue λ1. Similarly we find the eigenvector corresponding to the eigenvalue λ2. Note that this approach will fail when both b = 0 and a − λ = 0 in Eq. (2.12). In such cases one can use the second equation, cx + (d − λ1)y = 0, to find an eigenvector as x = d − λ1 and y = −c. Summarizing, the final formulas are:

    v1 = (−b; a − λ1) & v2 = (−b; a − λ2)  or  v1 = (d − λ1; −c) & v2 = (d − λ2; −c).    (2.19)

Applying this to the example used above, i.e.,

    A = (1 2; 2 1)  with eigenvalues λ1 = 3 and λ2 = −1,

the eigenvectors can be found from Eq. (2.19) as:

    v1 = (−b; a − λ1) = (−2; 1 − 3) = (−2; −2)  and  v2 = (−2; 1 − (−1)) = (−2; 2),

which are indeed equivalent to the eigenvectors, v1 = (1; 1) and v2 = (−1; 1), that were obtained above.

2.5 Exercises

1. Write the following linear systems in the matrix form A X = V. Find the determinant of matrix A.
   a. x − 4y = 3, 2x + y = 1
   b. ax + by = 0, cx + dy = b

2. Find the eigenvalues and eigenvectors of the following matrices:
   a. (2 1; 1 2)
   b. (1 4; 1 1)

3. Study the model of Horn (1975) with the R script horn.r. Note the notation %*% for matrix multiplication!
   a. Will succession in this model approach the same climax state for any initial condition?
   b. Is that biologically reasonable?
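Both the worked example and exercise 2 can be checked with R's eigen() function. A minimal sketch for the example matrix:

    A <- matrix(c(1, 2, 2, 1), nrow = 2)        # the matrix (1 2; 2 1)
    e <- eigen(A)
    e$values                                    # 3 and -1
    e$vectors                                   # columns proportional to (1; 1) and (-1; 1)
    A %*% e$vectors[, 1] - 3 * e$vectors[, 1]   # the zero vector, since A v = lambda v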

Chapter 3 Systems of two differential equations

The general form of a system of two differential equations is:

    dx/dt = f(x, y),
    dy/dt = g(x, y),    (3.1)

where x(t) and y(t) are unknown functions of time t, and f and g are functions of both x and y. A linear example of such a system is

    dx/dt = ax + by,
    dy/dt = cx + dy.    (3.2)

This is called a linear system of differential equations because the equations only contain linear terms. Depending on the values and signs of the parameters a, b, c, d these equations can describe a range of different processes. For example, consider the specific case a = −2, b = 1, c = 1, d = −2:

    dx/dt = −2x + y,
    dy/dt = x − 2y,    (3.3)

where x and y decay at a rate 1 per unit of time, and are converted into one another at a rate 1 (and, hence, each population has a total loss rate of 2 per unit of time).

3.1 Solutions of Linear 2D Systems

As for one-dimensional systems, it is possible to find analytical solutions for linear two-dimensional systems like Eq. (3.2). Rather than deriving this solution, we will simply provide it to illustrate its analogy with the solution of linear one-dimensional systems. For one-dimensional linear systems of the form dx/dt = ax, we know that

    x(t) = C e^(at)    (3.4)

is the general solution, where C is an unknown constant depending on the initial value of x (in this case C = x(0)). From this equation it follows that for a > 0, x approaches infinity over time, which means that x = 0 is an unstable equilibrium. For a < 0, x will approach zero, meaning that x = 0 is a stable equilibrium (or attractor) of this equation.

In an analogous manner, a two-dimensional system of the form

    dx/dt = ax + by,
    dy/dt = cx + dy,

which in matrix notation can be written as

    (dx/dt; dy/dt) = (a b; c d) (x; y) = A (x; y),

has as a general solution

    x(t) = C1 x1 e^(λ1 t) + C2 x2 e^(λ2 t),
    y(t) = C1 y1 e^(λ1 t) + C2 y2 e^(λ2 t),

which in matrix notation can be written as:

    (x(t); y(t)) = C1 (x1; y1) e^(λ1 t) + C2 (x2; y2) e^(λ2 t),    (3.5)

where λ1, λ2 are the eigenvalues, and v1 = (x1; y1) and v2 = (x2; y2) the corresponding eigenvectors of the matrix A. As we saw in Chapter 2, the eigenvectors indicate the major directions of change of the system described by the matrix, and apparently all solutions can be written as a linear combination of the growth along the two eigenvectors. Finally, note that similar to the single unknown C depending on x(0) in the one-dimensional solution of Eq. (3.4), we here have two unknowns C1 and C2 that are defined by the initial values of x and y, i.e., x(0) = C1 x1 + C2 x2 and y(0) = C1 y1 + C2 y2.

Similar to the single exponent a in the solution of one-dimensional linear systems, the signs of the two eigenvalues determine the stability of the equilibrium point (0, 0). Note that the equilibrium point will only be stable when both exponentials in the solution converge to zero, which implies that both eigenvalues need to be smaller than zero. In case of complex valued eigenvalues (see Chapter 7), which occur for spiral (and center) points, the real part of the two eigenvalues needs to be smaller than zero. Since x(t) and y(t) grow when λ1,2 > 0 we obtain for the steady state (0, 0):

- a stable node when both λ1,2 < 0,
- an unstable node when both λ1,2 > 0,
- an (unstable) saddle point when λ1 > 0 and λ2 < 0 (or vice versa).

When λ1,2 are complex, i.e., λ1,2 = α ± iβ, we obtain for (0, 0):

- a stable spiral when the real part α < 0,
- an unstable spiral when the real part α > 0,
- a neutrally stable center point when the real part α = 0.

For example, derive the general solution of the system in Eq. (3.3) for the matrix

    A = (−2 1; 1 −2).

Since tr = −4 and det = 4 − 1 = 3 we obtain:

    λ1,2 = (−4 ± √(16 − 12)) / 2 = −2 ± 1,

such that λ1 = −1 and λ2 = −3. Hence solutions tend to zero and (x, y) = (0, 0) is a stable node. To find the eigenvector v1 we can now write

    v1 = (−b; a − λ1) = (−1; −1)  or  v1 = (1; 1),

and for v2 we can write:

    v2 = (−b; a − λ2) = (−1; −2 + 3) = (−1; 1).

We write the general solution as

    (x(t); y(t)) = C1 (1; 1) e^(−t) + C2 (−1; 1) e^(−3t),
  or
    x(t) = C1 e^(−t) − C2 e^(−3t),
    y(t) = C1 e^(−t) + C2 e^(−3t).

Note that the integration constants C1 and C2 can subsequently be solved from the initial condition, i.e., x(0) = C1 − C2 and y(0) = C1 + C2. One can check this solution by substituting it into Eq. (3.3), and comparing that to the derivative of the solution (not shown).

3.2 Exercises

1. Find the solution of the following initial value problem:

    (dx/dt; dy/dt) = (2 1; −5 8) (x; y)  with  (x(0); y(0)) = (3; 3).

   Hint: Proceed by first finding the general solution. After that, substitute t = 0 and x(0) = 3, y(0) = 3 to find the values of the constants C1 and C2.

2. Two different concentrations of a solution are separated by a membrane through which the solute can diffuse. The rate at which the solute diffuses is proportional to the difference in the concentrations of the two solutions. The differential equations governing the process are

    dA/dt = −(k/V1)(A − B),
    dB/dt = (k/V2)(A − B),

   where A and B are the two concentrations, V1 and V2 are the volumes of the respective compartments, and k is the rate constant at which the chemical exchanges between the two compartments. If V1 = 20 liters, V2 = 5 liters, and k = 0.2 liters/min, and if initially A = 3 moles/liter and B = 0, find A and B as functions of time. Hint: this exercise is similar to the previous one!
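Such solutions (and those of the exercises) can be verified by numerical integration. A minimal sketch, assuming the deSolve package (the integrator used by Grind) is installed; with x(0) = 2 and y(0) = 0 the constants are C1 = 1 and C2 = −1:

    library(deSolve)
    model <- function(t, state, parms)
      with(as.list(state), list(c(-2*x + y, x - 2*y)))   # Eq. (3.3)
    times <- seq(0, 5, by = 0.1)
    out <- ode(y = c(x = 2, y = 0), times = times, func = model, parms = NULL)
    xa <- exp(-times) + exp(-3*times)        # analytic x(t) with C1 = 1 and C2 = -1
    max(abs(out[, "x"] - xa))                # tiny difference: the formula is correct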


Chapter 4 Linear approximation of non-linear 2D systems

For non-linear systems we typically do not have an analytical solution. In this chapter we will discuss that such a system can be linearized around its steady state. One can then establish the stability of that steady state by solving the linearized system. The analytical solution of the approximate linear system approaches the behavior of the original system closely as long as we remain close to the equilibrium point.

Since the derivative of a function f(x) at the point x̄ can be written as

    f′(x̄) = lim_{x→x̄} (f(x) − f(x̄)) / (x − x̄)  or  f′(x̄) = lim_{h→0} (f(x̄ + h) − f(x̄)) / h,

we can use this expression to write a linear approximation of f(x) for x close to x̄, i.e.,

    f(x) ≈ f(x̄) + f′(x̄)(x − x̄)  or  f(x) ≈ f(x̄) + f′(x̄) h,

where h ≈ 0. Indeed, a two-dimensional function f(x) = ax² + b can be represented as a curved line in a two-dimensional plot, with the value of x on the x-axis and the value of f(x) on the y-axis (see the curved red line in Fig. 4.1a). By taking the derivative of f in a particular point x̄, i.e., f′(x̄), we obtain the slope, or tangent line, of the graph of f(x) in the point x̄ (see the straight blue line in Fig. 4.1a). To explicitly write that we are taking the derivative with respect to x we can also write f′(x̄) as ∂x f(x̄), i.e., for f(x) = ax² + b we obtain that ∂x f(x) = 2ax. The derivative can be used to approximate the curved f(x) around a particular value x̄. From Fig. 4.1a we can read that f(x) ≈ f(x̄) + ∂x f(x̄)(x − x̄), where h = x − x̄ is a small step in the x-direction that we multiply with the local slope, ∂x f(x̄), to approximate the required change in the vertical direction. For example, with a = 2 and b = 1 and x̄ = 3 we obtain that f(3) = 2 · 9 + 1 = 19, and that ∂x f(3) = 2 · 2 · 3 = 12. One would estimate the value of f(3.1) by writing f(3.1) ≈ f(3) + ∂x f(3) · 0.1 = 19 + 12 · 0.1 = 20.2, while the true value is f(3.1) = 20.22. When h → 0 this linear approximation should become extremely good.
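For the example above, the quality of the tangent-line approximation is easily checked in R:

    f <- function(x) 2*x^2 + 1           # the example with a = 2 and b = 1
    fprime <- function(x) 4*x            # the derivative f'(x) = 2ax = 4x
    f(3) + fprime(3) * 0.1               # 19 + 1.2 = 20.2, the linear estimate of f(3.1)
    f(3.1)                               # 20.22, the true value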

Figure 4.1: On the left we have the function f(x) = ax² + b (curved red line) with its local derivative in the point x̄ depicted as the tangent line with slope ∂x f(x̄) = 2a x̄ (straight blue line). On the right we depict the function f(x, y) = 3x − x² − 2xy, with in the point (x̄, ȳ) = (1, 1) its partial derivatives ∂x f(x, y) = 3 − 2x − 2y = −1 and ∂y f(x, y) = −2x = −2, depicted by the heavy red and blue lines, respectively. We approximate the value of f(1.25, 1.25) by these partial derivatives (see the colored small plane), i.e., f(1.25, 1.25) ≈ f(1, 1) + ∂x f(1, 1) · 0.25 + ∂y f(1, 1) · 0.25 = −1 · 0.25 − 2 · 0.25 = −0.75. Note that the true value is f(1.25, 1.25) = −0.9375, and that the short vertical heavy purple line depicts the distance between the true and the approximated value.

4.1 Partial derivatives

Now consider the three-dimensional function f(x, y) = 3x − x² − 2xy plotted in Fig. 4.1b, with x on the x-axis, y on the y-axis, and the value of the function f(x, y) on the upward z-axis. The function value at the point (x̄, ȳ) = (1, 1) is zero, i.e., f(1, 1) = 3 − 1 − 2 = 0. We can now linearize the function by differentiating it with respect to x and y, respectively, i.e.,

    ∂x f(x, y) = 3 − 2x − 2y  and  ∂y f(x, y) = −2x,

because y is treated as a constant when one differentiates with respect to x, and x is taken as a constant when we take the partial derivative with respect to y. To make this explicit one speaks of partial derivatives of the function f(x, y). These partial derivatives again define the local tangents of the curved function f(x, y). For instance, in the point (x̄ = 1, ȳ = 1) the slope in the x-direction is ∂x f(1, 1) = 3 − 2 − 2 = −1 (see the heavy red line in Fig. 4.1b), and the slope in the y-direction is ∂y f(1, 1) = −2 (see the heavy blue line in Fig. 4.1b). We can use these two tangent lines to approximate f(x, y) close to the point (x̄, ȳ), i.e.,

    f(x, y) ≈ f(x̄, ȳ) + ∂x f(x̄, ȳ)(x − x̄) + ∂y f(x̄, ȳ)(y − ȳ).    (4.1)

Because f(x̄, ȳ) = f(1, 1) = 0 the approximation would in this case simplify to

    f(x, y) ≈ ∂x f(x̄, ȳ)(x − x̄) + ∂y f(x̄, ȳ)(y − ȳ),    (4.2)

where again we could write h_x = x − x̄ and h_y = y − ȳ to define the step sizes in the x-direction and y-direction, respectively. For very small step sizes this should become a very good approximation. For instance, taking a step size h_x = h_y = 0.25 we obtain that f(x, y) ≈ −1 h_x − 2 h_y = −0.25 − 0.5 = −0.75 (see the small orange plane in Fig. 4.1). This is close to the true function value f(1.25, 1.25) = −0.9375 (the error is depicted by the short vertical purple line in Fig. 4.1).

4.2 Linearization of a system: Jacobian

Consider a general system of two differential equations:

    dx/dt = f(x, y),
    dy/dt = g(x, y),    (4.3)

with an equilibrium point at (x̄, ȳ), i.e., f(x̄, ȳ) = 0 and g(x̄, ȳ) = 0. Using Eq. (4.2) we find a linear approximation of f(x, y) close to the equilibrium:

    f(x, y) ≈ ∂x f(x̄, ȳ)(x − x̄) + ∂y f(x̄, ȳ)(y − ȳ) = ∂x f (x − x̄) + ∂y f (y − ȳ),    (4.4)

where ∂x f = ∂x f(x̄, ȳ) is an abbreviation for the partial derivative of f(x, y) at the steady state (x̄, ȳ). A similar approach for g(x, y) yields:

    g(x, y) ≈ ∂x g (x − x̄) + ∂y g (y − ȳ),    (4.5)

where ∂x g = ∂x g(x̄, ȳ) is an abbreviation for the partial derivative of g(x, y) at the steady state (x̄, ȳ). If we now replace the right hand sides of Eq. (4.3) by their local approximations around (x̄, ȳ), i.e., Eq. (4.4) and Eq. (4.5), we obtain

    dx/dt = ∂x f (x − x̄) + ∂y f (y − ȳ),
    dy/dt = ∂x g (x − x̄) + ∂y g (y − ȳ).    (4.6)

The system of Eq. (4.6) is simpler than the original system defined by Eq. (4.3), because the partial derivatives in Eq. (4.6) are real numbers simply representing a slope at the equilibrium point (x̄, ȳ). We therefore rewrite Eq. (4.6) into

    dx/dt = a(x − x̄) + b(y − ȳ),
    dy/dt = c(x − x̄) + d(y − ȳ),    (4.7)

where a = ∂x f, b = ∂y f, c = ∂x g and d = ∂y g. As x̄ and ȳ are also constants, and hence their time derivatives dx̄/dt and dȳ/dt are zero, we can apply a trick and write

    dx/dt = dx/dt − dx̄/dt = d(x − x̄)/dt  and  dy/dt = dy/dt − dȳ/dt = d(y − ȳ)/dt,

giving

    d(x − x̄)/dt = a(x − x̄) + b(y − ȳ),
    d(y − ȳ)/dt = c(x − x̄) + d(y − ȳ).    (4.8)

Because x − x̄ and y − ȳ define the distances to the steady state (x̄, ȳ), we can change variables and rewrite this in terms of the distances, h_x = x − x̄ and h_y = y − ȳ, i.e.,

    dh_x/dt = a h_x + b h_y,
    dh_y/dt = c h_x + d h_y.    (4.9)

Since this has the form of a general linear system, we know the solution:

    (h_x(t); h_y(t)) = C1 (x1; y1) e^(λ1 t) + C2 (x2; y2) e^(λ2 t),

where λ1 and λ2 are the eigenvalues of the matrix defined by the four constants in Eq. (4.9), and v1 = (x1; y1) and v2 = (x2; y2) are the corresponding eigenvectors. In Chapter 3 we learned that this means that the distances to the steady state decline when λ1,2 < 0. In all other cases small disturbances around the equilibrium will grow. Summarizing, to determine the behavior of a general 2D system, Eq. (4.3), around a steady state, we need to determine the values of the partial derivatives in the equilibrium point, which together constitute the matrix defining the linearized system:

    J = ( ∂x f  ∂y f ; ∂x g  ∂y g ) = ( a b ; c d ).    (4.10)

This matrix is called the Jacobian of system Eq. (4.3) in the point (x̄, ȳ). This Jacobi matrix allows us to determine the eigenvalues and hence establish the type of the equilibrium of the original non-linear system. The approach we developed here for 2D systems is equally valid for systems composed of more than two ODEs. One just obtains a larger Jacobi matrix and computes the dominant eigenvalues of that matrix to establish the stability, and/or type, of the equilibrium point. Having this solution one can see that the rate at which perturbations die out is determined by the largest eigenvalue. Hence, the return time is defined by the dominant eigenvalue:

    T_R = −1 / max(λ1, λ2)  when λ1,2 < 0

(otherwise the return time is negative and not defined).

4.3 Exercises

1. Find the partial derivatives of these functions. After finding the derivatives evaluate their value at the given point.
   a. ∂x z and ∂y z for z(x, y) = x² + y⁴ at x = 1; y = 2
   b. ∂x z for z(x, y) = x(5 − x − y) at x = 3; y = 4

2. Find a linear approximation for the function f(x, y) = x² + y² at x = 1, y = 1.

3. Find the equilibria of the following non-linear systems, dx/dt = f(x, y) and dy/dt = g(x, y), and find the partial derivatives, ∂x f, ∂y f, ∂x g and ∂y g, at each equilibrium point:
   a. dx/dt = 4y, dy/dt = 4x − x² − 0.5y
   b. dx/dt = 9x + y², dy/dt = x − y
   c. dx/dt = x − xy, dy/dt = −y + xy

4. Find the Jacobian matrix of the non-trivial steady state of the Lotka-Volterra model, dR/dt = aR − bR² − cRN and dN/dt = dRN − eN.
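The partial derivatives of this chapter, and the Jacobians asked for in the exercises, can be checked numerically with finite differences. A minimal sketch (the helper names and the step h = 1e-6 are our own choices):

    dfdx <- function(f, x, y, h = 1e-6) (f(x + h, y) - f(x, y)) / h
    dfdy <- function(f, x, y, h = 1e-6) (f(x, y + h) - f(x, y)) / h

    f <- function(x, y) 3*x - x^2 - 2*x*y   # the example of Section 4.1
    dfdx(f, 1, 1); dfdy(f, 1, 1)            # about -1 and -2, as derived above

    # A numerical Jacobian of a 2D system at the point (x, y):
    jacobian <- function(f, g, x, y)
      matrix(c(dfdx(f, x, y), dfdy(f, x, y),
               dfdx(g, x, y), dfdy(g, x, y)), nrow = 2, byrow = TRUE)

    # Example: exercise 3c at its non-trivial equilibrium (1, 1):
    g <- function(x, y) -y + x*y
    jacobian(function(x, y) x - x*y, g, 1, 1)   # approximately (0 -1; 1 0)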

Chapter 5 Efficient analysis of 2D ODEs

In the previous chapters we learned how to determine the type and stability of an equilibrium from the interaction matrix of a linear system, or by determining the Jacobian matrix of a non-linear system in the equilibrium point. From these matrices we computed the eigenvalues to find the stability and the type of equilibrium. Here we will demonstrate an efficient method for determining the signs and types of the eigenvalues, and hence the type of equilibrium, from the coefficients of the matrix without actually computing the eigenvalues.

5.1 Determinant-trace method

We have learned in the previous chapters that we can linearize the non-linear functions of any system of differential equations, e.g.,

    dx/dt = f(x, y),
    dy/dt = g(x, y),

around a steady state (x̄, ȳ) into a Jacobian matrix

    J = ( ∂x f  ∂y f ; ∂x g  ∂y g ) = ( a b ; c d ),

and that the eigenvalues of this matrix are given by

    λ1,2 = (tr ± √D) / 2  where  D = tr² − 4 det,    (5.1)

where tr = a + d, det = ad − bc, and D is the discriminant. Thus, although the original linearized system depends on four parameters, a, b, c, d, the characteristic equation of Eq. (5.1) depends only on two parameters, tr[J] and det[J], and if we know the determinant and the trace of the Jacobian, we can find the eigenvalues, and hence the type of the equilibrium (x̄, ȳ). The solutions of Eq. (5.1) are like the roots of any quadratic equation, and one can prove that they obey the following expressions:

    λ1 + λ2 = tr[J]  and  λ1 λ2 = det[J].    (5.2)

Figure 5.1: The trace and the determinant determine the type of steady state. Plotting the trace, tr, along the horizontal axis and the determinant, det, along the vertical axis, we can plot the parabola where the discriminant D = 0. Saddle points (case 1 in the text) correspond to the lower half of the plane. Stable points (cases 3 and 5 in the text) are located in the upper left quadrant, and unstable points (cases 2 and 4) in the upper right section. The discriminant depicted by the parabola separates the real from the complex roots. For reasons of completeness we indicate center points along the positive part of the vertical axis where tr[J] = 0. Such steady states are neither stable nor unstable, i.e., they are said to be neutrally stable, and occur as bifurcation points (in proper models).

The former is true because λ1 + λ2 = ((tr + √D) + (tr − √D))/2 = tr, and the latter can be checked by writing

    ½(tr + √D) · ½(tr − √D) = ¼(tr² − D) = ¼(tr² − tr² + 4 det) = det.

Remember that the steady state is only stable when both eigenvalues are negative. When det > 0 one knows that either both eigenvalues are negative, or that they are both positive (because λ1 λ2 = det[J]). Having det > 0 and tr < 0 one knows that they cannot both be positive (because λ1 + λ2 = tr[J]), and therefore that they are both negative, and hence that the steady state has to be stable. Summarizing, a quick test for stability is tr[J] < 0 and det[J] > 0.

Although it is typically sufficient to know whether or not a steady state is stable, we can elaborate this somewhat because the signs of the trace, determinant, and discriminant also provide information on the type of the equilibrium (see Fig. 5.1):

1. if det < 0 then D > 0, both eigenvalues are real, with λ1,2 having unequal signs: saddle point.
2. if det > 0, tr > 0 and D > 0 the eigenvalues are real, with λ1,2 > 0: unstable node.
3. if det > 0, tr < 0 and D > 0 the eigenvalues are real, with λ1,2 < 0: stable node. The return time is defined as T_R = −1/λ_max.
4. if det > 0, tr > 0 and D < 0 the eigenvalues form a complex pair (see Chapter 7), λ1,2 = (tr ± i√(−D))/2, and having a positive trace means that the steady state is an unstable spiral, because the real part of the eigenvalues, tr/2, is positive.
5. if det > 0, tr < 0 and D < 0 the eigenvalues form a similar complex pair, but since the real part of the eigenvalues, tr/2, now is negative, the steady state is a stable spiral point (see Chapter 7). The return time is now defined as T_R = −1/λ_Re = −2/tr.

These five rules are easy to automate, as sketched below.
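A minimal sketch in R (the function name is our own choice):

    classify <- function(J) {             # J is a 2 x 2 Jacobi matrix
      tr <- sum(diag(J)); dt <- det(J); D <- tr^2 - 4 * dt
      if (dt < 0) return("saddle point")                                  # case 1
      if (D >= 0) return(if (tr < 0) "stable node" else "unstable node")  # cases 2 and 3
      if (tr < 0) "stable spiral" else
        if (tr > 0) "unstable spiral" else "center"                       # cases 4 and 5
    }
    classify(matrix(c(-2, 1, 1, -2), 2))  # "stable node": the example of Chapter 3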

5.2 Graphical Jacobian

Since the stability of the steady states just depends on the signs of the determinant and the trace of the Jacobi matrix, it is often sufficient to just know the signs of the partial derivatives that make up the Jacobian. Fortunately, the sign of the partial derivatives (+, −, 0) in the equilibrium can be obtained from the vector field around the steady state. The main idea can be seen in Fig. 5.2, where we consider an equilibrium point (x̄, ȳ) at the intersection of the solid dx/dt = f(x, y) = 0 nullcline and the dashed dy/dt = g(x, y) = 0 nullcline. Since

    ∂x f(x̄, ȳ) ≈ (f(x̄ + h, ȳ) − f(x̄, ȳ)) / h,

where h = x − x̄ is a small increase in x, this is the difference in the f(x, y) value between the original point (x̄, ȳ) and a nearby point with a slightly higher x value, (x̄ + h, ȳ), divided by the distance h between these two points. Similarly, since ∂y f(x̄, ȳ) ≈ (f(x̄, ȳ + h) − f(x̄, ȳ)) / h, the change in f(x, y) as a function of an increase in y, is the difference in the f(x, y) value between the original point (x̄, ȳ) and a nearby point with a slightly higher y value, (x̄, ȳ + h), divided by the distance h between these two points. In other words, since f(x̄, ȳ) = g(x̄, ȳ) = 0 in the equilibrium,

    J = ( ∂x f  ∂y f ; ∂x g  ∂y g ) ≈ ( f(x̄+h, ȳ)/h  f(x̄, ȳ+h)/h ; g(x̄+h, ȳ)/h  g(x̄, ȳ+h)/h ) = ( α β ; γ δ ),    (5.3)

with tr[J] = α + δ and det[J] = αδ − βγ. Obviously, this approximation will be best if the points (x̄ + h, ȳ) and (x̄, ȳ + h) are close to the equilibrium point, i.e., if h is small. Since we typically only need to know the signs of the trace and the determinant, it is often sufficient to just obtain the signs of the four elements of the Jacobian.

Summarizing, for the steady state (x̄, ȳ) we can use the point (x̄ + h, ȳ) (slightly to the right) and the point (x̄, ȳ + h) (slightly upward) to determine the signs of the partial derivatives (see Fig. 5.2):

- the horizontal vector field, → or ←, gives the sign of f(x, y) in those points, i.e., the horizontal arrow in the point (x̄ + h, ȳ) determines the sign of ∂x f(x̄, ȳ), and the horizontal arrow in the point (x̄, ȳ + h) determines the sign of ∂y f(x̄, ȳ).
- the vertical direction vectors, ↑ or ↓, provide the sign of g(x, y), i.e., the vertical arrow in the point (x̄ + h, ȳ) determines the sign of ∂x g(x̄, ȳ), and the vertical arrow in the point (x̄, ȳ + h) determines the sign of ∂y g(x̄, ȳ).

In Fig. 5.2a and c we see that the leftward horizontal arrow (←) in a point to the right of the equilibrium, (x̄ + h, ȳ), tells us that ∂x f(x̄, ȳ) = α < 0, and that the horizontal leftward arrow (←) in a point above the steady state tells us that ∂y f(x̄, ȳ) = β < 0. Similarly, the vertical upward arrow (↑) in the point (x̄ + h, ȳ) tells us that ∂x g(x̄, ȳ) = γ > 0, whereas the downward vertical arrow (↓) in the point (x̄, ȳ + h) tells us that ∂y g(x̄, ȳ) = δ < 0.

Figure 5.2: The graphical Jacobian method. Panel (a) shows the nullclines, vector field, and the location of the equilibrium (x̄, ȳ). Panel (b) shows two reference points, one located slightly to the right, (x̄ + h, ȳ), and one located just above, (x̄, ȳ + h), the equilibrium. These are used to compute the partial derivatives. Panel (c) shows the vector field in these two reference points, i.e., (←, ↑) and (←, ↓), respectively.

Figure 5.3: A few examples of applying the graphical Jacobian. Note that a point to the right of the equilibrium would lie on a nullcline if that nullcline is perfectly horizontal, and hence that f_x = 0 (if this were the dx/dt = f(x, y) = 0 nullcline) or that g_x = 0 (if this would be the dy/dt = g(x, y) = 0 nullcline). Similarly, if a nullcline is exactly vertical we obtain that either f_y = 0 or that g_y = 0 (depending on which nullcline the point lands on).

Consider the examples of this graphical approach for the nullclines presented in Fig. 5.3, where the shifted points (x̄ + h, ȳ) and (x̄, ȳ + h) are marked by bullets. In Panel

a. we find the following Jacobian J = ( α β ; γ δ ), with tr = α + δ < 0 and det = αδ − βγ > 0, implying that this is a stable node or stable spiral. We cannot tell the difference because that depends on the discriminant, i.e., the exact values of the partial derivatives,

b. we find the Jacobian J = ( α 0 ; 0 δ ), with tr = α + δ < 0, det = αδ > 0, and hence D = tr² − 4 det = (α + δ)² − 4αδ = (α − δ)² > 0. Thus, this is a stable node (which one can also see from the vector field in Fig. 5.3b),

c. we find the Jacobian J = ( α 0 ; 0 δ ), with tr = α + δ having an unknown sign, but because det = αδ < 0, we know that this is a saddle point (which one can also derive from the vector field in Fig. 5.3c).
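The sign patterns of Fig. 5.3 can also be obtained numerically by sampling the model slightly to the right of and slightly above an equilibrium. A minimal sketch (the function name and the step size are our own choices; the Lotka-Volterra functions anticipate Chapter 6 and use purely illustrative parameter values):

    signJ <- function(f, g, x, y, h = 0.01)
      matrix(sign(c(f(x + h, y), f(x, y + h),    # horizontal arrows: signs of alpha, beta
                    g(x + h, y), g(x, y + h))),  # vertical arrows: signs of gamma, delta
             nrow = 2, byrow = TRUE)

    # Lotka-Volterra example with a = 2 and b = c = d = e = 1:
    f <- function(R, N) 2*R - R^2 - R*N
    g <- function(R, N) R*N - N
    signJ(f, g, 1, 1)     # (-1 -1; 1 0): tr < 0 and det > 0, so a stable point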

5.3 Plan of qualitative analysis

We summarize all of the above by formulating a plan to qualitatively study systems of two differential equations:

    dx/dt = f(x, y),
    dy/dt = g(x, y).    (5.4)

The main aim is to plot the phase portrait and determine the stability (and possibly the type) of the equilibrium points, such that we can predict the dynamics of the system. Start with sketching nullclines and the vector field:

1. Decide which variable can most easily be solved from the f(x, y) = 0 and g(x, y) = 0 expressions, and plot that variable on the vertical axis (and the other on the horizontal axis). Sketch the f(x, y) = 0 and g(x, y) = 0 nullclines in this phase space.
2. Choose a point in an extreme region (e.g., both variables big, both small, or an asymmetric point) on the (x, y) plane, and find the local horizontal arrow from f(x, y). Plot the corresponding arrow, i.e., → if f(x, y) > 0 and ← if f(x, y) < 0.
3. Check all regions of the phase space and swap the horizontal arrow when crossing the dx/dt = 0 nullcline.
4. Do the same to find the direction of the vertical arrows, i.e., take an extreme point to find the local vertical arrow from g(x, y), and swap this arrow when crossing the dy/dt = 0 nullcline.
5. Study the vector field in the four different regions surrounding each equilibrium point, and see if this provides enough information on the stability of the equilibrium. This can be done when the steady state is a stable node, an unstable node, or a saddle point.

Should the vector field be insufficient to determine the stability of an equilibrium, which probably means it is a spiral point, we determine the graphical Jacobian of the equilibrium point:

1. For each equilibrium point (x̄, ȳ) choose two points: one located slightly to the right, (x̄ + h, ȳ), and one slightly above, (x̄, ȳ + h), the equilibrium. Find the signs of the Jacobian from the local vector field at these points.
2. Compute the sign of the trace and determinant of this Jacobi matrix, check whether tr < 0 and det > 0, and see if this identifies the type of equilibrium (see Fig. 5.1).

Finally, if the stability of the equilibrium can not be determined from the graphical Jacobian, we need to determine the full Jacobian by taking the partial derivatives of f(x, y) and g(x, y) at the steady state (x̄, ȳ),

    J = ( ∂x f  ∂y f ; ∂x g  ∂y g ) = ( a b ; c d ),    (5.5)

and to solve the eigenvalues of this matrix, i.e.,

    λ1,2 = (tr ± √D) / 2  where  D = tr² − 4 det,    (5.6)

with tr = a + d and det = ad − bc. When both λ1 < 0 and λ2 < 0 the steady state (x̄, ȳ) is stable.

Independent of how we determined the type and stability of the equilibria, we use our knowledge of the type and stability of each equilibrium, together with the vector field and nullclines, to draw a local phase portrait with trajectories around each equilibrium point. Finally, connecting the different local phase portraits into a global phase portrait we get an idea of the separatrices and the basins of attraction of the attractors.

5.4 Exercises

1. Find the type and stability of the equilibria of the following linear (or linearized) systems using the determinant-trace method:
   a. dx/dt = 3x + y, dy/dt = −20x + 6y
   b. dx/dt = 2x + y, dy/dt = x − 10y
   c. dx/dt = x + 2y, dy/dt = −5x − y

2. Consider the following model: dx/dt = x(1 − y) and dy/dt = y − x, assuming x ≥ 0 and y ≥ 0.
   a. Find all equilibria of the system.
   b. Find the general expression for the Jacobian of this system.
   c. Determine the type of each equilibrium using the determinant-trace method, and sketch the qualitative local phase portraits around each equilibrium point. Try to connect them into a global phase portrait.

3. Study the model of the previous question again using the graphical Jacobian approach:
   a. Sketch the vector field for the system using nullclines.
   b. Find the type and stability of the equilibria using the graphical Jacobian.
   c. Compare your results to the previous question.

4. On the webpage we provide the Grind script linear.r which encodes Eq. (3.2).
   a. Find values of a, b, c and d such that the origin is a stable node, an unstable node, a saddle point, and a spiral point. Help yourself by reducing this problem to finding the proper trace, determinant, and discriminant of this matrix.
   b. Draw the nullclines with a phase portrait for each situation (plane(portrait=TRUE)).

Chapter 6

Lotka Volterra model

Using the famous Lotka Volterra model as an example, we review these methods for analyzing systems of non-linear differential equations. The Lotka-Volterra predator prey model can be written as:

dR/dt = aR − bR² − cRN and dN/dt = dRN − eN,    (6.1)

where a, b, c, d, and e are positive constant parameters, and R and N are the prey and predator densities.

A first step is to sketch nullclines (0-isoclines) in phase space. A nullcline is the set of points (R, N) for which the corresponding population remains constant. Thus, the R-nullcline is the set of points at which dR/dt = 0. Setting dR/dt = 0 and dN/dt = 0 in Eq. (6.1) one finds

R = 0, N = (a − bR)/c and N = 0, R = e/d,    (6.2)

for the prey nullclines and the predator nullclines, respectively. These four lines are depicted in Fig. 6.1. For biological reasons we only consider the positive quadrant.

Figure 6.1: The phase space of the Lotka Volterra model (prey density R on the horizontal axis, predator density N on the vertical axis; the intercepts a/c, e/d, and a/b are marked on the axes) with the vector field indicated by the arrows. We here consider the case where the nullclines intersect in a non-trivial equilibrium point, i.e., e/d < a/b.
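Although the analysis below never requires numerical parameter values, it can be instructive to integrate Eq. (6.1) numerically. The following sketch uses the deSolve package (also used by Grind); all parameter values are hypothetical and merely chosen such that e/d < a/b.

# Numerical integration of the Lotka-Volterra model of Eq. (6.1).
library(deSolve)
lotka <- function(t, state, parms) {
  with(as.list(c(state, parms)), {
    dR <- a*R - b*R^2 - c*R*N     # prey
    dN <- d*R*N - e*N             # predator
    list(c(dR, dN))
  })
}
parms <- c(a = 1, b = 0.1, c = 1, d = 1, e = 0.5)   # e/d = 0.5 < a/b = 10
state <- c(R = 1, N = 1)
out <- ode(y = state, times = seq(0, 100, 0.1), func = lotka, parms = parms)
tail(out, 1)   # by t = 100 the trajectory has spiralled close to (0.5, 0.95)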

A second step is to determine the vector field. Not knowing the parameter values, one considers extreme points in phase space. In the neighborhood of the point (R, N) = (0, 0), for example, one can neglect the quadratic bR², cRN, and dRN terms, such that

dR/dt ≈ aR, dN/dt ≈ −eN.    (6.3)

Since the former is strictly positive, and the latter strictly negative, we assign (→, ↓) to the local direction of the vector field (see Fig. 6.1). This means that dR/dt > 0 below the R-nullcline, i.e., we sketch arrows to the right, and that at the left hand side of the N-nullcline dN/dt < 0, i.e., we sketch arrows pointing to the bottom. At the R- and N-nullclines the arrows are vertical and horizontal, respectively. The derivatives switch sign, and the arrows switch their direction, when one passes a nullcline. Nullclines therefore separate the phase space into regions where the derivatives have the same sign.

An important step is to determine the steady states of the system. A steady state, or equilibrium, is defined as dR/dt = dN/dt = 0. Graphically, steady states are the intersects of the nullclines. Analytically, one finds

(R̄, N̄) = (0, 0), (R̄, N̄) = (a/b, 0) and (R̄, N̄) = (e/d, (da − eb)/(dc)),    (6.4)

as the three steady states of this system. Note that the non-trivial steady state only exists when (da − eb)/(dc) > 0. We will further analyze the model for the parameter condition that all three steady states exist, i.e., we consider da > eb.

Finally, one has to determine the nature of the steady states. For the steady states (0, 0) and (a/b, 0) one can read from the vector field that they are saddle points. Around (a/b, 0) the vertical component is the unstable direction, and around (0, 0) the horizontal component is the unstable direction. This is not so simple for the non-trivial point. Because there is no stable and unstable direction in the vector field, the non-trivial steady state cannot be a saddle point, and it has to be a node or a spiral point. To determine its stability one can check for local feedback. Increasing R at the steady state makes dR/dt < 0, and increasing N at the steady state keeps dN/dt = 0, because one lands exactly on the dN/dt = 0 nullcline (see Fig. 6.1). Because locally there is no positive feedback we expect that the non-trivial steady state is stable.

6.1 Linearization of non-linear ODEs

In Chapter 4 we learned that a formal technique to determine the stability of steady states is to linearize the differential equations by taking partial derivatives. The linearized system can be solved explicitly, and can be used to predict the long-term behavior in the immediate neighborhood of the steady state. This linearized system is written as a Jacobi matrix, and the local feedback we studied above was basically a trick to obtain the trace elements of the Jacobi matrix; if these are positive the steady state tends to be unstable. To linearize a 2-dimensional function f(x, y) around an arbitrary point (x̄, ȳ) one adopts Eq. (4.1), i.e.,

f(x, y) ≈ f(x̄, ȳ) + ∂_x f(x̄, ȳ) (x − x̄) + ∂_y f(x̄, ȳ) (y − ȳ),    (6.5)

or

f(x, y) = f(x̄ + h_x, ȳ + h_y) ≈ f(x̄, ȳ) + ∂_x f(x̄, ȳ) h_x + ∂_y f(x̄, ȳ) h_y,    (6.6)

where h_x = x − x̄ and h_y = y − ȳ define a small distance to the steady state.
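To see how good this approximation is, one can compare f(x̄ + h_x, ȳ + h_y) with its linearization. Below is a base-R sketch for the prey equation of the Lotka Volterra model, with the hypothetical parameter values used before and an arbitrarily chosen point.

# Check the linearization (6.6) of f(R, N) = aR - bR^2 - cRN at a point.
a <- 1; b <- 0.1; c <- 1
f <- function(R, N) a*R - b*R^2 - c*R*N
Rp <- 2; Np <- 0.4                 # an arbitrary point (not a steady state)
dfdR <- a - 2*b*Rp - c*Np          # partial derivative of f with respect to R
dfdN <- -c*Rp                      # partial derivative of f with respect to N
h <- 0.01                          # a small perturbation h_R = h_N = 0.01
f(Rp + h, Np + h)                  # exact value: 0.78189
f(Rp, Np) + dfdR*h + dfdN*h        # linearized value: 0.782

The smaller the perturbation h, the better the two values agree, which is why linearization predicts the local behavior around a steady state.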

Thus, for the Lotka Volterra model one defines the functions

dR/dt = aR − bR² − cRN = f(R, N) and dN/dt = dRN − eN = g(R, N).    (6.7)

Next one rewrites the model into two new variables h_R and h_N that define the distance to the steady state (R̄, N̄). Now one approximates the system f() and g() with

d(R̄ + h_R)/dt = dh_R/dt = f(R̄ + h_R, N̄ + h_N) ≈ f(R̄, N̄) + ∂_R f(R̄, N̄) h_R + ∂_N f(R̄, N̄) h_N,    (6.8)

and

d(N̄ + h_N)/dt = dh_N/dt = g(R̄ + h_R, N̄ + h_N) ≈ g(R̄, N̄) + ∂_R g(R̄, N̄) h_R + ∂_N g(R̄, N̄) h_N,    (6.9)

where h_R = R − R̄ and h_N = N − N̄ define the distance to (R̄, N̄). Because (R̄, N̄) is an equilibrium point, i.e., f(R̄, N̄) = 0 and g(R̄, N̄) = 0, this simplifies into

dh_R/dt ≈ ∂_R f(R̄, N̄) h_R + ∂_N f(R̄, N̄) h_N,    (6.10)

and

dh_N/dt ≈ ∂_R g(R̄, N̄) h_R + ∂_N g(R̄, N̄) h_N.    (6.11)

This linearized system describes the growth of a small disturbance (h_R, h_N) around the steady state (R̄, N̄). If the solutions of this system approach (h_R, h_N) = (0, 0) the steady state is locally stable. The four partial derivatives in the steady state form the so-called Jacobi matrix,

J = ( a − 2bR̄ − cN̄   −cR̄ ; dN̄   dR̄ − e ),    (6.12)

and the general solution of the linear system has the form

(h_R(t), h_N(t)) = c_1 e^{λ_1 t} (R_1, N_1) + c_2 e^{λ_2 t} (R_2, N_2),    (6.13)

where λ_{1,2} are the eigenvalues, and v_1 = (R_1, N_1) and v_2 = (R_2, N_2) the corresponding eigenvectors of the Jacobian. One can see that the steady state is stable if, and only if, λ_{1,2} < 0. Whenever both eigenvalues are negative, small disturbances will die out. To determine the stability of the three steady states, one therefore only needs to know the eigenvalues of the Jacobian (or its trace and determinant). For (R̄, N̄) = (0, 0) one finds

J = ( a   0 ; 0   −e ).    (6.14)

Because the matrix is in diagonal form, one can immediately see that the eigenvalues are λ_1 = a and λ_2 = −e. Because λ_1 > 0 the steady state is unstable, i.e., a saddle point. For (R̄, N̄) = (a/b, 0) one finds

J = ( −a   −ac/b ; 0   (da − eb)/b ).    (6.15)

The eigenvalues are λ_1 = −a and λ_2 = (da − eb)/b. Because of the requirement da > eb one knows λ_2 > 0, and hence that the point is not stable. (Note that (R̄, N̄) = (a/b, 0) will be stable when da < eb: to see what happens, sketch the nullclines for that situation.)
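These predictions are easy to verify numerically. For instance, one can fill Eq. (6.15) with hypothetical parameter values obeying da > eb and let base R's eigen() compute the eigenvalues directly:

# Numerical check of Eq. (6.15): eigenvalues of J at (R, N) = (a/b, 0).
a <- 1; b <- 0.1; c <- 1; d <- 1; e <- 0.5       # hypothetical, da > eb
J <- matrix(c(-a, -a*c/b,
               0, (d*a - e*b)/b), nrow = 2, byrow = TRUE)
eigen(J)$values   # 9.5 and -1, i.e., lambda_2 = (da-eb)/b and lambda_1 = -a

One eigenvalue is positive, confirming that this steady state is a saddle point.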

For (R̄, N̄) = (e/d, (da − eb)/(dc)) one obtains

J = ( −be/d   −ce/d ; (da − eb)/c   0 ) = ( −bR̄   −cR̄ ; dN̄   0 ).    (6.16)

One finds

tr J = −bR̄ < 0 and det J = cdR̄N̄ > 0,    (6.17)

which tells us that the steady state is stable. Note that we never filled in numerical values for the parameters in this analysis.

Graphical Jacobian

Consider the vector field around the steady state of some system dx/dt = f(x, y) and dy/dt = g(x, y). Around the steady state (x̄, ȳ) in the phase space (x, y), the sign of dx/dt is given by the horizontal arrows, i.e., the horizontal component of the vector field. The sign of ∂_x f can therefore be determined by making a small step to the right, i.e., in the x direction, and reading the sign of dx/dt from the vector field. Similarly, a small step upwards gives the effect of y on dx/dt, i.e., gives ∂_y f, and the sign can be read from the vertical arrow of the vector field. Repeating this for ∂_x g and ∂_y g, while replacing x, y with R, N, one finds around the steady state (0, 0) in Fig. 6.1:

J = ( ∂_x f   ∂_y f ; ∂_x g   ∂_y g ) = ( α   0 ; 0   −β ),    (6.18)

where α and β are positive constants. Because det(J) = −αβ < 0 the steady state is a saddle point (see Fig. 5.1). For the steady state without predators one finds

J = ( −α   −β ; 0   γ ).    (6.19)

Because det(J) = −αγ < 0 the equilibrium is a saddle point. For the non-trivial steady state one finds

J = ( −α   −β ; γ   0 ),    (6.20)

and because tr(J) = −α < 0 and det(J) = βγ > 0 the equilibrium is stable. This graphical method is also explained in the book of Hastings (1997).

6.2 Exercises

Determine the stability of the non-trivial equilibrium point of this Lotka Volterra model after setting b = 0, i.e., after disallowing for a carrying capacity of the prey. A numerical sign-reading sketch to check your answer is given below.
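The following base-R sketch mimics the graphical Jacobian numerically: it reads the signs of the Jacobian entries by stepping a small distance h to the right of and above the non-trivial equilibrium. The parameter values are hypothetical, with b = 0 as required by the exercise.

# Signs of the Jacobian read from small steps away from the equilibrium.
a <- 1; b <- 0; c <- 1; d <- 1; e <- 1
f <- function(R, N) a*R - b*R^2 - c*R*N   # prey
g <- function(R, N) d*R*N - e*N           # predator
Rbar <- e/d; Nbar <- (d*a - e*b)/(d*c)    # non-trivial equilibrium (1, 1)
h <- 1e-4
signJ <- matrix(c(sign(f(Rbar + h, Nbar)), sign(f(Rbar, Nbar + h)),
                  sign(g(Rbar + h, Nbar)), sign(g(Rbar, Nbar + h))),
                nrow = 2, byrow = TRUE)
signJ   # ( 0 -1 ; 1 0 ): tr J = 0 and det J > 0

With b = 0 the local feedback on the prey disappears (tr J = −bR̄ = 0), so the linearization predicts neither damping nor growth of small disturbances: the equilibrium is neutrally stable.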

Chapter 7

Complex Numbers

7.1 Complex numbers

Consider a general quadratic equation

aλ² + bλ + c = 0,    (7.1)

with roots given by the abc-formula

λ_{1,2} = (−b ± √(b² − 4ac))/(2a) = (−b ± √D)/(2a),

where D = b² − 4ac. The value D is called the discriminant. What happens with this equation if D < 0? Does the equation still have roots in this case? You have probably learned that a quadratic equation cannot be solved if D < 0. It is indeed true that the equation has no real solutions in this case, because the square root of a negative number does not exist in any real sense. However, we shall see that even if the solutions are not real numbers, we can still perform calculations with them. In order to do this, so-called complex numbers have been invented, which allow for a solution of Eq. (7.1), even if D < 0. Let us define the basic complex number i as:

i = √−1, or equivalently, i² = −1.    (7.2)

To see how this works, consider the equation λ² = −3, which does not have a real solution. Given that i² = −1 we can rewrite this into

λ² = −1 · 3 = i² · 3, or λ_{1,2} = ±i√3.    (7.3)

Here i is the basic complex number, which is similar to 1 for real numbers. In general, the equation λ² = −a² has solutions λ_{1,2} = ±ai. As calculations with complex numbers may in the end deliver an outcome that is a real number, the definition i² = −1 can be very useful in many different real life applications.

A general complex number z can be written as z = α + iβ, where α is called the real part and iβ is called the imaginary part of the complex number z. Now we can solve Eq. (7.1) for the case D < 0. If D is negative, then −D must be positive, and we can write √D = √(−1 · −D) = i√(−D), and

λ_{1,2} = (−b ± i√(−D))/(2a).    (7.4)
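These rules are easy to try out, because R supports complex arithmetic natively. Below is a minimal sketch of the abc-formula that allows D < 0; abc_roots() is our own hypothetical helper, with arguments A, B, C corresponding to a, b, c in Eq. (7.1).

# Roots of A*lambda^2 + B*lambda + C = 0, also when the discriminant D < 0.
abc_roots <- function(A, B, C) {
  D <- as.complex(B^2 - 4*A*C)    # as.complex() so that sqrt() of D < 0 works
  (-B + c(1, -1) * sqrt(D)) / (2*A)
}
abc_roots(1, 2, 10)               # -1+3i and -1-3i, a complex conjugate pair

This example anticipates Eq. (7.5) in the next paragraph.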

Figure 7.1: Panel (a): complex numbers represented as points or vectors on a complex plane, with the real part on the horizontal axis, the imaginary part on the vertical axis, and a complex number z drawn as a point. This is called an Argand diagram. Panel (b): the Mandelbrot set created by the series z_i = z_{i−1}² + c, where z_0 = 0 and c is a point in the Argand diagram (defined by the horizontal and vertical axes of Panel (b)). The color indicates the size of z_n after a fixed number of iterations i = 1, 2, ..., n. This image was taken from Wikipedia (which also provides further explanation).

For example, solve the equation λ² + 2λ + 10 = 0:

λ_{1,2} = (−2 ± √(4 − 4 · 10))/2 = (−2 ± √−36)/2 = (−2 ± 6i)/2.    (7.5)

In other words, λ_1 = −1 + 3i and λ_2 = −1 − 3i. We see that the solutions of this equation form a complex pair with real part −1, and imaginary parts ±3. Complex numbers that have identical real parts, and imaginary parts with opposite signs, are called complex conjugates. The number z_2 = a − ib is called the complex conjugate of the number z_1 = a + ib. Roots of a quadratic equation with a negative discriminant (D < 0) are always complex conjugates of each other (see Eq. (7.4)).

Complex numbers can be plotted in a complex plane. This is a graph in which the horizontal axis is used for the real part, and the vertical axis is used for the imaginary part of the complex number (see Fig. 7.1a). Note that this is very similar to the depiction of a vector (x, y), when x and y are real valued numbers, on a real plane. In other words, you can think of a complex number z = α + iβ as a vector (α, β) on a complex plane. Indeed, it turns out that scaling, adding and multiplication of complex numbers follow the same rules as for vectors.

Adding two complex numbers is a simple matter of adding their real parts together, and then adding their imaginary parts together. For example, with z_1 = 3 + 10i and z_2 = −5 + 4i, z_1 + z_2 = (3 + 10i) + (−5 + 4i) = 3 − 5 + 10i + 4i = −2 + 14i. Note that this is the same as adding two expressions containing a variable (e.g., (3 + 10x) + (−5 + 4x)). Moreover, if you think of the complex numbers as vectors on a complex plane, addition works the same as it would for normal vectors: (3, 10) + (−5, 4) = (−2, 14).

As was already stated, scaling, adding and multiplication of complex numbers follow the same rules as defined for vectors. We only need to remember that i² = −1. Thus, multiplication by a real number (a scalar) results in the multiplication of both the real and imaginary parts by this number. For example, if z_1 = 3 + 10i then 10z_1 = 10(3 + 10i) = 30 + 100i. Multiplication of two complex numbers is the same as multiplying two expressions that contain a variable (e.g., (a + bx)(c + dx)). In the case of complex numbers, the real and imaginary part of the first number should both be multiplied by both the real and imaginary part of the second