# 7 Planar systems of linear ODE


Here I restrict my attention to a very special class of autonomous ODE: linear ODE with constant coefficients. This is arguably the only class of ODE for which an explicit solution can always be constructed. Linear systems considered as mathematical models of biological processes are of limited use; however, such models can still be used to describe the dynamics of a system during the stages when the interactions between the elements of the system can be disregarded. Moreover, the analysis of linear systems is a necessary step in the analysis of the local behavior of nonlinear systems (linearization of the system in a neighborhood of an equilibrium).

## 7.1 General theory

The linear system of first-order ODE with constant coefficients on the plane has the form

$$\dot{x}_1 = a_{11}x_1 + a_{12}x_2,\qquad \dot{x}_2 = a_{21}x_1 + a_{22}x_2,\tag{1}$$

or, in vector notation, where $x = (x_1, x_2)$ and $A = (a_{ij})$ is a $2\times 2$ matrix with real entries,

$$\dot{x} = Ax,\quad x\in\mathbf{R}^2,\tag{2}$$

$$A = \begin{bmatrix} a_{11} & a_{12}\\ a_{21} & a_{22} \end{bmatrix}.$$

In addition to (2), consider also the initial condition

$$x(0) = x_0.\tag{3}$$

To present the solution to (2)–(3), I first prove

**Proposition 1.** *The initial value problem (2)–(3) is equivalent to the integral equation*

$$x(t) = x_0 + \int_0^t Ax(\xi)\,d\xi.\tag{4}$$

*Proof.* Assume that $x(t)$ solves (4). Then, by direct inspection, $x(0) = x_0$. Moreover, since the right-hand side is given by an integral, $x\in C^{(1)}$; therefore we can take the derivative to find (2). The other way around: by assuming that $x(t)$ solves (2)–(3), integrating (2), and evaluating the constant of integration, we recover (4). □

Now we can use (4) to approximate the solution to (2)–(3) by the method of successive iterations. The first approximation is of course the initial condition $x_0$:

$$x_1(t) = x_0 + \int_0^t Ax_0\,d\xi = x_0 + Ax_0 t.$$

*Math 484/684: Mathematical modeling of biological processes, by Artem Novozhilov. Spring.*

Next,

$$x_2(t) = x_0 + \int_0^t Ax_1(\xi)\,d\xi = x_0 + Ax_0 t + \frac{A^2x_0t^2}{2}.$$

Continuing the process,

$$x_n(t) = \left(I + \frac{At}{1!} + \frac{A^2t^2}{2!} + \dots + \frac{A^nt^n}{n!}\right)x_0.$$

Here $I$ is the identity matrix. Please note that what is inside the parentheses in the last formula is actually a matrix. However, the expression is so suggestive, given that you remember the Taylor series for the ordinary exponential function,

$$\exp(t) = e^t = 1 + \frac{t}{1!} + \frac{t^2}{2!} + \dots + \frac{t^n}{n!} + \dots,$$

that it is impossible to resist the temptation to make

**Definition 2.** The matrix exponent of the matrix $A$ is defined as the infinite series

$$\exp(A) = e^A = I + \frac{A}{1!} + \frac{A^2}{2!} + \dots + \frac{A^n}{n!} + \dots\tag{5}$$

This definition suggests that the solution to the integral equation (4), and, therefore, to the IVP (2)–(3), is of the form

$$x(t) = e^{At}x_0;$$

note that here we have to write $x_0$ on the right to make sure all the operations are well defined. Before continuing the analysis of the linear system and actually proving that the solution is indeed given by the presented formula, we have to make sure that the given definition makes sense, i.e., that all the usual series (there are four of them if matrix $A$ is $2\times2$) converge.

**Proposition 3.** *Series (5) converges absolutely.*

*Proof.* Let $|a_{ij}|\le a$. For the product $AA = A^2$, the elements of this product are bounded by $2a^2$. Similarly, for $A^k$ the bound is $2^{k-1}a^k = 2^ka^k/2$. Since

$$\sum_{k=0}^{\infty}\frac{2^ka^k}{2}\cdot\frac{1}{k!} = \frac{1}{2}e^{2a},$$

the series in (5) converges absolutely to the matrix denoted $e^A$. □

Now it is quite straightforward to prove that $\exp(At)$ solves (2).

**Proposition 4.**

$$\frac{d}{dt}e^{At} = Ae^{At}.$$

*Proof.* Since the series converges absolutely, we are allowed to differentiate it termwise:

$$\frac{d}{dt}\sum_{k=0}^{\infty}\frac{A^kt^k}{k!} = \sum_{k=0}^{\infty}\frac{d}{dt}\frac{A^kt^k}{k!} = \sum_{k=1}^{\infty}\frac{A^kt^{k-1}}{(k-1)!} = Ae^{At}.\ \square$$
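Definition 2 is easy to explore numerically. The sketch below (my addition, not from the notes; the test matrix is arbitrary) truncates series (5) and compares the result with SciPy's built-in matrix exponential:

```python
import numpy as np
from scipy.linalg import expm

def expm_series(A, n_terms=30):
    """Approximate e^A by the truncated series I + A/1! + A^2/2! + ... of Definition 2."""
    result = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for k in range(1, n_terms):
        term = term @ A / k        # term now holds A^k / k!
        result = result + term
    return result

A = np.array([[1.0, 2.0], [0.5, -1.0]])        # arbitrary 2x2 test matrix
print(np.allclose(expm_series(A), expm(A)))    # the truncated series matches expm
```

For small matrices the factorial in the denominator makes the series converge very quickly, which is why a modest number of terms suffices here.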

The last proposition indicates that the matrix exponent has some properties similar to the usual exponent. Here is a good example showing why one has to be careful when dealing with the matrix exponent.

**Example 5.** Consider the two matrices

$$A = \begin{bmatrix}0 & 1\\ 0 & 0\end{bmatrix},\qquad B = \begin{bmatrix}0 & 0\\ -1 & 0\end{bmatrix}.$$

I claim that

$$e^{A+B}\ne e^Ae^B.$$

Let me prove it. I have $A^2 = 0$, therefore

$$e^A = I + A = \begin{bmatrix}1 & 1\\ 0 & 1\end{bmatrix}.$$

Similarly, $B^2 = 0$ and

$$e^B = I + B = \begin{bmatrix}1 & 0\\ -1 & 1\end{bmatrix}.$$

Therefore,

$$e^Ae^B = \begin{bmatrix}1 & 1\\ 0 & 1\end{bmatrix}\begin{bmatrix}1 & 0\\ -1 & 1\end{bmatrix} = \begin{bmatrix}0 & 1\\ -1 & 1\end{bmatrix}.$$

Now

$$C = A+B = \begin{bmatrix}0 & 1\\ -1 & 0\end{bmatrix}.$$

We have

$$C^2 = \begin{bmatrix}-1 & 0\\ 0 & -1\end{bmatrix} = -I,\qquad C^3 = -C,\qquad C^4 = I.$$

Therefore,

$$e^C = \begin{bmatrix}1-\frac{1}{2!}+\frac{1}{4!}-\dots & 1-\frac{1}{3!}+\frac{1}{5!}-\dots\\[2pt] -\left(1-\frac{1}{3!}+\frac{1}{5!}-\dots\right) & 1-\frac{1}{2!}+\frac{1}{4!}-\dots\end{bmatrix} = \begin{bmatrix}\cos 1 & \sin 1\\ -\sin 1 & \cos 1\end{bmatrix},$$

which proves that $e^{A+B}\ne e^Ae^B$. In the last expression I used

$$\cos t = \sum_{k=0}^{\infty}(-1)^k\frac{t^{2k}}{(2k)!},\qquad \sin t = \sum_{k=1}^{\infty}(-1)^{k-1}\frac{t^{2k-1}}{(2k-1)!}.$$

**Proposition 6.** *If $[A,B] = AB-BA = 0$, i.e., if matrices $A$ and $B$ commute, then*

$$e^{(A+B)t} = e^{At}e^{Bt}.$$
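Example 5 can be checked numerically. A minimal sketch (my addition; it uses `scipy.linalg.expm` and nilpotent matrices $A$, $B$ as in the example):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0, 0.0], [-1.0, 0.0]])

lhs = expm(A + B)          # the rotation matrix [[cos 1, sin 1], [-sin 1, cos 1]]
rhs = expm(A) @ expm(B)    # equals [[0, 1], [-1, 1]]

print(np.allclose(lhs, rhs))   # False: A and B do not commute
```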

*Proof.* For matrices that commute the binomial theorem holds:

$$(A+B)^n = \sum_{k=0}^{n}\binom{n}{k}A^{n-k}B^k = n!\sum_{i+j=n}\frac{A^i}{i!}\cdot\frac{B^j}{j!}.$$

Since, by the Cauchy product,

$$\left(\sum_{i=0}^{\infty}\frac{A^i}{i!}\right)\left(\sum_{j=0}^{\infty}\frac{B^j}{j!}\right) = \sum_{n=0}^{\infty}\sum_{i+j=n}\frac{A^i}{i!}\cdot\frac{B^j}{j!},$$

one has

$$e^Ae^B = \sum_{n=0}^{\infty}\sum_{i+j=n}\frac{A^i}{i!}\cdot\frac{B^j}{j!} = \sum_{n=0}^{\infty}\frac{(A+B)^n}{n!} = e^{A+B}.$$

We can do these formal manipulations since all the series in question converge absolutely. □

As an important corollary of the last proposition we have

**Corollary 7.** For the matrix exponent

$$e^{A(t_1+t_2)} = e^{At_1}e^{At_2},\qquad e^{At}e^{-At} = I.$$

*Proof.* For the first, note that $At_1$ and $At_2$ commute. For the second, put $t_1 = t$ and $t_2 = -t$. □

Now we can actually prove that the solution to (2)–(3) exists, is unique, and is defined for all $-\infty<t<\infty$.

**Theorem 8.** *The solution to the IVP (2)–(3) is unique and given by*

$$x(t;x_0) = e^{At}x_0,\quad -\infty<t<\infty.\tag{6}$$

*Proof.* First, due to Proposition 4,

$$\frac{d}{dt}e^{At}x_0 = Ae^{At}x_0,$$

and also $e^{A\cdot0}x_0 = Ix_0 = x_0$, which proves that (6) is a solution. To prove that it is unique, consider any solution $x(t)$ of the IVP and put $y(t) = e^{-At}x(t)$. We have

$$\dot{y}(t) = -Ae^{-At}x(t) + e^{-At}\dot{x}(t) = -Ae^{-At}x(t) + e^{-At}Ax(t) = 0.$$
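Both identities of Corollary 7 are also easy to verify numerically. The following sketch is my addition, with an arbitrary test matrix and arbitrary values of $t_1$, $t_2$:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[1.0, 3.0], [1.0, -1.0]])   # arbitrary 2x2 test matrix
t1, t2 = 0.3, 1.2

# e^{A(t1+t2)} = e^{A t1} e^{A t2}, since A*t1 and A*t2 commute
print(np.allclose(expm(A * (t1 + t2)), expm(A * t1) @ expm(A * t2)))
# e^{At} e^{-At} = I
print(np.allclose(expm(A * t1) @ expm(-A * t1), np.eye(2)))
```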

In the expression above, I used two (non-obvious) facts: first, that the usual product rule is still true for the derivative of the product of two matrices; and second, that $e^{-At}A = Ae^{-At}$, i.e., that matrices $A$ and $e^{-At}$ commute. You should fill in the details of the proofs of these statements. Hence we have that $y(t) = C$; therefore, by setting $t=0$, we find $y(t)\equiv y(0) = x_0$, which implies that any solution is given by $x(t) = e^{At}x_0$. Since the last formula is defined for any $t$, the solution is defined for $-\infty<t<\infty$. □

## 7.2 Three main matrices and their phase portraits

Consider the solutions to (2) and their phase portraits for three matrices:

$$A_1 = \begin{bmatrix}\lambda_1 & 0\\ 0 & \lambda_2\end{bmatrix},\qquad A_2 = \begin{bmatrix}\lambda & 1\\ 0 & \lambda\end{bmatrix},\qquad A_3 = \begin{bmatrix}\alpha & \beta\\ -\beta & \alpha\end{bmatrix},$$

where $\lambda_1,\lambda_2,\lambda,\alpha,\beta$ are real numbers.

**$A_1$.** Using the definition of the matrix exponent, we have

$$e^{A_1t} = \begin{bmatrix}e^{\lambda_1t} & 0\\ 0 & e^{\lambda_2t}\end{bmatrix},$$

therefore the general solution to (2) is given by (this can actually be obtained directly, by noting that the equations in the system are decoupled)

$$x(t;x_0) = \begin{bmatrix}e^{\lambda_1t} & 0\\ 0 & e^{\lambda_2t}\end{bmatrix}x_0 = \begin{bmatrix}e^{\lambda_1t}x_0^1\\ e^{\lambda_2t}x_0^2\end{bmatrix}.$$

If $\lambda_1\ne0$ and $\lambda_2\ne0$ then we have only one isolated equilibrium $\hat{x}=(0,0)$; the phase curves can be found as solutions to the first-order ODE

$$\frac{dx_2}{dx_1} = \frac{\lambda_2x_2}{\lambda_1x_1},$$

which is a separable equation, and the directions on the orbits are easily determined by the signs of $\lambda_1$ and $\lambda_2$ (i.e., if $\lambda_1<0$ then $x_1(t)\to0$ as $t\to\infty$).

Consider a specific example with $0<\lambda_1<\lambda_2$. In this case all the orbits are parabolas, and the direction is away from the origin because both lambdas are positive. The only tricky part here is to determine which axis the orbits approach as $t\to-\infty$; this can be done by looking at the explicit equations for the orbits (you should do it) or by noting that when $t\to-\infty$ we have $e^{\lambda_1t}\gg e^{\lambda_2t}$, and therefore the $x_1$ component dominates in a small enough neighborhood of $(0,0)$ (see the figure). The obtained phase portrait is called a topological node ("topological" is often dropped), and since the arrows point away from the origin, it is unstable (we'll come back to the discussion of stability).

Figure 1: Unstable node in system (2) with matrix $A_1$. Vectors $v_1$ and $v_2$ indicate the directions of the $x_1$-axis and $x_2$-axis respectively.

As another example consider the case $\lambda_2<0<\lambda_1$. In this case (prove it) the orbits are actually hyperbolas in the $(x_1,x_2)$ plane, and the directions on them can be identified by noting that on the $x_1$-axis the movement is away from the origin, and on the $x_2$-axis it is toward the origin. Such a phase portrait is called a saddle. All the orbits leave a neighborhood of the origin for both $t\to\pm\infty$ except for five special orbits: first, this is of course the origin itself; second, two orbits on the $x_1$-axis that actually approach the origin as $t\to-\infty$; and two orbits on the $x_2$-axis, which approach the origin as $t\to\infty$. The orbits on the $x_1$-axis form the unstable manifold of $\hat{x}=(0,0)$, and the orbits on the $x_2$-axis form the stable manifold of $\hat{x}$. These orbits are also called the saddle's separatrices (singular: separatrix).

Figure 2: Saddle in system (2) with matrix $A_1$. Vectors $v_1$ and $v_2$ indicate the directions of the $x_1$-axis and $x_2$-axis respectively.

There are several other cases which need to be analyzed; let me list them all:

- $0<\lambda_1<\lambda_2$: unstable node (shown in the figure)
- $0<\lambda_2<\lambda_1$: unstable node
- $0<\lambda_1=\lambda_2$: unstable node
- $\lambda_1<\lambda_2<0$: stable node
- $\lambda_2<\lambda_1<0$: stable node
- $\lambda_1=\lambda_2<0$: stable node
- $\lambda_1<0<\lambda_2$: saddle
- $\lambda_2<0<\lambda_1$: saddle (shown in the figure)

You should sketch the phase portraits for each of these cases. Also keep in mind that for now I exclude the cases when one or both $\lambda$'s are zero.

**$A_2$.** To find $e^{A_2t}$ I will use the fact that

$$A_2 = \begin{bmatrix}\lambda & 0\\ 0 & \lambda\end{bmatrix} + \begin{bmatrix}0 & 1\\ 0 & 0\end{bmatrix},$$

and these two matrices commute. The matrix exponent for the first matrix was found in the previous point, and for the second it is readily seen that

$$\exp\left(\begin{bmatrix}0 & 1\\ 0 & 0\end{bmatrix}t\right) = \begin{bmatrix}1 & t\\ 0 & 1\end{bmatrix}.$$

Therefore,

$$e^{A_2t} = e^{\lambda t}\begin{bmatrix}1 & t\\ 0 & 1\end{bmatrix}.$$

Assume that $\lambda<0$ (the cases $\lambda=0$ and $\lambda>0$ are left as exercises). First, we see that $x(t;x_0)\to0$ as $t\to\infty$; moreover,

$$\frac{dx_2}{dx_1}\to0$$

as $t\to\infty$, therefore the orbits should be tangent to the $x_1$-axis. The figure is given below; this phase portrait is sometimes called an improper stable node.

**$A_3$.** Here I will use the two commuting matrices in the decomposition

$$A_3 = \begin{bmatrix}\alpha & 0\\ 0 & \alpha\end{bmatrix} + \begin{bmatrix}0 & \beta\\ -\beta & 0\end{bmatrix}$$

to find

$$e^{A_3t} = e^{\alpha t}\begin{bmatrix}\cos\beta t & \sin\beta t\\ -\sin\beta t & \cos\beta t\end{bmatrix}.$$

Hence the flow of the system (2) is given by

$$x(t;x_0) = e^{A_3t}x_0 = e^{\alpha t}\begin{bmatrix}\cos\beta t & \sin\beta t\\ -\sin\beta t & \cos\beta t\end{bmatrix}x_0.$$
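The closed forms for $e^{A_2t}$ and $e^{A_3t}$ can be confirmed numerically (my addition; the parameter values are arbitrary, and I use the convention $A_3 = \begin{bmatrix}\alpha & \beta\\ -\beta & \alpha\end{bmatrix}$):

```python
import numpy as np
from scipy.linalg import expm

lam, alpha, beta, t = -0.5, -0.2, 1.5, 2.0   # arbitrary sample parameter values

A2 = np.array([[lam, 1.0], [0.0, lam]])
A3 = np.array([[alpha, beta], [-beta, alpha]])

# closed form e^{A2 t} = e^{lam t} [[1, t], [0, 1]]
E2 = np.exp(lam * t) * np.array([[1.0, t], [0.0, 1.0]])
# closed form e^{A3 t} = e^{alpha t} [[cos(beta t), sin(beta t)], [-sin(beta t), cos(beta t)]]
c, s = np.cos(beta * t), np.sin(beta * t)
E3 = np.exp(alpha * t) * np.array([[c, s], [-s, c]])

print(np.allclose(expm(A2 * t), E2), np.allclose(expm(A3 * t), E3))
```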

Figure 3: Improper node in system (2) with matrix $A_2$. Vector $v_1$ indicates the direction of the $x_1$-axis.

To determine the phase portrait, observe that if $\alpha<0$ then all the solutions will approach the origin, and if $\alpha>0$ they will go away from the origin. We also have components of $e^{A_3t}$ which are periodic functions of $t$, which finally gives us the whole picture: if $\alpha<0$ and $\beta>0$ then the orbits are spirals approaching the origin clockwise; if $\alpha>0$ and $\beta>0$ then the orbits are spirals unwinding from the origin clockwise; and if $\alpha=0$ then the orbits are closed curves. Here is an example for $\alpha<0$ and $\beta<0$; this phase portrait is called a stable focus (or spiral).

Figure 4: Stable focus in system (2) with matrix $A_3$.

If we take $\alpha=0$ and $\beta<0$ then the phase portrait is composed of closed curves and is called a center (recall the Volterra–Lotka model). See the figure. To determine the direction on the orbits, we can use the original vector field. For example, in the case $\alpha=0$, $\beta<0$ we have that at any point with $x_1=0$ and $x_2>0$ the derivative of $x_1$ is negative, and therefore the direction is counterclockwise.

Figure 5: Center in system (2) with matrix $A_3$.

## 7.3 A little bit of linear algebra

So why did I actually spend so much time studying the three quite simple particular matrices $A_1$, $A_2$, $A_3$? It is because the following theorem is true:

**Theorem 9.** *Let $A$ be a $2\times2$ real matrix. Then there exists a real invertible $2\times2$ matrix $P$ such that $P^{-1}AP = J$, where the matrix $J$ is one of the following three matrices in Jordan normal form:*

$$\text{(a)}\ \begin{bmatrix}\lambda_1 & 0\\ 0 & \lambda_2\end{bmatrix},\qquad \text{(b)}\ \begin{bmatrix}\lambda & 1\\ 0 & \lambda\end{bmatrix},\qquad \text{(c)}\ \begin{bmatrix}\alpha & \beta\\ -\beta & \alpha\end{bmatrix}.$$

Before turning to the proof of this theorem, let us discuss how it can be used for the analysis of the general system (2): $\dot{x} = Ax$, $x\in\mathbf{R}^2$. Consider a linear change of variables $x = Py$ for a new unknown vector $y$. We have

$$P\dot{y} = APy,$$

or

$$\dot{y} = P^{-1}APy = Jy,$$

therefore,

$$y(t;y_0) = e^{Jt}y_0,$$

where we already know how to calculate $e^{Jt}$. Returning to the original variable, we find

$$x(t;x_0) = Pe^{Jt}P^{-1}x_0,$$

the full solution to the original problem. Moreover, we also showed that

$$e^{At} = Pe^{Jt}P^{-1},$$

which is often used to calculate the matrix exponent. Finally, the phase portraits for $x$ will be similar to those for $y$, since a linear invertible transformation amounts to scaling, rotation, and reflection, as we learn in a course of linear algebra.

The only question is how to find this linear change $P$. For this, let us recall

**Definition 10.** A nonzero vector $v$ is called an eigenvector of matrix $A$ if

$$Av = \lambda v,$$

where $\lambda$ is called the corresponding eigenvalue.

Generally eigenvalues and eigenvectors can be complex. To find the eigenvalues, we need to find the roots of the characteristic polynomial

$$P(\lambda) = \det(A-\lambda I),$$

which is of the second degree if $A$ is a $2\times2$ matrix. Once the eigenvalues are found, the corresponding eigenvectors can be found as solutions to the homogeneous system of linear algebraic equations

$$(A-\lambda I)v = 0.$$

Remember that eigenvectors are not unique and are determined up to a multiplicative constant.

Now we are in a position to prove Theorem 9. The proof also gives a way to find the transformation $P$.

*Proof of Theorem 9.* Since the characteristic polynomial has degree two, it may have either two distinct real roots, two complex conjugate roots, or one real root of multiplicity two. Assume first that we have either two distinct real roots $\lambda_1\in\mathbf{R}$, $\lambda_2\in\mathbf{R}$ with the corresponding eigenvectors $v_1\in\mathbf{R}^2$ and $v_2\in\mathbf{R}^2$, or a real root $\lambda\in\mathbf{R}$ of multiplicity two which has two linearly independent eigenvectors $v_1\in\mathbf{R}^2$ and $v_2\in\mathbf{R}^2$. Now consider the matrix $P$ whose columns are exactly $v_1$ and $v_2$; I will use the notation $P = (v_1\ v_2)$. It is known that eigenvectors corresponding to distinct eigenvalues are linearly independent, hence $P$ is invertible (can you prove this fact?). Now just note that

$$AP = (Av_1\ Av_2) = (\lambda_1v_1\ \lambda_2v_2) = PJ,$$

which proves the theorem for case (a).

For case (b), assume that there is one real root of the characteristic polynomial with eigenvector $v_1$. Then there is another vector $v_2$, which satisfies

$$(A-\lambda I)v_2 = v_1$$

and which is linearly independent of $v_1$ (can you prove it?). Now take $P = (v_1\ v_2)$, and

$$AP = (\lambda v_1\ \ v_1+\lambda v_2) = PJ,$$

where $J$ is as in (b).

Finally, in case (c) we have $\lambda_{1,2} = \alpha\pm i\beta$ as eigenvalues and the corresponding eigenvectors $v_1\pm iv_2$, where $v_1, v_2$ are real nonzero vectors. Let me take $P = (v_1\ v_2)$. Since

$$A(v_1+iv_2) = (\alpha+i\beta)(v_1+iv_2),$$

we have

$$Av_1 = \alpha v_1-\beta v_2,\qquad Av_2 = \alpha v_2+\beta v_1.$$

Now

$$AP = (\alpha v_1-\beta v_2\ \ \beta v_1+\alpha v_2) = PJ,$$

where $J$ is as in (c). The only missing point is to prove that $v_1$ and $v_2$ are linearly independent, which is left as an exercise. □

**Example 11.** Consider system (2) with

$$A = \begin{bmatrix}1 & 3\\ 1 & -1\end{bmatrix}.$$

We find that the eigenvalues and eigenvectors are

$$\lambda_1 = -2,\ v_1 = (-1,1),\qquad \lambda_2 = 2,\ v_2 = (3,1).$$

Therefore, the transformation $P$ here is

$$P = \begin{bmatrix}-1 & 3\\ 1 & 1\end{bmatrix},$$

and

$$J = P^{-1}AP = \begin{bmatrix}-2 & 0\\ 0 & 2\end{bmatrix}.$$

The solution to the system $\dot{y} = Jy$, where $y = P^{-1}x$, is straightforward and given by

$$y(t;y_0) = \begin{bmatrix}e^{-2t} & 0\\ 0 & e^{2t}\end{bmatrix}y_0,$$

and its phase portrait has the structure of a saddle (see the figure). To see how the phase portrait actually looks in the $x$ coordinates, consider the solution for $x$, which takes the form

$$x = Py = (v_1e^{\lambda_1t}\ \ v_2e^{\lambda_2t})y_0 = C_1v_1e^{\lambda_1t} + C_2v_2e^{\lambda_2t},$$

where I use $C_1$, $C_2$ for arbitrary constants. Note that $x$ changes along the straight line with direction $v_1$ if $C_2=0$, and along the straight line with direction $v_2$ when $C_1=0$. The directions of the flow on these lines coincide with the directions of the flow on the $y$-axes for the system with the matrix in Jordan normal form (see the figure). This is how we can see the phase portrait for a linear system of two autonomous ODE of the first order.

Not taking into account the cases when one or both eigenvalues are zero, we have therefore seen all possible phase portraits a linear system (2) can have. Now it is time to discuss stability.
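The diagonalization in Example 11 can be checked with NumPy (my addition; the matrix $A$ below is the one whose eigenpairs are stated in the example):

```python
import numpy as np

A = np.array([[1.0, 3.0], [1.0, -1.0]])   # eigenpairs: (-2, (-1, 1)) and (2, (3, 1))
P = np.array([[-1.0, 3.0], [1.0, 1.0]])   # columns of P are the eigenvectors v1, v2

J = np.linalg.inv(P) @ A @ P              # Jordan normal form: diag(-2, 2)
print(np.allclose(J, np.diag([-2.0, 2.0])))
```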

Figure 6: Saddle point after the linear transformation (left), and the original phase portrait (right). We have $x = Py$.

## 7.4 Stability of the linear system (2)

In what follows I will assume that $\det A\ne0$; this means that there is only one isolated equilibrium of system (2), which is the origin: $\hat{x}=(0,0)$. To define stability of this equilibrium, and therefore stability of the linear system itself, I need a notion of neighborhood and distance in $\mathbf{R}^2$. I will use the following generalization of the absolute value to vectors $x=(x_1,x_2)\in\mathbf{R}^2$:

$$|x| = \left((x_1)^2+(x_2)^2\right)^{1/2}.$$

Then the distance between two vectors $u\in\mathbf{R}^2$ and $v\in\mathbf{R}^2$ is simply $|u-v|$. Using this convenient notation, I will now verbatim repeat my definition of stability of equilibria of the scalar autonomous ODE. To wit,

**Definition 12.** An equilibrium $\hat{x}$ is called Lyapunov stable if for any $\epsilon>0$ there exists a $\delta(\epsilon)>0$ such that for any initial condition with $|\hat{x}-x_0|<\delta$, the flow of (2) satisfies

$$|\hat{x}-x(t;x_0)|<\epsilon$$

for any $t>0$. If, additionally,

$$|\hat{x}-x(t;x_0)|\to0\quad\text{as }t\to\infty,$$

then $\hat{x}$ is called asymptotically stable. If there are initial conditions $x_0$ arbitrarily close to $\hat{x}$ whose orbits $\gamma(x_0)$ leave a fixed neighborhood of $\hat{x}$, then this point is called unstable.

The analysis of linear systems is simple since we actually have the flow of the system: $x(t;x_0) = e^{At}x_0$. The case-by-case analysis from this lecture allows us to formulate the following big theorem:

**Theorem 13.** *Let $\det A\ne0$. Then the isolated equilibrium point $\hat{x}=(0,0)$ of planar system (2) is asymptotically stable if and only if the eigenvalues of $A$ satisfy $\operatorname{Re}\lambda_{1,2}<0$. If $\operatorname{Re}\lambda_{1,2}=0$ then the origin is Lyapunov stable, but not asymptotically stable (center). If for at least one eigenvalue $\operatorname{Re}\lambda_i>0$ then the origin is unstable.*

Therefore we can have asymptotically stable nodes, improper nodes, and foci; a Lyapunov stable center; and unstable nodes, improper nodes, foci, and saddles (note that the latter are always unstable). We can summarize all this information in one parametric portrait of linear system (2). For this it is useful to write the characteristic polynomial of $A$ as

$$P(\lambda) = \lambda^2-(a_{11}+a_{22})\lambda+(a_{11}a_{22}-a_{12}a_{21}) = \lambda^2-\operatorname{tr}A\,\lambda+\det A,$$

where I use the notation $\operatorname{tr}A = a_{11}+a_{22}$ to denote the trace of $A$. We have

$$\lambda_{1,2} = \frac{\operatorname{tr}A\pm\sqrt{(\operatorname{tr}A)^2-4\det A}}{2},$$

therefore the condition for asymptotic stability becomes simply

$$\operatorname{tr}A<0,\qquad \det A>0.$$

Using the trace and the determinant as new parameters, we can present all possible linear systems as in the following figure.

Figure 7: The type of the linear system depending on the values of $\operatorname{tr}A$ and $\det A$. The parabola $\det A = (\operatorname{tr}A)^2/4$ separates nodes from foci: for $\det A>0$ one has stable nodes and stable foci ($\operatorname{tr}A<0$) and unstable foci and unstable nodes ($\operatorname{tr}A>0$); for $\det A<0$ one has saddles. The centers are situated where $\det A>0$ and $\operatorname{tr}A=0$.
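The trace–determinant classification can be packaged as a small helper (a sketch of my own; the function name and the handling of borderline cases are my choices, and improper nodes are not distinguished from ordinary nodes here):

```python
import numpy as np

def classify(A, tol=1e-12):
    """Classify the origin for x' = Ax, with A a real 2x2 matrix and det A != 0,
    following the trace-determinant diagram."""
    tr, det = np.trace(A), np.linalg.det(A)
    if det < 0:
        return "saddle"
    disc = tr**2 - 4 * det                      # discriminant of the characteristic polynomial
    if abs(tr) < tol:
        return "center"                         # det > 0, tr = 0: purely imaginary eigenvalues
    kind = "node" if disc >= 0 else "focus"
    stability = "stable" if tr < 0 else "unstable"
    return f"{stability} {kind}"

print(classify(np.array([[1.0, 3.0], [1.0, -1.0]])))    # det = -4 < 0: saddle
print(classify(np.array([[-1.0, 2.0], [-2.0, -1.0]])))  # tr < 0, disc < 0: stable focus
```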

## 7.5 Bifurcations in the linear systems

I have come to the main point of this lecture. We have four parameters in our linear systems: the four entries of matrix $A$. We can consider variations of these parameters, and the structure of the phase portrait of the linear system will change. For example, if we cross the boundary $\det A=0$ in the last figure for negative $\operatorname{tr}A$, then the saddle becomes a stable node. Is this a bifurcation in the system? Or, when crossing the curve $\det A = \frac14(\operatorname{tr}A)^2$ for positive values of $\operatorname{tr}A$, the unstable focus turns into an unstable node. Is this change enough to call it a bifurcation?

Recall that for first-order equations the definition of bifurcation was based on the notion of topological equivalence, which identified equations with the same orbit structure as topologically equivalent. But now we can have much richer orbit structures, because we are no longer confined to the phase line: now we are dealing with the phase plane. This is a quite complicated subject; therefore I will mostly state results, proofs can be found elsewhere.

First, there is a notion of topological equivalence which includes the one we already discussed as a particular case.

**Definition 14.** Two planar linear systems $\dot{x}=Ax$ and $\dot{x}=Bx$ are called topologically equivalent if there exists a homeomorphism $h\colon\mathbf{R}^2\to\mathbf{R}^2$ of the plane (that is, $h$ is continuous with continuous inverse) that maps the orbits of the first system onto the orbits of the second system, preserving the direction of time.

It can be shown that topological equivalence is indeed an equivalence relation, i.e., it divides all possible planar linear systems into distinct non-intersecting classes. The following theorem gives the topological classification of linear planar systems. It is also convenient to have

**Definition 15.** An equilibrium $\hat{x}$ of $\dot{x}=Ax$ is called hyperbolic if $\operatorname{Re}\lambda_{1,2}\ne0$, where $\lambda_{1,2}$ are the eigenvalues of $A$. Matrix $A$, as well as the system $\dot{x}=Ax$ itself, is also called hyperbolic in this case.

**Theorem 16.** *Two linear systems with hyperbolic equilibria are topologically equivalent if and only if the number of eigenvalues with positive real part (and hence the number of eigenvalues with negative real part) is the same for both systems.*

We usually have a large zoo of equilibria: nodes, saddles, foci; but from the topological point of view there are only three non-equivalent classes of hyperbolic equilibria: two eigenvalues with negative real part (a hyperbolic sink), one negative and one positive (a saddle), and two with positive real part (a hyperbolic source).

**Definition 17.** A bifurcation is a change of the topological type of the system under parameter variation.

Now we state the result that asserts that it is impossible to have bifurcations in a linear system when the equilibrium is hyperbolic.

**Proposition 18.** *Let $A$ be hyperbolic. Then there is a neighborhood $U$ of $A$ in $\mathbf{R}^4$ such that for any $B\in U$ the system $\dot{x}=Bx$ is topologically equivalent to $\dot{x}=Ax$.*

*Proof.* The eigenvalues, as the roots of the characteristic polynomial, depend continuously on the entries of $A$. Therefore, for a hyperbolic equilibrium, a small enough perturbation of the matrix leads to eigenvalues whose real parts have the same signs, which implies, by Theorem 16, that the new system is topologically equivalent to the original one. □

Introducing another term, I can say that a property of $2\times2$ matrices is generic if the set of matrices possessing this property is dense and open in $\mathbf{R}^4$. It can be proved that hyperbolicity is a generic property. I will not go into these details, but rephrase the previous statement as follows: almost all $2\times2$ matrices are hyperbolic.

This brings up an important question: do we really need to study non-hyperbolic matrices and non-hyperbolic equilibria of linear systems? The point is that in real applications the parameters of the matrix are the data that we collect in our experiments and observations. These data always have some noise in them; we never know them exactly. Therefore, only hyperbolic systems would seem to be observed in real life. However, this is not the case. Very often, systems under investigation possess certain symmetries (such as conservation of energy). Another situation, which is more important for our course, is that all the systems we study contain parameters whose values we do not know. It is therefore unavoidable that under a continuous parameter change the system matrix will become non-hyperbolic, and at some point we will need to cross the boundary between topologically non-equivalent behaviors.

Taking into account the Jordan normal forms for $2\times2$ matrices, we can consider the following three parameter-dependent matrices:

$$\text{(a)}\ \begin{bmatrix}-1 & 0\\ 0 & \mu\end{bmatrix},\qquad \text{(b)}\ \begin{bmatrix}\mu_1 & 1\\ \mu_2 & \mu_1\end{bmatrix},\qquad \text{(c)}\ \begin{bmatrix}\mu & 1\\ -1 & \mu\end{bmatrix}.$$

These three matrices become non-hyperbolic when $\mu=0$ or $\mu_1=\mu_2=0$. For example, in case (a), when we perturb $\mu$ around zero, a matrix with $\operatorname{tr}A<0$ and $\det A<0$ becomes a matrix with $\operatorname{tr}A<0$ and $\det A>0$, i.e., a saddle turns into a topological sink (or vice versa). In the third case we have the change from $\operatorname{tr}A<0$ and $\det A>0$ to $\operatorname{tr}A>0$ and $\det A>0$, i.e., a topological sink becomes a topological source. In case (b) the situation is more involved since we have two parameters, and I will not discuss it here. To conclude, we say that for cases (a) and (c) it is enough to have one parameter, and a bifurcation of codimension one occurs, whereas in case (b) we face a bifurcation of codimension two.
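For case (a) the passage through the bifurcation can be seen directly in the trace and determinant (a small sketch of my own; the sample values of $\mu$ are arbitrary):

```python
import numpy as np

# family (a): A(mu) = [[-1, 0], [0, mu]] loses hyperbolicity at mu = 0
for mu in (-0.1, 0.0, 0.1):
    A = np.array([[-1.0, 0.0], [0.0, mu]])
    print(f"mu = {mu:5.2f}: tr A = {np.trace(A):5.2f}, det A = {np.linalg.det(A):5.2f}")
# det A = -mu changes sign at mu = 0: a sink (mu < 0) becomes a saddle (mu > 0)
```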


### Linear Algebra Primer

Linear Algebra Primer David Doria daviddoria@gmail.com Wednesday 3 rd December, 2008 Contents Why is it called Linear Algebra? 4 2 What is a Matrix? 4 2. Input and Output.....................................

### 4. Linear Systems. 4A. Review of Matrices A-1. Verify that =

4. Linear Systems 4A. Review of Matrices 0 2 0 0 0 4A-. Verify that 0 2 =. 2 3 2 6 0 2 2 0 4A-2. If A = and B =, show that AB BA. 3 2 4A-3. Calculate A 2 2 if A =, and check your answer by showing that

### Solutions Chapter 9. u. (c) u(t) = 1 e t + c 2 e 3 t! c 1 e t 3c 2 e 3 t. (v) (a) u(t) = c 1 e t cos 3t + c 2 e t sin 3t. (b) du

Solutions hapter 9 dode 9 asic Solution Techniques 9 hoose one or more of the following differential equations, and then: (a) Solve the equation directly (b) Write down its phase plane equivalent, and

### Chapter 8. Rigid transformations

Chapter 8. Rigid transformations We are about to start drawing figures in 3D. There are no built-in routines for this purpose in PostScript, and we shall have to start more or less from scratch in extending

### Differential equations

Differential equations Math 7 Spring Practice problems for April Exam Problem Use the method of elimination to find the x-component of the general solution of x y = 6x 9x + y = x 6y 9y Soln: The system

### 20D - Homework Assignment 5

Brian Bowers TA for Hui Sun MATH D Homework Assignment 5 November 8, 3 D - Homework Assignment 5 First, I present the list of all matrix row operations. We use combinations of these steps to row reduce

### Calculating determinants for larger matrices

Day 26 Calculating determinants for larger matrices We now proceed to define det A for n n matrices A As before, we are looking for a function of A that satisfies the product formula det(ab) = det A det

### 1 Matrices and Systems of Linear Equations

March 3, 203 6-6. Systems of Linear Equations Matrices and Systems of Linear Equations An m n matrix is an array A = a ij of the form a a n a 2 a 2n... a m a mn where each a ij is a real or complex number.

### Eigenvalues, Eigenvectors, and Diagonalization

Math 240 TA: Shuyi Weng Winter 207 February 23, 207 Eigenvalues, Eigenvectors, and Diagonalization The concepts of eigenvalues, eigenvectors, and diagonalization are best studied with examples. We will

### Invariant Manifolds of Dynamical Systems and an application to Space Exploration

Invariant Manifolds of Dynamical Systems and an application to Space Exploration Mateo Wirth January 13, 2014 1 Abstract In this paper we go over the basics of stable and unstable manifolds associated

### Chapter 4 & 5: Vector Spaces & Linear Transformations

Chapter 4 & 5: Vector Spaces & Linear Transformations Philip Gressman University of Pennsylvania Philip Gressman Math 240 002 2014C: Chapters 4 & 5 1 / 40 Objective The purpose of Chapter 4 is to think

### A matrix times a column vector is the linear combination of the columns of the matrix weighted by the entries in the column vector.

18.03 Class 33, Apr 28, 2010 Eigenvalues and eigenvectors 1. Linear algebra 2. Ray solutions 3. Eigenvalues 4. Eigenvectors 5. Initial values [1] Prologue on Linear Algebra. Recall [a b ; c d] [x ; y]

### ANSWERS Final Exam Math 250b, Section 2 (Professor J. M. Cushing), 15 May 2008 PART 1

ANSWERS Final Exam Math 50b, Section (Professor J. M. Cushing), 5 May 008 PART. (0 points) A bacterial population x grows exponentially according to the equation x 0 = rx, where r>0is the per unit rate

### Stability lectures. Stability of Linear Systems. Stability of Linear Systems. Stability of Continuous Systems. EECE 571M/491M, Spring 2008 Lecture 5

EECE 571M/491M, Spring 2008 Lecture 5 Stability of Continuous Systems http://courses.ece.ubc.ca/491m moishi@ece.ubc.ca Dr. Meeko Oishi Electrical and Computer Engineering University of British Columbia,

### An introduction to Birkhoff normal form

An introduction to Birkhoff normal form Dario Bambusi Dipartimento di Matematica, Universitá di Milano via Saldini 50, 0133 Milano (Italy) 19.11.14 1 Introduction The aim of this note is to present an

### Eigenvectors and Hermitian Operators

7 71 Eigenvalues and Eigenvectors Basic Definitions Let L be a linear operator on some given vector space V A scalar λ and a nonzero vector v are referred to, respectively, as an eigenvalue and corresponding

### Diagonalization of Matrices

LECTURE 4 Diagonalization of Matrices Recall that a diagonal matrix is a square n n matrix with non-zero entries only along the diagonal from the upper left to the lower right (the main diagonal) Diagonal

### THEODORE VORONOV DIFFERENTIABLE MANIFOLDS. Fall Last updated: November 26, (Under construction.)

4 Vector fields Last updated: November 26, 2009. (Under construction.) 4.1 Tangent vectors as derivations After we have introduced topological notions, we can come back to analysis on manifolds. Let M

### Summer Session Practice Final Exam

Math 2F Summer Session 25 Practice Final Exam Time Limit: Hours Name (Print): Teaching Assistant This exam contains pages (including this cover page) and 9 problems. Check to see if any pages are missing.

### Math 312 Lecture Notes Linearization

Math 3 Lecture Notes Linearization Warren Weckesser Department of Mathematics Colgate University 3 March 005 These notes discuss linearization, in which a linear system is used to approximate the behavior

### Chapter Two Elements of Linear Algebra

Chapter Two Elements of Linear Algebra Previously, in chapter one, we have considered single first order differential equations involving a single unknown function. In the next chapter we will begin to

### Math 240 Calculus III

Generalized Calculus III Summer 2015, Session II Thursday, July 23, 2015 Agenda 1. 2. 3. 4. Motivation Defective matrices cannot be diagonalized because they do not possess enough eigenvectors to make

### Math Camp Lecture 4: Linear Algebra. Xiao Yu Wang. Aug 2010 MIT. Xiao Yu Wang (MIT) Math Camp /10 1 / 88

Math Camp 2010 Lecture 4: Linear Algebra Xiao Yu Wang MIT Aug 2010 Xiao Yu Wang (MIT) Math Camp 2010 08/10 1 / 88 Linear Algebra Game Plan Vector Spaces Linear Transformations and Matrices Determinant

### ENGI 9420 Lecture Notes 4 - Stability Analysis Page Stability Analysis for Non-linear Ordinary Differential Equations

ENGI 940 Lecture Notes 4 - Stability Analysis Page 4.01 4. Stability Analysis for Non-linear Ordinary Differential Equations A pair of simultaneous first order homogeneous linear ordinary differential

### A matrix is a rectangular array of. objects arranged in rows and columns. The objects are called the entries. is called the size of the matrix, and

Section 5.5. Matrices and Vectors A matrix is a rectangular array of objects arranged in rows and columns. The objects are called the entries. A matrix with m rows and n columns is called an m n matrix.

### A = 3 B = A 1 1 matrix is the same as a number or scalar, 3 = [3].

Appendix : A Very Brief Linear ALgebra Review Introduction Linear Algebra, also known as matrix theory, is an important element of all branches of mathematics Very often in this course we study the shapes

### October 4, 2017 EIGENVALUES AND EIGENVECTORS. APPLICATIONS

October 4, 207 EIGENVALUES AND EIGENVECTORS. APPLICATIONS RODICA D. COSTIN Contents 4. Eigenvalues and Eigenvectors 3 4.. Motivation 3 4.2. Diagonal matrices 3 4.3. Example: solving linear differential

### Centers of projective vector fields of spatial quasi-homogeneous systems with weight (m, m, n) and degree 2 on the sphere

Electronic Journal of Qualitative Theory of Differential Equations 2016 No. 103 1 26; doi: 10.14232/ejqtde.2016.1.103 http://www.math.u-szeged.hu/ejqtde/ Centers of projective vector fields of spatial

### Diagonalisierung. Eigenwerte, Eigenvektoren, Mathematische Methoden der Physik I. Vorlesungsnotizen zu

Eigenwerte, Eigenvektoren, Diagonalisierung Vorlesungsnotizen zu Mathematische Methoden der Physik I J. Mark Heinzle Gravitational Physics, Faculty of Physics University of Vienna Version 5/5/2 2 version

### Chapter 9 Global Nonlinear Techniques

Chapter 9 Global Nonlinear Techniques Consider nonlinear dynamical system 0 Nullcline X 0 = F (X) = B @ f 1 (X) f 2 (X). f n (X) x j nullcline = fx : f j (X) = 0g equilibrium solutions = intersection of

### Diagonalisierung. Eigenwerte, Eigenvektoren, Mathematische Methoden der Physik I. Vorlesungsnotizen zu

Eigenwerte, Eigenvektoren, Diagonalisierung Vorlesungsnotizen zu Mathematische Methoden der Physik I J. Mark Heinzle Gravitational Physics, Faculty of Physics University of Vienna Version /6/29 2 version

### Solutions to Final Exam Sample Problems, Math 246, Spring 2011

Solutions to Final Exam Sample Problems, Math 246, Spring 2 () Consider the differential equation dy dt = (9 y2 )y 2 (a) Identify its equilibrium (stationary) points and classify their stability (b) Sketch

### MATH 304 Linear Algebra Lecture 23: Diagonalization. Review for Test 2.

MATH 304 Linear Algebra Lecture 23: Diagonalization. Review for Test 2. Diagonalization Let L be a linear operator on a finite-dimensional vector space V. Then the following conditions are equivalent:

### Putzer s Algorithm. Norman Lebovitz. September 8, 2016

Putzer s Algorithm Norman Lebovitz September 8, 2016 1 Putzer s algorithm The differential equation dx = Ax, (1) dt where A is an n n matrix of constants, possesses the fundamental matrix solution exp(at),

### 1 Lyapunov theory of stability

M.Kawski, APM 581 Diff Equns Intro to Lyapunov theory. November 15, 29 1 1 Lyapunov theory of stability Introduction. Lyapunov s second (or direct) method provides tools for studying (asymptotic) stability

### 1. The Transition Matrix (Hint: Recall that the solution to the linear equation ẋ = Ax + Bu is

ECE 55, Fall 2007 Problem Set #4 Solution The Transition Matrix (Hint: Recall that the solution to the linear equation ẋ Ax + Bu is x(t) e A(t ) x( ) + e A(t τ) Bu(τ)dτ () This formula is extremely important

### LS.2 Homogeneous Linear Systems with Constant Coefficients

LS2 Homogeneous Linear Systems with Constant Coefficients Using matrices to solve linear systems The naive way to solve a linear system of ODE s with constant coefficients is by eliminating variables,

### September 26, 2017 EIGENVALUES AND EIGENVECTORS. APPLICATIONS

September 26, 207 EIGENVALUES AND EIGENVECTORS. APPLICATIONS RODICA D. COSTIN Contents 4. Eigenvalues and Eigenvectors 3 4.. Motivation 3 4.2. Diagonal matrices 3 4.3. Example: solving linear differential

### Physics: spring-mass system, planet motion, pendulum. Biology: ecology problem, neural conduction, epidemics

Applications of nonlinear ODE systems: Physics: spring-mass system, planet motion, pendulum Chemistry: mixing problems, chemical reactions Biology: ecology problem, neural conduction, epidemics Economy:

### Extra Problems for Math 2050 Linear Algebra I

Extra Problems for Math 5 Linear Algebra I Find the vector AB and illustrate with a picture if A = (,) and B = (,4) Find B, given A = (,4) and [ AB = A = (,4) and [ AB = 8 If possible, express x = 7 as

### Math Linear Algebra Final Exam Review Sheet

Math 15-1 Linear Algebra Final Exam Review Sheet Vector Operations Vector addition is a component-wise operation. Two vectors v and w may be added together as long as they contain the same number n of

### Lecture Notes: Eigenvalues and Eigenvectors. 1 Definitions. 2 Finding All Eigenvalues

Lecture Notes: Eigenvalues and Eigenvectors Yufei Tao Department of Computer Science and Engineering Chinese University of Hong Kong taoyf@cse.cuhk.edu.hk 1 Definitions Let A be an n n matrix. If there

### IMPORTANT DEFINITIONS AND THEOREMS REFERENCE SHEET

IMPORTANT DEFINITIONS AND THEOREMS REFERENCE SHEET This is a (not quite comprehensive) list of definitions and theorems given in Math 1553. Pay particular attention to the ones in red. Study Tip For each

### MATH 320 INHOMOGENEOUS LINEAR SYSTEMS OF DIFFERENTIAL EQUATIONS

MATH 2 INHOMOGENEOUS LINEAR SYSTEMS OF DIFFERENTIAL EQUATIONS W To find a particular solution for a linear inhomogeneous system of differential equations x Ax = ft) or of a mechanical system with external

### IMPORTANT DEFINITIONS AND THEOREMS REFERENCE SHEET

IMPORTANT DEFINITIONS AND THEOREMS REFERENCE SHEET This is a (not quite comprehensive) list of definitions and theorems given in Math 1553. Pay particular attention to the ones in red. Study Tip For each

### 16. Local theory of regular singular points and applications

16. Local theory of regular singular points and applications 265 16. Local theory of regular singular points and applications In this section we consider linear systems defined by the germs of meromorphic

### Review of Linear Algebra Definitions, Change of Basis, Trace, Spectral Theorem

Review of Linear Algebra Definitions, Change of Basis, Trace, Spectral Theorem Steven J. Miller June 19, 2004 Abstract Matrices can be thought of as rectangular (often square) arrays of numbers, or as

### Linear Algebra- Final Exam Review

Linear Algebra- Final Exam Review. Let A be invertible. Show that, if v, v, v 3 are linearly independent vectors, so are Av, Av, Av 3. NOTE: It should be clear from your answer that you know the definition.

### LIE ALGEBRAS AND LIE BRACKETS OF LIE GROUPS MATRIX GROUPS QIZHEN HE

LIE ALGEBRAS AND LIE BRACKETS OF LIE GROUPS MATRIX GROUPS QIZHEN HE Abstract. The goal of this paper is to study Lie groups, specifically matrix groups. We will begin by introducing two examples: GL n

### Introduction to Dynamical Systems Basic Concepts of Dynamics

Introduction to Dynamical Systems Basic Concepts of Dynamics A dynamical system: Has a notion of state, which contains all the information upon which the dynamical system acts. A simple set of deterministic

### Applied Math Qualifying Exam 11 October Instructions: Work 2 out of 3 problems in each of the 3 parts for a total of 6 problems.

Printed Name: Signature: Applied Math Qualifying Exam 11 October 2014 Instructions: Work 2 out of 3 problems in each of the 3 parts for a total of 6 problems. 2 Part 1 (1) Let Ω be an open subset of R

### 1 Invariant subspaces

MATH 2040 Linear Algebra II Lecture Notes by Martin Li Lecture 8 Eigenvalues, eigenvectors and invariant subspaces 1 In previous lectures we have studied linear maps T : V W from a vector space V to another

### Definition 5.1. A vector field v on a manifold M is map M T M such that for all x M, v(x) T x M.

5 Vector fields Last updated: March 12, 2012. 5.1 Definition and general properties We first need to define what a vector field is. Definition 5.1. A vector field v on a manifold M is map M T M such that

### TWELVE LIMIT CYCLES IN A CUBIC ORDER PLANAR SYSTEM WITH Z 2 -SYMMETRY. P. Yu 1,2 and M. Han 1

COMMUNICATIONS ON Website: http://aimsciences.org PURE AND APPLIED ANALYSIS Volume 3, Number 3, September 2004 pp. 515 526 TWELVE LIMIT CYCLES IN A CUBIC ORDER PLANAR SYSTEM WITH Z 2 -SYMMETRY P. Yu 1,2

### Changing coordinates to adapt to a map of constant rank

Introduction to Submanifolds Most manifolds of interest appear as submanifolds of others e.g. of R n. For instance S 2 is a submanifold of R 3. It can be obtained in two ways: 1 as the image of a map into

### EE263: Introduction to Linear Dynamical Systems Review Session 5

EE263: Introduction to Linear Dynamical Systems Review Session 5 Outline eigenvalues and eigenvectors diagonalization matrix exponential EE263 RS5 1 Eigenvalues and eigenvectors we say that λ C is an eigenvalue

### 4. Linear Systems. 4A. Review of Matrices ) , show that AB BA (= A A A).

4A-. Verify that 4A-2. If A = 2 3 2 2 4A-3. Calculate A if A = and A A = I. 4. Linear Systems 4A. Review of Matrices 2 2 and B = = 3 2 6, show that AB BA. 2 2 2, and check your answer by showing that AA

### An Undergraduate s Guide to the Hartman-Grobman and Poincaré-Bendixon Theorems

An Undergraduate s Guide to the Hartman-Grobman and Poincaré-Bendixon Theorems Scott Zimmerman MATH181HM: Dynamical Systems Spring 2008 1 Introduction The Hartman-Grobman and Poincaré-Bendixon Theorems

### Qualitative Classification of Singular Points

QUALITATIVE THEORY OF DYNAMICAL SYSTEMS 6, 87 167 (2005) ARTICLE NO. 94 Qualitative Classification of Singular Points Qibao Jiang Department of Mathematics and Mechanics, Southeast University, Nanjing

### Math 216 First Midterm 19 October, 2017

Math 6 First Midterm 9 October, 7 This sample exam is provided to serve as one component of your studying for this exam in this course. Please note that it is not guaranteed to cover the material that