University of Sydney MATH3063 LECTURE NOTES Shane Leviton 6-7-2016
Contents

PART 1: LINEAR SYSTEMS
Introduction; Order of ODE; Explicit ODEs; Notation; Example; Systems of ODEs; Example 0: f(x) = 0; Eg 2: f(x) = c (or f(x) = f(t)); But what about when f(x) is a standard function; Matrix ODEs (lecture 2); Linear equation of x; Solving: FINDING THE EIGENVALUES/EIGENVECTORS!!!; Why do we care about eigenvectors/values?; Matrix exponentials; When does e^A exist?; exp(A) first class: full set of linearly independent eigenvectors; Diagonalizable or semi-simple matrix; How big of a restriction is a full set of LI eigenvectors?; Matrix and other term exponentials; Complex numbers and matrices; Real normal form of matrices; Repeated eigenvalues; Generalized eigenspace; Primary decomposition theorem; Theorems; General algorithm for finding e^A; Derivative of e^{At}; Phase plots; Example; Phase plot/phase line; Remarks; Constant solution; Spectrum; Definition of spectrum; Spectrally stable definition; Stable/unstable/centre subspaces; Primary decomposition theorem; Generally; Hyperbolic; Bounded for forward time: definition; Linearly stable definition; Asymptotically linearly stable definition; Restricted dynamics; Generally; The theory; Classification of 2D linear systems; Our approach; Discriminant; Overall picture; Linear ODEs with non-constant coefficients; Principal fundamental solution matrix; Liouville's/Abel's formula (theorem)

PART 2: METRIC SPACES
Start of thinking; Key idea; Formal definition of metric spaces; Metric (distance function); Metric space; Topology of metric spaces; Open ball; Examples; Open subset of metric space; Closed subset of metric space; Interior; Closure; Boundary; Limit point/accumulation point; Sequences and completeness; Definition: complete sequence; Contraction mapping principle; A contraction is a map; Definition: Lipschitz function; Definition: locally Lipschitz; Existence and uniqueness of solutions of ODEs; Equation *; Solution; Dependence on initial conditions and parameters; Theorem (P-L again); Theorem: families of solutions; Maximal intervals of existence; Definition: maximal interval of existence

PART 3: NON-LINEAR PLANAR AUTONOMOUS ODES
Terminology; Function vector; Talking about what solutions are; Eg: pendulum equation; Phase portraits; Isoclines; Linearisation; Generally for linearisation; Hyperbolic good, non-hyperbolic = bad; More qualitative results; Theorem: unique phase curves; Closed phase curves/periodic solutions; Hamiltonian systems; Lotka-Volterra/predator-prey system; Definition of predator-prey; Analysis of system; Modifications to equations; Epidemics; SIR model; Abstraction, global existence and reparametrisation; Definition: 1-parameter family; Definition: orbit/trajectory; Existence of complete flow; Flows on R; Theorem: types of flows on R; α (alpha) and ω (omega) limit sets; Asymptotic limit point; Non-linear stability (of critical points); Lyapunov stable definition; Asymptotically stable: definition; Lyapunov functions; LaSalle's invariance principle; Hilbert's 16th problem and the Poincaré-Bendixson theorem; Question: Hilbert's 16th problem; Poincaré-Bendixson theorem proof; Proof; Definitions of sets: disconnected; Proof of Poincaré-Bendixson theorem; Bendixson's negative criterion; INDEX THEORY (bonus); Set up; Theta; Index: definition; Theorem on index; Poincaré index; Index at infinity; Computing the index at infinity; Theorem: Poincaré-Hopf index theorem

PART 4: BIFURCATION THEORY
Let's start with an example in 1D; Bifurcations; Bifurcation diagram; Bifurcation; Example 2 (in 2D): saddle-node bifurcation; Transcritical bifurcation; 1D example; Supercritical pitchfork bifurcation; Example; Logistic growth with harvesting; Hopf bifurcation; Definition of Hopf; Some more examples of bifurcations; FIREFLIES AND FLOWS ON A CIRCLE; Non-uniform oscillator; Glycolysis; Model: Sel'kov 1968; Catastrophe; Eg 1: fold bifurcation; Eg 2: swallowtail catastrophe; Hysteresis and the van der Pol oscillator; Van der Pol oscillator example
MATH3963 NON-LINEAR DIFFERENTIAL EQUATIONS WITH APPLICATIONS (biomaths) NOTES - Robert Marangell
Robert.marangell@sydney.edu.au
Assessment:
- Midterm (Thursday April 2): 25%
- 2 assignments, 10% each (due 4/4 and 30/5)
- Exam: 55%
Assignments are typed (eg LaTeX)

PART 1: LINEAR SYSTEMS

Introduction:
- What is an ODE; how to understand them
A DE is a functional relationship between a function and its derivatives. An ODE is when the function has only one independent variable.
Eg: $t$ independent; $x(t)$, $y(t)$ dependent.
Not just scalar, but also vector-valued functions (systems of ODEs):
$\vec{x}(t) \in \mathbb{R}^d$, $\vec{x}(t) = (x_1(t), x_2(t), \ldots, x_d(t))$, where each $x_j : \mathbb{R} \to \mathbb{R}$.

Order of ODE:
- The highest derivative appearing in the equation. The general (implicit) form is
$F\left(t, y(t), \frac{dy}{dt}, \ldots, \frac{d^n y}{dt^n}\right) = 0$

Explicit ODEs
If you can isolate the highest derivative, it is an explicit ODE:
$\frac{d^n y}{dt^n} = G\left(t, y(t), \frac{dy}{dt}, \ldots, \frac{d^{n-1}y}{dt^{n-1}}\right)$
(explicit is easier; implicit is very hard)

Notation:
Use a dot for $\frac{d}{dt}$.

Example:
$y''' + 2yy'' + 3(y')^2 + y = 0$
You can reduce the order of an ODE (to 1), in exchange for making it a system of ODEs.
- Introduce new variables into the system:
$y_0 = y, \quad y_1 = y_0' = y', \quad y_2 = y_1' = y''$
so
$y_0' = y_1, \quad y_1' = y_2, \quad y_2' = -2y_2y_0 - 3y_1^2 - y_0$
which is a first-order system in 3 variables, equivalent to the original 3rd-order ODE (integrated numerically in the sketch below).

Systems of ODEs
- We can play the same game for systems of ODEs:
$y'' + Ay = 0$ (with $y \in \mathbb{R}^2$; $A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}$)
$\begin{pmatrix} y_1'' \\ y_2'' \end{pmatrix} + \begin{pmatrix} a & b \\ c & d \end{pmatrix}\begin{pmatrix} y_1 \\ y_2 \end{pmatrix} = 0$
Introduce:
$y_3 = y_1', \quad y_4 = y_2'$
$y_3' = y_1'' = -ay_1 - by_2, \quad y_4' = y_2'' = -cy_1 - dy_2$
$\begin{pmatrix} y_1' \\ y_2' \\ y_3' \\ y_4' \end{pmatrix} = \begin{pmatrix} 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ -a & -b & 0 & 0 \\ -c & -d & 0 & 0 \end{pmatrix}\begin{pmatrix} y_1 \\ y_2 \\ y_3 \\ y_4 \end{pmatrix}$
Let $\vec{x} = (y_1, y_2, y_3, y_4)$. The system of equations is now $\vec{x}' = f(\vec{x})$, here $\vec{x}' = B\vec{x}$.
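As a quick numerical sanity check of the reduction above, here is a minimal sketch (assuming NumPy and SciPy are available) that integrates the 3rd-order example as a first-order system; the initial conditions are arbitrary illustrative choices:

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, Y):
    # Y = (y0, y1, y2) = (y, y', y'')
    y0, y1, y2 = Y
    # y''' = -2*y*y'' - 3*(y')**2 - y, isolated from the 3rd-order ODE
    return [y1, y2, -2.0 * y0 * y2 - 3.0 * y1**2 - y0]

# arbitrary initial data (y(0), y'(0), y''(0))
sol = solve_ivp(rhs, (0.0, 5.0), [1.0, 0.0, 0.0])
print(sol.y[0, -1])  # value of y at t = 5
```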
Example 0: $f(x) = 0$
$\vec{x}' = 0 \Rightarrow x_i' = 0\ \forall i \Rightarrow \vec{x} = \vec{c}$ for $\vec{x}, \vec{c} \in \mathbb{R}^n$
These are all the solutions to $\vec{x}' = 0$, and the solutions form an $n$-dimensional vector space.

Eg 2: $f(x) = c$ (or $f(x) = f(t)$)
$\vec{x}' = \vec{c} \Rightarrow \vec{x}(t) = \vec{c}t + \vec{d}$

But what about when $f(x)$ is a standard function?
- If $f(x)$ is linear in $x$. Remember: linear if $f(cx + y) = cf(x) + f(y)$.
Recall any linear map can be represented by a matrix, by choosing a basis:
$\vec{x}' = A(t)\vec{x}$
If $A(t) = A$ (independent of $t$), it is called a constant coefficient system:
$\vec{x}' = A\vec{x}$
Eg:
$A = \begin{pmatrix} 0 & 1 & 0 & 0 \\ 1 & 0 & 1 & 0 \\ 0 & 1 & 0 & 1 \\ 0 & 0 & 1 & 0 \end{pmatrix}$
$\begin{pmatrix} x_1' \\ x_2' \\ x_3' \\ x_4' \end{pmatrix} = \begin{pmatrix} 0 & 1 & 0 & 0 \\ 1 & 0 & 1 & 0 \\ 0 & 1 & 0 & 1 \\ 0 & 0 & 1 & 0 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{pmatrix}$
i.e.
$x_1' = x_2, \quad x_2' = x_1 + x_3, \quad x_3' = x_2 + x_4, \quad x_4' = x_3$
is the system of ODEs which we now have to solve.
Matrix ODEs (lecture 2)
$\vec{x}' = f(\vec{x})$
Linear equation of $\vec{x}$:
$\vec{x}' = A\vec{x}$ ($\vec{x} \in \mathbb{R}^n$; $A \in M_n(\mathbb{R})$)
Eg:
$A = \begin{pmatrix} 0 & 1 & 0 \\ 1 & 0 & 1 \\ 0 & 1 & 0 \end{pmatrix}$; $\vec{x}' = A\vec{x}$, $\vec{x} = (x_1, x_2, x_3)$
$\begin{pmatrix} x_1' \\ x_2' \\ x_3' \end{pmatrix} = \begin{pmatrix} 0 & 1 & 0 \\ 1 & 0 & 1 \\ 0 & 1 & 0 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix}$
$x_1' = x_2; \quad x_2' = x_1 + x_3; \quad x_3' = x_2$
So a solution is a triple of functions (a vector) which solves this equation.

Solving: FINDING THE EIGENVALUES/EIGENVECTORS!!!
Recall: a value $\lambda \in \mathbb{C}$ is an eigenvalue of an $n \times n$ matrix $A$ if there exists a non-zero vector $v \in \mathbb{R}^n$ such that
$Av = \lambda v$
($v$ is the corresponding eigenvector)
$(A - \lambda I)v = 0 \iff v \in \ker(A - \lambda I) \iff A - \lambda I$ is not invertible $\iff \det(A - \lambda I) = 0$
$\rho_A(\lambda) \equiv \det(A - \lambda I)$ is a polynomial in $\lambda$ of degree $n$, called the characteristic polynomial of $A$. Eigenvalues are roots of $\rho_A(\lambda) = 0$.
Fundamental theorem of algebra: $\rho_A(\lambda)$ has exactly $n$ (possibly complex) roots counted according to algebraic multiplicity.
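A quick numerical check of the above, using the 3x3 example from this lecture (a sketch assuming NumPy): the characteristic polynomial has degree 3 and its roots coincide with the eigenvalues.

```python
import numpy as np

A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])

# coefficients of det(lambda*I - A), highest degree first
p = np.poly(A)
print(p)                     # [1, 0, -2, 0]  <->  lambda^3 - 2*lambda
print(np.roots(p))           # 0, +sqrt(2), -sqrt(2)
print(np.linalg.eigvals(A))  # same numbers, up to ordering and round-off
```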
Algebraic multiplicity:
The algebraic multiplicity of a root $r$ of $\rho_A(\lambda) = 0$ is the largest $k$ such that $\rho_A(\lambda) = (\lambda - r)^k q(\lambda)$ with $q(r) \neq 0$. The algebraic multiplicity of an eigenvalue $\lambda$ of $A$ is its algebraic multiplicity as a root of $\rho_A(\lambda)$.

Geometric multiplicity:
The geometric multiplicity of an eigenvalue is the maximum number of linearly independent eigenvectors associated to it.
In general: algebraic multiplicity $\geq$ geometric multiplicity.

Returning to the example:
$\begin{pmatrix} x_1' \\ x_2' \\ x_3' \end{pmatrix} = \begin{pmatrix} 0 & 1 & 0 \\ 1 & 0 & 1 \\ 0 & 1 & 0 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix}, \quad A = \begin{pmatrix} 0 & 1 & 0 \\ 1 & 0 & 1 \\ 0 & 1 & 0 \end{pmatrix}$
$\det(A - \lambda I) = \begin{vmatrix} -\lambda & 1 & 0 \\ 1 & -\lambda & 1 \\ 0 & 1 & -\lambda \end{vmatrix} = -\lambda(\lambda^2 - 2) = 0$
$\lambda = 0, \pm\sqrt{2}$
Eigenvectors are found by finding the kernel $\ker(A - \lambda I)$ for each eigenvalue, giving:
$\lambda_1 = 0: v_1 = \begin{pmatrix} 1 \\ 0 \\ -1 \end{pmatrix}; \quad \lambda_2 = \sqrt{2}: v_2 = \begin{pmatrix} 1 \\ \sqrt{2} \\ 1 \end{pmatrix}; \quad \lambda_3 = -\sqrt{2}: v_3 = \begin{pmatrix} 1 \\ -\sqrt{2} \\ 1 \end{pmatrix}$

Why do we care about eigenvectors/values?
$\vec{x}' = A\vec{x}$
Suppose that $v$ is an eigenvector of $A \in M_n(\mathbb{C})$, with eigenvalue $\lambda$. Consider the vector of functions:
$\vec{x}(t) = c(t)v$
where $c$ is some unknown function $c: \mathbb{C} \to \mathbb{C}$. If $\vec{x}$ solves $\vec{x}' = A\vec{x}$, then
$c'(t)v = Ac(t)v = c(t)Av = \lambda c(t)v$ (as $v$ is an eigenvector)
As $v \neq 0$,
$c'(t) = \lambda c(t)$
which is a differential equation of one variable, so $c(t) = e^{\lambda t}$. SO:
$\vec{x}(t) = e^{\lambda t}v$
is a solution vector to $\vec{x}' = A\vec{x}$. So we have a 1D subspace of solutions $\vec{x}(t) = c_0 e^{\lambda t}v$.

Notes:
1. Solutions to $\vec{x}' = A\vec{x}$ of the form $\vec{x} = c_0 e^{\lambda t}v$, where $\lambda$ is an eigenvalue of $A$ and $v$ is the corresponding eigenvector, are called EIGENSOLUTIONS.
2. What we just did has a name: ansatz (guess what the solution could be, and see if it is correct).

Back to the example:
$\vec{x}(t) = e^{0t}\begin{pmatrix} 1 \\ 0 \\ -1 \end{pmatrix}; \quad \vec{y}(t) = e^{\sqrt{2}t}\begin{pmatrix} 1 \\ \sqrt{2} \\ 1 \end{pmatrix}; \quad \vec{z}(t) = e^{-\sqrt{2}t}\begin{pmatrix} 1 \\ -\sqrt{2} \\ 1 \end{pmatrix}$
So, as the equation is linear, any linear combination of these is a solution. So:
$X(t) = c_1 e^{0t}\begin{pmatrix} 1 \\ 0 \\ -1 \end{pmatrix} + c_2 e^{\sqrt{2}t}\begin{pmatrix} 1 \\ \sqrt{2} \\ 1 \end{pmatrix} + c_3 e^{-\sqrt{2}t}\begin{pmatrix} 1 \\ -\sqrt{2} \\ 1 \end{pmatrix}$
IS OUR SOLUTION!!!!!
The set of solutions $X(t)$ is actually a 3D vector space.
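To see an eigensolution concretely, here is a small sketch (NumPy assumed) checking that $x(t) = e^{\lambda t}v$ satisfies $x' = Ax$ for the $\lambda = \sqrt{2}$ eigenpair: the derivative $\lambda e^{\lambda t}v$ agrees with the right-hand side $Ae^{\lambda t}v$.

```python
import numpy as np

A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
lam = np.sqrt(2.0)
v = np.array([1.0, np.sqrt(2.0), 1.0])  # eigenvector for lambda = sqrt(2)

t = 0.7  # any time will do
x = np.exp(lam * t) * v
x_dot = lam * np.exp(lam * t) * v   # d/dt of e^{lambda t} v
print(np.allclose(x_dot, A @ x))    # True: the eigensolution solves x' = Ax
```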
Matrix exponentials
$A \in M_{n \times n}$; what is $e^A$ ($\exp(A)$)?
For a matrix, DEFINE
$e^A = \exp(A) = \sum_{k=0}^{\infty} \frac{1}{k!}A^k = I + A + \frac{1}{2}A^2 + \frac{1}{3!}A^3 + \cdots$
If the entries converge, then $e^A$ IS an $n \times n$ matrix.

When does $e^A$ exist?

exp(A) first class: full set of linearly independent eigenvectors
- Hypothesis: suppose $A$ has a full set of linearly independent eigenvectors.
- Assume that hypothesis for the rest of today (we'll look at what happens if it fails later).
Let $v_1, \ldots, v_n$ be the eigenvectors, with eigenvalues $\lambda_1, \ldots, \lambda_n$. Form the following matrix (whose columns are the eigenvectors):
$P = (v_1, v_2, \ldots, v_n)$
$AP = (Av_1, Av_2, \ldots, Av_n) = (\lambda_1 v_1, \ldots, \lambda_n v_n)$ (by multiplication)
$= P\,\mathrm{diag}(\lambda_1, \ldots, \lambda_n) \equiv P\Lambda$
So:
$P^{-1}AP = \Lambda$ ($= \mathrm{diag}(\lambda_1, \ldots, \lambda_n)$)
$e^\Lambda = I + \Lambda + \frac{1}{2}\Lambda^2 + \cdots = \mathrm{diag}(e^{\lambda_1}, \ldots, e^{\lambda_n})$
Lemma: $(P^{-1}AP)^k = P^{-1}A^kP$, so $e^{P^{-1}AP} = P^{-1}e^AP$, i.e.
$e^A = Pe^\Lambda P^{-1}$ (with $e^\Lambda = \mathrm{diag}(e^{\lambda_1}, \ldots, e^{\lambda_n})$)
AND: the eigenvalues of $e^A$ are $e^{\lambda_j}$, where $\lambda_j$ are the eigenvalues of $A$ (with the same eigenvectors).
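A sketch of the formula $e^A = Pe^\Lambda P^{-1}$ in code (NumPy/SciPy assumed), compared against SciPy's built-in matrix exponential:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])

lam, P = np.linalg.eig(A)          # columns of P are eigenvectors
expA = P @ np.diag(np.exp(lam)) @ np.linalg.inv(P)
print(np.allclose(expA, expm(A)))  # True
```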
Example:
$A = \begin{pmatrix} 0 & 1 & 0 \\ 1 & 0 & 1 \\ 0 & 1 & 0 \end{pmatrix}$; eigenvectors $\begin{pmatrix} 1 \\ 0 \\ -1 \end{pmatrix}, \begin{pmatrix} 1 \\ \sqrt{2} \\ 1 \end{pmatrix}, \begin{pmatrix} 1 \\ -\sqrt{2} \\ 1 \end{pmatrix}$
$P = \begin{pmatrix} 1 & 1 & 1 \\ 0 & \sqrt{2} & -\sqrt{2} \\ -1 & 1 & 1 \end{pmatrix}, \quad \Lambda = \begin{pmatrix} 0 & 0 & 0 \\ 0 & \sqrt{2} & 0 \\ 0 & 0 & -\sqrt{2} \end{pmatrix}$
$e^A = Pe^\Lambda P^{-1} = P\begin{pmatrix} 1 & 0 & 0 \\ 0 & e^{\sqrt{2}} & 0 \\ 0 & 0 & e^{-\sqrt{2}} \end{pmatrix}P^{-1} =$ (ugly answer)

Diagonalizable or semi-simple matrix
A matrix $A$ for which there exists an invertible matrix $P$ such that $P^{-1}AP = \Lambda$ (where $\Lambda$ is diagonal) is called diagonalizable or semisimple.
This says: if $A$ is semisimple, then $e^A$ exists, and we can compute it.

How big of a restriction is a full set of LI eigenvectors?
Defective matrices
If $A$ does not have a full set of linearly independent eigenvectors, it is called defective.
- How big of a restriction is being semisimple?
- Are there many semisimple matrices?
Proposition: Suppose $\lambda_1 \neq \lambda_2$ are eigenvalues of a matrix $A$ with eigenvectors $v_1$ and $v_2$. Then $v_1$ and $v_2$ are linearly independent.
Proof: If there were a dependence relation, then there would be scalars $k_1$ and $k_2$ such that
$k_1 v_1 + k_2 v_2 = 0$ ($k_1, k_2 \neq 0$)
Applying $A$:
$A(k_1 v_1 + k_2 v_2) = 0$
$k_1 Av_1 + k_2 Av_2 = 0$
$k_1 \lambda_1 v_1 + k_2 \lambda_2 v_2 = 0$ (as eigenvectors)
Subtracting $\lambda_1$ times the dependence relation ($\lambda_1 k_1 v_1 + \lambda_1 k_2 v_2 = 0$):
$(\lambda_2 - \lambda_1)k_2 v_2 = 0 \Rightarrow k_2 = 0$ (contradiction)

Example:
$A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}$
$\rho_A(\lambda) = \lambda^2 - (a + d)\lambda + (ad - bc) = \lambda^2 - \tau\lambda + \delta$
(taking $\tau = a + d$; $\delta = ad - bc$)
($\tau = \mathrm{Trace}(A) = \sum_{j=1}^{n} a_{jj}$, the sum of the diagonal entries)
$\lambda_\pm = \frac{\tau \pm \sqrt{\tau^2 - 4\delta}}{2}$ are the roots
$\lambda_+ \neq \lambda_-$ unless $\tau^2 - 4\delta = 0$:
$\tau^2 - 4\delta = 0 \iff \delta = \frac{\tau^2}{4}$
Therefore any matrix which does not lie on this parabola in $(\tau, \delta)$ space will be diagonalizable.
$\tau^2 - 4\delta = (a - d)^2 + 4bc$
The set of matrices with repeated eigenvalues is the locus of points with $(a - d)^2 + 4bc = 0$: a 3D hypersurface in 4D space.

So what if $A$ is defective? Can we compute $e^A$?
- Yes
Two $n \times n$ matrices $A, B$ commute if $AB = BA$.
Commutator
The commutator we will define as:
$[A, B] \equiv AB - BA$ ($= 0$ iff $A$ and $B$ commute)

Proposition: If $A$ and $B$ commute, then
$e^{A+B} = e^A e^B$
Proof (sketch): expand both sides and compare, using $AB = BA$:
$e^{A+B} = I + (A + B) + \frac{(A+B)^2}{2} + \cdots = I + A + B + \frac{1}{2}(A^2 + AB + BA + B^2) + \cdots = I + A + B + \frac{1}{2}A^2 + AB + \frac{1}{2}B^2 + \cdots$ (as they commute)
and
$e^A e^B = \left(I + A + \frac{1}{2}A^2 + \cdots\right)\left(I + B + \frac{1}{2}B^2 + \cdots\right) = I + (A + B) + \frac{1}{2}A^2 + AB + \frac{1}{2}B^2 + \cdots$

Example (non-commuting):
$A = \begin{pmatrix} a & 0 \\ 0 & b \end{pmatrix}$ ($a \neq b$); $B = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}$
$e^A = \begin{pmatrix} e^a & 0 \\ 0 & e^b \end{pmatrix}; \quad e^B = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}$
$e^{A+B} = \begin{pmatrix} e^a & \frac{e^a - e^b}{a - b} \\ 0 & e^b \end{pmatrix}$, while $e^A e^B = \begin{pmatrix} e^a & e^a \\ 0 & e^b \end{pmatrix}$
so $e^A e^B = e^{A+B}$ only when $(a - b)e^a = e^a - e^b$, i.e. (setting $x = b - a$) when $x + 1 = e^x$. One (complex) solution of this is $x \approx 6.6022 + 736.693i$.

Nilpotent matrix:
Definition: a matrix $N$ is nilpotent if some power of it is $0$ ($N^k = 0$ for some $k$).
Eg:
$N = \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{pmatrix}$, $N^3 = 0$ (calculate)
This means that the power series for $e^N$ terminates:
$e^N = I + N + \frac{1}{2}N^2 + 0 + 0 + \cdots$
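Two quick numerical illustrations of these facts (a sketch, SciPy assumed): $e^{A+B} = e^Ae^B$ holds for the commuting pair below but fails for the non-commuting pair from the example, and for a nilpotent $N$ the exponential is the finite sum $I + N + N^2/2$.

```python
import numpy as np
from scipy.linalg import expm

# commuting pair: two diagonal matrices
A = np.diag([1.0, 2.0])
B = np.diag([3.0, -1.0])
print(np.allclose(expm(A + B), expm(A) @ expm(B)))  # True

# non-commuting pair: A = diag(a, b) with a != b, B = [[0,1],[0,0]]
B = np.array([[0.0, 1.0], [0.0, 0.0]])
print(np.allclose(expm(A + B), expm(A) @ expm(B)))  # False

# nilpotent: the series terminates
N = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.0, 0.0, 0.0]])
print(np.allclose(expm(N), np.eye(3) + N + N @ N / 2))  # True
```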
So, how do we compute the exponential in general?

Jordan Canonical Form: Let $A$ be an $n \times n$ matrix with complex entries. Then there exists a basis of $\mathbb{C}^n$ and an invertible matrix $B$ (of basis vectors) such that
$B^{-1}AB = S + N$
where $S$ is diagonal (semisimple), $N$ is nilpotent, and $S$ and $N$ commute ($SN - NS = 0$).
So this means that $e^A$ exists for all matrices:
$B^{-1}AB = S + N \Rightarrow B^{-1}e^AB = e^S e^N$
This is cool.

Matrix and other term exponentials:
$e^A = Be^S e^N B^{-1}$
So now consider $At$, where $A \in M_n(\mathbb{R})$ and $t \in \mathbb{R}$:
$e^{At} = \sum_{k=0}^{\infty} \frac{t^k}{k!}A^k$
For a fixed $A$, the exponential is a map from the real numbers to matrices:
$e^{At}: \mathbb{R} \to M_n(\mathbb{R}), \quad \exp_A(t): t \mapsto e^{At}$
$e^{At}$ is invertible! Its inverse is $e^{-At}$, since $At$ and $As$ commute:
$\exp_A(t + s) = e^{A(t+s)} = e^{At}e^{As} = \exp_A(t)\exp_A(s)$
So, what is the derivative of this map from the real numbers to the matrices?

Complex numbers and matrices:
Let $J = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}$
$J^2 = -I; \quad J^3 = -J; \quad J^4 = I$
$e^{Jt} = I + Jt + \frac{1}{2}J^2t^2 + \cdots$
$= I + Jt - \frac{1}{2}t^2 I - \frac{1}{3!}t^3 J + \cdots = \left(1 - \frac{t^2}{2} + \frac{t^4}{4!} - \cdots\right)I + \left(t - \frac{t^3}{3!} + \cdots\right)J = \cos t\, I + \sin t\, J = \begin{pmatrix} \cos t & -\sin t \\ \sin t & \cos t \end{pmatrix}$
which is the rotation matrix. So the matrix $J$ plays the role of the complex number $i$, and we have written Euler's formula $e^{it} = \cos t + i\sin t$.
- We can actually take any complex number $z = x + iy$ and find its corresponding matrix:
$M_z \equiv xI + yJ = \begin{pmatrix} x & -y \\ y & x \end{pmatrix}$
(the technical term is a field isomorphism)
Complex number algebra:
$M_{z_1 + z_2} = M_{z_1} + M_{z_2}$
$M_{z_1 z_2} = M_{z_1}M_{z_2} = M_{z_2}M_{z_1} = M_{z_2 z_1}$
Complex exponentials: $M_{e^z} = e^{M_z}$. For $z = x + iy$:
$e^z = e^x\cos y + ie^x\sin y \mapsto e^x\begin{pmatrix} \cos y & -\sin y \\ \sin y & \cos y \end{pmatrix} = e^x\cos y\, I + e^x\sin y\, J$
Complex number facts:
$\det(M_z) = x^2 + y^2 = |z|^2$
$M_{\bar{z}} = \begin{pmatrix} x & y \\ -y & x \end{pmatrix} = M_z^T$
$M_r = rI$ (if $r \in \mathbb{R}$)
$M_z M_{\bar{z}} = M_{|z|^2} = |z|^2 I$
If $z \neq 0$, $M_z$ is invertible, and $M_z^{-1} = \frac{1}{|z|^2}M_z^T = \frac{1}{\det(M_z)}M_z^T$
So let's compute $e^{Jt} = \begin{pmatrix} \cos t & -\sin t \\ \sin t & \cos t \end{pmatrix}$ again, by diagonalizing:
the eigenvalues of $Jt$ are $\pm it$, with eigenvectors $\begin{pmatrix} 1 \\ \mp i \end{pmatrix}$.
$P = \begin{pmatrix} 1 & 1 \\ -i & i \end{pmatrix}; \quad P^{-1} = \frac{1}{2i}\begin{pmatrix} i & -1 \\ i & 1 \end{pmatrix}; \quad \Lambda = \begin{pmatrix} it & 0 \\ 0 & -it \end{pmatrix}$
$e^{Jt} = \frac{1}{2i}\begin{pmatrix} 1 & 1 \\ -i & i \end{pmatrix}\begin{pmatrix} e^{it} & 0 \\ 0 & e^{-it} \end{pmatrix}\begin{pmatrix} i & -1 \\ i & 1 \end{pmatrix} = \begin{pmatrix} \cos t & -\sin t \\ \sin t & \cos t \end{pmatrix}$
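The isomorphism $z \mapsto M_z$ is easy to test numerically (a sketch, NumPy/SciPy assumed): multiplication of complex numbers matches multiplication of the corresponding matrices, and $e^{Jt}$ is the rotation matrix.

```python
import numpy as np
from scipy.linalg import expm

def M(z):
    # the 2x2 real matrix representing the complex number z = x + iy
    return np.array([[z.real, -z.imag], [z.imag, z.real]])

z1, z2 = 2 + 3j, -1 + 0.5j
print(np.allclose(M(z1 * z2), M(z1) @ M(z2)))  # True: field isomorphism

J = np.array([[0.0, -1.0], [1.0, 0.0]])  # the matrix playing the role of i
t = 0.9
R = np.array([[np.cos(t), -np.sin(t)],
              [np.sin(t),  np.cos(t)]])
print(np.allclose(expm(J * t), R))       # True: e^{Jt} is rotation by t
```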
So we got what we started with! (which is good)

Real normal form of matrices
Moving towards a real normal form for matrices with real entries but complex eigenvalues.
Fact: if $A \in M_n(\mathbb{R})$, then if $\lambda$ is an eigenvalue, so is $\bar{\lambda}$ (complex conjugate). Also, eigenvectors come in conjugate pairs:
$Av = \lambda v \Rightarrow A\bar{v} = \bar{\lambda}\bar{v}$
- If $\lambda = a + ib$ is an eigenvalue, then $\bar{\lambda} = a - ib$ is also an eigenvalue.
- If $v = u + iw$ ($u, w \in \mathbb{R}^n$), then $\bar{v} = u - iw$.
So:
$2u = v + \bar{v}$
$A(2u) = \lambda v + \bar{\lambda}\bar{v} = 2(au - bw) \Rightarrow Au = au - bw$
Similarly, $Aw = bu + aw$.
So if $P$ is the $n \times 2$ matrix $P = (u\ w)$, then
$AP = P\begin{pmatrix} a & b \\ -b & a \end{pmatrix}$
(we are close to finding the algorithm for computing the matrix exponential of any matrix)

Eg:
$A = \begin{pmatrix} 0 & -1 & 0 & 0 \\ 1 & 0 & -1 & 0 \\ 0 & 1 & 0 & -1 \\ 0 & 0 & 1 & 0 \end{pmatrix}$
The characteristic polynomial is $x^4 + 3x^2 + 1$, so the eigenvalues of $A$ are $\pm i\varphi$ and $\pm\frac{i}{\varphi}$ (with $\varphi = \frac{1 + \sqrt{5}}{2}$, the golden ratio).
The eigenvector for $i\varphi$ is
$v_1 = \begin{pmatrix} 1 \\ -i\varphi \\ -\varphi \\ i \end{pmatrix} = \begin{pmatrix} 1 \\ 0 \\ -\varphi \\ 0 \end{pmatrix} + i\begin{pmatrix} 0 \\ -\varphi \\ 0 \\ 1 \end{pmatrix} = u_1 + iw_1$
and similarly $v_2 = u_2 + iw_2$ for the eigenvalue $\frac{i}{\varphi}$.
So with $P = (u_1\ w_1\ u_2\ w_2)$:
$P^{-1}AP = \begin{pmatrix} 0 & \varphi \\ -\varphi & 0 \end{pmatrix} \oplus \begin{pmatrix} 0 & \frac{1}{\varphi} \\ -\frac{1}{\varphi} & 0 \end{pmatrix}$
This is called a block diagonal matrix (it has blocks of matrices along the diagonal, and 0s everywhere else); each block is a rotation-type block, equal to $\pm\varphi J$ and $\pm\frac{1}{\varphi}J$ (the sign depends on the ordering of $u$ and $w$ in $P$).

Direct sum: ($A$ and $B$ don't need to be the same size)
$A \oplus B = \begin{pmatrix} A & 0 \\ 0 & B \end{pmatrix}$
Lemma: if $A = B \oplus C$, then $e^A = e^B \oplus e^C$.
Proof: $A = \begin{pmatrix} B & 0 \\ 0 & 0 \end{pmatrix} + \begin{pmatrix} 0 & 0 \\ 0 & C \end{pmatrix}$, and these commute, so
$e^A = e^{\begin{pmatrix} B & 0 \\ 0 & 0 \end{pmatrix}}e^{\begin{pmatrix} 0 & 0 \\ 0 & C \end{pmatrix}} = \begin{pmatrix} e^B & 0 \\ 0 & I \end{pmatrix}\begin{pmatrix} I & 0 \\ 0 & e^C \end{pmatrix} = \begin{pmatrix} e^B & 0 \\ 0 & e^C \end{pmatrix} = e^B \oplus e^C$
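A check of the example above (a sketch, NumPy assumed; the matrix is the reconstruction given in the text): the characteristic polynomial is $x^4 + 3x^2 + 1$ and the eigenvalues are purely imaginary, $\pm i\varphi$ and $\pm i/\varphi$.

```python
import numpy as np

A = np.array([[0., -1., 0., 0.],
              [1., 0., -1., 0.],
              [0., 1., 0., -1.],
              [0., 0., 1., 0.]])

print(np.poly(A))  # [1, 0, 3, 0, 1]  <->  x^4 + 3x^2 + 1
phi = (1 + np.sqrt(5)) / 2
print(sorted(np.linalg.eigvals(A).imag))  # -phi, -1/phi, 1/phi, phi
print(phi, 1 / phi)
```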
so eigenvectors: i 0 φ φ v = ( ) = ( ) + i ( iφ 0 iφ ; v 2 = ( ) i φ 0 φ ) = u + iw i So AP = P ( 0 φ φ 0 ) 0 φ 0 0 φ 0 0 0 P = (u w u 2 w 2 )P AP = 0 0 0 φ 0 0 0 ( φ ) this is called a block diagonal matrix (has blocks of diagonal matricies; and 0s everywhere else) = φj φ J Direct sum: A and B don t need to be the same size A B = ( A 0 0 B ) Lemma: if A = B C Then e A = e B e C Proof: (A = ( B 0 0 0 ) + (0 0 )) which commute 0 C e A = e (B 0 0 0 ) e (0 0 0 C ) = ( eb 0 0 I ) (I 0 0 e C) = (eb 0 0 e C) = eb e C
Repeated eigenvalues So what do we do if we have repeated eigenvalues? How do we calculate the matrix exponential? Generalized eigenspace: Definition Suppose λ j is an eigenvalue of a matrix A with algebraic multiplicity n j. We define the generalized eigenspace of λ j as: E j ker[(a λ j I) nj ] There are basically two reasons why this definition is important. The first is that these generalized eigenspaces are INVARIANT UNDER MULTIPLICATION BY A (does not change). That is: if v E j, then Av is as well Proposition: of generalized eigenspace Suppose that E j is a generalized eigenspace of the matrix A, then if v E j ; so is Av Proof of proposition: - If v E j for some generalized eigenspace, for some eigenvector λ; then v ker((a λi) k ) say for some k. Then we have that (A λi) k v = (A λi) k (A λi + λi)v = (A λi) k+ + (A λi) k λv = 0 The second reason we care about this is that it will give us our set of linearly independent generalized eigenvectors; which we can use to find e A Primary decomposition theorym Let T: V V be a linear transformation of an n dimensional vector space over the complex numbers. If λ, λ k are the distinct eigenvalues (not necessarily = n ) ; if E j is the eigenspace for λ j, then dim(e j ) = the algebraic multiplicity of λ j and the generalized eigenvectors span V. In terms of vector spaces, we have: V = E E 2 E k Combining this with the previous proposition: T will decompose the vector space into the direct sum of the invariant subspaces. The next thing to do is explicitly determine the ssemisimple nilpotent decomposition. Before: we had the diagonal matrix Λ which was - Block diagonal - In normal form for complex eigenvalues
- Diagonal for real eigenvalues Suppose we have a real matrix A with n eigenvalues λ, λ n with repeated multiplicity, suppose we break them up into complex ones and real ones, so we have λ, λ,, λ k, λ k complex and λ 2k+, λ n real. Let λ j = a j + ib + j then: Λ ( a b b a ) ( a k b k b k a k ) diag(λ 2k+, λ n ) Let v, v, v 2k+,, v n be a basis of R n of generalsed eigenvectors, then for complex ones: write v j = u j + iw j Let P (u w u 2 w 2 u k w k v 2k+ v n ) Returning to our construction, the matrix P is invertiable (because of the primary decomposition theorem); so we can write the new matrix as And a matrix We have the following: Theorems: Let A, N S Λ and P be the matricies defined: then S = PΛP N = A S. A = S + N 2. S is semisimple 3. S, N and A commute 4. N is nilpotent Proofs of theorems:. From definition of N 2. From construction of S 3. [S, A] = 0 suppose that v is a generalized eigenvector of A associated to eigenvalue λ. By construction: v is a genuine eigenvector of S. Note that eigenspace is invariant, [S, A]v = SAv ASv = λ(av Av) = 0 so N and S commute. A and N commute from the definition of N and that A commutes with S 4. suppose maximum algebraic multiplicity of an eigenvalue of A is m. Then for any v E j a generalized eigenspace corresponding to the eigenvalue λ j. We have N m v = (A S) m v = (A S) m v(a λ j )v = (A λ j ) m v = 0 General algorithm for finding e A. find eigenvalues
2. find generalized eigenspace for each eigenvalue.; eigenvalues Λ 3. Turn these vectors into P 4. S = PΛP 5. N = A S 6. e At = e St e Nt = Pe ΛP e (A S)t Example: Eigenvalues: ( 4 3 ) λ det (( 4 3 λ )) = 0 λ =, Λ = diag(,) = I Generalized eigenspace: ker((a λi) n j) = ker((a I) 2 ) = ker ( 2 4 2 )2 = ker (( 0 0 )) = R2 0 0 So our vectors must span R n ; choose ( 0 ) ; (0 ) P = ( 0 0 ) = I S = I II = I N = A S = A I = ( 2 4 2 ) So then: e A = e ST e Nt
Last time: Algorithm for computing e A and e At Eg: let Evals ar λ = 2,2,, 0 0 2 2 2 3 4 A = ( ) 2 3 2 2 4 Generalized eigenspace for λ = 2 E 2 = ker((a 2I) 2 ) E 2 is spanned by: E = ker((a I) 2 ) spanned by 2 2 0 ( ) ; ( ) 0 0 0 0 ( ) ; ( ) 0 0 0 Λ = diag(2,2,,); S = PΛP 2 2 0 0 P = ( ) 0 0 0 0 0 0 P = S = PΛP N = AS Note: if not using a computer: Check that N is nilpotent (N a = 0 for some a at most the size of the matrix, ) check NS SN = 0 (that they commute) At = (S + N)t = St + Nt e At = e St e Nt = Pe Λt P e Nt = Pe Λt P ( + Nt)
e At : R GL(n, R) = (Space of invertible n n matricies with real entries) Derivative of e At Proposition: what is d dt (eat ) = Ae At = e At A (Ae At = e At A from that e At exists and = power series representation) Proof : d dt (eat ) = d dt (I + ta + 2 t2 A 2 + t3 3! A3 + ) = A + ta 2 + t2 2 A3 + t3 3! A4 + = A ( + t + t2 2 + t3 3! + ) (we will show it converges uniformly, and so we can differentiate it term by term) = Ae At Proof 2: d dt (eat ) = lim ( ea(t+h) e At ) h 0 h = lim ( eat (e Ah I) ) h 0 h (Ah + A2 h 2 = e At 2 + ) lim ( ) h 0 h = e At lim h 0 (A + A2 2 h + ) = e At A This proposition implies that e At is a solution to the matrix differential equation: Φ (t) = AΦ(t) (note: A is constant coefficient) A cannot be functions of t, as the functions may not commute with the integral Comparing this to the x differential equation: y = ay y = Ce at