
ORDINARY DIFFERENTIAL EQUATIONS

J. DOUGLAS WRIGHT

1. Stuff to know from before and conventions

Vectors in C^n will usually be denoted using column vectors: x = (x_1, x_2, ..., x_n)^T.

R_+ := [0, ∞) and R_− := (−∞, 0].

For vectors x and y, we have the dot product x · y := Σ_i x_i ȳ_i. That is, we put the complex conjugate on the second one. Of course |x| := √(x · x).

I = identity matrix.

The elements of an n × m matrix A will be denoted by A_ij, where i is the row and j is the column.

I use dots instead of primes for derivatives. That is: ẋ := dx/dt.

The mean value theorem: if x(t) is differentiable for t ∈ (a, b) and continuous on [a, b], then there exists at least one c ∈ (a, b) such that

ẋ(c) = (x(b) − x(a)) / (b − a).

The Cauchy–Schwarz inequality: |x · y| ≤ |x| |y|.

How to compute the eigenvalues/eigenvectors of a matrix, and when and how it can be diagonalized. You should also know how to put a matrix in Jordan canonical form if it isn't diagonalizable.

Euler's formula: e^{iθ} = cos(θ) + i sin(θ).

The Fundamental Theorem of Calculus. If f is "nice":

∫_0^τ d/dt [f(t)] dt = f(τ) − f(0)   and   d/dt ∫_0^t f(s) ds = f(t).

If z ∈ C, then R(z) := real part of z and I(z) := imaginary part of z. Also |z| = √(R(z)² + I(z)²) = √(z z̄).

diag(λ_1, ..., λ_n) is the n × n matrix with A_ii = λ_i for i = 1, ..., n and A_ij = 0 if i ≠ j. That is, it's got the λs on the diagonal and is zero everywhere else.

We say a function f : R → C^n goes to zero exponentially quickly as t → ∞ if there exist constants t_0 ∈ R, α > 0 and C > 0 so that

|f(t)| ≤ C e^{−αt}

for all t ≥ t_0.

If Σ ⊆ C^n and x ∈ C^n, then

dist(x, Σ) := inf_{y ∈ Σ} |x − y|.

A Banach space is a complete normed vector space. Recall that a vector space X is normed if you have a map ‖·‖ : X → R_+ such that (a) ‖x‖ = 0 if and only if x = 0, (b) ‖x + y‖ ≤ ‖x‖ + ‖y‖ for any x, y ∈ X, and (c) ‖αx‖ = |α| ‖x‖ for all α ∈ C. Recall that a normed vector space is complete if all Cauchy sequences converge, and that a sequence {u_n}_{n∈N} is Cauchy if, for all ε > 0, there exists N ∈ N such that n ≥ m ≥ N implies ‖u_n − u_m‖ ≤ ε.

Suppose that x_0 ∈ R^n (or C^n) and F(x) = (F_1(x), F_2(x), ..., F_m(x))^T, where each F_j(x) ∈ R (or C). That is to say, F : R^n → R^m (or C^n → C^m). The derivative of F at x_0 is the m × n matrix

DF(x_0) := [ ∂F_i/∂x_j (x_0) ]_{m × n}.

Note that if v ∈ R^n then

d/dt [F(x_0 + tv)] |_{t=0} = DF(x_0) v.

Sometimes DF is called the linearization of F.

2. Introduction

The following statement is uncontroversial:

(KS) The way things change depends on the way they are right now.

Of course, the sentence is vague. By "things" I could mean just about anything. I could be talking about:

(1) The state of the stock market.
(2) The position and velocity of each member of a large flock of birds.
(3) The voltage level in a neuron in a squid's nervous system.
(4) The total number of zombies and vampires in an apocalyptic nightmare scenario.
(5) The stress and strain in the parts and pieces that make up a bridge.

But since I am a mathematician and not an economist, zoologist, neurologist, vampire hunter or civil engineer, what I really mean is:

(6) An element¹ (call it x) of R^n.

To fix notation and ideas, x will be a column vector: x = (x_1, x_2, ..., x_n)^T.

Getting back to our key statement (KS), we could rewrite it mathematically as

(1) dx/dt = F(x, t).

The left hand side here is the usual rate of change of x. Though I do not really care about the interpretation of x, I will usually interpret t as being time. That is to say, the left hand side is "the way things change." On the right hand side, F is a map from R^n × R to R^n. Since (x, t) is "the way things are right now," F tells us how to convert the current state of things into the rate of change.

The equation (1) is usually referred to as an ordinary differential equation (or ODE for short). Our task is to find a differentiable function x(t) such that

d/dt x(t) = F(x(t), t)

for each t ∈ I, where I ⊆ R is an interval. Such a function is called a solution of (1) on the interval I.

Another way to view (1) is that at each time t, F(x, t) defines a vector field on R^n. And our goal is to find a curve, given by x(t), in R^n which at time t has its tangent vector lining up exactly with F.

¹If I really wanted to be a mathematician, I would say that I mean elements (call them u) of a Banach space. In fact, there will be times when I do this!

If we are very fortunate, once we know F we may be able to find explicit formulae for x in terms of nice functions like exponentials, polynomials and trigonometric functions. Such methods for analyzing ODE are a large part of the content of a typical undergraduate differential equations course, and as such will not be our primary focus here. Nevertheless, our next section will focus on a large and important class of equations where we can do exactly this.

Exercise 1. Find all solutions of dx/dt = ax where a ∈ C is a constant and x ∈ C. Explain the differences in behavior of solutions when R(a) > 0, R(a) < 0 and R(a) = 0.

Exercise 2. In (1), the equation only involves dx/dt and not any higher order derivatives. At first glance this seems like a severe restriction of generality. After all, the most famous differential equation of all is Newton's law, F = ma, or rather:

d²x/dt² = (1/m) F.

(Here F is force, x is position and a is acceleration.) How can one include in (1) equations which involve second, third or higher order derivatives? Note that your answer should only involve first derivatives at the end of the day.

3. Linear Autonomous Equations

In this section we discuss the case when F(x, t) does not depend on t (i.e., is autonomous) and is linear. That is to say, when F(x, t) := Ax where A is an n × n matrix.

Exercise 3. Suppose that x_1(t) and x_2(t) are solutions of

(2) dx/dt = Ax

where A is an n × n matrix. Show for all α ∈ C that x_1(t) + αx_2(t) is also a solution of the equation. This is called the principle of superposition and it is a big deal.

3.1. Existence, uniqueness and more! In undergraduate ODE you spend a lot of time discussing how one can solve linear equations like (2), but in this class we can do it very quickly indeed. For a square matrix M, define

exp(M) := e^M := Σ_{k=0}^∞ (1/k!) M^k.
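The definition above can be turned into a quick numerical sketch. The following pure-Python snippet (the helper names mat_mul and mat_exp are mine, not from the notes) sums the truncated series Σ_{k≤N} M^k / k! and checks it on a nilpotent matrix, where the series terminates and exp(M) = I + M exactly:

```python
import math

def mat_mul(A, B):
    """Multiply two square matrices stored as lists of lists."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_exp(M, terms=30):
    """Approximate exp(M) by the truncated series sum_{k<=terms} M^k / k!."""
    n = len(M)
    result = [[float(i == j) for j in range(n)] for i in range(n)]  # M^0 = I
    power = [row[:] for row in result]
    for k in range(1, terms + 1):
        power = mat_mul(power, M)        # now holds M^k
        for i in range(n):
            for j in range(n):
                result[i][j] += power[i][j] / math.factorial(k)
    return result

# Nilpotent example: M^2 = 0, so the series stops and exp(M) = I + M.
M = [[0.0, 1.0], [0.0, 0.0]]
E = mat_exp(M)  # should equal [[1, 1], [0, 1]]
```

For a general matrix the series does not terminate, but Exercise 4 (convergence of the series) is what guarantees that the truncation error goes to zero as the number of terms grows.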

Exercise 4. Show that the series given by the right hand side of the definition of exp(M) converges for any matrix M. To do this, you will need to be precise about what you mean by convergence, of course.

Exercise 5. Compute exp(tM) when M is:

(1) the n × n zero matrix,
(2) diag(λ_1, ..., λ_n),
(3) [ 0  1
     −1  0 ],
(4) [ λ  1
      0  λ ].

Exercise 6. Fix M to be an n × n matrix with complex entries. Show that

d/dt exp(tM) = M exp(tM) = exp(tM) M.

If you are a mathematics graduate student and you do this by passing the derivative through the infinite sum and differentiating term by term, you should justify this step. Additionally, if B is an n × m (constant) matrix, use the first part to show that

d/dt [exp(tM) B] = M exp(tM) B = exp(tM) M B.

Exercise 7. In the following, M, M_1, M_2 are n × n matrices with complex entries:

(1) Show that M_1 M_2 = M_2 M_1 implies exp(M_1 + M_2) = exp(M_1) exp(M_2).
(2) Give an example of matrices M_1 and M_2 with the property that exp(M_1 + M_2) ≠ exp(M_1) exp(M_2).
(3) Show that [exp(M)]^{−1} = exp(−M).

Note that Exercise 6 will help with some of the parts of this problem.

Exercise 8. Compute exp(tM) when M is the m × m matrix

[ λ  1
     λ  1
        ⋱  ⋱
           λ  1
              λ ].

(This matrix has λs on the diagonal and ones on the super-diagonal.) Your answer will depend on the size of the matrix.

Exercise 9. Suppose that you have computed exp(A) and exp(B) where A is n × n and B is m × m. If M is the block diagonal matrix

M := [ A  0
       0  B ],

what is exp(M)?
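For the 2 × 2 case of Exercise 8 the answer is exp(tB) = e^{λt} [[1, t], [0, 1]], and we can check that against the truncated series numerically. This is a self-contained sketch (helper names are mine, not from the notes):

```python
import math

def mat_mul(A, B):
    """Multiply two square matrices stored as lists of lists."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_exp(M, terms=40):
    """Truncated series exp(M) ~ sum_{k<=terms} M^k / k!."""
    n = len(M)
    result = [[float(i == j) for j in range(n)] for i in range(n)]
    power = [row[:] for row in result]
    for k in range(1, terms + 1):
        power = mat_mul(power, M)
        for i in range(n):
            for j in range(n):
                result[i][j] += power[i][j] / math.factorial(k)
    return result

# 2x2 Jordan block B = [[lam, 1], [0, lam]].  Exercise 8 (with m = 2) gives
# exp(tB) = e^{lam t} * [[1, t], [0, 1]]; compare that closed form with the series.
lam, t = -0.5, 2.0
tB = [[lam * t, t], [0.0, lam * t]]
series = mat_exp(tB)
closed = [[math.exp(lam * t), t * math.exp(lam * t)],
          [0.0, math.exp(lam * t)]]
err = max(abs(series[i][j] - closed[i][j]) for i in range(2) for j in range(2))
```

The factor t multiplying e^{λt} in the off-diagonal entry is exactly the polynomial growth that will complicate the estimates for non-diagonalizable matrices later on.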

Notice that Exercise 6 has the following corollary:

Corollary 1. If x_0 ∈ R^n and A is an n × n matrix, then x(t) := exp(tA)x_0 solves

dx/dt = Ax

for all t ∈ R.

Proof. Using Exercise 6 and the fact that matrix multiplication is associative, we have:

d/dt x(t) = d/dt [exp(tA)x_0] = (A exp(tA)) x_0 = A (exp(tA)x_0) = A x(t). □

Observe also that, using the solution of Exercise 5 part (1),

x(0) = exp(0)x_0 = I x_0 = x_0.

That is to say, the initial value of x(t) is given by x_0. And so we have the following lemma:

Lemma 2. The function x(t) := exp(tA)x_0 is a solution of the initial value problem (a.k.a. Cauchy problem)

(3) dx/dt = Ax and x(0) = x_0

for t ∈ R.

With this lemma in hand, we have at our fingertips a result which covers about 85% of undergraduate differential equations. Of course, it may or may not be easy to compute exp(tA) for a given matrix A, and moreover just having a formula for something does not mean that you fully understand that thing². But, still, it is definitely nice.

But there are a few mathematical questions about the solutions of (3) given in Lemma 2. The primary one is this: are there any solutions of (3) different from exp(tA)x_0? That is to say, is the solution of (3) unique? The answer is yes, it is unique, though the argument does not rely on the matrix exponential. We have the following lemma:

Lemma 3. Suppose that x(t) is a solution of

(4) dx/dt = Ax and x(0) = 0

for t ∈ R. Then for all t ∈ R we have x(t) = 0.

To prove this we'll need the following result, which is an exercise:

²This statement is a version of the analyst's creed, which states that estimates are better than equalities. Or rather: "≤" ≫ "=".
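Corollary 1 can be sanity-checked numerically. For a diagonal A = diag(a, b) the exponential is explicit, exp(tA)x_0 = (e^{at} x_1, e^{bt} x_2), so a central finite difference of x(t) should match A x(t). This is an illustrative sketch, not part of the notes; the values of a, b, x_0 and t are arbitrary choices:

```python
import math

# For diagonal A = diag(a, b), exp(tA)x0 = (e^{at} x0_1, e^{bt} x0_2).
# Check the Corollary d/dt[exp(tA)x0] = A exp(tA)x0 by a central difference.
a, b = -1.0, 2.0
x0 = (3.0, -1.0)

def x(t):
    """The candidate solution x(t) = exp(tA) x0 for diagonal A."""
    return (math.exp(a * t) * x0[0], math.exp(b * t) * x0[1])

t, h = 0.7, 1e-6
xt = x(t)
deriv = tuple((x(t + h)[i] - x(t - h)[i]) / (2 * h) for i in range(2))
rhs = (a * xt[0], b * xt[1])  # this is A x(t)
err = max(abs(deriv[i] - rhs[i]) for i in range(2))
# err is tiny: the central difference agrees with A x(t) up to O(h^2)
```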

Exercise 10. Fix A, an n × n matrix. Show that there exists C_A > 0 such that, for all y ∈ R^n, we have

|Ay| ≤ C_A |y|.

We will also need the following important lemma, whose proof is postponed.

Lemma 4 (Gronwall's inequality). Suppose that, for all t ∈ [0, T], we know

(5) η̇ ≤ C_1 η + f(t).

Then for all t ∈ [0, T] we have

η(t) ≤ η(0) e^{C_1 t} + ∫_0^t e^{C_1(t−s)} f(s) ds.

Proof. (Lemma 3) Let

µ(t) := (1/2)|x(t)|² = (1/2) x(t) · x(t).

Then we differentiate µ with respect to time to get:

µ̇ = x · ẋ.

Using (4) then gives:

µ̇ = x · (Ax).

Using the Cauchy–Schwarz inequality on the right hand side gives:

µ̇ ≤ |x| |Ax|.

Using Exercise 10 on the right hand side gives

µ̇ ≤ C_A |x|².

Using the definition of µ (note |x|² = 2µ) then gives

(6) µ̇(t) ≤ 2C_A µ(t)

for all t ≥ 0. Note that if we apply Gronwall's inequality (Lemma 4) to (6), with f(t) = 0, η = µ and C_1 = 2C_A, then we conclude

µ(t) ≤ µ(0) e^{2C_A t}

for all t ≥ 0. But of course, µ(0) = (1/2)|x(0)|² = 0. And since µ is manifestly non-negative we have µ(t) = 0 for all t. And if µ(t) is zero, so must be x(t), and we are done. □

Proof. (Lemma 4, Gronwall's inequality) The proof is modeled on the integrating factor method from undergraduate differential equations³. If we multiply (5) by e^{−C_1 t} and rearrange terms we have:

η̇(t) e^{−C_1 t} − C_1 η(t) e^{−C_1 t} ≤ e^{−C_1 t} f(t).

³The integrating factor method is hugely awesome, so if you do not know it, learn it now!

We observe that the left hand side is a perfect time derivative:

η̇(t) e^{−C_1 t} − C_1 η(t) e^{−C_1 t} = d/dt ( η(t) e^{−C_1 t} ).

Thus we have

d/dt ( η(t) e^{−C_1 t} ) ≤ e^{−C_1 t} f(t).

Integrating both sides from 0 to τ gives, after using the Fundamental Theorem of Calculus:

η(τ) e^{−C_1 τ} − η(0) ≤ ∫_0^τ e^{−C_1 t} f(t) dt.

Isolating η(τ) in this inequality yields the conclusion. □

Exercise 11. The above proof of Lemma 3 only works when A and x are real-valued. Revise it so that it works for complex-valued A and x. The difference arises in the step where dµ/dt is computed.

Exercise 12. Use Lemma 3 to prove the following:

Lemma 5. Suppose that x_1(t) and x_2(t) are solutions of

(7) dx/dt = Ax and x(0) = x_0

for t ∈ R. Then for all t ∈ R we have x_1(t) = x_2(t).

There is another useful by-product of the proof of Lemma 3, the following lemma:

Lemma 6. Suppose that x_1(t) and x_2(t) are solutions of

dx/dt = Ax

for t ∈ R. Then for all t ≥ 0 we have:

|x_1(t) − x_2(t)| ≤ e^{C_A t} |x_1(0) − x_2(0)|.

Here C_A is the constant from Exercise 10.

Exercise 13. Prove the above lemma.

The above lemma shows that solutions of (2) depend continuously on their initial conditions, a property often referred to as CDOIC (continuous dependence on initial conditions). That is to say, for any fixed time t, we have x_1(t) → x_2(t) as x_1(0) → x_2(0).

Definition 1. A differential equation (1) which (a) has solutions, (b) whose solutions are uniquely determined by their initial conditions and (c) whose solutions depend continuously on those initial conditions as above is said to be well-posed.
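The CDOIC bound of Lemma 6 can be seen concretely in the scalar case, where everything is explicit. A small sketch (the numbers are arbitrary choices of mine, not from the notes):

```python
import math

# Scalar illustration of Lemma 6: for xdot = a*x the solutions are
# x_i(t) = e^{a t} x_i(0), so |x1(t) - x2(t)| = |e^{a t}| |x1(0) - x2(0)|,
# which is bounded by e^{C_A t} |x1(0) - x2(0)| with C_A = |a|.
a = -0.5
x1_0, x2_0 = 1.0, 1.001   # two nearby initial conditions
t = 5.0
gap = abs(math.exp(a * t) * (x1_0 - x2_0))          # actual separation at time t
bound = math.exp(abs(a) * t) * abs(x1_0 - x2_0)     # Lemma 6 bound
# gap <= bound, and gap -> 0 as the initial conditions approach each other
```

Note the bound e^{C_A t} grows in t even when the solutions actually converge (a < 0); Lemma 6 only promises control on any fixed time interval, which is exactly what CDOIC asks for.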

And so, Lemmata 2, 5 and 6 together prove:

Theorem 7. If A is any n × n complex-valued matrix, the initial value problem

dx/dt = Ax and x(0) = x_0

is well-posed.

Determining the well-posedness of differential equations is perhaps the most fundamental question in the subject. I hope that the importance of existence of solutions to the differential equation is self-evident. Uniqueness is important because if we wish to use the differential equation to model some phenomenon or make predictions, we would like to know that our model only makes one prediction. For instance, if you are trying to launch a rocket and land it on the moon, you'd like to know that the process that determines your initial conditions can ONLY result in a successful mission, and not possibly put you in the Sun. Likewise, CDOIC is critical for modeling in the sense that it allows us to handle small errors in our knowledge of the initial conditions. Once again, when sending someone to the moon, you are not going to know the initial conditions exactly. Some errors will be made. You would like to know that these small changes in the initial conditions result in small changes in the solution at later times.

We'll spend a fair bit of time talking about well-posedness for general equations, but for now let's talk a bit more about linear equations.

3.2. Qualitative behavior of linear systems. In the last section, we saw that the only solution of the linear autonomous initial value problem

(8) dx/dt = Ax and x(0) = x_0

is

(9) x(t) = exp(tA)x_0.

This is a pretty awesome explicit formula, but on its own it does not really give us much of an idea of how solutions of the equation behave. It turns out, though, that this formula in conjunction with something called the Jordan canonical form of a matrix gives us a pretty good qualitative understanding of the behavior of the system.
The situation is pretty easy, really, though two things can complicate the analysis:

• Eigenvalues with zero real part.
• Non-diagonalizable matrices.

So we'll start with the easiest case, which is when neither of these things occurs.

3.3. Diagonalizable hyperbolic systems. If a matrix A has no eigenvalues with real part equal to zero (that is to say, no eigenvalues on the imaginary axis), then we say that A is a hyperbolic matrix. If an n × n matrix A has n linearly independent eigenvectors, then that matrix is diagonalizable. So let us suppose that A is a hyperbolic and diagonalizable n × n matrix.

Let the eigenvalues of A with negative real part be denoted λ^s_1, ..., λ^s_{d_s}, ordered so that

R(λ^s_{i+1}) ≤ R(λ^s_i) < 0.

That is to say, λ^s_1 is the eigenvalue with the least negative real part and λ^s_{d_s} is the most negative.⁴ The superscript "s" stands for "stable." Denote the eigenvectors corresponding to these eigenvalues as v^s_1, ..., v^s_{d_s} and let

E^s := span{ v^s_1, ..., v^s_{d_s} }.

Note that E^s is a d_s-dimensional subspace of C^n.

Likewise, the eigenvalues of A with positive real part will be denoted λ^u_1, ..., λ^u_{d_u}, ordered so that

R(λ^u_{i+1}) ≥ R(λ^u_i) > 0.

Their eigenvectors are v^u_1, ..., v^u_{d_u}, and let

E^u := span{ v^u_1, ..., v^u_{d_u} }.

Note that E^u is a d_u-dimensional subspace of C^n. The "u" stands for "unstable."

By the assumption that A is diagonalizable, we have d_s + d_u = n and any vector x ∈ C^n can be written uniquely as x = x^s + x^u where x^s ∈ E^s and x^u ∈ E^u. (That is to say, C^n = E^s ⊕ E^u.)

So here is the big theorem:

Theorem 8. Let A be a diagonalizable hyperbolic matrix and define E^s and E^u as above. The subspaces E^u and E^s are invariant⁵ for solutions of

(10) dx/dt = Ax and x(0) = x_0.

That is, if x_0 ∈ E^s then x(t) ∈ E^s for all t ∈ R. Likewise for E^u. Moreover, there exist positive constants C_s and C_u with the following properties:

(1) If x(0) ∈ E^s then

|exp(tA)x_0| ≤ C_s e^{R(λ^s_1) t} |x_0|

for all t ≥ 0. That is, solutions of (10) in E^s go to zero as t → ∞ exponentially quickly. We say, therefore, that E^s is the stable subspace of (10).

⁴Note that it is possible that some of these eigenvalues are equal to one another, which could happen if we have repeated eigenvalues. If that is the case, then the repeated eigenvalue is just listed here as many times as its multiplicity. The assumption that the matrix is diagonalizable implies that the geometric multiplicity of each eigenvalue is equal to its algebraic multiplicity.

⁵Note that in general, a set Σ ⊆ C^n is invariant for ẋ = F(x, t) if x(0) ∈ Σ implies x(t) ∈ Σ for all t.

(2) If x(0) ∈ E^u then

|exp(tA)x_0| ≤ C_u e^{R(λ^u_1) t} |x_0|

for all t ≤ 0. That is, solutions of (10) in E^u go to zero as t → −∞ exponentially quickly. We say, therefore, that E^u is the unstable subspace⁶ of (10).

Before I prove this theorem, take a look at this figure, which basically captures all the ideas in it:

Figure 1. Schematic for Theorem 8. Note how the solutions not on the invariant subspaces look like hyperbolae. That's why the matrix A is called hyperbolic.

Proof. (Theorem 8, Special Case: A = diag(λ^s_1, ..., λ^s_{d_s}, λ^u_1, ..., λ^u_{d_u})) Since the matrix is diagonal, we can take as the eigenvectors the usual unit vectors. That is, the eigenvector for λ^s_k is e_k, where e_k is zero in all slots except the kth, where it is one. Likewise, the eigenvector for λ^u_k is e_{d_s + k}. We have

E^s = span{ e_1, ..., e_{d_s} } = { v ∈ C^n : v has zeros in the last d_u slots }

⁶Typically "unstable" in a dynamical systems or differential equations context means "goes to zero as time goes to negative infinity." That is to say, unstable is stable in backwards time.

and

E^u = span{ e_{d_s + 1}, ..., e_n } = { v ∈ C^n : v has zeros in the first d_s slots }.

If we use Exercise 5 part (2), we see that:

exp(tA) = diag( e^{tλ^s_1}, ..., e^{tλ^s_{d_s}}, e^{tλ^u_1}, ..., e^{tλ^u_{d_u}} ).

And so if x is in E^s, which is to say it is zero in the last d_u slots, clearly so is exp(tA)x. Thus we have the invariance of E^s. The same sort of reasoning shows that E^u is invariant.

To prove the estimate in (1) of the theorem, we proceed as follows. Take x ∈ E^s. Then

x = (α_1, ..., α_{d_s}, 0, ..., 0)^T

for some complex numbers α_1, ..., α_{d_s}. I know it is obvious, but notice

|x|² = Σ_{j=1}^{d_s} |α_j|².

Applying exp(tA) to x gives:

exp(tA)x = (α_1 e^{tλ^s_1}, ..., α_{d_s} e^{tλ^s_{d_s}}, 0, ..., 0)^T.

Then

|exp(tA)x|² = Σ_{j=1}^{d_s} |α_j|² |e^{tλ^s_j}|².

For any complex number z, we have |e^z| = e^{R(z)} via Euler's formula, and so:

|exp(tA)x|² = Σ_{j=1}^{d_s} |α_j|² e^{2t R(λ^s_j)}.

Now R(λ^s_j) ≤ R(λ^s_1) < 0 for all j by assumption, and since e^x is an increasing function, if we take t ≥ 0 we have e^{2t R(λ^s_j)} ≤ e^{2t R(λ^s_1)} for j = 1, ..., d_s. Thus

|exp(tA)x|² ≤ Σ_{j=1}^{d_s} |α_j|² e^{2t R(λ^s_1)} = e^{2t R(λ^s_1)} Σ_{j=1}^{d_s} |α_j|² = e^{2t R(λ^s_1)} |x|².
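The chain of inequalities above can be checked numerically in a small diagonal example (the particular eigenvalues and vector below are arbitrary choices of mine):

```python
import math

# Diagonal stable case: A = diag(lam1, lam2) with R(lam2) <= R(lam1) < 0.
# The proof gives |exp(tA)x| <= e^{t R(lam1)} |x| for x in E^s and t >= 0.
lam1, lam2 = -1.0, -3.0
x = (2.0, 5.0)

def norm(v):
    return math.sqrt(sum(c * c for c in v))

t = 2.0
etAx = (math.exp(lam1 * t) * x[0], math.exp(lam2 * t) * x[1])
lhs = norm(etAx)                      # |exp(tA)x|
rhs = math.exp(lam1 * t) * norm(x)    # e^{t R(lam1)} |x|
# lhs <= rhs, consistent with C_s = 1 in the diagonal case
```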

Taking square roots of both sides gives the estimate in (1). Note that, in this case, C_s = 1. The proof of the estimate in (2) is basically the same, so why don't you prove it yourself? □

Exercise 14. For the special case of diagonal matrices A, show the estimate in part (2) of Theorem 8.

Now, the proof of Theorem 8 when A is not diagonal but merely diagonalizable is pretty straightforward, but I am going to leave it as an exercise.

Exercise 15. Prove Theorem 8 in the general case. Recall that if the matrix A is diagonalizable then there exists a matrix S such that S^{−1}AS = D where

D = diag( λ^s_1, ..., λ^s_{d_s}, λ^u_1, ..., λ^u_{d_u} ).

You'll want to remind yourself of how to get S from the eigenvalues/eigenvectors of A. You'll also want to change variables by letting x = Sy. Once you do that, you can reduce to the case already solved.

Exercise 16. Suppose that A is hyperbolic and diagonalizable. Show that the unstable subspace E^u for the equation ẋ = Ax is "stable" in the sense that if x(t) is any solution of this equation, then

lim_{t→∞} dist(x(t), E^u) = 0.

Also, formulate the corresponding result for E^s. These results justify the hyperbolic trajectories in Figure 1.

Exercise 17. Let A be a diagonalizable hyperbolic matrix and define E^s as above. Consider

(11) dx/dt = Ax and x(0) = x_0.

Let

W^s := { x_0 : the solution of (11) goes to zero exponentially quickly as t → ∞ }.

Show that E^s = W^s.

3.4. Diagonalizable but nonhyperbolic systems. It may well be the case that some of the eigenvalues of A have zero real part, and so the matrix is not hyperbolic. This is not such a huge change in the linear problem. Let us keep the notation used in the previous section, where λ^s_k stands for an eigenvalue with negative real part, λ^u_k is one with a positive real part and so on. Additionally, let λ^c_1, ..., λ^c_{d_c} be all the eigenvalues with zero real part and let v^c_1, ..., v^c_{d_c}

be their associated eigenvectors. Let

E^c := span{ v^c_1, ..., v^c_{d_c} }.

Here, we'll continue to assume that the matrix is diagonalizable, which means that d_u + d_s + d_c = n, where n is the size of x. The "c" stands for "center."

Here is how Theorem 8 changes in this context:

Theorem 9. Let A be a diagonalizable matrix and define E^s, E^u and E^c as above. The subspaces E^u, E^s and E^c are invariant for solutions of

(12) dx/dt = Ax and x(0) = x_0.

That is, if x_0 ∈ E^s then x(t) ∈ E^s for all t ∈ R. Likewise for E^u and E^c. Moreover, there exist positive constants C_s, C_u, C_c with the following properties:

(1) If x(0) ∈ E^s then |exp(tA)x_0| ≤ C_s e^{R(λ^s_1) t} |x_0| for all t ≥ 0. That is, solutions of (12) in E^s go to zero as t → ∞ exponentially quickly. We say, therefore, that E^s is the stable subspace of (12).

(2) If x(0) ∈ E^u then |exp(tA)x_0| ≤ C_u e^{R(λ^u_1) t} |x_0| for all t ≤ 0. That is, solutions of (12) in E^u go to zero as t → −∞ exponentially quickly. We say, therefore, that E^u is the unstable subspace⁷ of (12).

(3) If x(0) ∈ E^c then |exp(tA)x_0| ≤ C_c |x_0| for all t ∈ R. That is, the solution remains bounded for all t. We say E^c is the center subspace of (12).

I will not prove this; you will:

Exercise 18. Prove Theorem 9. Basically, the strategy of the proof is exactly the same as for Theorem 8. Prove it for diagonal matrices, where the proof is hopefully pretty clear. Then reduce to the case already solved by diagonalizing A. Also, sketch a figure akin to Figure 1 for Theorem 9. Is there a result like the one you proved in Exercise 16?

Remark 1. Given that e^{iωt} = cos(ωt) + i sin(ωt), your solution to the preceding exercise should hopefully convince you that solutions on E^c will be oscillatory. That is to say, they go around on a circle, roughly speaking. Circles have centers, of course, which is why we call E^c the center subspace.

⁷Typically "unstable" in a dynamical systems or differential equations context means "goes to zero as time goes to negative infinity." That is to say, unstable is stable in backwards time.
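The oscillation described in Remark 1 is easy to see in the simplest real example with purely imaginary eigenvalues (the frequency and initial condition below are arbitrary choices of mine):

```python
import math

# Center-subspace behavior: for the real system xdot = A x with
# A = [[0, -w], [w, 0]] (eigenvalues +/- i w, zero real part),
# exp(tA) is the rotation matrix [[cos wt, -sin wt], [sin wt, cos wt]],
# so |x(t)| = |x(0)| for all t: solutions circle the origin and stay bounded,
# which matches part (3) of Theorem 9 with C_c = 1.
w = 2.0
x0 = (3.0, 4.0)   # |x0| = 5

def x(t):
    c, s = math.cos(w * t), math.sin(w * t)
    return (c * x0[0] - s * x0[1], s * x0[0] + c * x0[1])

norms = [math.hypot(*x(t)) for t in (-7.0, 0.0, 1.3, 100.0)]
# every entry of norms equals |x0| = 5 (up to rounding)
```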

Exercise 19. Suppose that we have the initial value problem

(13) ẋ = Ax and x(0) = x_0

where x_0 ∈ R^n and the entries in A are real-valued. Since everything in the problem is real-valued, it stands to reason that the solution x(t) will be real-valued for all t. Nevertheless, the eigenvalues/eigenvectors of A may well be complex-valued, which means that the invariant subspaces E^u, E^s and E^c will be subspaces of C^n, not R^n. That seems a bit weird! Let's resolve this as follows. We can still consider (13) as an initial value problem on C^n. Moreover, R^n is a subspace of C^n. Show that, so long as A is real-valued, R^n is an invariant subspace for solutions of (13). Then prove that the intersection of two invariant subspaces is itself an invariant subspace. With this, you can reformulate Theorem 9 in the case where A and x_0 are purely real in such a way that the invariant subspaces are themselves subspaces of R^n, not C^n. Also, the following fact is useful when actually calculating solutions of (13): if ż = Az where A is real-valued but z ∈ C^n, then x = R(z) and y = I(z) solve (13), with appropriate initial conditions. Show this.

3.5. Nondiagonalizable systems. As we saw in the last section, a lack of hyperbolicity in A has dynamic implications for solutions of

(14) dx/dt = Ax and x(0) = x_0.

That is, the zero real-part eigenvalues result in the existence of a center subspace. It turns out that if A is not diagonalizable, there really aren't any major dynamic implications, just some minor technical changes (which are mostly just annoying) to Theorem 8 when the system is hyperbolic, and some slightly less minor technical changes (which are still annoying) to Theorem 9 when the system is not hyperbolic. Here is what you get in the hyperbolic case:

Theorem 10. Let A be an n × n hyperbolic matrix, and consider the initial value problem

(15) dx/dt = Ax and x(0) = x_0.

Then there are subspaces E^s and E^u with C^n = E^s ⊕ E^u which are each invariant for solutions of (15). That is, if x_0 ∈ E^s then x(t) ∈ E^s for all t ∈ R. Likewise for E^u. These subspaces have the following additional properties. Let

Λ_s := max{ R(λ) : λ is an eigenvalue of A with negative real part }

and

Λ_u := min{ R(λ) : λ is an eigenvalue of A with positive real part }.

(That is, Λ_u is the smallest of the positive real parts and Λ_s is the largest of the negative real parts.)

(1) For all α_s ∈ (Λ_s, 0) there exists C_s = C_s(α_s) such that if x(0) ∈ E^s then

|exp(tA)x_0| ≤ C_s e^{α_s t} |x_0|

for all t ≥ 0. That is, solutions of (15) in E^s go to zero as t → ∞ exponentially quickly. We say, therefore, that E^s is the stable subspace of (15).

(2) For all α_u ∈ (0, Λ_u) there exists C_u = C_u(α_u) such that if x(0) ∈ E^u then

|exp(tA)x_0| ≤ C_u e^{α_u t} |x_0|

for all t ≤ 0. That is, solutions of (15) in E^u go to zero as t → −∞ exponentially quickly. We say, therefore, that E^u is the unstable subspace of (15).

Like I said, it's annoying. Notice what is different here: first, the theorem does not exactly say what E^s and E^u are. It is always possible to compute them using something called the Jordan normal form of the matrix A, but it is easier to state the theorem without specifying what exactly E^s and E^u are. The second difference is that the exponential convergence to zero on the subspaces is at an ever so slightly slower rate than in the diagonalizable case. If α_{s/u} were allowed to be equal to Λ_{s/u}, it would be the same rate. Now, the estimate here is not sharp; the convergence might be at the same rate in some problems, but in general this is what you get. Anyway, this is the appropriate sketch:

Figure 2. Schematic for Theorem 10. Note that this is exactly the same as Figure 1.

To prove the estimates (1) and (2) in Theorem 10 you need the following lemma:

Lemma 11. Let

(16) B(λ, m) := [ λ  1
                     λ  1
                        ⋱  ⋱
                           λ  1
                              λ ]

where λ ∈ C. The size of this matrix is m × m. Let α = R(λ).

(1) If α > 0, then for all α̃ < α there exists a positive constant C = C(α̃) such that, for all v ∈ C^m and t ≤ 0:

|exp(tB)v| ≤ C e^{t α̃} |v|.

(2) If α < 0, then for all α̃ > α there exists a positive constant C = C(α̃) such that, for all v ∈ C^m and t ≥ 0:

|exp(tB)v| ≤ C e^{t α̃} |v|.

(3) Lastly, if α = 0, then for all α̃ > 0 there exists a positive constant C = C(α̃) such that, for all v ∈ C^m and all t ∈ R:

|exp(tB)v| ≤ C e^{|t| α̃} |v|.

Exercise 20. Prove Lemma 11. You basically use Exercise 8 and the fact that all exponentials grow faster than all polynomials. Are there any cases in which you can take C independent of α̃?

With this, we are in position to prove Theorem 10 in the special case where A is block diagonal of the form:

(17) A = diag( B^s_1, ..., B^s_l, B^u_1, ..., B^u_k )

where

B^s_j := B(λ^s_j, m^s_j) and B^u_j := B(λ^u_j, m^u_j)

as in (16).

Exercise 21. Show that Theorem 10 is true for block diagonal matrices of the form (17) as described above. In this case, you can explicitly determine E^s and E^u. What are they?

Here is an important theorem:

Theorem 12 (Jordan Normal Form). If A is any n × n matrix, then there is an invertible n × n matrix S such that

S^{−1}AS = diag( B_1(λ_1, m_1), ..., B_l(λ_l, m_l) ).

I will not prove this, but instead refer the interested reader to the internet. In any case, once you have this, you can prove Theorem 10. Do it!

Exercise 22. Prove Theorem 10 in general.

So all that really remains is to state the most general possible theorem about autonomous linear equations. I will state it and leave its proof up to you!

Theorem 13. Let A be any n × n matrix. Then there are three subspaces E^s, E^u and E^c of C^n and positive constants Λ_s and Λ_u with the following properties.

(1) E^s ∩ E^u = E^s ∩ E^c = E^c ∩ E^u = {0}.

(2) E^s ⊕ E^u ⊕ E^c = C^n.

(3) All three subspaces are invariant for solutions of

dx/dt = Ax and x(0) = x_0.

(4) For all α_s ∈ (0, Λ_s) there exists C_s = C_s(α_s) such that if x(0) ∈ E^s then

|exp(tA)x_0| ≤ C_s e^{−α_s t} |x_0|

for all t ≥ 0. That is, solutions of the initial value problem in E^s go to zero as t → ∞ exponentially quickly. We say, therefore, that E^s is the stable subspace.

(5) For all α_u ∈ (0, Λ_u) there exists C_u = C_u(α_u) such that if x(0) ∈ E^u then

|exp(tA)x_0| ≤ C_u e^{α_u t} |x_0|

for all t ≤ 0. That is, solutions of the initial value problem in E^u go to zero as t → −∞ exponentially quickly. We say, therefore, that E^u is the unstable subspace.

(6) For all α_c > 0 there exists C_c = C_c(α_c) such that if x_0 ∈ E^c then

|exp(tA)x_0| ≤ C_c e^{α_c |t|} |x_0|

for all t ∈ R. That is, (non-zero) solutions grow more slowly than any exponential in forward and backward time. We say, therefore, that E^c is the center subspace.

Exercise 23. Prove it!

4. Well-posedness for General Systems

In this section we'll discuss what to do with systems like

ẋ = F(x, t) and x(0) = x_0

for more general right hand sides than just Ax. Before we can ask what solutions of an equation do, we must make sure the equation is well-posed, and so that's what we'll do in this section.

Suppose that F : X → Y where X and Y are normed linear spaces (like C^n or whatever). We say that F is Lipschitz on U ⊆ X if there exists C = C(U) > 0 such that, for all u, v ∈ U, we have:

‖F(u) − F(v)‖ ≤ C ‖u − v‖.

(Note that the norm on the left of this is in Y and the norm on the right is in X.)

Note that if F is Lipschitz then F is continuous. It is also true that if F is continuously differentiable on U and U is closed and bounded, then F is Lipschitz on U. However, it is not the case that Lipschitz implies differentiability.

Exercise 24.

(1) Show that F(x) = Ax, where A is an n × n matrix, is Lipschitz on all of C^n.
(2) Show that f(x) = |x| is Lipschitz on all of R.
(3) Show that f(x) = x² is Lipschitz on U where U is a closed and bounded subset of R.
(4) Show that f(x) = x² is not Lipschitz on R.
(5) Show that f(x) = √x is not Lipschitz on [0, 1].
(6) Show that f(x) = √x is Lipschitz on [1, ∞).
(7) Suppose that F : C^n → C^m is continuously differentiable on the set U ⊆ C^n and U is closed and bounded. Show that F is Lipschitz on U.

It turns out that what we want out of F(x, t) is that it be Lipschitz in the x slot.
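The contrast between parts (3) and (4) of Exercise 24 can be seen numerically: on a bounded set the difference quotient of x² is controlled by 2R, but on all of R it grows without bound. A small sketch (the sets and sample points are my own choices):

```python
# f(x) = x^2 on U = [-R, R]: |f(u) - f(v)| = |u + v| |u - v| <= 2R |u - v|,
# so C = 2R works on U.  But the factor |u + v| is unbounded on all of R,
# so no single Lipschitz constant works globally.
R = 3.0
pairs = [(-3.0, 2.0), (0.5, -1.5), (3.0, 2.999)]   # points inside [-R, R]
ok_on_U = all(abs(u * u - v * v) <= 2 * R * abs(u - v) for u, v in pairs)

# Far from the origin the difference quotient keeps growing:
ratio_at_large = abs(100.0**2 - 99.0**2) / abs(100.0 - 99.0)  # = 199 > 2R
```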
Let me state the big result:

Theorem 14 (Picard's Theorem). Suppose that F : C^n → C^n is Lipschitz on U where U is an open subset of C^n. Then, for any x_0 ∈ U, there exist a positive constant T > 0 and a unique differentiable function

x : [−T, T] → U

with the properties that x(0) = x_0 and, for all t ∈ [−T, T]:

ẋ(t) = F(x(t)).

The proof of this theorem relies almost entirely on the following totally awesome theorem:

Theorem 15 (Banach's Fixed Point Theorem⁸). Suppose that p : X → X where X is a Banach space. Suppose moreover that there exists a closed set B ⊆ X such that

(18) p(B) ⊆ B

and an α ∈ (0, 1) such that

(19) ‖p(u) − p(v)‖ ≤ α ‖u − v‖ for all u, v ∈ B.

(That is to say, p is a contraction map on B.) Then there exists a unique u_* ∈ B such that u_* = p(u_*). (That is to say, u_* is a fixed point of p in B.)

Remark 2. Note that this theorem converts the pretty simple looking estimate (19) into the existence of a solution of the equation u = p(u). This is a really important idea. A lot of what we do in differential equations is that we show the existence of various things. Here, we're looking for solutions of a differential equation, but in later sections we'll be looking at other things, such as invariant sets, periodic orbits, stable/unstable manifolds. If you can (a) set up your problem so that the thing you are looking for is a fixed point of a map on a Banach space and (b) show your map is a contraction, then you are mostly done. Of course, you have to identify p, X and B before showing (18) and (19), and none of these might be easy! But at least it's a general framework!!!

Proof. (The B.F.P.T.) Part 1: Existence of the fixed point: Pick u_0 ∈ B and, for n ∈ N, set

(20) u_n = p(u_{n−1}).

Now, either u_1 = u_0 or it doesn't. If it does, then u_0 is a fixed point for p in B and so we could jump ahead to the uniqueness part of the proof. If it doesn't, let

K := ‖u_1 − u_0‖.

Now, (18) tells us:

{u_n}_{n=0}^∞ ⊆ B.

Observe that we have, if n ≥ 1,

u_{n+1} − u_n = p(u_n) − p(u_{n−1}).

Since the whole sequence is in B, we use (19) to see that:

‖p(u_n) − p(u_{n−1})‖ ≤ α ‖u_n − u_{n−1}‖.

That is, for all n ≥ 1 we have:

‖u_{n+1} − u_n‖ ≤ α ‖u_n − u_{n−1}‖.

⁸or "The Contraction Mapping Theorem"

If we repeat this calculation enough times we get, for all n ∈ N,

(21) ‖u_{n+1} − u_n‖ ≤ α^n ‖u_1 − u_0‖ = Kα^n.

The next step is to show that {u_n}_{n≥1} is a Cauchy sequence. Fix ε > 0. Since α ∈ (0, 1), there exists N ∈ N such that

Kα^N/(1 − α) ≤ ε.

Take n ≥ m ≥ N. With this choice a computation shows that we have

(22) ‖u_n − u_m‖ ≤ ε,

which is to say, yes, the sequence is Cauchy. Here is the calculation. You know it's good because it starts with a telescoping sum:

‖u_n − u_m‖ = ‖Σ_{j=m+1}^{n} [u_j − u_{j−1}]‖   (by telescoping)
≤ Σ_{j=m+1}^{n} ‖u_j − u_{j−1}‖   (by the triangle inequality)
≤ Σ_{j=m+1}^{n} Kα^{j−1}   (by (21))
= Kα^m Σ_{j=0}^{n−m−1} α^j   (by factoring and reindexing)
≤ Kα^m Σ_{j=0}^{∞} α^j   (by adding positive stuff)
= Kα^m/(1 − α)   (by the geometric series and α ∈ (0, 1))
≤ Kα^N/(1 − α)   (since m ≥ N and α ∈ (0, 1))
≤ ε   (by choice of N).

And so we have (22). Since the sequence is Cauchy and X is a Banach space (which is to say, it has the property that all Cauchy sequences converge), there exists u_* ∈ X such that

lim_{n→∞} u_n = u_*.

Since B is closed, u_* ∈ B. The estimate (19) implies that p is continuous on B. Thus if we take lim_{n→∞} on both sides of (20) we get:

u_* = lim_{n→∞} u_n = lim_{n→∞} p(u_{n−1}) = p(lim_{n→∞} u_{n−1}) = p(u_*),

or (more succinctly)

u_* = p(u_*).
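The iteration (20) in Part 1 is completely constructive, so you can watch it converge numerically. Here is a minimal Python sketch (my own example, not from the notes): take X = R with p(u) = cos(u), which maps B = [0, 1] into itself and is a contraction there, since |p′(u)| = |sin(u)| ≤ sin(1) < 1.

```python
import math

def fixed_point(p, u0, tol=1e-12, max_iter=1000):
    """Iterate u_n = p(u_{n-1}) until successive terms agree to within tol."""
    u = u0
    for _ in range(max_iter):
        u_next = p(u)
        if abs(u_next - u) <= tol:
            return u_next
        u = u_next
    raise RuntimeError("iteration did not converge")

# p(u) = cos(u) maps [0, 1] into [cos 1, 1] ⊆ [0, 1], and
# |p'(u)| = |sin(u)| <= sin(1) < 1 there, so the B.F.P.T. applies.
u_star = fixed_point(math.cos, 0.5)
print(u_star)  # the unique fixed point of cos in [0, 1], roughly 0.739085
```

The iterates converge geometrically with ratio at most sin(1) ≈ 0.84, exactly as the estimate (21) predicts.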

So we have found our fixed point.

Part 2: Uniqueness of the fixed point. Suppose that v_* ≠ u_* but p(v_*) = v_* ∈ B. That is to say, v_* is another fixed point in B. Then we have

u_* − v_* = p(u_*) − p(v_*).

Using (19) on the right hand side we get

‖u_* − v_*‖ = ‖p(u_*) − p(v_*)‖ ≤ α‖u_* − v_*‖.

Since 0 < α < 1 this implies:

‖u_* − v_*‖ < ‖u_* − v_*‖.

Seriously, you expect me to believe that??? □

Exercise 25. (The B.F.P.T. with parameters) Suppose that we have a map p : X × U → X where X is a Banach space and U is a closed bounded subset of C^n. Suppose moreover that there exist a closed set B ⊆ X and α ∈ (0, 1) such that, for any x ∈ U, we have:

(23) p(B, x) ⊆ B

and

(24) ‖p(u, x) − p(v, x)‖ ≤ α‖u − v‖ for all u, v ∈ B.

The Banach Fixed Point Theorem implies that, for each choice of x ∈ U, we have a point u_*(x) ∈ B such that p(u_*(x), x) = u_*(x). Suppose that, for each u ∈ B, p(u, x) is continuous with respect to x. That is, for any x ∈ U we have:

lim_{x′→x} ‖p(u, x′) − p(u, x)‖ = 0.

Show that this implies

lim_{x′→x} ‖u_*(x′) − u_*(x)‖ = 0

for any x ∈ U. That is to say, if the contraction map p depends continuously on a parameter x, then the fixed point does as well.

Now that we have a proof of Banach's Fixed Point Theorem, let's prove Picard's theorem.

Proof. (Picard's theorem) We are trying to show that there are solutions of

(25) ẋ = F(x) and x(0) = x_0.

Integrating both sides of the differential equation with respect to time from zero to t gives:

∫₀ᵗ ẋ(s) ds = ∫₀ᵗ F(x(s)) ds.

Using the FTOC on the left gives:

x(t) − x(0) = ∫₀ᵗ F(x(s)) ds

or rather, using the initial condition and isolating x(t):

(26) x(t) = x_0 + ∫₀ᵗ F(x(s)) ds.

So we see that solutions of our differential equation (25) are also solutions of the integral equation (26). Define, for continuous functions u : [−T, T] → C^n:

P(u)(t) := x_0 + ∫₀ᵗ F(u(s)) ds.

Notice that we can reformulate (26), now, as x = P(x). That is to say, solutions of (26) (and thus of (25)) are fixed points of P. Thank goodness we have a way of showing that fixed points exist!

What do we have to do to use the Banach Fixed Point Theorem? We have to figure out X, B and then show the analogs of (18) and (19). Notice that if u is a continuous function on [−T, T], so is F(u), since we have assumed F is Lipschitz. Moreover, the integral of a continuous function is continuous, so P(u) is a continuous function too. Thus, if we define

X := C([−T, T]; C^n) := {continuous functions from [−T, T] into C^n},

we have

P : X → X.

Remarkably, X is a Banach space. What is the norm on X? It is this:

‖u‖_X := max_{t∈[−T,T]} ‖u(t)‖.

That is, the norm of the function is just the maximum value its length attains on [−T, T]. (I will use, here, ‖·‖_X to mean the norm on X and ‖·‖ to be the usual Euclidean norm on C^n.) To prove that X is complete is quite challenging and actually very interesting, but I will not do it here. Instead, I refer the interested reader to the internet.

Now that we have X and P, we need to find B ⊆ X so that we have P(B) ⊆ B. Recall that F is Lipschitz on U ⊆ C^n where U is open and bounded. Take C_L > 0 to be the Lipschitz constant. Moreover, since F is Lipschitz on U, it is continuous on U. Since U is bounded, there exists C_F such that

(27) ‖F(x)‖ ≤ C_F for all x ∈ U.

Since x_0 ∈ U, there exists R_1 > 0 so that

b(x_0, R_1) := {x ∈ C^n : ‖x − x_0‖ ≤ R_1} ⊆ U.

Let us consider

B := {u ∈ X : u(t) ∈ b(x_0, R_1) for all t ∈ [−T, T]}.

It takes some work, but it is not so hard to show that B is a closed subset of X. I will omit this detail, though it might be worth working out if you are interested.

Suppose now that u ∈ B. What about P(u)? Fix |t| ≤ T:

P(u)(t) − x_0 = ∫₀ᵗ F(u(s)) ds.

We use the triangle inequality on the right:

‖P(u)(t) − x_0‖ ≤ |∫₀ᵗ ‖F(u(s))‖ ds|.

Since u ∈ B, we have u(s) ∈ U for all s between 0 and t and so we can use (27):

‖P(u)(t) − x_0‖ ≤ C_F |t| ≤ C_F T.

Now, if you were paying attention, you'll have noticed that I never specified what T was. So here's the deal: I'm not going to tell you what it is yet, but I am going to tell you that I have taken it so that

(28) 0 < T ≤ R_1/C_F.

Because in this case I have, for all |t| ≤ T:

‖P(u)(t) − x_0‖ ≤ R_1,

which means that P(u) ∈ B. Which in turn means P(B) ⊆ B.

How about (19), the contraction estimate? Take u, v ∈ B. We have:

P(u)(t) − P(v)(t) = ∫₀ᵗ [F(u(s)) − F(v(s))] ds.

Taking the Euclidean norm of both sides gives, for all |t| ≤ T:

‖P(u)(t) − P(v)(t)‖ = ‖∫₀ᵗ [F(u(s)) − F(v(s))] ds‖.

The triangle inequality gives:

‖P(u)(t) − P(v)(t)‖ ≤ |∫₀ᵗ ‖F(u(s)) − F(v(s))‖ ds|.

Then we use the fact that F is Lipschitz:

‖P(u)(t) − P(v)(t)‖ ≤ C_L |∫₀ᵗ ‖u(s) − v(s)‖ ds|.

The integral on the right hand side will only get bigger if we increase the size of the interval:

‖P(u)(t) − P(v)(t)‖ ≤ C_L ∫_{−T}^{T} ‖u(s) − v(s)‖ ds.

And of course

‖u(s) − v(s)‖ ≤ max_{s′∈[−T,T]} ‖u(s′) − v(s′)‖ = ‖u − v‖_X.

So we have

‖P(u)(t) − P(v)(t)‖ ≤ 2C_L T ‖u − v‖_X.

Since this holds for all |t| ≤ T, we have:

‖P(u) − P(v)‖_X ≤ 2C_L T ‖u − v‖_X.

Now I'm going to tell you what T is:

T := min{1/(4C_L), R_1/C_F}.

This means

‖P(u) − P(v)‖_X ≤ (1/2)‖u − v‖_X for all u, v in B.

Thus we find that P, X and B satisfy the hypotheses of Banach's Fixed Point Theorem and we have the existence of a unique x ∈ B such that

x(t) = x_0 + ∫₀ᵗ F(x(s)) ds.

Before we say QED there are a few small details to point out. Our fixed point x(t) lies in X, which consists of continuous functions on [−T, T], not necessarily those which are differentiable. If x is to solve ẋ = F(x) it had better be differentiable. As it turns out, x is differentiable. Here is why: x is continuous, and so is F. Thus F(x(t)) is continuous. The FTOC then tells us that

d/dt ∫₀ᵗ F(x(s)) ds = F(x(t)),

and so we have that

x_0 + ∫₀ᵗ F(x(s)) ds

is continuously differentiable on [−T, T]. Which means x is too, since they are the same. This is sometimes called bootstrapping, for reasons which are not apparent to me. But still, it's what people say.

Finally, I want to mention uniqueness. Of course x is the unique fixed point of P in B, but is it unique as a solution of (25)? The answer is yes: any solution of (25) is necessarily a fixed point of P, and so if the fixed point is unique, so is the solution. □

Exercise 26. Note that Theorem 14 only applies to autonomous differential equations. Reprove the theorem but for equations of the form ẋ = F(x, t) and x(0) = x_0. Here is the critical part: what conditions do you need on the t-dependence of F? Here's what I claim: let y = (x, t). If F(y) is Lipschitz with respect to y, then the non-autonomous case is a corollary of Theorem 14 by doing something clever. On the other hand, I claim you can get by with a weaker condition: something like "F(x, t) is Lipschitz in x for all t and continuous in t for all x." Maybe you can even get by with F not even being continuous in the t slot! MAYBE NOT EVEN BOUNDED!!!!
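The fixed-point map P from the proof can also be run as a numerical scheme; this is the classical Picard iteration. Here is a sketch with my own choices (not from the notes): F(x) = x and x_0 = 1 on [0, 1/2], whose exact solution is e^t, with the integral in P approximated by the cumulative trapezoid rule on a grid.

```python
import math

def picard_iterate(F, x0, T=0.5, n_grid=200, n_iter=30):
    """Apply P(u)(t) = x0 + integral_0^t F(u(s)) ds repeatedly on a grid."""
    h = T / n_grid
    ts = [k * h for k in range(n_grid + 1)]
    u = [x0] * (n_grid + 1)                 # initial guess: the constant x0
    for _ in range(n_iter):
        f = [F(v) for v in u]
        new = [x0]
        for k in range(n_grid):             # cumulative trapezoid integral
            new.append(new[-1] + 0.5 * h * (f[k] + f[k + 1]))
        u = new
    return ts, u

# F(x) = x, x0 = 1: the iterates should converge to exp(t) on [0, 1/2].
ts, u = picard_iterate(lambda x: x, 1.0)
err = max(abs(uk - math.exp(t)) for t, uk in zip(ts, u))
print(err)  # small: limited by the trapezoid rule, not by the iteration
```

After 30 iterations the contraction has fully converged; the remaining error is just the quadrature error of the trapezoid rule.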

Note that the conclusion of Picard's theorem is the existence of a solution of ẋ = F(x) and x(0) = x_0 for what is, possibly, only a very short time. It turns out we can continue this solution either indefinitely or we can completely characterize why it cannot be so continued. We have an important consequence of Picard's Theorem:

Theorem 16. (The Maximal Existence Theorem) Suppose that F : C^n → C^n is Lipschitz on U, where U is an open and bounded subset of C^n. Then, for any x_0 ∈ U, there exist T_α ∈ [−∞, 0), T_ω ∈ (0, ∞] and a unique differentiable function x : (T_α, T_ω) → U with the properties that x(0) = x_0 and, for all t ∈ (T_α, T_ω):

ẋ(t) = F(x(t)).

Moreover we have either
(1) T_ω = ∞, or
(2) lim_{t→T_ω^−} x(t) =: x_ω ∉ U.
We also have:
(3) T_α = −∞, or
(4) lim_{t→T_α^+} x(t) =: x_α ∉ U.

Proof. Fix x_0 and let τ > 0. We say that τ is positively accessible for ẋ = F(x) and x(0) = x_0 if there exists x_τ : [0, τ] → U which solves this initial value problem. Let

T^+ := {τ > 0 : τ is positively accessible}.

The Picard Theorem tells us that T^+ is not empty. In particular, it contains the T > 0 whose existence is asserted in that theorem. Thus the set has a supremum. Let T_ω := sup T^+.

If τ_2 > τ_1 > 0 are both in T^+ then the uniqueness of solutions guaranteed by the Picard Theorem implies that for all t ∈ [0, τ_1] we have x_{τ_2}(t) = x_{τ_1}(t). For this reason, for all t ∈ T^+ we set x_*(t) := x_t(t); this is the unique solution of the equation for t ∈ [0, T_ω).

If T_ω = +∞, then we have conclusion (1) in Theorem 16. If T_ω < ∞, then let x_ω := lim_{t→T_ω^−} x_*(t). Then either x_ω ∉ U, x_ω ∈ U, or the limit does not exist. If x_ω ∉ U, then we have (2) in Theorem 16. Thus we must rule out the cases where x_ω ∈ U and where x_ω does not exist.

Suppose that x_ω ∈ U. In that case, by the Picard Theorem, there exist T_1 > 0 and a unique function u : [−T_1, T_1] → U which solves u̇ = F(u) and u(0) = x_ω. Then define x : [0, T_ω + T_1] → U by x(t) = x_*(t) for t < T_ω and x(t) = u(t − T_ω) when t ≥ T_ω. Then a direct calculation shows that x solves the initial value problem on [0, T_ω + T_1].

Which means T_ω + T_1 ∈ T^+. That is bogus, given what T_ω is, and so x_ω ∉ U or the limit does not exist.

Now suppose that lim_{t→T_ω^−} x_*(t) does not exist. The astute reader will have observed that U is assumed in this Theorem to be a bounded set. Since F is Lipschitz on U we have, for any x ∈ U:

‖F(x) − F(x_0)‖ ≤ C_L‖x − x_0‖.

Since U is bounded, this in turn implies ‖x − x_0‖ can only be so big. This then implies the existence of C_F > 0 such that

(29) ‖F(x)‖ ≤ C_F for any x ∈ U.

Now, take {t_n}_{n∈N} to be any increasing sequence which converges to T_ω. Let x_n := x_*(t_n). I claim that {x_n}_{n∈N} is a Cauchy sequence. As in the proof of Picard's Theorem, we have for all t ∈ [0, T_ω):

x_*(t) = x_0 + ∫₀ᵗ F(x_*(s)) ds.

Which means:

x_n − x_m = ∫_{t_m}^{t_n} F(x_*(s)) ds.

Then, using (29),

‖x_n − x_m‖ ≤ |∫_{t_m}^{t_n} ‖F(x_*(s))‖ ds| ≤ C_F |t_n − t_m|.

Since {t_n}_{n∈N} is convergent, it is Cauchy. The estimate above immediately implies that {x_n}_{n∈N} is likewise Cauchy. Which means it converges. Since the times {t_n}_{n∈N} were selected arbitrarily, this means x_*(t) converges along every sequence of times which approaches T_ω. But that implies lim_{t→T_ω^−} x_*(t) exists! A contradiction!

Of course, the existence of T_α is pretty similar and I will leave it out. □

Exercise 27. I claim that the conclusions of Theorem 16 remain completely unchanged if we omit the condition that U is bounded. Prove it. Note that what I'm saying is this: show that if T_ω < ∞ then x(t) does not go to ∞ as t → T_ω^−.

Exercise 28. Find the explicit solution of dx/dt = x² and x(0) = x_0 > 0. In particular, show that there exists 0 < T < ∞ such that x(t) → ∞ as t → T^−. This is what you are thinking: "Wait, wait, wait, in the last problem you just said that solutions cannot do what this solution is doing!" So what's up with that?

I want to conclude this section with the following theorem about the continuous dependence of solutions on their initial conditions.
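For Exercise 28, the explicit solution is x(t) = x_0/(1 − x_0 t), which blows up at T = 1/x_0. A quick numerical sketch (forward Euler, my own construction) shows the solution leaving every bounded set as t approaches 1/x_0:

```python
# Integrate x' = x^2 with forward Euler and report when the numerical
# solution exceeds a large cap -- a rough stand-in for the blow-up time.
def euler_x_squared(x0, t_end, h=1e-5, cap=1e6):
    x, t = x0, 0.0
    while t < t_end:
        x += h * x * x
        t += h
        if x > cap:
            return t, x          # numerical blow-up detected
    return t, x

x0 = 2.0                          # exact blow-up time is T = 1/x0 = 0.5
t_blow, x_val = euler_x_squared(x0, t_end=0.6)
print(t_blow)                     # close to 0.5
print(x_val > 1e6)                # the solution has left every bounded set
```

This is exactly the escape mechanism allowed by Theorem 16 once U is unbounded: the solution does not hit the boundary of U, it runs off to infinity in finite time.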

Theorem 17. (CDOIC) Suppose that F : C^n → C^n is Lipschitz on U, where U is an open subset of C^n. Fix x_0 ∈ U and take x(t), T_α and T_ω as in Theorem 16. Let T_1 ∈ (T_α, 0] and T_2 ∈ [0, T_ω). Let ε > 0. Then there exists δ > 0 so that

‖u_0 − x_0‖ ≤ δ

implies

max_{t∈[T_1,T_2]} ‖u(t) − x(t)‖ ≤ ε,

where u(t) is the solution of

ẋ = F(x) and x(0) = u_0.

That is to say, solutions of this initial value problem depend continuously on their initial conditions.

This theorem is not so hard to prove if T_1 and T_2 are small. But note that in the Theorem there's nothing to guarantee that. For instance, note that you have to show that u(t) exists on [T_1, T_2]! That might be hard. In fact, it is kind of hard and I don't really feel like doing it. So here you go:

Exercise 29. Prove Theorem 17. One way relies on Exercise 25. Another way, which I think is easier, is to let Δ := x − u and then get an equation for Δ. The Lipschitz property of F and Gronwall's inequality can be used to get the result.

Anyway, with your proof of Theorem 17 and Theorem 16 we have:

Theorem 18. Let U be an open subset of C^n and let F : C^n → C^n be Lipschitz on U. Then

dx/dt = F(x) and x(0) = x_0 ∈ U

is well-posed.

Exercise 30. Let f(x) = √|x|. Show that ẋ = f(x) with x(0) = x_0 is not well-posed. It is not enough to observe that the right hand side is not Lipschitz at x = 0. You actually have to show that one of the three things in the definition of well-posedness is violated.

Exercise 31. Let

f(x) := { 1 when x ≥ 0; −1 when x < 0. }

Show that ẋ = f(x) with x(0) = x_0 is not well-posed for x_0 < 1. It is not enough to observe that the right hand side is not continuous at x = 0. You actually have to show that one of the three things in the definition of well-posedness is violated.

5. Equilibria and dynamics near equilibria

All that work in the previous section, and if I asked you "what do solutions of this crazy equation actually do?", all you could say is "if the right hand side is Lipschitz, then it has unique solutions which depend continuously on their initial data." To which I'd say "thanks for nothing, bozo." So let us discuss what is perhaps the simplest class of solutions: those which are constant.

If x(t) ≡ x_eq is a constant solution of

(30) dx/dt = F(x) and x(0) = x_0 ∈ U,

then we call x_eq an equilibrium solution for the equation. Some people call such things fixed points.

Exercise 32. Show that x_eq is an equilibrium of (30) if and only if F(x_eq) = 0.

Exercise 33. Make up a function F from R³ to R³ and find all of its equilibria. No fair picking F(x) = Ax + x_* where A is a matrix and x_* is a constant. Seriously, do something cool.

5.1. Stability of equilibria. So, now you've managed to locate some equilibria. These solutions are pretty boring. They literally go nowhere and do nothing. This brings us to the more interesting question of stability of equilibria. That is: if x_0 is close, but not exactly equal, to x_eq, what does the solution x(t) do? Does it stay near x_eq? If so, does it converge to x_eq? If it doesn't stay near x_eq, where does it go? If it converges to x_eq, how fast does it do so?

There are several different types of stability. The first is called orbital stability⁹. An equilibrium is orbitally stable if for all ε > 0, there exists δ > 0 such that ‖x_0 − x_eq‖ ≤ δ implies ‖x(t) − x_eq‖ ≤ ε for all t ≥ 0. That is to say, you can guarantee solutions stay as close as you like to x_eq by taking the initial data sufficiently close to it. The thing about orbital stability is that solutions need not converge to the equilibrium.

Exercise 34. Show that the origin is orbitally stable for solutions of:

ẋ = y, ẏ = −4x.

Note that this equation is linear. What are the eigenvalues?

The next type of stability is called asymptotic stability. An equilibrium is asymptotically stable if it is orbitally stable and there exists ε > 0 such that ‖x_0 − x_eq‖ ≤ ε implies lim_{t→∞} x(t) = x_eq. That is, if you start close enough to the equilibrium then you are attracted to the equilibrium as t goes to infinity.

Exercise 35. Show that the origin is asymptotically stable for solutions of:

ẋ = −x³.

Exercise 36. Suppose I redefine asymptotic stability to mean: an equilibrium is asymptotically stable if there exists ε > 0 such that ‖x_0 − x_eq‖ ≤ ε implies lim_{t→∞} x(t) = x_eq. That is, I omit the requirement of orbital stability. Is this new version of asymptotic stability the same as the one above? Why or why not?

The final type of stability is called exponential stability. An equilibrium is exponentially stable if there exist α > 0, ε > 0 and C ≥ 1 such that ‖x_0 − x_eq‖ ≤ ε implies

‖x(t) − x_eq‖ ≤ C‖x(0) − x_eq‖ e^{−αt} for t ≥ 0.

⁹ or Lyapunov stability.
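Here is a numerical sketch of asymptotic (but not exponential) stability, reading Exercise 35's vector field as ẋ = −x³ so that the origin attracts. The exact solution x(t) = x_0/√(1 + 2x_0²t) decays only like t^{−1/2}, and a forward Euler simulation (my own construction) shows the same slow approach to 0:

```python
# Forward Euler for x' = -x^3: solutions creep toward 0, but only at an
# algebraic rate, so after t = 50 they are small yet not exponentially small.
def simulate(x0, t_end=50.0, h=1e-3):
    x = x0
    for _ in range(int(t_end / h)):
        x += h * (-x ** 3)
    return x

for x0 in (1.0, -0.5, 0.25):
    print(x0, simulate(x0))  # each final value is near 0
```

For x_0 = 1 the exact value at t = 50 is 1/√101 ≈ 0.0995: visibly attracted to the origin, but far larger than any e^{−αt} bound with fixed α would allow, which is the gap between asymptotic and exponential stability.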

Figure 3. Schematic for orbital stability of an equilibrium. Note that the solution does not necessarily converge to the equilibrium.

Exercise 37. What property must a matrix A have so that the origin is exponentially stable for ẋ = Ax?

Exercise 38. (Duhamel's Formula) Suppose that j : R → C^n is piecewise continuous and A is an n × n matrix. Show that

x(t) = exp(tA)x_0 + ∫₀ᵗ exp((t − s)A) j(s) ds

solves ẋ = Ax + j(t) and x(0) = x_0.

Figure 4. Schematic for asymptotic or exponential stability of an equilibrium.

This formula is connected to the integrating factor. What this formula says is this: if you can solve ẋ = Ax, then you can solve the same equation with the junk term j added to the right hand side.

Exercise 39. Suppose that N : C^n → C^n is Lipschitz and additionally we know there exists C_N > 0 such that ‖N(x)‖ ≤ C_N‖x‖² for all x. Suppose that A is an n × n matrix which only has eigenvalues with strictly negative real part. (That is to say, E^s = C^n.) Show that the origin is exponentially stable for ẋ = Ax + N(x). Note that Duhamel's formula implies that all solutions of this equation satisfy:

x(t) = exp(tA)x_0 + ∫₀ᵗ exp((t − s)A) N(x(s)) ds.
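Duhamel's formula is easy to sanity-check numerically in the scalar case. A sketch with my own choices (not from the notes): n = 1, A = a = −1 and j(t) = sin(t), so the ODE ẋ = −x + sin t with x(0) = x_0 has exact solution x(t) = (x_0 + 1/2)e^{−t} + (sin t − cos t)/2; we evaluate the formula's integral with the trapezoid rule and compare.

```python
import math

def duhamel(a, j, x0, t, n=2000):
    """Trapezoid-rule evaluation of x(t) = e^{ta} x0 + int_0^t e^{(t-s)a} j(s) ds."""
    h = t / n
    integral = 0.5 * (math.exp(t * a) * j(0.0) + j(t))  # endpoint terms
    for k in range(1, n):
        s = k * h
        integral += math.exp((t - s) * a) * j(s)
    integral *= h
    return math.exp(t * a) * x0 + integral

x0, t = 2.0, 1.5
approx = duhamel(-1.0, math.sin, x0, t)
exact = (x0 + 0.5) * math.exp(-t) + (math.sin(t) - math.cos(t)) / 2
print(abs(approx - exact))  # small: only the quadrature error remains
```

The same check works componentwise for genuine n × n systems once exp(tA) is available, which is why the formula is so useful for the perturbation argument in Exercise 39.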


More information

j=1 x j p, if 1 p <, x i ξ : x i < ξ} 0 as p.

j=1 x j p, if 1 p <, x i ξ : x i < ξ} 0 as p. LINEAR ALGEBRA Fall 203 The final exam Almost all of the problems solved Exercise Let (V, ) be a normed vector space. Prove x y x y for all x, y V. Everybody knows how to do this! Exercise 2 If V is a

More information

Math 350 Fall 2011 Notes about inner product spaces. In this notes we state and prove some important properties of inner product spaces.

Math 350 Fall 2011 Notes about inner product spaces. In this notes we state and prove some important properties of inner product spaces. Math 350 Fall 2011 Notes about inner product spaces In this notes we state and prove some important properties of inner product spaces. First, recall the dot product on R n : if x, y R n, say x = (x 1,...,

More information

On linear and non-linear equations. (Sect. 1.6).

On linear and non-linear equations. (Sect. 1.6). On linear and non-linear equations. (Sect. 1.6). Review: Linear differential equations. Non-linear differential equations. The Picard-Lindelöf Theorem. Properties of solutions to non-linear ODE. The Proof

More information

NOTES ON LINEAR ODES

NOTES ON LINEAR ODES NOTES ON LINEAR ODES JONATHAN LUK We can now use all the discussions we had on linear algebra to study linear ODEs Most of this material appears in the textbook in 21, 22, 23, 26 As always, this is a preliminary

More information

Algebra Exam. Solutions and Grading Guide

Algebra Exam. Solutions and Grading Guide Algebra Exam Solutions and Grading Guide You should use this grading guide to carefully grade your own exam, trying to be as objective as possible about what score the TAs would give your responses. Full

More information

Math 52: Course Summary

Math 52: Course Summary Math 52: Course Summary Rich Schwartz September 2, 2009 General Information: Math 52 is a first course in linear algebra. It is a transition between the lower level calculus courses and the upper level

More information

Roberto s Notes on Linear Algebra Chapter 9: Orthogonality Section 2. Orthogonal matrices

Roberto s Notes on Linear Algebra Chapter 9: Orthogonality Section 2. Orthogonal matrices Roberto s Notes on Linear Algebra Chapter 9: Orthogonality Section 2 Orthogonal matrices What you need to know already: What orthogonal and orthonormal bases for subspaces are. What you can learn here:

More information

ORDERS OF ELEMENTS IN A GROUP

ORDERS OF ELEMENTS IN A GROUP ORDERS OF ELEMENTS IN A GROUP KEITH CONRAD 1. Introduction Let G be a group and g G. We say g has finite order if g n = e for some positive integer n. For example, 1 and i have finite order in C, since

More information

Math 312 Lecture Notes Linear Two-dimensional Systems of Differential Equations

Math 312 Lecture Notes Linear Two-dimensional Systems of Differential Equations Math 2 Lecture Notes Linear Two-dimensional Systems of Differential Equations Warren Weckesser Department of Mathematics Colgate University February 2005 In these notes, we consider the linear system of

More information

Jordan normal form notes (version date: 11/21/07)

Jordan normal form notes (version date: 11/21/07) Jordan normal form notes (version date: /2/7) If A has an eigenbasis {u,, u n }, ie a basis made up of eigenvectors, so that Au j = λ j u j, then A is diagonal with respect to that basis To see this, let

More information

Introduction to Algebra: The First Week

Introduction to Algebra: The First Week Introduction to Algebra: The First Week Background: According to the thermostat on the wall, the temperature in the classroom right now is 72 degrees Fahrenheit. I want to write to my friend in Europe,

More information

Lecture 6: Lies, Inner Product Spaces, and Symmetric Matrices

Lecture 6: Lies, Inner Product Spaces, and Symmetric Matrices Math 108B Professor: Padraic Bartlett Lecture 6: Lies, Inner Product Spaces, and Symmetric Matrices Week 6 UCSB 2014 1 Lies Fun fact: I have deceived 1 you somewhat with these last few lectures! Let me

More information

1.4 Techniques of Integration

1.4 Techniques of Integration .4 Techniques of Integration Recall the following strategy for evaluating definite integrals, which arose from the Fundamental Theorem of Calculus (see Section.3). To calculate b a f(x) dx. Find a function

More information

Systems of Linear ODEs

Systems of Linear ODEs P a g e 1 Systems of Linear ODEs Systems of ordinary differential equations can be solved in much the same way as discrete dynamical systems if the differential equations are linear. We will focus here

More information

The Growth of Functions. A Practical Introduction with as Little Theory as possible

The Growth of Functions. A Practical Introduction with as Little Theory as possible The Growth of Functions A Practical Introduction with as Little Theory as possible Complexity of Algorithms (1) Before we talk about the growth of functions and the concept of order, let s discuss why

More information

Econ 204 Differential Equations. 1 Existence and Uniqueness of Solutions

Econ 204 Differential Equations. 1 Existence and Uniqueness of Solutions Econ 4 Differential Equations In this supplement, we use the methods we have developed so far to study differential equations. 1 Existence and Uniqueness of Solutions Definition 1 A differential equation

More information

DS-GA 1002 Lecture notes 0 Fall Linear Algebra. These notes provide a review of basic concepts in linear algebra.

DS-GA 1002 Lecture notes 0 Fall Linear Algebra. These notes provide a review of basic concepts in linear algebra. DS-GA 1002 Lecture notes 0 Fall 2016 Linear Algebra These notes provide a review of basic concepts in linear algebra. 1 Vector spaces You are no doubt familiar with vectors in R 2 or R 3, i.e. [ ] 1.1

More information

Real Analysis Prof. S.H. Kulkarni Department of Mathematics Indian Institute of Technology, Madras. Lecture - 13 Conditional Convergence

Real Analysis Prof. S.H. Kulkarni Department of Mathematics Indian Institute of Technology, Madras. Lecture - 13 Conditional Convergence Real Analysis Prof. S.H. Kulkarni Department of Mathematics Indian Institute of Technology, Madras Lecture - 13 Conditional Convergence Now, there are a few things that are remaining in the discussion

More information

Math Linear Algebra II. 1. Inner Products and Norms

Math Linear Algebra II. 1. Inner Products and Norms Math 342 - Linear Algebra II Notes 1. Inner Products and Norms One knows from a basic introduction to vectors in R n Math 254 at OSU) that the length of a vector x = x 1 x 2... x n ) T R n, denoted x,

More information

Math 291-2: Lecture Notes Northwestern University, Winter 2016

Math 291-2: Lecture Notes Northwestern University, Winter 2016 Math 291-2: Lecture Notes Northwestern University, Winter 2016 Written by Santiago Cañez These are lecture notes for Math 291-2, the second quarter of MENU: Intensive Linear Algebra and Multivariable Calculus,

More information

Chapter 2. Mathematical Reasoning. 2.1 Mathematical Models

Chapter 2. Mathematical Reasoning. 2.1 Mathematical Models Contents Mathematical Reasoning 3.1 Mathematical Models........................... 3. Mathematical Proof............................ 4..1 Structure of Proofs........................ 4.. Direct Method..........................

More information

Introduction. So, why did I even bother to write this?

Introduction. So, why did I even bother to write this? Introduction This review was originally written for my Calculus I class, but it should be accessible to anyone needing a review in some basic algebra and trig topics. The review contains the occasional

More information

MAT1302F Mathematical Methods II Lecture 19

MAT1302F Mathematical Methods II Lecture 19 MAT302F Mathematical Methods II Lecture 9 Aaron Christie 2 April 205 Eigenvectors, Eigenvalues, and Diagonalization Now that the basic theory of eigenvalues and eigenvectors is in place most importantly

More information

Q: How can quantum computers break ecryption?

Q: How can quantum computers break ecryption? Q: How can quantum computers break ecryption? Posted on February 21, 2011 by The Physicist Physicist: What follows is the famous Shor algorithm, which can break any RSA encryption key. The problem: RSA,

More information

Fixed Point Theorems

Fixed Point Theorems Fixed Point Theorems Definition: Let X be a set and let f : X X be a function that maps X into itself. (Such a function is often called an operator, a transformation, or a transform on X, and the notation

More information

2. FUNCTIONS AND ALGEBRA

2. FUNCTIONS AND ALGEBRA 2. FUNCTIONS AND ALGEBRA You might think of this chapter as an icebreaker. Functions are the primary participants in the game of calculus, so before we play the game we ought to get to know a few functions.

More information

MAT2342 : Introduction to Applied Linear Algebra Mike Newman, fall Projections. introduction

MAT2342 : Introduction to Applied Linear Algebra Mike Newman, fall Projections. introduction MAT4 : Introduction to Applied Linear Algebra Mike Newman fall 7 9. Projections introduction One reason to consider projections is to understand approximate solutions to linear systems. A common example

More information

The Liapunov Method for Determining Stability (DRAFT)

The Liapunov Method for Determining Stability (DRAFT) 44 The Liapunov Method for Determining Stability (DRAFT) 44.1 The Liapunov Method, Naively Developed In the last chapter, we discussed describing trajectories of a 2 2 autonomous system x = F(x) as level

More information

Lecture 5. 1 Review (Pairwise Independence and Derandomization)

Lecture 5. 1 Review (Pairwise Independence and Derandomization) 6.842 Randomness and Computation September 20, 2017 Lecture 5 Lecturer: Ronitt Rubinfeld Scribe: Tom Kolokotrones 1 Review (Pairwise Independence and Derandomization) As we discussed last time, we can

More information

Generalized eigenspaces

Generalized eigenspaces Generalized eigenspaces November 30, 2012 Contents 1 Introduction 1 2 Polynomials 2 3 Calculating the characteristic polynomial 5 4 Projections 7 5 Generalized eigenvalues 10 6 Eigenpolynomials 15 1 Introduction

More information

Homogeneous Linear Systems and Their General Solutions

Homogeneous Linear Systems and Their General Solutions 37 Homogeneous Linear Systems and Their General Solutions We are now going to restrict our attention further to the standard first-order systems of differential equations that are linear, with particular

More information

Lecture 1: Period Three Implies Chaos

Lecture 1: Period Three Implies Chaos Math 7h Professor: Padraic Bartlett Lecture 1: Period Three Implies Chaos Week 1 UCSB 2014 (Source materials: Period three implies chaos, by Li and Yorke, and From Intermediate Value Theorem To Chaos,

More information

Relevant sections from AMATH 351 Course Notes (Wainwright): 1.3 Relevant sections from AMATH 351 Course Notes (Poulin and Ingalls): 1.1.

Relevant sections from AMATH 351 Course Notes (Wainwright): 1.3 Relevant sections from AMATH 351 Course Notes (Poulin and Ingalls): 1.1. Lecture 8 Qualitative Behaviour of Solutions to ODEs Relevant sections from AMATH 351 Course Notes (Wainwright): 1.3 Relevant sections from AMATH 351 Course Notes (Poulin and Ingalls): 1.1.1 The last few

More information

Quadratic Equations Part I

Quadratic Equations Part I Quadratic Equations Part I Before proceeding with this section we should note that the topic of solving quadratic equations will be covered in two sections. This is done for the benefit of those viewing

More information

Existence and Uniqueness

Existence and Uniqueness Chapter 3 Existence and Uniqueness An intellect which at a certain moment would know all forces that set nature in motion, and all positions of all items of which nature is composed, if this intellect

More information

Modern Algebra Prof. Manindra Agrawal Department of Computer Science and Engineering Indian Institute of Technology, Kanpur

Modern Algebra Prof. Manindra Agrawal Department of Computer Science and Engineering Indian Institute of Technology, Kanpur Modern Algebra Prof. Manindra Agrawal Department of Computer Science and Engineering Indian Institute of Technology, Kanpur Lecture 02 Groups: Subgroups and homomorphism (Refer Slide Time: 00:13) We looked

More information

MATH 583A REVIEW SESSION #1

MATH 583A REVIEW SESSION #1 MATH 583A REVIEW SESSION #1 BOJAN DURICKOVIC 1. Vector Spaces Very quick review of the basic linear algebra concepts (see any linear algebra textbook): (finite dimensional) vector space (or linear space),

More information

Chapter 1 Review of Equations and Inequalities

Chapter 1 Review of Equations and Inequalities Chapter 1 Review of Equations and Inequalities Part I Review of Basic Equations Recall that an equation is an expression with an equal sign in the middle. Also recall that, if a question asks you to solve

More information

x 1 + x 2 2 x 1 x 2 1 x 2 2 min 3x 1 + 2x 2

x 1 + x 2 2 x 1 x 2 1 x 2 2 min 3x 1 + 2x 2 Lecture 1 LPs: Algebraic View 1.1 Introduction to Linear Programming Linear programs began to get a lot of attention in 1940 s, when people were interested in minimizing costs of various systems while

More information

In these chapter 2A notes write vectors in boldface to reduce the ambiguity of the notation.

In these chapter 2A notes write vectors in boldface to reduce the ambiguity of the notation. 1 2 Linear Systems In these chapter 2A notes write vectors in boldface to reduce the ambiguity of the notation 21 Matrix ODEs Let and is a scalar A linear function satisfies Linear superposition ) Linear

More information

Introduction to Vectors

Introduction to Vectors Introduction to Vectors K. Behrend January 31, 008 Abstract An introduction to vectors in R and R 3. Lines and planes in R 3. Linear dependence. 1 Contents Introduction 3 1 Vectors 4 1.1 Plane vectors...............................

More information

Algebra. Here are a couple of warnings to my students who may be here to get a copy of what happened on a day that you missed.

Algebra. Here are a couple of warnings to my students who may be here to get a copy of what happened on a day that you missed. This document was written and copyrighted by Paul Dawkins. Use of this document and its online version is governed by the Terms and Conditions of Use located at. The online version of this document is

More information

Chapter 7. Homogeneous equations with constant coefficients

Chapter 7. Homogeneous equations with constant coefficients Chapter 7. Homogeneous equations with constant coefficients It has already been remarked that we can write down a formula for the general solution of any linear second differential equation y + a(t)y +

More information

1 Review of simple harmonic oscillator

1 Review of simple harmonic oscillator MATHEMATICS 7302 (Analytical Dynamics YEAR 2017 2018, TERM 2 HANDOUT #8: COUPLED OSCILLATIONS AND NORMAL MODES 1 Review of simple harmonic oscillator In MATH 1301/1302 you studied the simple harmonic oscillator:

More information

Filters in Analysis and Topology

Filters in Analysis and Topology Filters in Analysis and Topology David MacIver July 1, 2004 Abstract The study of filters is a very natural way to talk about convergence in an arbitrary topological space, and carries over nicely into

More information

MCE693/793: Analysis and Control of Nonlinear Systems

MCE693/793: Analysis and Control of Nonlinear Systems MCE693/793: Analysis and Control of Nonlinear Systems Systems of Differential Equations Phase Plane Analysis Hanz Richter Mechanical Engineering Department Cleveland State University Systems of Nonlinear

More information

Math 290-2: Linear Algebra & Multivariable Calculus Northwestern University, Lecture Notes

Math 290-2: Linear Algebra & Multivariable Calculus Northwestern University, Lecture Notes Math 290-2: Linear Algebra & Multivariable Calculus Northwestern University, Lecture Notes Written by Santiago Cañez These are notes which provide a basic summary of each lecture for Math 290-2, the second

More information

Normed and Banach spaces

Normed and Banach spaces Normed and Banach spaces László Erdős Nov 11, 2006 1 Norms We recall that the norm is a function on a vectorspace V, : V R +, satisfying the following properties x + y x + y cx = c x x = 0 x = 0 We always

More information

MIT Final Exam Solutions, Spring 2017

MIT Final Exam Solutions, Spring 2017 MIT 8.6 Final Exam Solutions, Spring 7 Problem : For some real matrix A, the following vectors form a basis for its column space and null space: C(A) = span,, N(A) = span,,. (a) What is the size m n of

More information

Notes for CS542G (Iterative Solvers for Linear Systems)

Notes for CS542G (Iterative Solvers for Linear Systems) Notes for CS542G (Iterative Solvers for Linear Systems) Robert Bridson November 20, 2007 1 The Basics We re now looking at efficient ways to solve the linear system of equations Ax = b where in this course,

More information

AN ELEMENTARY PROOF OF THE SPECTRAL RADIUS FORMULA FOR MATRICES

AN ELEMENTARY PROOF OF THE SPECTRAL RADIUS FORMULA FOR MATRICES AN ELEMENTARY PROOF OF THE SPECTRAL RADIUS FORMULA FOR MATRICES JOEL A. TROPP Abstract. We present an elementary proof that the spectral radius of a matrix A may be obtained using the formula ρ(a) lim

More information

Eigenvectors and Hermitian Operators

Eigenvectors and Hermitian Operators 7 71 Eigenvalues and Eigenvectors Basic Definitions Let L be a linear operator on some given vector space V A scalar λ and a nonzero vector v are referred to, respectively, as an eigenvalue and corresponding

More information

LAGRANGE MULTIPLIERS

LAGRANGE MULTIPLIERS LAGRANGE MULTIPLIERS MATH 195, SECTION 59 (VIPUL NAIK) Corresponding material in the book: Section 14.8 What students should definitely get: The Lagrange multiplier condition (one constraint, two constraints

More information