A brief introduction to ordinary differential equations

Chapter 1

A brief introduction to ordinary differential equations

1.1 Introduction

An ordinary differential equation (ode) is an equation that relates a function of one variable, y(t), to its derivative(s) with respect to this variable. A general ode is a relation of the form

    Φ(t, y(t), y'(t), y''(t), ...) = 0,

supplemented with the specification of y and/or its derivatives at certain points (e.g., y(5) = 17 and y'(1) = 3). The order of an ode is the order of the highest derivative that appears in the equation.

A first-order equation   The following equation is an explicit 1st-order ode:

    y' = f(t, y),    (1.1)

where f : R × R → R is a given function. A solution of this equation is a function, y(t), that satisfies the relation

    y'(t) = f(t, y(t)).

The solution of this equation is in general not unique; a unique solution exists if, in addition to the ode, we fix an initial condition, y(t_0) = y_0 (provided that f satisfies certain conditions).

Comments: (1) As a matter of convenience, we will view odes as evolution equations, hence the independent variable t will be called time. (2) A general first-order equation is of the form f(t, y, y') = 0. (3) The conditions under which a solution exists and is unique will be stated later on.

Example 1.1 The unique solution of the equation

    y' = 5y,    y(0) = 2,

is y(t) = 2 e^{5t}.

A second-order equation   The following equation is an explicit 2nd-order ode:

    y'' = f(t, y, y').    (1.2)

Here we need to provide two additional pieces of data in order to guarantee a unique solution. If the two conditions are given at the same point t_0, i.e., y(t_0) = y_0 and y'(t_0) = y'_0, then the problem is called an initial-value problem. Otherwise, if the two conditions are prescribed at different times, it is called a boundary-value problem.

Representation as a first-order system   An explicit second-order ode can be represented as a system of two first-order equations by defining y_1(t) = y(t) and y_2(t) = y'(t). Then,

    y_1' = y_2              y_1(t_0) = y_0
    y_2' = f(t, y_1, y_2)   y_2(t_0) = y'_0.    (1.3)

Similarly, any explicit n-th-order ode can be represented as a system of n first-order equations. For systems of odes it is convenient to adopt a vector notation. Take for example the system (1.3); let

    Y(t) = (y_1(t), y_2(t))^T,   Y_0 = (y_0, y'_0)^T,   F(t, Y) = (y_2, f(t, y_1, y_2))^T.

Then, (1.3) can be written in vector form:

    Y' = F(t, Y),    Y(t_0) = Y_0.

Exercise 1.1 Represent the fourth-order equation

    y^{(4)} = sin(y + t) + y

as a system of first-order equations.
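The reduction of a second-order equation to a first-order system can be written as a small helper. The following is a minimal sketch (the function name and the example equation y'' = -y are illustrative choices, not from the text); it checks the reduction against the known solution y = sin t, for which Y = (sin t, cos t).

```python
import math

def second_order_to_system(f):
    """Given f with y'' = f(t, y, y'), return F for the equivalent
    first-order system Y' = F(t, Y), where Y = (y, y')."""
    def F(t, Y):
        y1, y2 = Y
        return (y2, f(t, y1, y2))
    return F

# Illustrative example: y'' = -y, whose solution y = sin t gives Y = (sin t, cos t).
F = second_order_to_system(lambda t, y, yp: -y)

t = 0.7
Y = (math.sin(t), math.cos(t))
dY = F(t, Y)
# Y'(t) should equal (cos t, -sin t).
err = max(abs(dY[0] - math.cos(t)), abs(dY[1] + math.sin(t)))
```

The same pattern extends to any explicit n-th-order equation: the state vector collects y and its first n − 1 derivatives.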

Figure 1.1: Vector field and 5 different solutions for the ode y' = 1 − 3t + y + t² + ty.

Vector fields   Consider the scalar equation y' = f(t, y), leaving the initial condition y(t_0) momentarily unspecified. The solution y(t) that passes, in the (t, y) plane, through the point (t_0, y_0) has at this point the slope f(t_0, y_0); i.e., at every point (t, y) we can draw a line tangent to the solution y(t) that passes through that point. The mapping (t, y) → tangent line is called a vector field. A visualization of the vector field that corresponds to the ode y' = 1 − 3t + y + t² + ty for (t, y) ∈ [−1, 2] × [−2, 1] is shown in Figure 1.1. A solution y(t) is a curve tangent to the vector field at all points.

Exercise 1.2 Write a computer program that draws the vector field for the ode y' = y + t for (t, y) ∈ [0, 2] × [−1, 2]. Plot on top of it the solution y(t) = y_0 e^t − t − 1 + e^t for four different values of y_0 = y(0) ∈ [−1, 1].

1.2 Exactly solvable equations

An equation is called exactly solvable if we can express its solution in terms of elementary functions. In this section we consider a few examples of exactly solvable equations. Most equations arising in real-life applications cannot be solved by analytical means, which is why we need to develop computational algorithms that approximate solutions.
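The computational core of Exercise 1.2 is evaluating the slope f(t, y) on a grid; a plotting library (e.g. matplotlib's quiver) would then draw a short segment with that slope at each grid point. The sketch below (the helper name `slope_field` is an assumption, not part of the exercise) computes the grid of slopes for y' = y + t and checks by finite differences that the stated closed-form solution satisfies the equation.

```python
import math

def slope_field(f, ts, ys):
    """Slopes f(t, y) on a rectangular grid; a plotting program would
    draw a short tangent segment of this slope at each grid point."""
    return [[f(t, y) for t in ts] for y in ys]

f = lambda t, y: y + t
ts = [0.1 * i for i in range(21)]          # t in [0, 2]
ys = [-1 + 0.15 * j for j in range(21)]    # y in [-1, 2]
field = slope_field(f, ts, ys)

# Check the closed-form solution y(t) = y0*e^t - t - 1 + e^t against the ode.
y0 = 0.5
y = lambda t: y0 * math.exp(t) - t - 1 + math.exp(t)
h = 1e-6
resid = max(abs((y(t + h) - y(t - h)) / (2 * h) - f(t, y(t))) for t in ts[1:])
```

The residual `resid` is at the level of the finite-difference error, confirming the solution formula.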

Separable equations   Consider a first-order equation of the form

    y'(t) = f(t) g(y(t)).

Then,

    ∫_0^t y'(s)/g(y(s)) ds = ∫_0^t f(s) ds.

On the left-hand side we change variables, z = y(s), dz = y'(s) ds, thus

    ∫_{y(0)}^{y(t)} dz/g(z) = ∫_0^t f(s) ds.

Example 1.2 Solve the equation y' = 6 tan y. We first have

    ∫_0^t y'(s)/tan y(s) ds = ∫_0^t 6 ds = 6t.

Changing variables, z = y(s),

    ∫_{y(0)}^{y(t)} dz/tan z = log sin z |_{y(0)}^{y(t)} = 6t,

and

    y(t) = sin^{-1} exp[log sin y(0) + 6t].

Exercise 1.3 Solve the equation

    y' = y^{3/2},    y(0) = 1.

A first-order linear equation   As a special case of separable equations, consider the first-order linear equation:

    y' = f(t) y.

The solution is

    y(t) = C exp(∫_0^t f(s) ds).
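The result of Example 1.2 can be verified numerically: y(t) = sin⁻¹(sin(y(0)) e^{6t}) should satisfy y' = 6 tan y wherever the arcsin argument stays below 1. A minimal sketch (the chosen y(0) = 0.1 and the time range are illustrative):

```python
import math

# Verify Example 1.2: for y' = 6 tan y, separation of variables gives
# y(t) = arcsin(exp(log(sin y0) + 6t)) = arcsin(sin(y0) * e^{6t}).
y0 = 0.1
y = lambda t: math.asin(math.sin(y0) * math.exp(6 * t))

h = 1e-6
resid = 0.0
for i in range(1, 20):
    t = 0.01 * i   # stay where sin(y0)*e^{6t} < 1, inside arcsin's domain
    dy = (y(t + h) - y(t - h)) / (2 * h)
    resid = max(resid, abs(dy - 6 * math.tan(y(t))))
```

Note the solution exists only on a finite time interval: once sin(y(0)) e^{6t} reaches 1, i.e. y reaches π/2, the right-hand side blows up.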

Inhomogeneous linear equation   Consider now the inhomogeneous linear equation:

    y' = f(t) y + g(t).    (1.4)

We solve it by a method called the variation of constants, seeking a solution of the form

    y(t) = c(t) exp(∫_0^t f(s) ds).

Substituting this ansatz into the differential equation (1.4), we find that c(t) satisfies

    c'(t) = g(t) exp(−∫_0^t f(s) ds).

Thus, solving the differential equation has been reduced to standard integration.

Example 1.3 Find the general solution of the equation y' = ty + 5.
Solution:

    y(t) = [5 ∫_0^t e^{−s²/2} ds + C] e^{t²/2}.

Total differential equations   Consider a first-order equation of the form

    P(t, y) + Q(t, y) y' = 0,

where P and Q satisfy the relation

    ∂P/∂y = ∂Q/∂t.

Then there exists a function F(t, y) such that

    P = ∂F/∂t  and  Q = ∂F/∂y.

Consider now the function F(t, y(t)). Its time derivative is given by

    dF/dt = P + Q y' = 0,

hence

    F(t, y(t)) = C

is the (implicit) solution of the equation.
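The solution of Example 1.3 involves a non-elementary integral, ∫₀ᵗ e^{−s²/2} ds, which can be expressed through the error function as √(π/2)·erf(t/√2). The sketch below (the value C = 2 is an arbitrary illustrative choice) uses this to check that the variation-of-constants formula really solves y' = ty + 5:

```python
import math

# Example 1.3: y' = t*y + 5 with variation-of-constants solution
# y(t) = (5*I(t) + C) * exp(t^2/2), where I(t) = ∫_0^t exp(-s^2/2) ds.
# The integral equals sqrt(pi/2) * erf(t / sqrt(2)).
C = 2.0
I = lambda t: math.sqrt(math.pi / 2) * math.erf(t / math.sqrt(2))
y = lambda t: (5 * I(t) + C) * math.exp(t * t / 2)

h = 1e-6
resid = max(
    abs((y(t + h) - y(t - h)) / (2 * h) - (t * y(t) + 5))
    for t in [0.0, 0.3, 0.7, 1.0]
)
```

This also illustrates the remark above: "solved by integration" does not mean the answer is elementary, only that it is reduced to quadrature.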

Example 1.4 Solve the equation

    2t sin y + t² cos y · y' = 0,

where P = 2t sin y and Q = t² cos y.
Solution: t² sin y = C. The constant C is determined by the initial conditions.

Exercise 1.4 Solve the total differential equation

    3t² − 2at + ay − 3y² y' + at y' = 0.

Second-order equations   With the exception of linear equations with constant coefficients and a number of classical cases, second-order equations are usually not exactly solvable.

Comment: Euler collected all the equations whose analytical solution was known; over 800 pages. Liouville in 1841 was the first to prove that there exist equations, like y' = t² + y², whose solutions cannot be expressed in terms of elementary functions.

Linear equations   A linear equation of order n is of the form

    a_0(t) y^{(n)} + a_1(t) y^{(n−1)} + ⋯ + a_n(t) y = 0.    (1.5)

It can be proved that, under certain regularity conditions on the a_j(t), such an equation always has n solutions that are linearly independent. A set of solutions y_j(t), j = 1, 2, ..., n, is called independent if there do not exist nontrivial coefficients c_j for which

    Σ_{j=1}^{n} c_j y_j(t) ≡ 0.

If there exist n functions, (u_1(t), ..., u_n(t)), that are all solutions of (1.5), then any linear combination of them is also a solution. Which linear combination to choose is determined by the initial conditions.
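Example 1.4 can be checked in code: exactness means ∂P/∂y = ∂Q/∂t, and F(t, y) = t² sin y is a potential with ∂F/∂t = P and ∂F/∂y = Q. A minimal finite-difference sketch (the test point (t, y) = (1.3, 0.4) is arbitrary):

```python
import math

# Example 1.4: P = 2t sin y, Q = t^2 cos y, potential F(t, y) = t^2 sin y.
P = lambda t, y: 2 * t * math.sin(y)
Q = lambda t, y: t * t * math.cos(y)
F = lambda t, y: t * t * math.sin(y)

h = 1e-6
t, y = 1.3, 0.4
# Exactness: dP/dy == dQ/dt; potential: dF/dt == P and dF/dy == Q.
dPdy = (P(t, y + h) - P(t, y - h)) / (2 * h)
dQdt = (Q(t + h, y) - Q(t - h, y)) / (2 * h)
dFdt = (F(t + h, y) - F(t - h, y)) / (2 * h)
dFdy = (F(t, y + h) - F(t, y - h)) / (2 * h)
```

Along any solution y(t), the chain rule then gives d/dt F(t, y(t)) = P + Q y' = 0, so t² sin y stays constant.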

Linear equations with constant coefficients   We consider now the particular case of (1.5) where all the functions a_i(t) are constant:

    y^{(n)} + a_1 y^{(n−1)} + ⋯ + a_n y = 0.    (1.6)

We are looking for a basis of n independent solutions. Trying a solution of the form y = exp(pt), one obtains an algebraic equation for p,

    p^n + a_1 p^{n−1} + ⋯ + a_n = 0.

If all the roots p_i are distinct, then we have found the basis of functions, and the general solution is of the form

    y(t) = c_1 e^{p_1 t} + ⋯ + c_n e^{p_n t}.

If one of the roots is complex, p = α + iβ, then its complex conjugate, α − iβ, is also a root. From the two one can create two real-valued solutions:

    y_1(t) = e^{αt} sin(βt)  and  y_2(t) = e^{αt} cos(βt).

Multiple roots   Consider for example the following second-order equation:

    y'' − 2q y' + q² y = 0.

If we search for a solution of the form y(t) = e^{pt} we find that p = q is a double root:

    p² − 2qp + q² = 0  ⟹  p_{1,2} = q,

so that y(t) = e^{qt} is a solution, but we still do not have a complete basis. When q is a double root, we look for solutions of the form y(t) = c(t) e^{qt}, from which we find

    c''(t) = 0,

hence

    y(t) = (a + b t) e^{qt}.

More generally, if p is a root of multiplicity k, then

    y(t) = (a_0 + a_1 t + ⋯ + a_{k−1} t^{k−1}) e^{pt}

forms a k-dimensional basis.
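The double-root case is easy to verify numerically: y(t) = (a + bt)e^{qt} should annihilate y'' − 2q y' + q² y. A minimal sketch using finite differences (the values of q, a, b are arbitrary illustrative choices):

```python
import math

# Double-root case: y(t) = (a + b*t) * e^{q*t} should satisfy
# y'' - 2q y' + q^2 y = 0 for any a, b.
q, a, b = 1.5, 2.0, -0.7
y = lambda t: (a + b * t) * math.exp(q * t)

h = 1e-4
resid = 0.0
for t in [0.0, 0.5, 1.0]:
    yp = (y(t + h) - y(t - h)) / (2 * h)            # central first difference
    ypp = (y(t + h) - 2 * y(t) + y(t - h)) / (h * h)  # central second difference
    resid = max(resid, abs(ypp - 2 * q * yp + q * q * y(t)))
```

The residual is at the level of the finite-difference truncation and rounding error, consistent with (a + bt)e^{qt} being an exact solution.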

1.3 Euler's polygon approximation

Consider a first-order scalar equation,

    y' = f(t, y),    y(t_0) = y_0.

The goal is to determine the solution, y(t), at some point T > t_0 (we could equally well consider T < t_0). Euler's approximation scheme (1768) goes as follows: partition the interval [t_0, T] into sub-intervals with breakpoints, or nodes,

    t_0 < t_1 < ⋯ < t_n = T.

Then in each interval (t_i, t_{i+1}] approximate the solution by the first term of its Taylor series, namely,

    y_1 − y_0 = f(t_0, y_0)(t_1 − t_0)
    y_2 − y_1 = f(t_1, y_1)(t_2 − t_1)
    ⋮
    y_n − y_{n−1} = f(t_{n−1}, y_{n−1})(t_n − t_{n−1}).

Thus we generate a polygon that approximates y(t). In each sub-interval, y(t) is approximated by a linear function which is tangent to the vector field at the left endpoint. The question is how accurately y_n approximates y(T), and whether the accuracy improves as we refine the partition. This method for approximating solutions of odes is the most elementary one, and is known as the forward Euler method.

[Figure: an Euler polygon over the nodes t_0, t_1, ..., T, approximating the solution y(t).]

1.4 Cauchy's existence and uniqueness theorem

In 1824 Cauchy proved that Euler's polygon method converges, under quite general conditions, as the discretization size tends to zero. We follow his proof step by step. Cauchy's proof shows, on the one hand, that we are studying a sensible (well-posed) problem, which has a unique solution. Moreover, it shows that the forward Euler scheme provides a systematic way to approximate this solution to any required accuracy (at least in principle).
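The forward Euler recursion above translates directly into a few lines of code. A minimal sketch on a uniform partition (the function name `euler` is a convenience, not from the text), sanity-checked on y' = y, y(0) = 1, whose exact value at t = 1 is e:

```python
import math

def euler(f, t0, y0, T, n):
    """Forward Euler on a uniform partition of [t0, T] with n steps:
    y_{i+1} = y_i + f(t_i, y_i) * (t_{i+1} - t_i)."""
    h = (T - t0) / n
    t, y = t0, y0
    for _ in range(n):
        y = y + h * f(t, y)
        t = t + h
    return y

# Sanity check on y' = y, y(0) = 1; exact solution is e^t.
approx = euler(lambda t, y: y, 0.0, 1.0, 1.0, 1000)
err = abs(approx - math.e)
```

With n = 1000 steps the approximation is (1 + 1/1000)^1000, whose deviation from e is of order h = 10⁻³, in line with the first-order error estimate derived below.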

Lemma 1.1 Suppose that |f(t, y)| ≤ A on a rectangle

    D ≡ {(t, y) : t ∈ [t_0, T], |y − y_0| ≤ b}.

Then, for all t − t_0 ≤ b/A:

➀ Euler's polygon solution, y_h(t), remains in D for any partition of [t_0, T].
➁ The distance between y_h(t) and y_0 is bounded by |y_h(t) − y_0| ≤ A(t − t_0).
➂ If, in addition, |f(t, y) − f(t_0, y_0)| ≤ M for all (t, y) ∈ D, then

    |y_h(t) − y_0 − (t − t_0) f(t_0, y_0)| ≤ M(t − t_0).

[Figure: the polygon y_h(t) remains within the cone y_0 ± A(t − t_0) around the line y_0 + f(t_0, y_0)(t − t_0).]

Proof:

➀ We start by proving that the solution remains in D. Since y_h(t) is piecewise linear, it can be represented as

    y_h(t) = y_0 + ∫_{t_0}^{t} y_h'(s) ds.

Thus, as long as the solution remains in D we have

    |y_h(t) − y_0| ≤ ∫_{t_0}^{t} max_{(τ,z)∈D} |f(τ, z)| ds ≤ A(t − t_0).

Suppose that the Euler polygon exits D for the first time at a point t̄ < min(t_0 + b/A, T), i.e., |y_h(t̄) − y_0| = b; then b ≤ A(t̄ − t_0), i.e., t̄ ≥ t_0 + b/A, which is a contradiction.

➁ Having established that the Euler polygon remains in D, we have

    |y_h(t) − y_0| ≤ A(t − t_0).

More specifically, assume that t ∈ (t_i, t_{i+1}]. Then, y_h(t) is obtained as follows:

    y_h(t) = y_i + f(t_i, y_i)(t − t_i)
    y_i = y_{i−1} + f(t_{i−1}, y_{i−1})(t_i − t_{i−1})
    ⋮
    y_1 = y_0 + f(t_0, y_0)(t_1 − t_0).

Combining all these equations,

    y_h(t) − y_0 = f(t_0, y_0)(t_1 − t_0) + ⋯ + f(t_i, y_i)(t − t_i).    (1.7)

Taking absolute values and using (i) the triangle inequality, and (ii) the bound A on |f|, we obtain

    |y_h(t) − y_0| ≤ A[(t_1 − t_0) + (t_2 − t_1) + ⋯ + (t − t_i)] = A(t − t_0).

➂ From (1.7) we have

    y_h(t) = y_0 + f(t_0, y_0)(t − t_0) + [f(t_1, y_1) − f(t_0, y_0)](t_2 − t_1) + ⋯ + [f(t_i, y_i) − f(t_0, y_0)](t − t_i).

Taking absolute values and using this time the bound M on the variation of f, we find

    |y_h(t) − y_0 − f(t_0, y_0)(t − t_0)| ≤ M(t − t_0).  ∎

Lemma 1.2 Given a partition of the interval [t_0, T], let y_h(t) and z_h(t) be Euler polygons that correspond to two different initial conditions, y_0 and z_0, at time t_0. If the function f(t, y) satisfies the Lipschitz condition,

    |f(t, z) − f(t, y)| ≤ L|z − y|,

for some constant L and for all y and z in a convex region that contains the polygons y_h(t) and z_h(t), then

    |z_h(t) − y_h(t)| ≤ e^{L(t − t_0)} |z_0 − y_0|.    (1.8)

Comment: Cauchy originally assumed a bound on ∂f/∂y; Lipschitz continuity is a weaker restriction.

Proof: Consider the two polygons at the point t_1:

    z_1 = z_0 + f(t_0, z_0)(t_1 − t_0)
    y_1 = y_0 + f(t_0, y_0)(t_1 − t_0).

Subtract and use the triangle inequality:

    |z_1 − y_1| ≤ |z_0 − y_0| + (t_1 − t_0)|f(t_0, z_0) − f(t_0, y_0)|
                ≤ [1 + L(t_1 − t_0)] |z_0 − y_0|
                ≤ e^{L(t_1 − t_0)} |z_0 − y_0|,

where we have used the inequality 1 + x ≤ e^x, x ≥ 0. Similarly,

    |z_2 − y_2| ≤ e^{L(t_2 − t_1)} |z_1 − y_1|
    |z_3 − y_3| ≤ e^{L(t_3 − t_2)} |z_2 − y_2|
    ⋮
    |z_h(t) − y_h(t)| ≤ e^{L(t − t_i)} |z_i − y_i|,

where, as before, we assume t ∈ (t_i, t_{i+1}]. Combining all the inequalities, we find

    |z_h(t) − y_h(t)| ≤ e^{L(t − t_0)} |z_0 − y_0|.  ∎

Equipped with these two lemmas we can prove Cauchy's convergence theorem (1824):

Theorem 1.1 (Cauchy) Let f(t, y) be continuous in time and Lipschitz in y with constant L. Let |f| ≤ A on the rectangle

    D = {(t, y) : t ∈ [t_0, T], |y − y_0| ≤ b},    T − t_0 < b/A.

Then:

➀ As h ≡ max_i (t_{i+1} − t_i) → 0, Euler's polygons converge uniformly to a continuous function, φ(t).
➁ φ(t) is continuously differentiable, and satisfies the differential equation

    φ'(t) = f(t, φ(t)),

with initial condition φ(t_0) = y_0.
➂ There is no other solution of the differential equation

    y' = f(t, y),    y(t_0) = y_0,

on [t_0, T].

Proof:

➀ The proof resembles the convergence proof of Riemann sums. Let ε > 0; since f is continuous on a compact set, it is uniformly continuous, implying that there exists a δ > 0 such that

    |t_m − t_n| ≤ δ and |y_m − y_n| ≤ Aδ  ⟹  |f(t_m, y_m) − f(t_n, y_n)| ≤ ε.

Take an initial discretization of [t_0, T] with h < δ. The corresponding polygon solution is denoted by y^{(0)}(t). Consider now a refinement of this discretization that consists of adding a finite number of new points, (s_1, ..., s_k), between t_0 and t_1; the corresponding polygon is denoted by y^{(1)}(t).

[Figure: the polygons y^{(0)}(t) and y^{(1)}(t) between t_0 and t_1, with values y^{(0)}(t_1) and y^{(1)}(t_1).]

We now compare the two polygons at the first node, t_1:

    y^{(0)}(t_1) = y_0 + f(t_0, y_0)(t_1 − t_0)
    y^{(1)}(t_1) = y_0 + f(t_0, y_0)(s_1 − t_0) + ⋯ + f(s_k, y^{(1)}(s_k))(t_1 − s_k).

Since for all j = 1, ..., k we have s_j − t_0 < δ and |y^{(1)}(s_j) − y_0| < Aδ, we can use the bound on the variation of f:

    |y^{(1)}(t_1) − y^{(0)}(t_1)| = |(s_2 − s_1)[f(s_1, y^{(1)}(s_1)) − f(t_0, y_0)] + ⋯ + (t_1 − s_k)[f(s_k, y^{(1)}(s_k)) − f(t_0, y_0)]|
                                   ≤ ε(t_1 − t_0).

At the remaining points the two discretizations are identical, hence we can use Lemma 1.2, which bounds the deviation of two polygons that start from different initial values:

    |y^{(1)}(T) − y^{(0)}(T)| ≤ e^{L(T − t_1)} ε(t_1 − t_0).

We next introduce another refinement by adding new points between t_1 and t_2; the resulting polygon is denoted by y^{(2)}(t). In a similar way we show that

    |y^{(2)}(t_2) − y^{(1)}(t_2)| ≤ ε(t_2 − t_1),

hence

    |y^{(2)}(T) − y^{(1)}(T)| ≤ e^{L(T − t_2)} ε(t_2 − t_1).

We keep adding points between every two breakpoints until we obtain a fully refined polygon, y^{(n)}(t), that satisfies h^{(n)} < h^{(0)}. The difference between the refined and the original polygon is bounded by

    |y^{(n)}(T) − y^{(0)}(T)| ≤ ε[(t_1 − t_0) e^{L(T − t_1)} + ⋯ + (T − t_i) e^{L(T − T)}].

This finite sum is a Riemann sum of the monotonically decreasing function e^{L(T − t)}, based on function evaluations at the right end of each interval. Thus, the Riemann sum can be bounded by the integral, and

    |y^{(n)}(T) − y^{(0)}(T)| ≤ ε ∫_{t_0}^{T} e^{L(T − s)} ds = (ε/L)[e^{L(T − t_0)} − 1].

By our construction it is clear that the difference, in the supremum norm, between the Euler polygon y^{(0)}(t) and an arbitrary refinement y^{(n)}(t) is

    sup_{t_0 ≤ t ≤ T} |y^{(n)}(t) − y^{(0)}(t)| ≤ (ε/L)[e^{L(T − t_0)} − 1].

Consider now two Euler polygons, y_h(t) and z_h(t), with δ-fine discretizations. Comparing them to a polygon that is a refinement of both, we get by the triangle inequality,

    sup_{t_0 ≤ t ≤ T} |y_h(t) − z_h(t)| ≤ (2ε/L)[e^{L(T − t_0)} − 1].

Thus, for every ε > 0 there exists a δ > 0 such that any two discretizations that are finer than δ differ by less than a constant times ε. This is Cauchy's condition for uniform convergence!

➁ Let ε(δ) be the modulus of continuity of f,

    ε(δ) ≡ sup {|f(t_m, y_m) − f(t_n, y_n)| : |t_m − t_n| ≤ δ, |y_m − y_n| ≤ Aδ}.

The continuity of f implies that ε(δ) → 0 as δ → 0. From Lemma 1.1 we have

    |y_h(t + δ) − y_h(t) − δ f(t, y_h(t))| ≤ ε(δ) δ,

or,

    |[y_h(t + δ) − y_h(t)]/δ − f(t, y_h(t))| ≤ ε(δ).

Take the limit h → 0 (so that y_h(t) → φ(t)), and then take δ → 0. This proves that φ(t) is differentiable and satisfies the differential equation.

➂ To prove uniqueness we prove that any solution of the initial-value problem must be equal to the limit of the polygon solutions. Let ψ(t) be a solution of

    ψ'(t) = f(t, ψ(t)),    ψ(t_0) = y_0,

i.e., for any t_i and t > t_i,

    ψ(t) = ψ(t_i) + ∫_{t_i}^{t} f(s, ψ(s)) ds.

Given ε > 0, we take a partition that satisfies the condition on the variation of f. Denote now by y^{(i)}(t) the Euler polygon whose initial condition is y^{(i)}(t_i) = ψ(t_i); then

    |ψ(t) − y^{(i)}(t)| ≤ ε|t − t_i|    for t ∈ [t_i, t_{i+1}].

Construct now the telescopic sum and use Lemma 1.2:

    |ψ(t) − y_h(t)| = |ψ(t) − y^{(i)}(t) + y^{(i)}(t) − ⋯ + y^{(1)}(t) − y_h(t)|
                    ≤ ε(t − t_i) + e^{L(t − t_{i−1})} ε(t_i − t_{i−1}) + e^{L(t − t_{i−2})} ε(t_{i−1} − t_{i−2}) + ⋯ + e^{L(t − t_1)} ε(t_1 − t_0)
                    ≤ (ε/L)[e^{L(t − t_0)} − 1].

In the limit h → 0 and ε → 0, ψ(t) − φ(t) = 0.  ∎

The above existence and uniqueness theorem is local; it applies only for T not too far from the initial time t_0. It can be extended to a global existence and uniqueness theorem by considering the end-point as a new initial point and continuing the solution.

Theorem 1.2 Let U be an open set in R² and let f(t, y) be Lipschitz with constant L on U. Then, for every (t_0, y_0) ∈ U, there exists a unique solution of the differential equation, which can be continued up to the boundary of U.

Exercise 1.5 Show that y = −t²/4 and y = 1 − t are both solutions of the initial-value problem

    2y' = √(t² + 4y) − t,    y(2) = −1.

Why doesn't this contradict the uniqueness theorem?

Error estimate for Euler's polygons

In the course of the proof of Cauchy's theorem we showed that for any ε > 0 and a discretization fine enough in terms of ε, the difference between the Euler polygon, y_h(t), and any refined Euler polygon, y_ĥ(t), is bounded by

    |y_h(t) − y_ĥ(t)| ≤ (ε/L)[e^{L(t − t_0)} − 1].

In particular, this estimate is also a bound on the error with respect to the true solution, y(t):

    |y_h(t) − y(t)| ≤ (ε/L)[e^{L(t − t_0)} − 1].

This estimate can be refined for the case where f(t, y) is differentiable. Recall that L in the above bound is the Lipschitz constant of f(t, y), i.e., |f(t, y) − f(t, z)| ≤ L|y − z| in a neighborhood of the solution; ε is arbitrary, and h = max_i (t_{i+1} − t_i) has the property that for every t, s such that |t − s| ≤ h, and y, z such that |y − z| ≤ Ah,

    |f(t, y) − f(s, z)| ≤ ε,

where A is a bound on |f|. This leads to the following refinement in the case where f(t, y) is differentiable:

Theorem 1.3 Suppose that in a neighborhood of the solution

    |f| ≤ A,    |∂f/∂t| ≤ M,    |∂f/∂y| ≤ L;

then

    |y_h(t) − y(t)| ≤ [(M + AL)/L] [e^{L(t − t_0)} − 1] h.

That is, Euler's polygon is a first-order approximation to the solution y(t); the error scales with the mesh size h.

Proof: Let |t − s| ≤ h and |y − z| ≤ Ah; then by the mean value theorem,

    |f(t, y) − f(s, z)| ≤ |∂f/∂t| h + |∂f/∂y| A h ≤ (M + AL) h.

This establishes the relation between ε and h.  ∎

In Figure 1.2 we display approximate solutions of the differential equation y' = 1 − 3t + y + t² + ty with initial value y(0.85) = −2 and four different (uniform) step sizes. As predicted, the smaller h is, the closer the Euler solution is to the exact solution (solid line).

Figure 1.2: Arrows: the vector field of y' = 1 − 3t + y + t² + ty. Solid line: solution for the initial value y(0.85) = −2. The dotted lines correspond to four Euler approximations with uniform step sizes h = 0.2, h = 0.1, h = 0.05, and h = 0.025.

Exercise 1.6 Apply Euler's method with constant step size, h, to the differential equation

    y' = y,    y(0) = 1.

Obtain an approximation to the solution at t = 1, calculate its deviation from the exact solution, and compare the deviation to the error estimate that was derived above. Establish the dependence of the error on h by repeating the integration for various step sizes.

Exercise 1.7 Approximate the solution of the equation

    y' = 1 − 3t + y + t² + ty,    y(0.85) = −2,

at time t = 1 using the Euler method with fixed step size h. Use the values h = {0.1, 0.05, 0.01, 0.005, 0.001, 0.0005, 0.0001}. Treat the solution with h = 0.0001 as the truth, so that you can estimate the error in the evaluation of y(1) as a function of h. Plot the graph error(h) on a log-log coordinate system (equivalently, plot the logarithm of the error versus the logarithm of h). Explain your observations, and relate them to the analytical error estimate. This exercise should teach you how to estimate the order of convergence from numerical experiments.
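The order-estimation idea of Exercise 1.7 can be illustrated on the exactly solvable equation y' = y (so as not to spoil the exercise itself): for a first-order method, halving h should roughly halve the error, so log₂ of the ratio of successive errors should approach 1. A minimal self-contained sketch, restating the Euler loop so the snippet stands alone:

```python
import math

def euler(f, t0, y0, T, n):
    # forward Euler with n uniform steps (a minimal sketch)
    h = (T - t0) / n
    t, y = t0, y0
    for _ in range(n):
        y, t = y + h * f(t, y), t + h
    return y

# Errors at t = 1 for y' = y, y(0) = 1 (exact value: e), with h halved twice.
errs = [abs(euler(lambda t, y: y, 0.0, 1.0, 1.0, n) - math.e)
        for n in (100, 200, 400)]
# Observed order: log2 of the error ratio under halving of h.
orders = [math.log2(errs[i] / errs[i + 1]) for i in range(2)]
```

The entries of `orders` come out close to 1, the analytical order of the Euler method; the same computation on a log-log plot appears as a line of slope 1.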

1.5 Vector and matrix norms

Before extending Cauchy's convergence theorem to systems of equations, we need to define vector and matrix norms, as a generalization of absolute values.

Definitions

Definition 1.1 A vector norm, ‖·‖, is a function R^n → R (or C^n → R) that satisfies the following properties:

➀ ‖x‖ ≥ 0 for all x ∈ R^n.
➁ ‖x‖ = 0 iff x = 0.
➂ ‖λx‖ = |λ| ‖x‖.
➃ ‖x + y‖ ≤ ‖x‖ + ‖y‖ for all x, y ∈ R^n.

Example 1.5 The L^p norm, denoted by ‖·‖_p, is defined for p ≥ 1 by

    ‖y‖_p ≡ (Σ_{i=1}^{n} |y_i|^p)^{1/p}.

Definition 1.2 Let ‖·‖ be a vector norm. Its subordinate matrix norm (the operator norm) is defined by

    ‖A‖ ≡ sup_{x ≠ 0} ‖Ax‖ / ‖x‖.

Exercise 1.8 Prove that for any subordinate matrix norm,

    ‖AB‖ ≤ ‖A‖ ‖B‖,

and that from this follows ‖e^A‖ ≤ e^{‖A‖}.

The Euclidean norm

The Euclidean norm is the L² norm,

    ‖y‖ ≡ (Σ_{i=1}^{n} |y_i|²)^{1/2},

where we consider the more general case of y ∈ C^n. The Euclidean norm is related to the Euclidean inner product, ‖y‖² = (y, y),

where

    (u, v) ≡ Σ_{i=1}^{n} u_i v̄_i.

What is the matrix norm subordinate to the L² vector norm?

    ‖A‖₂² = sup_{x≠0} (Ax, Ax)/(x, x) = sup_{x≠0} (A*Ax, x)/(x, x),

where the superscript * denotes the Hermitian transpose.

Lemma 1.3 The eigenvalues of A*A are all real and nonnegative.

Proof: Let (λ, u) be an eigenvalue of A*A with its corresponding eigenvector. Then A*Au = λu, and

    (u, A*Au) = λ(u, u),

from which immediately follows that

    ‖Au‖² = λ‖u‖²,

hence λ is real and nonnegative.  ∎

Lemma 1.4 The eigenvectors of A*A form an orthonormal basis in C^n.

Proof: This is a consequence of the spectral theorem for normal matrices, whereby every normal matrix B (i.e., BB* = B*B) has an orthonormal set of n eigenvectors (see any linear algebra textbook for details).  ∎

Let now x = Σ_i α_i u_i, where the u_i are the orthonormal eigenvectors of A*A with corresponding eigenvalues λ_i. Then,

    (A*Ax, x)/(x, x) = Σ_{i,j} α_i ᾱ_j λ_i (u_i, u_j) / Σ_{i,j} α_i ᾱ_j (u_i, u_j) = Σ_i |α_i|² λ_i / Σ_i |α_i|².

This ratio is maximal when x is the eigenvector that corresponds to the largest eigenvalue. Hence,

    ‖A‖₂ = (max_i λ_i)^{1/2} = √(ρ(A*A)),    (1.9)

where ρ(·) denotes the spectral radius.
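Formula (1.9) can be checked in code: the largest eigenvalue of A*A can be found by power iteration, and its square root is ‖A‖₂. A minimal sketch for a real 2×2 example (the matrix A and the use of power iteration are illustrative choices; in practice one would call a library SVD):

```python
import math

# Compute ||A||_2 = sqrt(rho(A^T A)) for a real 2x2 matrix by power
# iteration on A^T A.
A = [[1.0, 2.0],
     [0.0, 1.0]]

def matvec(M, x):
    return [sum(M[i][j] * x[j] for j in range(2)) for i in range(2)]

# A^T A, formed explicitly.
AtA = [[sum(A[k][i] * A[k][j] for k in range(2)) for j in range(2)]
       for i in range(2)]

x = [1.0, 1.0]
for _ in range(200):
    x = matvec(AtA, x)
    nrm = math.hypot(x[0], x[1])
    x = [xi / nrm for xi in x]

rho = sum(matvec(AtA, x)[i] * x[i] for i in range(2))  # Rayleigh quotient
norm2 = math.sqrt(rho)
# For this A, A^T A = [[1, 2], [2, 5]] has eigenvalues 3 ± 2*sqrt(2),
# so ||A||_2 = sqrt(3 + 2*sqrt(2)) = 1 + sqrt(2).
```

Power iteration converges here because the two eigenvalues of A*A are well separated; the Rayleigh quotient of the limiting vector is exactly the quantity maximized in the derivation above.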

The logarithmic norm

Let ‖·‖ be a subordinate matrix norm. We define the corresponding logarithmic norm (or log-norm) of a matrix:

    μ(A) ≡ lim_{h→0+} (‖I + hA‖ − 1)/h.

Example 1.6 What is the log-norm that corresponds to ‖·‖₂?

    μ(A) = lim_{h→0+} ([ρ((I + hA)*(I + hA))]^{1/2} − 1)/h = lim_{h→0+} ([ρ(I + h(A + A*))]^{1/2} − 1)/h,

where the h² term has been neglected. But since, for h small enough, ρ(I + h(A + A*)) = 1 + h λ_max(A + A*), where λ_max denotes the largest (real) eigenvalue of the Hermitian matrix A + A*, then

    μ(A) = lim_{h→0+} ([1 + h λ_max(A + A*)]^{1/2} − 1)/h = ½ λ_max(A + A*).

Note that the log-norm is not a norm! It is not necessarily positive. Its importance stems from the following fact:

Proposition 1.1 Let μ(A) be the log-norm associated with a subordinate matrix norm, ‖·‖. Then

    ‖e^{At}‖ ≤ e^{μ(A)t},

and the bound is sharp; μ(A) is the smallest constant for which this inequality holds.

Proof: Since

    e^{A(t+h)} = e^{At}(I + hA) + e^{At} Σ_{k=0}^{∞} h^{k+2} A^{k+2}/(k+2)!,

then

    ‖e^{A(t+h)}‖ ≤ ‖e^{At}‖ ‖I + hA‖ + h² ‖e^{At}‖ ‖A‖² e^{h‖A‖},

and

    (‖e^{A(t+h)}‖ − ‖e^{At}‖)/h ≤ [(‖I + hA‖ − 1)/h] ‖e^{At}‖ + h ‖e^{At}‖ ‖A‖² e^{h‖A‖}.

Taking the limit h → 0+:

    d/dt ‖e^{At}‖ ≤ μ(A) ‖e^{At}‖,

with initial condition ‖e^{A·0}‖ = 1. From this differential inequality (see below) it follows that ‖e^{At}‖ ≤ e^{μ(A)t}.  ∎
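A small illustration of Proposition 1.1 in the Euclidean norm (the choice of A is illustrative): for the rotation generator A = [[0, 1], [−1, 0]] we have A + Aᵀ = 0, so μ₂(A) = 0, and indeed e^{At} is a rotation matrix, whose 2-norm is exactly 1 = e^{0·t}. Here e^{At} has the closed form below because A² = −I.

```python
import math

# A = [[0, 1], [-1, 0]]: A + A^T = 0, so mu_2(A) = (1/2)*lambda_max(0) = 0,
# and e^{At} is the rotation matrix [[cos t, sin t], [-sin t, cos t]].
def expAt(t):
    return [[math.cos(t), math.sin(t)],
            [-math.sin(t), math.cos(t)]]

mu = 0.0  # log-norm of A in the Euclidean norm
growth = 0.0
for t in [0.5, 1.0, 2.0]:
    R = expAt(t)
    x = [0.6, 0.8]  # a unit vector
    Rx = [R[0][0] * x[0] + R[0][1] * x[1],
          R[1][0] * x[0] + R[1][1] * x[1]]
    # ||e^{At} x|| should not exceed e^{mu t} ||x|| = 1.
    growth = max(growth, math.hypot(Rx[0], Rx[1]) - math.exp(mu * t))
```

This example also shows why the log-norm can be more useful than ‖A‖ itself: the crude bound ‖e^{At}‖ ≤ e^{‖A‖t} from Exercise 1.8 predicts exponential growth, while μ(A) = 0 captures the fact that rotations preserve the norm.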

Exercise 1.9 Express explicitly the log-norm corresponding to ‖·‖_∞.

Exercise 1.10 Prove the following properties of the logarithmic norm:

➀ μ(αA) = α μ(A) for α ≥ 0.
➁ −‖A‖ ≤ μ(A) ≤ ‖A‖.
➂ μ(A + B) ≤ μ(A) + μ(B).
➃ |μ(A) − μ(B)| ≤ ‖A − B‖.

1.6 Existence and uniqueness for systems of equations

Cauchy's existence and uniqueness theorem can be extended to systems of equations. We first formulate the system of equations in vector notation. A system of n first-order equations can be written in the form

    y' = f(t, y),    y(t_0) = y_0,    (1.10)

where

    y(t) = (y_1(t), ..., y_n(t))^T
    y(t_0) = y_0 = (y_{0,1}, ..., y_{0,n})^T
    f(t, y) = (f_1(t, y), ..., f_n(t, y))^T.

The goal is to find the value of the vector of functions, y(t), at a prescribed value T ≥ t_0. The first step is to generalize Euler's scheme. Define a partition, t_0 < t_1 < ⋯ < t_n = T, and construct a piecewise-linear function,

    y_h(t) = y_i + (t − t_i) f(t_i, y_i),    t ∈ [t_i, t_{i+1}]

(note that the y_i are vectors, y_i = (y_{i,1}, ..., y_{i,n})^T). Now, everything that was done for the scalar equation can be applied, almost verbatim, to the vector case. Since all finite-dimensional vector norms are equivalent, ‖·‖ will refer to any vector norm.

Lemma 1.5 Let y' = f(t, y) be a system of n equations, with y(t_0) = y_0. If ‖f(t, y)‖ ≤ A, then:

1. ‖y_h(t) − y_0‖ ≤ A(t − t_0).

2. If ‖f(t, y) − f(t_0, y_0)‖ ≤ ε in a convex region, U, that contains y_h(t), then

    ‖y_h(t) − y_0 − (t − t_0) f(t_0, y_0)‖ ≤ ε(t − t_0).

Lemma 1.6 Given a partition of the interval [t_0, T], let y_h(t) and z_h(t) be the Euler polygons that correspond to different initial conditions, y_0 and z_0. If the function f is Lipschitz, i.e.,

    ‖f(t, y) − f(t, z)‖ ≤ L‖y − z‖,

for all points in a convex region that contains the two polygons, then

    ‖z_h(t) − y_h(t)‖ ≤ e^{L(t − t_0)} ‖z_0 − y_0‖.

Theorem 1.4 Let f(t, y) be Lipschitz with constant L, and let ‖f‖ ≤ A. Then Euler's polygons converge uniformly to a continuously-differentiable vector-valued function that satisfies the differential system, and is its unique solution.

1.7 Differential inequalities

Motivation   We have approximated the solution y(t) of the system of differential equations y' = f(t, y) by an Euler polygon, y_h(t). One is interested in analyzing the error,

    e(t) ≡ ‖y_h(t) − y(t)‖,

as a function of t and h. Can we write a differential equation for e(t)? Note that this function is generally not differentiable: there are corners both due to the polygon and due to the norm. Instead, we introduce the following analogs of one-sided derivatives:

Definition 1.3 Let u(t) be a real-valued function. Its Dini derivatives are defined by:

    D⁺u(t) ≡ limsup_{h→0+} [u(t + h) − u(t)]/h
    D⁻u(t) ≡ limsup_{h→0−} [u(t + h) − u(t)]/h
    D₊u(t) ≡ liminf_{h→0+} [u(t + h) − u(t)]/h
    D₋u(t) ≡ liminf_{h→0−} [u(t + h) − u(t)]/h.

Let now w(t) be a vector of functions that have right-derivatives. From the triangle inequality,

    | ‖w(t + h)‖ − ‖w(t)‖ | ≤ ‖w(t + h) − w(t)‖.

Dividing by h and taking h → 0+ we find

    D⁺‖w(t)‖ ≤ ‖w'(t)‖  and  −‖w'(t)‖ ≤ D₊‖w(t)‖,

where w'(t) here denotes the right derivative. If we apply this inequality to the error, e(t) ≡ ‖y_h(t) − y(t)‖, then for t ∈ (t_i, t_{i+1}],

    D⁺e(t) ≤ ‖y_h'(t) − y'(t)‖
            = ‖f(t_i, y_h(t_i)) − f(t, y(t))‖
            ≤ ‖f(t_i, y_h(t_i)) − f(t, y_h(t))‖ + ‖f(t, y_h(t)) − f(t, y(t))‖
            ≤ ε + L e(t),

where L is the Lipschitz constant and ε = ε(δ) is the bound on the variation of f(t, y) when the variations of t and y are bounded by δ and Aδ, respectively. This almost looks like a differential inequality. It also has an initial condition, e(t_0) = 0. Imagine that we replaced the Dini derivative by a standard derivative, and the inequality by an equality. The solution of

    e'(t) = L e(t) + ε,    e(t_0) = 0,

is

    e(t) = (ε/L)[e^{L(t − t_0)} − 1].

If we now reverted to an inequality we would obtain a bound on the error, which is indeed the correct one. In the following subsection we show how such a procedure can be made rigorous.

Theorems on differential inequalities

Theorem 1.5 Suppose that the functions u(t) and v(t) are continuous, and that for all t ∈ [t_0, T] there exists a function g(t, y) such that

    D⁺v(t) ≤ g(t, v(t)),
    D⁺u(t) > g(t, u(t)),
    v(t_0) ≤ u(t_0).

Then,

    v(t) ≤ u(t),    t ∈ [t_0, T].

(The same holds if D⁺ is replaced by D₊.)

Proof: Initially, v(t_0) ≤ u(t_0). Since the two functions are continuous, for the theorem not to hold there must be a point at which the curves cross. By contradiction, assume a point t_2 for which v(t_2) > u(t_2), and define t_1 to be the first point to the left of t_2 at which the two curves intersect (t_1 could coincide with t_0). For all 0 < h < t_2 − t_1,

    [v(t_1 + h) − v(t_1)]/h > [u(t_1 + h) − u(t_1)]/h,

and taking the limit h → 0+:

    D⁺v(t_1) ≥ D⁺u(t_1).

This contradicts the assumptions, by which

    D⁺v(t_1) ≤ g(t_1, v(t_1)) = g(t_1, u(t_1)) < D⁺u(t_1).  ∎

We are now in measure to apply differential inequalities to bound the error of general approximation schemes:

Theorem 1.6 Suppose that v(t) approximates the solution of a system of differential equations, y' = f(t, y), y(t_0) = y_0, and satisfies

    ‖v(t_0) − y(t_0)‖ ≤ e_0
    ‖v'(t) − f(t, v(t))‖ ≤ ε
    ‖f(t, v) − f(t, y)‖ ≤ L‖v − y‖,

where v'(t) may be a right-sided derivative. Then for t ≥ t_0 we have the error estimate

    ‖y(t) − v(t)‖ ≤ e_0 e^{L(t − t_0)} + (ε/L)[e^{L(t − t_0)} − 1].

Comments: (1) The first condition bounds the error in the initial conditions; so far, we have always taken it to be zero. (2) The second condition is a bound on the defect of the approximation: how well it satisfies the differential equation. (3) The third condition is the usual Lipschitz bound. (4) The error estimate has two contributions: one from the error in the initial condition, and one from the fact that v(t) does not satisfy the differential equation exactly.

Proof: Let e(t) ≡ ‖y(t) − v(t)‖. As we have seen,

    D⁺e(t) ≤ L e(t) + ε ≡ g(t, e(t)),    e(t_0) ≤ e_0.

To apply the above theorem, we need to compare e(t) to another function, u(t), for which

    u' > g(t, u(t)),    u(t_0) = e_0.

For any η > 0,

    u' = L u + ε + η,    u(t_0) = e_0,

has the necessary properties, hence

    e(t) ≤ u(t) = e_0 e^{L(t − t_0)} + [(ε + η)/L][e^{L(t − t_0)} − 1].

In particular, as η → 0+, we obtain the required bound.  ∎

Exercise 1.11 Prove a variant of the above theorem: the conditions

    ‖v(t_0) − y(t_0)‖ ≤ e_0
    ‖v'(t) − f(t, v(t))‖ ≤ δ(t)
    ‖f(t, v) − f(t, y)‖ ≤ l(t)‖v − y‖

imply for t ≥ t_0:

    ‖y(t) − v(t)‖ ≤ e^{L(t)} [e_0 + ∫_{t_0}^{t} e^{−L(s)} δ(s) ds],    L(t) = ∫_{t_0}^{t} l(s) ds.

1.8 Linear systems

Consider a linear ode. If the coefficient of the highest derivative, y^{(n)}, does not vanish (the equation has no singular points), we can write a general n-th-order linear equation as

    y^{(n)} + a_1(t) y^{(n−1)} + ⋯ + a_n(t) y = f(t).    (1.11)

If we define y_1(t) = y(t), y_2(t) = y'(t), up to y_n(t) = y^{(n−1)}(t), then we obtain a system of equations,

    y_1' = y_2
    y_2' = y_3
    ⋮
    y_n' = −a_n(t) y_1 − a_{n−1}(t) y_2 − ⋯ − a_1(t) y_n + f(t),    (1.12)

which we write as a system, y' = A(t) y + f(t) (with a slight abuse of notation for f(t), which now stands for the vector (0, ..., 0, f(t))^T).

The right-hand side is obviously Lipschitz in y, and therefore, if A(t) and f(t) are bounded, the conditions for existence and uniqueness are satisfied. In particular, the continuity of A(t) and f(t) ensures boundedness on any compact interval, t ∈ [t_0, T].
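The matrix A(t) in (1.12) is a companion matrix: ones on the superdiagonal and the negated coefficients in the last row. A minimal sketch building it for constant coefficients (the function name `companion` is a convenience, not from the text):

```python
# Companion matrix of y^(n) + a_1 y^(n-1) + ... + a_n y = 0,
# giving the first-order system y' = A y for the state (y, y', ..., y^(n-1)).
def companion(a):
    """a = [a_1, ..., a_n], the coefficients as in (1.11) with leading
    coefficient 1."""
    n = len(a)
    A = [[0.0] * n for _ in range(n)]
    for i in range(n - 1):
        A[i][i + 1] = 1.0            # y_i' = y_{i+1}
    for j in range(n):
        A[n - 1][j] = -a[n - 1 - j]  # last row: -a_n, -a_{n-1}, ..., -a_1
    return A

# Example: y'' + 3y' + 2y = 0 gives A = [[0, 1], [-2, -3]].
A = companion([3.0, 2.0])
```

For the inhomogeneous equation one simply adds the vector (0, ..., 0, f(t))ᵀ to A y, as in (1.12).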

Theorem 1.7 For a homogeneous system, f(t) = 0, the solution y(t) depends linearly on the initial values. That is, there exists an n × n matrix, R(t, t_0), such that

    y(t) = R(t, t_0) y(t_0).

Proof: The time-evolution operator, T(t, t_0), maps a vector in R^n to another vector in R^n; namely, y(t) = T(t, t_0) y(t_0). Take now two solutions, u_1(t) and u_2(t). By the linearity of the equation, a u_1(t) + b u_2(t) is also a solution, hence

    a u_1(t) + b u_2(t) = T(t, t_0)[a u_1(t_0) + b u_2(t_0)].

On the other hand,

    a u_1(t) + b u_2(t) = a T(t, t_0) u_1(t_0) + b T(t, t_0) u_2(t_0).

This proves that T is a linear mapping, i.e., it can be represented as a matrix, R(t, t_0).  ∎

By uniqueness, a solution that starts at (t_0, y(t_0)), and a solution that starts at (t_1, R(t_1, t_0) y(t_0)), must coincide, i.e.,

    y(t_2) = R(t_2, t_0) y(t_0) = R(t_2, t_1) R(t_1, t_0) y(t_0),

hence

    R(t_2, t_0) = R(t_2, t_1) R(t_1, t_0),    t_0 < t_1 < t_2.

By integrating backwards, from t_1 to t_0, we must arrive, by uniqueness, back at the starting point, hence

    R(t_0, t_1) = [R(t_1, t_0)]^{−1}.

The Wronskian   Consider a homogeneous system y' = A(t) y, or component-wise,

    y_j' = Σ_{k=1}^{n} a_{jk}(t) y_k.

Suppose that we know n independent solutions, y^i(t), with components y^i_j(t), that satisfy the equation,

    (y^i_j)' = Σ_{k=1}^{n} a_{jk}(t) y^i_k(t),    i = 1, 2, ..., n.

We define the Wronskian matrix,

    W(t) = ⎡ y¹_1(t)  y²_1(t)  ⋯  yⁿ_1(t) ⎤
           ⎢ y¹_2(t)  y²_2(t)  ⋯  yⁿ_2(t) ⎥
           ⎢    ⋮        ⋮            ⋮    ⎥
           ⎣ y¹_n(t)  y²_n(t)  ⋯  yⁿ_n(t) ⎦ ,

whose i-th column is the i-th solution. Each column of the Wronskian is a solution of the equation, hence W(t) satisfies the matrix equation

    W'(t) = A(t) W(t).

The solutions of the system are spanned by the n columns of the Wronskian. In other words, any solution, y(t), must be of the form

    y(t) = W(t) c

for some constant vector c = (c_1, ..., c_n)^T. If y(t_0) is given, then the vector c can be found by inverting the Wronskian,

    c = W^{−1}(t_0) y(t_0),

hence

    y(t) = W(t) W^{−1}(t_0) y(t_0)  ⟹  R(t, t_0) = W(t) W^{−1}(t_0).

All solutions are therefore known once one finds n independent solutions.

Inhomogeneous linear systems   Suppose we knew how to solve the homogeneous system, y' = A(t) y. As noted, one only needs to know a set of n independent solutions to construct a Wronskian matrix; any solution is then represented through the matrix R(t, t_0) = W(t) W^{−1}(t_0). Consider now the inhomogeneous system,

    y' = A(t) y + f(t).

Liouville (1838) proposed a solution based on looking for a solution of the homogeneous form, with the constant vector c replaced by a function of t (variation of constants):

    y(t) = W(t) c(t).

Substitute this ansatz into the equation:

    y' = A W c + f = W' c + W c'.

But W' = A W, hence

    c'(t) = W^{-1}(t) f(t),

or

    c(t) = int_{t_0}^{t} W^{-1}(s) f(s) ds + C.

The solution is then

    y(t) = W(t) int_{t_0}^{t} W^{-1}(s) f(s) ds + W(t) C,

and the constant vector C is obtained from the initial conditions, C = W^{-1}(t_0) y(t_0). From this we deduce the following theorem:

Theorem 1.8 Let A(t) and f(t) be continuous functions. The solution of y' = A(t) y + f(t) with initial condition y(t_0) is given by

    y(t) = R(t, t_0) y(t_0) + int_{t_0}^{t} R(t, s) f(s) ds,

where R(t, s) is the solution operator of the homogeneous system.

This formula is known as Duhamel's principle; it gives the solution of a linear inhomogeneous system in terms of the solution operator of the homogeneous part.

Exercise 1.12 Verify explicitly that

    y(t) = R(t, t_0) y(t_0) + int_{t_0}^{t} R(t, s) f(s) ds

is a solution of the system y' = A y + f, where R(t, t_0) = W(t) W^{-1}(t_0).

Systems with constant coefficients

A class of linear systems that can be solved exactly, i.e., for which the matrix R(t, t_0) can be calculated explicitly, is the class of systems with constant coefficients, where A does not depend on t.
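As a minimal numerical sanity check (illustrative values, not from the notes), take the scalar case y' = a y + f(t), where R(t, s) = e^{a(t-s)}; Duhamel's formula can then be compared against the closed-form solution:

```python
import numpy as np

a, y0, t0, t = -1.0, 2.0, 0.0, 1.5
f = lambda s: np.cos(s)

# Duhamel: y(t) = e^{a(t - t0)} y0 + integral_{t0}^{t} e^{a(t - s)} f(s) ds,
# with the integral evaluated by the trapezoidal rule on a fine grid.
s = np.linspace(t0, t, 20001)
g = np.exp(a * (t - s)) * f(s)
integral = np.sum((g[1:] + g[:-1]) / 2.0 * np.diff(s))
duhamel = np.exp(a * (t - t0)) * y0 + integral

# Closed form for y' = -y + cos t, y(0) = 2:
#   y(t) = (3/2) e^{-t} + (cos t + sin t)/2.
exact = 1.5 * np.exp(-t) + 0.5 * (np.cos(t) + np.sin(t))
assert abs(duhamel - exact) < 1e-6
print("Duhamel's principle matches the closed-form solution")
```

The same check works for systems, with the scalar exponential replaced by the matrix R(t, s).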

Motivation: The interest in such systems stems not only from their direct use, but mainly from their application to the local analysis of nonlinear systems. Consider a system of nonlinear, autonomous equations,

    y_i' = f_i(y_1, ..., y_n),

and imagine that there is a vector, which with no loss of generality will be taken to be the zero vector, at which all the f_i vanish. That is, y(t) = 0 is a stationary solution (or a fixed point) of the system. One is often interested in the behavior of solutions in the vicinity of stationary points. We assume that y(t_0) is very close to such a point, and analyze the solution for t > t_0 as long as it remains close to the stationary point. The function f can then be approximated by its first-order Taylor expansion,

    f_i(y_1, ..., y_n) ≈ sum_{j=1}^n (∂f_i/∂y_j)(0) y_j,

so that the system is approximated by a linear system with constant coefficients:

    y_i' ≈ sum_{j=1}^n a_{ij} y_j,    a_{ij} = (∂f_i/∂y_j)(0).

Autonomous systems: Autonomous systems are invariant under a shift of t. It does not matter whether we integrate the initial data from t_0 to t_1 or from 0 to t_1 - t_0, i.e., the matrix R(t, t_0) satisfies the symmetry

    R(t, t_0) = R(t - t_0, 0).

Solution by diagonalization: Consider a (homogeneous) linear system with constant coefficients,

    y' = A y,

and look for a solution of the form

    y(t) = v e^{λt},

where v is a constant vector. Substitution yields

    λ v = A v,

i.e., λ is an eigenvalue of the matrix A, and v is the corresponding eigenvector. Suppose now that there exist n linearly independent eigenvectors; let v^j_i denote the i-th component of the j-th eigenvector, and let T be the matrix whose j-th column is the eigenvector v^j. Then

    A T = T Λ,    Λ = diag(λ_1, ..., λ_n),

and Λ = T^{-1} A T. Consider now the coordinate transformation

    z(t) = T^{-1} y(t).

The equation satisfied by z(t) is

    z' = T^{-1} y' = T^{-1} A y = T^{-1} A T z = Λ z.

Hence, every component of the vector z(t) satisfies an independent equation,

    z_i' = λ_i z_i,

whose solution is

    z_i(t) = e^{λ_i t} z_i(0).

Back to vector notation,

    z(t) = e^{Λt} z(0),    e^{Λt} = diag(e^{λ_1 t}, ..., e^{λ_n t}).

Reverting to the original variables, we find

    y(t) = T e^{Λt} T^{-1} y(0),

which means that

    R(t, 0) = T e^{Λt} T^{-1}.

It is left as an (easy) exercise to show that T e^{Λt} T^{-1} = e^{At}.

Exercise 1.13 Show that for linear systems with constant coefficients, y' = A y, where A is diagonalizable,

    R(t, 0) = e^{At}.

Exercise 1.14 Compute the matrix R(t, t_0) for the system

    y_1' = y_2
    y_2' = -y_1,

and verify that it satisfies

    R(t_2, t_0) = R(t_2, t_1) R(t_1, t_0)    and    R(t_0, t_1) = [R(t_1, t_0)]^{-1}.
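The identity T e^{Λt} T^{-1} = e^{At}, as well as the composition and inverse properties of R from Theorem 1.7, can be spot-checked numerically with NumPy/SciPy. The sketch below uses an arbitrary symmetric (hence diagonalizable) test matrix:

```python
import numpy as np
from scipy.linalg import expm

# A symmetric (hence diagonalizable) test matrix.
A = np.array([[-3.0, 1.0],
              [1.0, -3.0]])

lam, T = np.linalg.eig(A)   # columns of T are the eigenvectors
t = 0.8
via_diag = T @ np.diag(np.exp(lam * t)) @ np.linalg.inv(T)
assert np.allclose(via_diag, expm(A * t))   # T e^{Lambda t} T^{-1} = e^{A t}

# For constant coefficients, R(t, t0) = e^{A (t - t0)}; check the
# composition and inverse properties of the solution operator.
R = lambda t, t0: expm(A * (t - t0))
t0, t1, t2 = 0.0, 0.4, 1.1
assert np.allclose(R(t2, t0), R(t2, t1) @ R(t1, t0))
assert np.allclose(R(t0, t1), np.linalg.inv(R(t1, t0)))
print("diagonalization formula and solution-operator properties verified")
```

Note that R(t, t_0) depends only on t - t_0, as expected for an autonomous system.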

Exercise 1.15 Find the solution of

    y'' - 3y' - 4y = g(t),    g(t) = { cos t,  0 ≤ t ≤ π/2
                                     { 0,      π/2 ≤ t,

with initial conditions y(0) = y'(0) = 0; use the variation of constants.

Non-diagonalizable case: What if there is no similarity transformation that diagonalizes A? There is always one that transforms A to a Jordan canonical form, J = T^{-1} A T, such that

    J = diag(B_1, ..., B_r),

where each block B_i has the eigenvalue λ_i on the diagonal and ones on the superdiagonal:

    B_i = [ λ_i  1
                 λ_i  1
                      ...  1
                           λ_i ].

Since z' = J z, the vector z can be partitioned into blocks, and each block can be solved independently. For the i-th block, of size k, one finds:

    [ z_1(t) ]   [ e^{λ_i t}  t e^{λ_i t}  ...  t^{k-1}/(k-1)! e^{λ_i t} ] [ z_1(0) ]
    [ z_2(t) ] = [            e^{λ_i t}    ...  t^{k-2}/(k-2)! e^{λ_i t} ] [ z_2(0) ]
    [  ...   ]   [                         ...           ...            ] [  ...   ]
    [ z_k(t) ]   [                               e^{λ_i t}              ] [ z_k(0) ]

1.9 Local analysis

We conclude this section with a few examples that show possible behaviors of solutions in the vicinity of fixed points. We start with 2-by-2 linear systems, which are easy to visualize, and conclude with a nonlinear example that demonstrates the power of linearization and local analysis.

Example 1 Consider the following system of two linear equations with constant coefficients:

    y_1' = -3 y_1 + y_2
    y_2' = y_1 - 3 y_2.    (1.13)
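The Jordan-block formula above can be spot-checked against SciPy's matrix exponential; this is a sketch for a single 3-by-3 block with an illustrative eigenvalue:

```python
import numpy as np
from math import factorial
from scipy.linalg import expm

lam, k = -0.5, 3
# A single k-by-k Jordan block: lam on the diagonal, ones on the superdiagonal.
B = lam * np.eye(k) + np.diag(np.ones(k - 1), 1)

t = 1.3
# Entry (i, j) of e^{B t} is t^{j-i}/(j-i)! e^{lam t} for j >= i, zero below.
E = np.zeros((k, k))
for i in range(k):
    for j in range(i, k):
        E[i, j] = t ** (j - i) / factorial(j - i) * np.exp(lam * t)
assert np.allclose(E, expm(B * t))
print("Jordan-block solution formula matches expm")
```

The polynomial factors t^m/m! appear because B = λI + N with N nilpotent, so e^{Bt} = e^{λt}(I + Nt + ... + N^{k-1} t^{k-1}/(k-1)!).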

Figure 1.3: Phase space flow field for system (1.13).

In matrix notation,

    y' = A y,    A = [ -3   1
                        1  -3 ].

The eigenvalues of the matrix A are λ_{1,2} = -2, -4. Hence, there exists a linear transformation T : R^2 → R^2, so that

    z_1(t) = e^{-2t} z_1(0)
    z_2(t) = e^{-4t} z_2(0).

This means that no matter where we start, the solution decays exponentially toward the origin, which is a stationary point. This results from the fact that all eigenvalues are negative. How does the solution look when we map back to the original variables, y? Because T maps the origin onto itself, the fundamental behavior is the same. Figure 1.3 shows the phase-space flow; indeed, all trajectories are attracted to the fixed point at the origin.

Example 2 Consider the following system:

    y_1' = y_2
    y_2' = -y_1 - y_2.    (1.14)

The matrix of coefficients is

    A = [  0   1
          -1  -1 ],

and its eigenvalues are the complex pair λ = (-1 ± √3 i)/2. Because the real part of both eigenvalues is negative, the solution (in the z plane) will approach

the origin from all initial points. The imaginary part will induce a rotation. This behavior is again preserved by the linear transformation back to y. The trajectory flow is depicted in Figure 1.4.

Figure 1.4: Phase space flow field for system (1.14).

Example 3 Consider the system:

    y_1' = y_2
    y_2' = y_1 - y_2.    (1.15)

The matrix of coefficients is

    A = [ 0   1
          1  -1 ],

and its eigenvalues are λ = (-1 ± √5)/2. This time there is one negative and one positive eigenvalue. In the z plane, one component grows exponentially while the other decays. This means that the solution will grow exponentially, unless the coefficient of the growing component is zero. The phase-space flow in the original y coordinates is depicted in Figure 1.5.

Example 4 Consider a nonlinear system:

    y_1' = (1/3) (y_1 - y_2)(1 - y_1 - y_2)
    y_2' = y_1 (2 - y_2),    (1.16)

which has four stationary points: (0, 0), (0, 1), (-1, 2), and (2, 2). Figure 1.6 shows the flow lines of this system of equations; the solid lines are trajectories.

Figure 1.5: Phase space flow field for system (1.15).

Figure 1.6: Phase space flow field for system (1.16), with trajectories.
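The stability claims in Examples 1-3 can be confirmed numerically (a sketch, not part of the original notes): compute the eigenvalues of each coefficient matrix and classify the fixed point at the origin by the signs of their real parts.

```python
import numpy as np

# Coefficient matrices of the linear systems (1.13)-(1.15).
systems = {
    "(1.13)": np.array([[-3.0, 1.0], [1.0, -3.0]]),
    "(1.14)": np.array([[0.0, 1.0], [-1.0, -1.0]]),
    "(1.15)": np.array([[0.0, 1.0], [1.0, -1.0]]),
}

for name, A in systems.items():
    lam = np.linalg.eigvals(A)
    if np.all(lam.real < 0):
        kind = "all Re(lambda) < 0: trajectories decay to the origin"
    elif np.any(lam.real > 0) and np.any(lam.real < 0):
        kind = "mixed signs: a saddle; generic trajectories escape"
    else:
        kind = "no decisive sign pattern"
    print(name, "eigenvalues:", np.round(lam, 3), "->", kind)
```

The same recipe applies to the nonlinear system (1.16) once it is linearized at each fixed point, which is the content of Exercise 1.16 below.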

Exercise 1.16 Analyze the behavior near each stationary point by local analysis. That is, linearize the system near every fixed point, and determine its stability by calculating the corresponding eigenvalues. Use your results to determine the orientation of the trajectories in Figure 1.6.


More information

LINEAR ALGEBRA MICHAEL PENKAVA

LINEAR ALGEBRA MICHAEL PENKAVA LINEAR ALGEBRA MICHAEL PENKAVA 1. Linear Maps Definition 1.1. If V and W are vector spaces over the same field K, then a map λ : V W is called a linear map if it satisfies the two conditions below: (1)

More information

Order Preserving Properties of Vehicle Dynamics with Respect to the Driver s Input

Order Preserving Properties of Vehicle Dynamics with Respect to the Driver s Input Order Preserving Properties of Vehicle Dynamics with Respect to the Driver s Input Mojtaba Forghani and Domitilla Del Vecchio Massachusetts Institute of Technology September 19, 214 1 Introduction In this

More information

First and Second Order Differential Equations Lecture 4

First and Second Order Differential Equations Lecture 4 First and Second Order Differential Equations Lecture 4 Dibyajyoti Deb 4.1. Outline of Lecture The Existence and the Uniqueness Theorem Homogeneous Equations with Constant Coefficients 4.2. The Existence

More information

EIGENVALUE PROBLEMS. Background on eigenvalues/ eigenvectors / decompositions. Perturbation analysis, condition numbers..

EIGENVALUE PROBLEMS. Background on eigenvalues/ eigenvectors / decompositions. Perturbation analysis, condition numbers.. EIGENVALUE PROBLEMS Background on eigenvalues/ eigenvectors / decompositions Perturbation analysis, condition numbers.. Power method The QR algorithm Practical QR algorithms: use of Hessenberg form and

More information

Section 8.2 : Homogeneous Linear Systems

Section 8.2 : Homogeneous Linear Systems Section 8.2 : Homogeneous Linear Systems Review: Eigenvalues and Eigenvectors Let A be an n n matrix with constant real components a ij. An eigenvector of A is a nonzero n 1 column vector v such that Av

More information

Old Math 330 Exams. David M. McClendon. Department of Mathematics Ferris State University

Old Math 330 Exams. David M. McClendon. Department of Mathematics Ferris State University Old Math 330 Exams David M. McClendon Department of Mathematics Ferris State University Last updated to include exams from Fall 07 Contents Contents General information about these exams 3 Exams from Fall

More information

Putzer s Algorithm. Norman Lebovitz. September 8, 2016

Putzer s Algorithm. Norman Lebovitz. September 8, 2016 Putzer s Algorithm Norman Lebovitz September 8, 2016 1 Putzer s algorithm The differential equation dx = Ax, (1) dt where A is an n n matrix of constants, possesses the fundamental matrix solution exp(at),

More information

A Concise Course on Stochastic Partial Differential Equations

A Concise Course on Stochastic Partial Differential Equations A Concise Course on Stochastic Partial Differential Equations Michael Röckner Reference: C. Prevot, M. Röckner: Springer LN in Math. 1905, Berlin (2007) And see the references therein for the original

More information

1 Systems of Differential Equations

1 Systems of Differential Equations March, 20 7- Systems of Differential Equations Let U e an open suset of R n, I e an open interval in R and : I R n R n e a function from I R n to R n The equation ẋ = ft, x is called a first order ordinary

More information

Nonlinear Programming Algorithms Handout

Nonlinear Programming Algorithms Handout Nonlinear Programming Algorithms Handout Michael C. Ferris Computer Sciences Department University of Wisconsin Madison, Wisconsin 5376 September 9 1 Eigenvalues The eigenvalues of a matrix A C n n are

More information

Linear Algebra. Min Yan

Linear Algebra. Min Yan Linear Algebra Min Yan January 2, 2018 2 Contents 1 Vector Space 7 1.1 Definition................................. 7 1.1.1 Axioms of Vector Space..................... 7 1.1.2 Consequence of Axiom......................

More information

ODE Final exam - Solutions

ODE Final exam - Solutions ODE Final exam - Solutions May 3, 018 1 Computational questions (30 For all the following ODE s with given initial condition, find the expression of the solution as a function of the time variable t You

More information

6 Linear Equation. 6.1 Equation with constant coefficients

6 Linear Equation. 6.1 Equation with constant coefficients 6 Linear Equation 6.1 Equation with constant coefficients Consider the equation ẋ = Ax, x R n. This equating has n independent solutions. If the eigenvalues are distinct then the solutions are c k e λ

More information

Chapter 7. Homogeneous equations with constant coefficients

Chapter 7. Homogeneous equations with constant coefficients Chapter 7. Homogeneous equations with constant coefficients It has already been remarked that we can write down a formula for the general solution of any linear second differential equation y + a(t)y +

More information

Inner product spaces. Layers of structure:

Inner product spaces. Layers of structure: Inner product spaces Layers of structure: vector space normed linear space inner product space The abstract definition of an inner product, which we will see very shortly, is simple (and by itself is pretty

More information

Nonlinear Control Lecture 5: Stability Analysis II

Nonlinear Control Lecture 5: Stability Analysis II Nonlinear Control Lecture 5: Stability Analysis II Farzaneh Abdollahi Department of Electrical Engineering Amirkabir University of Technology Fall 2010 Farzaneh Abdollahi Nonlinear Control Lecture 5 1/41

More information

Part 1: Overview of Ordinary Dierential Equations 1 Chapter 1 Basic Concepts and Problems 1.1 Problems Leading to Ordinary Dierential Equations Many scientic and engineering problems are modeled by systems

More information

Math53: Ordinary Differential Equations Autumn 2004

Math53: Ordinary Differential Equations Autumn 2004 Math53: Ordinary Differential Equations Autumn 2004 Unit 2 Summary Second- and Higher-Order Ordinary Differential Equations Extremely Important: Euler s formula Very Important: finding solutions to linear

More information

Linear ODEs. Types of systems. Linear ODEs. Definition (Linear ODE) Linear ODEs. Existence of solutions to linear IVPs.

Linear ODEs. Types of systems. Linear ODEs. Definition (Linear ODE) Linear ODEs. Existence of solutions to linear IVPs. Linear ODEs Linear ODEs Existence of solutions to linear IVPs Resolvent matrix Autonomous linear systems p. 1 Linear ODEs Types of systems Definition (Linear ODE) A linear ODE is a ifferential equation

More information

21 Linear State-Space Representations

21 Linear State-Space Representations ME 132, Spring 25, UC Berkeley, A Packard 187 21 Linear State-Space Representations First, let s describe the most general type of dynamic system that we will consider/encounter in this class Systems may

More information

6.842 Randomness and Computation March 3, Lecture 8

6.842 Randomness and Computation March 3, Lecture 8 6.84 Randomness and Computation March 3, 04 Lecture 8 Lecturer: Ronitt Rubinfeld Scribe: Daniel Grier Useful Linear Algebra Let v = (v, v,..., v n ) be a non-zero n-dimensional row vector and P an n n

More information