IV Higher Order Linear ODEs (Boyce & DiPrima, Chapter 4)
H.J.Eberl - MATH*2170
IV.1 General remarks (Boyce & DiPrima, Section 4.1)
Problem formulation
- for the next while we will be concerned with higher order linear differential equations, i.e. differential equations of the type

  (1)  d^n y/dt^n + p_{n-1}(t) d^{n-1}y/dt^{n-1} + ... + p_1(t) dy/dt + p_0(t) y = g(t)

  where p_i(t), i = 0, ..., n-1, and g(t) are continuous functions defined on an interval I ⊆ ℝ
- we recall that the order of a differential equation is the order of the highest derivative appearing, i.e. the order of (1) is n
- an equivalent notation is

  (1')  y^{(n)} + p_{n-1}(t) y^{(n-1)} + ... + p_1(t) y' + p_0(t) y = g(t)
Problem formulation, cont'd
- important special cases that we consider are the homogeneous case g(t) ≡ 0, i.e. the equation

  (2)  y^{(n)} + p_{n-1}(t) y^{(n-1)} + ... + p_1(t) y' + p_0(t) y = 0

  and the constant coefficient case p_i(t) = p_i = const,

  (3)  y^{(n)} + p_{n-1} y^{(n-1)} + ... + p_1 y' + p_0 y = g(t)

- the initial value problem is to find a solution of (1) that satisfies the n initial conditions

  y(t_0) = y_0,  y'(t_0) = y_1,  y''(t_0) = y_2,  ...,  y^{(n-1)}(t_0) = y_{n-1}

  where t_0 ∈ I and the y_i ∈ ℝ are given
A first example: two spring-coupled masses
- governing equations:

  m x_1'' = -k x_1 + k(x_2 - x_1) = -2k x_1 + k x_2,
  m x_2'' = -k(x_2 - x_1) - k x_2 = -2k x_2 + k x_1

- differentiating the first equation twice:

  m x_1^{(4)} = -2k x_1'' + k x_2''

- substituting the second equation once:

  m x_1^{(4)} = -2k x_1'' - (2k^2/m) x_2 + (k^2/m) x_1
A first example: two spring-coupled masses, cont'd
- using k x_2 = m x_1'' + 2k x_1 (from the first governing equation) to find

  m x_1^{(4)} = -2k x_1'' - (2k/m)(m x_1'' + 2k x_1) + (k^2/m) x_1

- rearrange and sort by derivatives of x_1 to obtain the 4th order linear homogeneous equation

  x_1^{(4)} + 4(k/m) x_1'' + 3(k^2/m^2) x_1 = 0
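The elimination above is easy to get wrong by hand. The following sketch (not part of the notes; it assumes sympy is available) lets a computer algebra system redo the substitution step and confirms that the residual of the second governing equation reduces to exactly the 4th order equation just derived:

```python
import sympy as sp

t, k, m = sp.symbols('t k m', positive=True)
x1 = sp.Function('x1')

# first governing equation solved for x2:  k*x2 = m*x1'' + 2k*x1
x2 = (m*sp.diff(x1(t), t, 2) + 2*k*x1(t)) / k

# second governing equation m*x2'' = -2k*x2 + k*x1, written as residual = 0
residual = m*sp.diff(x2, t, 2) + 2*k*x2 - k*x1(t)

# normalizing (multiplying by k/m^2) should reproduce the 4th order equation
eq = sp.expand(residual * k / m**2)
```

Here `eq` comes out as x1'''' + 4(k/m) x1'' + 3(k/m)^2 x1, matching the slide.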
General remarks
- many concepts that we have learnt for second order equations carry over in a straightforward manner, but they are often algebraically more involved: 3 can be much bigger than 2!!!
- we will not learn new techniques for nth order equations, but see how our methods for 2nd order equations can be generalized to the nth order case
- the ideas behind the proofs of these statements/methods are typically the same as for 2nd order equations, but sometimes new difficulties arise; in our discussion we will emphasize these new difficulties
In particular, the following concepts carry over
- existence theory
- superposition for the homogeneous case
- linear independence of solutions of the homogeneous equation, fundamental systems, and the general solution
- solution ansatz for the homogeneous constant coefficient equation
- method of undetermined coefficients for the non-homogeneous constant coefficient equation
- method of variation of constants for the general case
- reduction of order to find another solution of the homogeneous equation if one solution is already known ... but this is less practical here, because the reduced equation is still of order n-1 and still difficult to solve
Existence and Uniqueness Theory
- instead of giving an existence and uniqueness theorem for the nth order linear initial value problem, we state a more general result
- recall from Part I: a higher order differential equation can always be converted into a system of first order differential equations
- in the case of the nth order equation

  (1)  y^{(n)} = F(x, y, y', ..., y^{(n-1)})

  the new dependent variables z_i := y^{(i-1)}, i = 1, ..., n, transform (1) into the system

  (2)  z_1' = z_2
       z_2' = z_3
       ...
       z_{n-1}' = z_n
       z_n' = F(x, z_1, ..., z_n)
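This conversion is also how higher order equations are solved numerically in practice. A minimal sketch (my own illustration, not from the notes): build the first order system z' = f(x, z) from F and integrate it with the classical Runge-Kutta method, tested here on y'' = -y, y(0) = 1, y'(0) = 0, whose solution is cos x:

```python
import numpy as np

def to_first_order(F, n):
    """Convert y^(n) = F(x, y, y', ..., y^(n-1)) into z' = f(x, z)."""
    def f(x, z):
        dz = np.empty(n)
        dz[:-1] = z[1:]        # z_i' = z_{i+1}
        dz[-1] = F(x, *z)      # z_n' = F(x, z_1, ..., z_n)
        return dz
    return f

def rk4(f, x0, z0, x_end, steps=1000):
    """Classical 4th order Runge-Kutta integration of z' = f(x, z)."""
    h = (x_end - x0) / steps
    x, z = x0, np.asarray(z0, dtype=float)
    for _ in range(steps):
        k1 = f(x, z)
        k2 = f(x + h/2, z + h/2*k1)
        k3 = f(x + h/2, z + h/2*k2)
        k4 = f(x + h, z + h*k3)
        z = z + h/6*(k1 + 2*k2 + 2*k3 + k4)
        x += h
    return z

# y'' = -y with y(0) = 1, y'(0) = 0, i.e. y(x) = cos(x)
f = to_first_order(lambda x, y, yp: -y, 2)
z = rk4(f, 0.0, [1.0, 0.0], np.pi)
```

At x = π the numerical state z approximates (cos π, -sin π) = (-1, 0).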
Local Existence and Uniqueness Theorem for Systems of ODEs.
Let I ⊆ ℝ be an interval and let D ⊆ ℝ^n be a rectangle. Let f : I × D → ℝ^n, (x, y) ↦ f(x, y), be a function that is continuous w.r.t. x and continuously differentiable w.r.t. y. Let x_0 ∈ I and y_0 ∈ D. Then the initial value problem for the system of differential equations

  y' = f(x, y),  y(x_0) = y_0

possesses a unique solution in some interval J ⊆ I that contains x_0.
- a stronger result can be stated if one introduces a multi-dimensional Lipschitz condition, but this requires some Linear Algebra; as in the scalar case one can show that a continuously differentiable f always satisfies such a Lipschitz condition
- we won't give a proof of the above theorem here: the key mathematical ideas are the same as those used in our discussion of the scalar case, but the proof is technically more involved
Existence and uniqueness result for the linear equation.
Let p_i(t), i = 0, ..., n-1, and g(t) be continuous functions on an interval I ⊆ ℝ. Let t_0 ∈ I and y_i ∈ ℝ, i = 0, ..., n-1. Then the initial value problem

  d^n y/dt^n + p_{n-1}(t) d^{n-1}y/dt^{n-1} + ... + p_1(t) dy/dt + p_0(t) y = g(t)

with

  y(t_0) = y_0,  y'(t_0) = y_1,  y''(t_0) = y_2,  ...,  y^{(n-1)}(t_0) = y_{n-1}

possesses a unique solution y(t) on the interval I.
Superposition principle.
Let y_1(t) and y_2(t) be two solutions of the homogeneous nth order equation

  (*)  d^n y/dt^n + p_{n-1}(t) d^{n-1}y/dt^{n-1} + ... + p_1(t) dy/dt + p_0(t) y = 0

over an interval I; then z(t) = α y_1(t) + β y_2(t) is also a solution of (*) for all α, β ∈ ℝ.
- proof: take the first n derivatives of z, substitute them into the L.H.S. of (*), use that y_1 and y_2 are solutions, and find that

  d^n z/dt^n + p_{n-1}(t) d^{n-1}z/dt^{n-1} + ... + p_1(t) dz/dt + p_0(t) z = 0

- this implies: if y_1(t), ..., y_k(t) are solutions of the homogeneous problem and Y(t) is a solution of the inhomogeneous problem

  d^n z/dt^n + p_{n-1}(t) d^{n-1}z/dt^{n-1} + ... + p_1(t) dz/dt + p_0(t) z = g(t)

  then z(t) = Y(t) + a_1 y_1(t) + ... + a_k y_k(t) is a solution of the inhomogeneous problem for all choices of a_1, ..., a_k
Linear independence of solutions of the homogeneous equation

  (1)  d^n y/dt^n + p_{n-1}(t) d^{n-1}y/dt^{n-1} + ... + p_1(t) dy/dt + p_0(t) y = 0

- let y_i(t), i = 1, ..., k, be functions; they are called linearly independent if the identity

  (2)  w_1 y_1(t) + ... + w_k y_k(t) = 0  for all t

  implies that w_1 = ... = w_k = 0
- if there exist coefficients w_1, ..., w_k, not all zero, such that (2) is satisfied, then y_1(t), ..., y_k(t) are called linearly dependent
- there are at most n linearly independent solutions y_1(t), ..., y_n(t) of (1)
- a set of n linearly independent solutions is called a fundamental system of (1)
- testing for linear dependence is more involved for k > 2 than it was for k = 2
Example. The functions f_1(x) = 1, f_2(x) = x, f_3(x) = 2 + 3x are linearly dependent:

  w_1 + w_2 x + w_3 (2 + 3x) = 0  for all x,

for example for w_3 = 1, w_1 = -2, w_2 = -3.
Example. The functions f_1(x) = e^x, f_2(x) = x e^x, f_3(x) = x are linearly independent:
- we are looking for w_1, w_2, w_3 such that for all x

  w_1 e^x + w_2 x e^x + w_3 x = 0

- in particular, this must be satisfied for three arbitrary choices of x, say x = 0, x = 1, x = -1:

  w_1 = 0
  w_1 e + w_2 e + w_3 = 0
  w_1 e^{-1} - w_2 e^{-1} - w_3 = 0

- thus, with w_1 = 0,

  w_2 e + w_3 = 0
  -w_2 e^{-1} - w_3 = 0

- hence, substituting w_2 = -w_3 e^{-1} from the first equation into the second,

  w_3 (1 - e^{-2}) = 0  ⟹  w_3 = 0  ⟹  w_2 = 0
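The point-sampling argument above is really a linear algebra computation: evaluating the functions at the sample points gives a matrix M with M[i, j] = f_j(x_i), and M w = 0 has only the trivial solution exactly when M has full rank. A sketch with numpy (my own illustration, using the same sample points as the slide):

```python
import numpy as np

# the three functions from the example
funcs = [np.exp, lambda x: x * np.exp(x), lambda x: x]

# evaluate them at the sample points x = 0, 1, -1: M[i, j] = f_j(x_i)
M = np.array([[f(x) for f in funcs] for x in [0.0, 1.0, -1.0]])

# M w = 0 has only the trivial solution iff M has full rank,
# so full rank proves linear independence of the functions
rank = np.linalg.matrix_rank(M)
```

Note the one-sidedness, discussed on the next slide: full rank proves independence, but a rank-deficient sample matrix proves nothing, since the functions might still be independent at other points.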
Comment. The approach used in the previous example works to test for linear independence but not to test for linear dependence, as this example shows:

  f_1(x) = 1 - x^2,  f_2(x) = cos(πx/2),  f_3(x) = x

Choosing x = 0, ±1 gives

  w_1 + w_2 = 0,  w_3 = 0,  -w_3 = 0.

This is obviously satisfied by all w_1 = -w_2 together with w_3 = 0. Nevertheless, the three functions are not linearly dependent: if one chooses instead x = 0, 1, 2 one has

  w_1 + w_2 = 0
  w_3 = 0
  -3w_1 - w_2 + 2w_3 = 0

so that w_1 = w_2 = w_3 = 0.
IV.2 Homogeneous equations with constant coefficients (Boyce & DiPrima, Section 4.2)
Problem formulation and ansatz
- we consider problems of the type

  (1)  y^{(n)} + p_{n-1} y^{(n-1)} + ... + p_1 y' + p_0 y = 0

- a solution y of this equation must be expressible as a linear combination of its own derivatives
- as in the case of second order equations, we make the ansatz y(x) = e^{rx}, where r is yet to be determined
- we have

  y(x) = e^{rx}  ⟹  d^k y/dx^k = r^k e^{rx}

- substituting into (1) and factoring out e^{rx} gives

  (2)  e^{rx} ( r^n + p_{n-1} r^{n-1} + ... + p_1 r + p_0 ) = 0
Problem formulation and ansatz, cont'd
- because e^{rx} > 0 for all x ∈ ℝ, a solution of type y(x) = e^{rx} exists exactly for those r with

  (3)  r^n + p_{n-1} r^{n-1} + ... + p_1 r + p_0 = 0

- the left-hand side of (3) is called the characteristic polynomial of (1); it has at most n distinct roots r_1, ..., r_n
- we have to distinguish several cases, depending on whether (i) the roots are all distinct and real, (ii) the roots contain complex numbers, (iii) there are roots with multiplicity > 1
- note that (ii) and (iii) do not exclude each other
Case I: All roots are distinct and real

  (1)  y^{(n)} + p_{n-1} y^{(n-1)} + ... + p_1 y' + p_0 y = 0
  (3)  r^n + p_{n-1} r^{n-1} + ... + p_1 r + p_0 = 0

- in this case we have n different r_i which define the n solutions y_i(t) = e^{r_i t}
- these solutions are linearly independent and form a fundamental system
- the general solution of (1) is

  y(t) = c_1 y_1(t) + ... + c_n y_n(t) = c_1 e^{r_1 t} + ... + c_n e^{r_n t}
Case I: All roots are distinct and real, cont'd
- to solve the initial value problem

  (1)  y^{(n)} + p_{n-1} y^{(n-1)} + ... + p_1 y' + p_0 y = 0
       y(t_0) = y_0,  y'(t_0) = y_1,  ...,  y^{(n-1)}(t_0) = y_{n-1}

  we need to find the constants c_1, ..., c_n such that

  c_1 e^{r_1 t_0} + ... + c_n e^{r_n t_0} = y_0
  c_1 r_1 e^{r_1 t_0} + ... + c_n r_n e^{r_n t_0} = y_1
  ...
  c_1 r_1^{n-1} e^{r_1 t_0} + ... + c_n r_n^{n-1} e^{r_n t_0} = y_{n-1}

- this is a system of n linear equations for n unknowns that must be solved, e.g. by Gauß elimination
Comments
- finding the roots of the characteristic polynomial

  r^n + p_{n-1} r^{n-1} + ... + p_1 r + p_0 = 0

  is in general not easy for n > 2
- for n = 3 there are lengthy look-up tables; sometimes one can guess one root and reduce the order of the polynomial
- in Linear Algebra and Number Theory there are some results that allow one to obtain estimates for the r_i ... these estimates are often not very sharp
- in applications it is often important to know whether all roots are negative or whether positive roots also exist ... there are theorems to find this out which may or may not be useful in specific cases, such as Descartes' rule of signs, the Routh-Hurwitz criterion, etc.
- computer algebra systems might be useful
- while the theory is the same as for 2nd order equations, solving higher order equations explicitly might not be possible
Example: y''' - 2y'' - y' + 2y = 0
- the characteristic polynomial is r^3 - 2r^2 - r + 2 = 0
- its roots are r_1 = 1, r_2 = -1, r_3 = 2
- the general solution is

  y(t) = c_1 e^t + c_2 e^{-t} + c_3 e^{2t}

- to solve the initial value problem with y(0) = 1, y'(0) = 0, y''(0) = 1 we need to find c_1, c_2, c_3 such that

  c_1 + c_2 + c_3 = 1
  c_1 - c_2 + 2c_3 = 0
  c_1 + c_2 + 4c_3 = 1

- thus c_3 = 0, c_2 = 1/2, c_1 = 1/2 ⟹ y(t) = (e^t + e^{-t})/2 = cosh t
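Both steps of this example, finding the roots and solving the 3x3 linear system, can be checked numerically. A sketch with numpy (my own addition; at t_0 = 0 the coefficient matrix is the Vandermonde-type matrix with entries r_j^k):

```python
import numpy as np

# roots of r^3 - 2r^2 - r + 2: expect -1, 1, 2
r = np.sort(np.roots([1, -2, -1, 2]).real)

# initial conditions y(0)=1, y'(0)=0, y''(0)=1 give the linear system
# sum_j c_j r_j^k = y_k for k = 0, 1, 2, with matrix A[k, j] = r_j^k
A = np.vander(r, 3, increasing=True).T
c = np.linalg.solve(A, [1.0, 0.0, 1.0])
# expect c = [1/2, 1/2, 0] (for e^{-t}, e^t, e^{2t}): y(t) = cosh t
```

For larger n this is exactly the "system of n linear equations" mentioned on the previous slide, handed to a numerical Gauß elimination.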
[Figure: cosh x plotted together with the fundamental solutions y_1(x), y_2(x), y_3(x) on -4 ≤ x ≤ 4]
Example: y''' - 2y'' - y' + 2y = 0, cont'd
- to solve the initial value problem with y(0) = 0, y'(0) = 1, y''(0) = 0 we need to find c_1, c_2, c_3 such that

  c_1 + c_2 + c_3 = 0
  c_1 - c_2 + 2c_3 = 1
  c_1 + c_2 + 4c_3 = 0

- we find immediately c_3 = 0, c_1 = -c_2, 2c_1 = 1, and thus

  y(t) = (e^t - e^{-t})/2 = sinh t
[Figure: sinh x plotted together with the fundamental solutions y_1(x), y_2(x), y_3(x) on -4 ≤ x ≤ 4]
Case II: pairs of complex conjugate roots

  (1)  y^{(n)} + p_{n-1} y^{(n-1)} + ... + p_1 y' + p_0 y = 0
  (3)  r^n + p_{n-1} r^{n-1} + ... + p_1 r + p_0 = 0

- now we consider the case of a pair of complex conjugate roots r_{1,2} = λ ± iμ
- using Euler's formula, this gives two complex solutions

  z_{1,2}(x) = e^{(λ ± iμ)x} = e^{λx} (cos μx ± i sin μx)

- using the superposition principle, and the same reasoning as in the case of 2nd order equations, we can build from these two real solutions

  y_1(x) = (z_1(x) + z_2(x))/2 = e^{λx} cos μx
  y_2(x) = (z_1(x) - z_2(x))/(2i) = e^{λx} sin μx
Comments
- complex roots always appear as conjugate pairs; in particular, their number is always even
- complex and real roots can appear together: if the ODE is of odd order, there has to be at least one real root of the characteristic polynomial
- other problems might have only complex roots (several pairs)
Example: EIy^{(4)} - ky = 0
- this equation arises when studying transverse vibrations of a beam, where y(x) is related to the displacement of the beam at position x, the constant E is the Young's modulus, I is the area moment of inertia, and k is a parameter
- we write this as

  d^4 y/dx^4 - a y = 0,  where a := k/(EI)

- the characteristic polynomial r^4 - a = 0 has the roots

  r_1 = b,  r_2 = -b,  r_3 = ib,  r_4 = -ib,  with b = a^{1/4} = (k/(EI))^{1/4}
Example: EIy^{(4)} - ky = 0, cont'd
- the two real roots r_{1,2} = ±b imply the two real solutions

  y_1(x) = e^{bx},  y_2(x) = e^{-bx}

- the two complex roots r_{3,4} = ±ib imply the two real solutions

  y_3(x) = cos bx,  y_4(x) = sin bx

- thus we have the general solution

  y(x) = c_1 e^{bx} + c_2 e^{-bx} + c_3 cos bx + c_4 sin bx

- using cosh z = (e^z + e^{-z})/2, sinh z = (e^z - e^{-z})/2 we can also write the general solution in the form

  y(x) = c_1 cosh bx + c_2 sinh bx + c_3 cos bx + c_4 sin bx
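The root structure of r^4 - a = 0 (one positive, one negative, one conjugate imaginary pair, all of modulus b = a^{1/4}) can be confirmed numerically; the parameter values below are arbitrary assumptions for the demonstration, not from the notes:

```python
import numpy as np

# illustrative values (an assumption for this demo): k = 2, E = I = 1
k, E, I = 2.0, 1.0, 1.0
a = k / (E * I)
b = a ** 0.25

# roots of the characteristic polynomial r^4 - a = 0
roots = np.roots([1, 0, 0, 0, -a])
# every root satisfies r^4 = a and has modulus b = a^(1/4)
```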
[Figure: cosh bx, sinh bx, cos bx, and sin bx plotted against bx on -4 ≤ bx ≤ 4]
Example: y''' + y'' + 3y' - 5y = 0
- the characteristic polynomial is r^3 + r^2 + 3r - 5 = 0
- a first root is guessed/recognized as r_1 = 1
- the remaining two roots r_{2,3} can be determined from

  r^3 + r^2 + 3r - 5 = (r - 1)(r - r_2)(r - r_3)
                     = r^3 - (1 + r_2 + r_3) r^2 + (r_2 + r_3 + r_2 r_3) r - r_2 r_3

- comparing coefficients gives

  r_2 r_3 = 5,  r_2 + r_3 = -2  ⟹  r_{2,3} = (-2 ± √(4 - 20))/2 = -1 ± 2i

- general solution:

  y(x) = c_1 e^x + c_2 e^{-x} cos 2x + c_3 e^{-x} sin 2x
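When guessing a root, it is worth verifying the guess before dividing it out; both the check and the full root set can be obtained with numpy (my own sketch, not part of the notes):

```python
import numpy as np

p = [1, 1, 3, -5]            # coefficients of r^3 + r^2 + 3r - 5

# confirm the guessed root r = 1, then compute all roots numerically
check = np.polyval(p, 1.0)   # exactly 0.0
roots = np.roots(p)          # 1 and the conjugate pair -1 ± 2i
```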
[Figure: the three fundamental solutions e^x, e^{-x} cos 2x, e^{-x} sin 2x plotted on -4 ≤ x ≤ 4]
Case III: roots with multiplicity > 1

  y^{(n)} + p_{n-1} y^{(n-1)} + ... + p_1 y' + p_0 y = 0
  r^n + p_{n-1} r^{n-1} + ... + p_1 r + p_0 = 0

- accounting for multiplicity, the characteristic polynomial has exactly n roots
- real roots can have multiplicity up to n, complex roots up to n/2 (since their conjugates have the same multiplicity)
- let r_1 be a root with multiplicity m; then we find the m linearly independent solutions

  e^{r_1 x}, x e^{r_1 x}, x^2 e^{r_1 x}, ..., x^{m-1} e^{r_1 x}

- this can be formally verified by some tedious algebra ... we skip that step
- this also works for complex roots: if α + iβ is a root with multiplicity m, then we have the 2m linearly independent solutions

  e^{αx} cos βx, x e^{αx} cos βx, x^2 e^{αx} cos βx, ..., x^{m-1} e^{αx} cos βx
  e^{αx} sin βx, x e^{αx} sin βx, x^2 e^{αx} sin βx, ..., x^{m-1} e^{αx} sin βx
Example: y^{(4)} - y''' - 3y'' + 5y' - 2y = 0
- characteristic polynomial:

  r^4 - r^3 - 3r^2 + 5r - 2 = (r - 1)^3 (r + 2) = 0

- roots are r_{1,2,3} = 1, r_4 = -2
- general solution:

  y(x) = c_1 e^x + c_2 x e^x + c_3 x^2 e^x + c_4 e^{-2x}
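A practical warning when using a computer for such examples (my own aside, not from the notes): multiple roots are numerically ill-conditioned, so a numerical root finder smears the triple root at r = 1 into a cluster of three nearly equal values rather than returning it exactly:

```python
import numpy as np

# r^4 - r^3 - 3r^2 + 5r - 2 = (r - 1)^3 (r + 2)
roots = np.roots([1, -1, -3, 5, -2])

# the triple root at r = 1 comes back as a tight cluster of three
# nearly equal values, possibly with tiny spurious imaginary parts,
# so comparisons need a generous tolerance
```

This is one reason why recognizing multiplicities from a factorization, as done on the slide, is preferable to purely numerical root finding.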
[Figure: the fundamental solutions e^x, x e^x, x^2 e^x, e^{-2x} plotted on -5 ≤ x ≤ 3]
Example: y^{(4)} - 8y''' + 26y'' - 40y' + 25y = 0
- characteristic polynomial:

  r^4 - 8r^3 + 26r^2 - 40r + 25 = (r^2 - 4r + 5)^2 = 0

- the roots of r^2 - 4r + 5 are double roots of the characteristic polynomial; we find

  r_{1,2} = (4 ± √(16 - 20))/2 = 2 ± i,  r_3 = r_1,  r_4 = r_2

- general solution:

  y(x) = c_1 e^{2x} cos x + c_2 e^{2x} sin x + c_3 x e^{2x} cos x + c_4 x e^{2x} sin x
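The "tedious algebra" that verifies the extra factor of x for a multiple root can be delegated to a computer algebra system. A sketch (assuming sympy is available) checking that x e^{2x} cos x, coming from the double root 2 + i, really solves the equation:

```python
import sympy as sp

x = sp.symbols('x')
# candidate solution associated with the double root 2 + i
y = x * sp.exp(2*x) * sp.cos(x)

# left-hand side of y'''' - 8y''' + 26y'' - 40y' + 25y
lhs = (sp.diff(y, x, 4) - 8*sp.diff(y, x, 3)
       + 26*sp.diff(y, x, 2) - 40*sp.diff(y, x) + 25*y)
# sp.simplify(lhs) reduces to 0, confirming y is a solution
```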
[Figure: the four fundamental solutions y_1(x), ..., y_4(x) plotted on -1 ≤ x ≤ 3]
IV.3 The Method of Undetermined Coefficients (Boyce & DiPrima, Section 4.3)
General remarks
- we consider

  y^{(n)} + p_{n-1} y^{(n-1)} + ... + p_1 y' + p_0 y = g(t)

  with p_i = const and a non-constant forcing function g(t)
- the idea is the same as for second order equations: if g(t) is a polynomial, exponential, sine or cosine, or a sum and/or product of such functions, we can make a corresponding ansatz for a particular solution Y(t) and find the a priori undetermined coefficients by substitution and comparison of coefficients
- the general solution is then found as

  y(t) = c_1 y_1(t) + ... + c_n y_n(t) + Y(t)

  where the functions y_1(t), ..., y_n(t) are a fundamental system of the homogeneous equation

  y^{(n)} + p_{n-1} y^{(n-1)} + ... + p_1 y' + p_0 y = 0
General remarks, cont'd
- the solution of the initial value problem is then found by choosing the c_i such that the initial conditions are satisfied
- the underlying theory is exactly the same as in the second order case, but algebraically slightly more involved
- the following practical difficulties arise when using the method:
  - we must find a fundamental system of the homogeneous equation, i.e. find all the roots of a polynomial of degree n
  - for higher order equations more work is incurred by taking higher derivatives of the ansatz function
  - to determine the solution of the initial value problem, a system of n linear equations for n unknowns must be solved
Example: y''' - 3y'' + 3y' - y = 4e^t
- characteristic polynomial:

  r^3 - 3r^2 + 3r - 1 = (r - 1)^3  ⟹  r_{1,2,3} = 1

  and we have the fundamental system for the homogeneous equation

  y_1(t) = e^t,  y_2(t) = t e^t,  y_3(t) = t^2 e^t

- for the particular solution we start with the ansatz Y(t) = A t^3 e^t [note that the simpler ansatz Y(t) = A e^t won't work because this is a solution of the homogeneous problem, ditto for Y(t) = A t e^t and Y(t) = A t^2 e^t]
- we have then

  Y'(t) = A e^t (t^3 + 3t^2),
  Y''(t) = A e^t (t^3 + 6t^2 + 6t),
  Y'''(t) = A e^t (t^3 + 9t^2 + 18t + 6)
Example: y''' - 3y'' + 3y' - y = 4e^t, cont'd
- substituting into the ODE and factoring out e^t, we find

  A e^t [ (t^3 + 9t^2 + 18t + 6) - 3(t^3 + 6t^2 + 6t) + 3(t^3 + 3t^2) - t^3 ] = 4e^t

  which reduces to

  6A e^t = 4e^t  ⟹  A = 2/3

- and we have the particular solution

  Y(t) = (2/3) t^3 e^t

  and the general solution

  y(t) = c_1 e^t + c_2 t e^t + c_3 t^2 e^t + (2/3) t^3 e^t
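After the derivative bookkeeping above, it is cheap to double-check the particular solution symbolically; a sketch (assuming sympy is available, not part of the notes):

```python
import sympy as sp

t = sp.symbols('t')
# the particular solution found above, Y(t) = (2/3) t^3 e^t
Y = sp.Rational(2, 3) * t**3 * sp.exp(t)

# left-hand side of y''' - 3y'' + 3y' - y
lhs = sp.diff(Y, t, 3) - 3*sp.diff(Y, t, 2) + 3*sp.diff(Y, t) - Y
# lhs simplifies to 4 e^t, the forcing term
```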
Example: y^{(4)} + 2y'' + y = 3 sin t - 5 cos t
- the characteristic polynomial is

  r^4 + 2r^2 + 1 = (r^2 + 1)^2 = 0

  with double roots r_{1,2} = i, r_{3,4} = -i, and fundamental system of the homogeneous equation

  y_1(t) = sin t,  y_2(t) = cos t,  y_3(t) = t sin t,  y_4(t) = t cos t

- the ansatz for the particular solution is Y(t) = A t^2 sin t + B t^2 cos t
- taking the first four derivatives of Y, substituting into the ODE, and collecting terms we find

  -8A sin t - 8B cos t = 3 sin t - 5 cos t

  and, therefore, the particular solution

  Y(t) = -(3/8) t^2 sin t + (5/8) t^2 cos t
Example: y''' - 4y' = t + 3 cos t + e^{2t}
- the characteristic polynomial is

  r^3 - 4r = 0

  with roots r_1 = 0, r_2 = 2, r_3 = -2
- a fundamental system of the homogeneous equation is

  y_1(t) = 1,  y_2(t) = e^{2t},  y_3(t) = e^{-2t}

- we can find a particular solution as the sum of particular solutions of the simpler problems

  y''' - 4y' = t,  y''' - 4y' = 3 cos t,  y''' - 4y' = e^{2t}

- our ansatz functions are

  Y_1(t) = t (A_0 t + A_1),  Y_2(t) = B cos t + C sin t,  Y_3(t) = E t e^{2t}

  (why the extra factors of t in Y_1 and Y_3? because constants and e^{2t} already solve the homogeneous equation)
Example: y''' - 4y' = t + 3 cos t + e^{2t}, cont'd
- carrying out the calculations we find

  A_0 = -1/8,  A_1 = 0,  B = 0,  C = -3/5,  E = 1/8

  and

  Y(t) = -(1/8) t^2 - (3/5) sin t + (1/8) t e^{2t}
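Since the particular solution was assembled from three separate sub-problems, a combined symbolic check is worthwhile; a sketch (assuming sympy is available, not part of the notes):

```python
import sympy as sp

t = sp.symbols('t')
# the combined particular solution found above
Y = (-sp.Rational(1, 8)*t**2 - sp.Rational(3, 5)*sp.sin(t)
     + sp.Rational(1, 8)*t*sp.exp(2*t))

# left-hand side y''' - 4y' and the full forcing term
lhs = sp.diff(Y, t, 3) - 4*sp.diff(Y, t)
rhs = t + 3*sp.cos(t) + sp.exp(2*t)
# lhs - rhs simplifies to 0, so Y solves the full inhomogeneous equation
```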
A comment on Variation of Constants (Boyce & DiPrima, Section 4.4)

  y^{(n)} + p_{n-1}(t) y^{(n-1)} + ... + p_1(t) y' + p_0(t) y = g(t)

- this method becomes very cumbersome for higher order equations
- we first need to find a fundamental system y_1(t), ..., y_n(t) of the homogeneous equation ... this is very challenging for equations with non-constant coefficients, and not easy for equations with constant coefficients, as discussed already
- the ansatz is to find a solution of the form

  Y(t) = u_1(t) y_1(t) + ... + u_n(t) y_n(t)

- expanding on the idea for 2nd order equations, this leads us to solve the linear system

  y_1 u_1' + ... + y_n u_n' = 0
  y_1' u_1' + ... + y_n' u_n' = 0
  ...
  y_1^{(n-2)} u_1' + ... + y_n^{(n-2)} u_n' = 0
  y_1^{(n-1)} u_1' + ... + y_n^{(n-1)} u_n' = g(t)

  for u_1', ..., u_n', and then to find u_1, ..., u_n by integration.