Math 4 Spring 08 problem set.

1. (a) Consider these two first order equations.

    (I)  dy/dx = ...        (II)  dy/dx = ...

Below are four direction fields. Match the differential equations above to their direction fields. Provide a justification for your choice. (In each field one tick mark is one unit, just in case you can't see the tick labels!)

[Figure: four direction fields, labeled Field A, Field B, Field C, and Field D.]

Solution. The direction field associated to equation (I) should have zero slope along the single line on which its right-hand side vanishes. This eliminates Fields C and D. The slope field should also be independent of the x variable. This eliminates slope field B, which depends on both variables. Therefore slope field A is the matching field.

Solution, continued. The direction field that matches equation (II) must have zero slope along the line on which its right-hand side vanishes. The only direction field with this characteristic is direction field C. Therefore we match equation (II) with direction field C.

2. Find the general solution to the differential equation

    y' + (2/x) y = sin x.
Be sure to justify each step. Include in your solution the derivation of the integrating factor µ(x) for a first order linear equation.

Solution. We use the method of integrating factors to solve the first order linear differential equation

    y' + (2/x) y = sin x.

To determine the proper integrating factor, we multiply this equation through by the unknown function µ(x) and obtain

    µ(x) y' + (2/x) µ(x) y = µ(x) sin x.

We now add and subtract the term µ'(x) y in order to obtain the proper form of the product rule for differentiating µ(x) y, and obtain

    µ(x) y' + µ'(x) y + (2/x) µ(x) y - µ'(x) y = µ(x) sin x.

We group the terms on the left-hand side as follows:

    [µ(x) y' + µ'(x) y] + [(2/x) µ(x) y - µ'(x) y] = µ(x) sin x.

Notice that the first term on the left-hand side is the result of differentiating the product µ(x) y:

    (µ(x) y)' + [(2/x) µ(x) - µ'(x)] y = µ(x) sin x.    (1)

If we can choose µ(x) so that the second term on the left-hand side is zero, then solving the equation will be a matter of simply anti-differentiating both sides of the equation and solving for y. To do this we must solve the equation

    (2/x) µ(x) - µ'(x) = 0.

If we collect the terms involving µ(x) on the left-hand side we get

    µ'(x)/µ(x) = 2/x.

The left-hand side is just the derivative of the natural log of µ(x):

    (ln µ(x))' = 2/x.

Anti-differentiating both sides of the equation we get

    ln µ(x) = 2 ln x + C,

where C is an arbitrary constant of integration. We solve for the unknown function µ(x) by exponentiating both sides of the equation:

    e^(ln µ(x)) = e^(2 ln x + C).

Since the natural log and exponential functions are inverses of each other, the equation becomes

    µ(x) = e^C x^2.

With this choice of µ(x), equation (1) becomes

    (e^C x^2 y)' + [(2/x) e^C x^2 - 2x e^C] y = (sin x)(e^C x^2).

As planned, the second term on the left-hand side is zero. The arbitrary constant e^C divides out of both sides of the equation and we're left with

    (x^2 y)' = (sin x)(x^2).
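As a quick check, the integrating factor computed above can be confirmed symbolically. This is a minimal sketch (it assumes SymPy is available; the symbols mirror those in the text, with e^C taken to be 1):

```python
# A sketch: confirm with SymPy that the integrating factor mu(x) = x**2
# turns the left-hand side of  y' + (2/x) y = sin x  into the exact
# derivative (mu*y)'.
import sympy as sp

x = sp.symbols('x', positive=True)
y = sp.Function('y')

p = 2 / x
mu = sp.simplify(sp.exp(sp.integrate(p, x)))   # exp(2 ln x) = x**2
assert sp.simplify(mu - x**2) == 0

# (mu*y)' equals mu*(y' + p*y), so the equation becomes (x**2 y)' = x**2 sin x
lhs = sp.diff(mu * y(x), x)
rhs = mu * (sp.diff(y(x), x) + p * y(x))
assert sp.simplify(lhs - rhs) == 0
```

The computation mirrors the hand derivation: µ'/µ = 2/x integrates to ln µ = 2 ln x + C, giving µ = x^2 up to the constant e^C.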
Anti-differentiating both sides of the equation with respect to x, we get

    x^2 y = ∫ x^2 sin x dx.

One integration by parts with u = x^2 and dv = sin x dx yields

    x^2 y = -x^2 cos x + ∫ 2x cos x dx.

Another integration by parts with u = 2x and dv = cos x dx yields

    x^2 y = -x^2 cos x + 2x sin x - ∫ 2 sin x dx.

Performing the final integration, we get

    x^2 y = -x^2 cos x + 2x sin x + 2 cos x + C.

Solving for y we get the general solution

    y = -cos x + (2 sin x)/x + (2 cos x)/x^2 + C/x^2.

3. By way of reviewing the previous problem, the general solution of the differential equation

    x y' + y = x sin x

can be computed using the method of integrating factors. In proper form it becomes

    y' + (1/x) y = sin x.

From this form, we see that the functions are p(x) = 1/x and g(x) = sin x. The integrating factor is

    µ(x) = e^(∫ (1/x) dx) = x.

Multiplying µ(x) through the above equation we get

    [x y]' = x sin x.

Integrating both sides with respect to x we get

    x y = ∫ x sin x dx.

After one integration by parts with u = x and dv = sin x dx we get

    x y = sin x - x cos x + C.

Solving for y we obtain the general solution

    y = -cos x + (sin x)/x + C/x.

A direction field with the solutions y_j(x) plotted on it is given on the next page.
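Both general solutions can be spot-checked symbolically. A minimal SymPy sketch (C denotes the arbitrary constant of integration):

```python
# A sketch: substitute each general solution back into its equation and
# confirm that the residual simplifies to zero.
import sympy as sp

x, C = sp.symbols('x C')

# Problem 2:  y' + (2/x) y = sin x
y2 = -sp.cos(x) + 2*sp.sin(x)/x + 2*sp.cos(x)/x**2 + C/x**2
assert sp.simplify(sp.diff(y2, x) + (2/x)*y2 - sp.sin(x)) == 0

# Problem 3:  x y' + y = x sin x
y3 = -sp.cos(x) + sp.sin(x)/x + C/x
assert sp.simplify(x*sp.diff(y3, x) + y3 - x*sp.sin(x)) == 0
```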
[Figure: the direction field for x y' + y = x sin x, with several solutions y_j(x) plotted on it.]

All of the solutions y_j(x) with C ≠ 0 become infinite at x = 0. For large values of x, the solutions appear to be at least bounded and perhaps tending to 0. However, noting that

    lim_{x→0} (sin x - x cos x)/x = 0,

we see that the solution obtained by setting C = 0 can be well defined at x = 0.

4. Consider the differential equation

    y'' + λ^2 y = 0.

This differential equation has general solution

    y = c1 cos(λx) + c2 sin(λx).

(a) Verify that y is a solution of the differential equation.

(b) Impose the conditions y(0) = 0, y(π) = 0 on the general solution. For what values of λ will these conditions be satisfied?

(c) Impose the condition y(0) = y(π) on the general solution. For what values of λ will this condition be satisfied?

Solutions. (a) We will show that the function y(x) = c1 cos(λx) + c2 sin(λx) is a solution to the differential equation. First we must compute the second derivative of y(x). The first derivative is

    y'(x) = -c1 λ sin(λx) + c2 λ cos(λx).

The second derivative is

    y''(x) = -c1 λ^2 cos(λx) - c2 λ^2 sin(λx).

Plugging the function and its second derivative into the differential equation, we obtain

    (-c1 λ^2 cos(λx) - c2 λ^2 sin(λx)) + λ^2 (c1 cos(λx) + c2 sin(λx)) = 0.

Therefore y(x) is a solution to the differential equation.
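Part (a) can also be verified symbolically. A minimal SymPy sketch:

```python
# A sketch: check that y = c1 cos(lambda x) + c2 sin(lambda x) satisfies
# y'' + lambda**2 y = 0 identically.
import sympy as sp

x, lam, c1, c2 = sp.symbols('x lambda c1 c2')
y = c1*sp.cos(lam*x) + c2*sp.sin(lam*x)

assert sp.simplify(sp.diff(y, x, 2) + lam**2 * y) == 0
```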
(b) We shall now impose the boundary conditions y(0) = y(π) = 0. Doing this yields the system of two equations and two unknowns

    y(0) = 0 = c1,
    y(π) = 0 = c2 sin(λπ).

The first condition forces c1 = 0. Thus, the most general solution satisfying the condition y(0) = 0 is

    y(x) = c2 sin(λx).

Imposing the second condition leads to the equation

    c2 sin(λπ) = 0.

This equation is satisfied if c2 = 0 or if λ = n where n is an integer. Therefore the differential equation has non-zero solutions satisfying these conditions only when λ is an integer.

(c) Starting over again with the general solution, let's impose the condition y(0) = y(π). Doing this yields the equation

    y(0) = c1 = c1 cos(λπ) + c2 sin(λπ).

One possible solution (the only solution?) is to choose a value (or values) of λ so that cos(λπ) = 1 and sin(λπ) = 0. This can be done by choosing λ = 2n. Thus, non-zero solutions exist when λ = 2n where n is an integer.

These special values of λ are known as eigenvalues. Eigenvalues depend upon the conditions you specify (as you should see from parts (b) and (c)). There are important physical applications of eigenvalues in physics and mechanical engineering.

5. Theorem. Consider the initial value problem

    y'' + p(x) y' + q(x) y = g(x),    y(x0) = y0,  y'(x0) = y0',

where p, q, and g are continuous on an open interval I containing the point x0. Then there exists a unique solution of this problem on the entire interval I.

Consider the specific initial value problem

    (x - 1) y'' + y' = x,    y(0) = 0,  y'(0) = 0.

We would like to apply the theorem to this example. To do this we must first convert it into the form used in the theorem:

    y'' + y'/(x - 1) = x/(x - 1).

We can now determine the functions p, q, and g in the theorem:

    p(x) = 1/(x - 1),   q(x) = 0,   g(x) = x/(x - 1).

The function q(x) = 0 is continuous on the entire real line. However, the functions p(x) and g(x) are undefined at the point x = 1; p(x) and g(x) are continuous on the set (-∞, 1) ∪ (1, ∞). The functions p, q, and g are all continuous on the interval I = (-∞, 1), and this interval contains the initial point x0 = 0, so the theorem guarantees us that we will have a unique solution to this initial value problem on the entire interval I.
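The theorem's conclusion can be sanity-checked with SymPy. This is a sketch, assuming the example equation is (x - 1)y'' + y' = x with y(0) = y'(0) = 0 as stated; it verifies that the particular solution derived below by reduction of order satisfies the problem on I (note ln|x - 1| = ln(1 - x) on I):

```python
# A sketch: verify that y(x) = x**2/4 + x/2 + (1/2) ln(1 - x) solves the
# initial value problem (x - 1) y'' + y' = x,  y(0) = 0,  y'(0) = 0.
import sympy as sp

x = sp.symbols('x')
y = x**2/4 + x/2 + sp.log(1 - x)/2   # valid on I = (-oo, 1)

assert sp.simplify((x - 1)*sp.diff(y, x, 2) + sp.diff(y, x) - x) == 0
assert y.subs(x, 0) == 0
assert sp.diff(y, x).subs(x, 0) == 0
```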
We will now verify this conclusion by solving the example problem directly, using the method of reduction of order (since there is no y term appearing explicitly in the example). We make the substitution v = y' and the differential equation transforms as follows:

    v' + v/(x - 1) = x/(x - 1).
This is now a first order linear differential equation which we can solve using the method of integrating factors. The integrating factor in this case is µ(x) = x - 1. Multiplying the equation through by µ(x) we obtain

    (v(x)(x - 1))' = x.

Integrating both sides of the equation with respect to x we get

    v(x)(x - 1) = x^2/2 + c1.

Solving for v(x) we get

    v(x) = (x^2/2 + c1)/(x - 1).

If we now undo the substitution v = y', we get another differential equation, but in the unknown function y instead of v:

    y' = (x^2/2 + c1)/(x - 1).

We must integrate this function with respect to x to obtain the solution y(x):

    y(x) = ∫ (x^2/2 + c1)/(x - 1) dx
         = ∫ [x/2 + 1/2 + (1/2 + c1)/(x - 1)] dx   (by long division)
         = x^2/4 + x/2 + (1/2 + c1) ln|x - 1| + c2.

We now impose the initial conditions y(0) = 0 and y'(0) = 0. We get the following system of two equations and two unknowns:

    y(0) = 0 = c2,
    y'(0) = 0 = -c1.

This is easy to solve. We see that c1 = c2 = 0, which leads to the particular solution

    y(x) = x^2/4 + x/2 + (1/2) ln|x - 1|.

We see that the conclusion of the theorem in regards to this example is correct: we have a solution on the entire interval I.

6. Theorem. If y1 and y2 are two solutions of the differential equation

    L[y] = y'' + p(x) y' + q(x) y = 0,

where p and q are continuous on an open interval I, then the Wronskian of y1 and y2 is given by

    W(y1, y2)(x) = c exp[-∫ p(x) dx],

where c is a certain constant that depends on y1 and y2, but not on x. Further, W(y1, y2)(x) is either zero for all x in I (if c = 0) or else is never zero in I (if c ≠ 0).

Proof: Assume that y1 and y2 are solutions of the differential equation
L[y] = 0. Then we have the following two equations:

    y1'' + p(x) y1' + q(x) y1 = 0,    (2)
    y2'' + p(x) y2' + q(x) y2 = 0.    (3)

If we multiply equation (2) by -y2 and equation (3) by y1 we get

    -y2 y1'' - p(x) y2 y1' - q(x) y2 y1 = 0,
    y1 y2'' + p(x) y1 y2' + q(x) y1 y2 = 0.

If we now add these equations, we see that the q(x) terms will cancel and we'll be left with the equation

    (y1 y2'' - y2 y1'') + p(x)(y1 y2' - y2 y1') = 0.    (4)

The second term on the left-hand side of this equation contains the Wronskian of y1 and y2. Let W(x) = W(y1, y2)(x); then

    W(x) = y1(x) y2'(x) - y1'(x) y2(x).

Let's compute the derivative of W(x). To do this we must apply the product rule to both terms on the right-hand side of the above equation. This yields

    W'(x) = (y1'(x) y2'(x) + y1(x) y2''(x)) - (y1''(x) y2(x) + y1'(x) y2'(x)).

Notice that the terms involving the first derivatives cancel and we get the identity

    W'(x) = y1(x) y2''(x) - y1''(x) y2(x).

If we now make the substitutions of W(x) and W'(x) into equation (4), we get the following first order linear differential equation in the unknown function W(x):

    W'(x) + p(x) W(x) = 0.

We can solve this using the method of integrating factors. Since it is already in standard form, we see that our integrating factor is µ(x) = exp[∫ p(x) dx]. Multiplying through, we get

    (exp[∫ p(x) dx] W(x))' = 0.

Integrating both sides of this with respect to x we get the equation

    exp[∫ p(x) dx] W(x) = c.

Solving for the function W(x), we get that the Wronskian satisfies

    W(y1, y2)(x) = c exp[-∫ p(x) dx],
where c is the unknown constant of integration. We have shown that the Wronskian of y1 and y2 is a constant times an exponential function. Since an exponential function is never zero, we see that if c ≠ 0 then the Wronskian is never zero (and thus the functions y1 and y2 are linearly independent). Alternatively, if c = 0 then the Wronskian is always zero (and the functions are linearly dependent).

Abel's formula provides a simple way to compute the Wronskian of two solutions of L[y] = 0 in terms of the coefficient function p(x). It is interesting to note that the Wronskian is independent of q(x).

7. (a) Euler's Formula is given by the identity

    e^(ix) = cos(x) + i sin(x).

In order to prove this identity we shall use the Taylor series expansion of the function e^z:

    e^z = Σ_{n=0}^∞ z^n / n!.

We substitute z = ix into this expression. This gives

    e^(ix) = Σ_{n=0}^∞ (ix)^n / n!.

In order to proceed from here, we must first notice a pattern in powers of the complex unit i. We have the following:

    i^(2n) = (-1)^n,    i^(2n+1) = i (-1)^n.

(This shows a pattern in powers of i for even and odd indices n.) We must break up the infinite series into two infinite series, one for even n and one for odd n:

    e^(ix) = Σ_{n=0}^∞ (ix)^(2n) / (2n)! + Σ_{n=0}^∞ (ix)^(2n+1) / (2n+1)!.

We may now use our two identities for even and odd powers of i:

    e^(ix) = Σ_{n=0}^∞ (-1)^n x^(2n) / (2n)! + i Σ_{n=0}^∞ (-1)^n x^(2n+1) / (2n+1)!
           = cos(x) + i sin(x),

the last identity following from the specific forms of the Taylor series for sine and cosine. Thus, we have proven Euler's Formula.

(b) We would now like to establish the identity

    cosh(ix) = cos(x).

To do this we note the definition of hyperbolic cosine,

    cosh(x) = (e^x + e^(-x))/2.
We now substitute ix into this identity and obtain

    cosh(ix) = (e^(ix) + e^(-ix))/2.

Now, applying Euler's Formula we get

    cosh(ix) = (1/2)(cos(x) + i sin(x) + cos(x) - i sin(x))
             = (1/2)(2 cos(x))
             = cos(x),

which is the desired identity.

(c) We would now like to establish the identity

    sinh(ix) = i sin(x).

To do this we note the definition of hyperbolic sine,

    sinh(x) = (e^x - e^(-x))/2.

We now substitute ix into this identity and obtain

    sinh(ix) = (e^(ix) - e^(-ix))/2.

Now, applying Euler's Formula we get

    sinh(ix) = (1/2)(cos(x) + i sin(x) - cos(x) + i sin(x))
             = (1/2)(2i sin(x))
             = i sin(x),

which is the desired identity.
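The identities above can be spot-checked numerically with Python's standard library. A minimal sketch:

```python
# A sketch: numerically check Euler's formula, cosh(ix) = cos x, and
# sinh(ix) = i sin x at several sample points.
import cmath
import math

for t in (0.0, 0.5, 1.0, 2.0, math.pi):
    # e^{it} = cos t + i sin t
    assert abs(cmath.exp(1j*t) - complex(math.cos(t), math.sin(t))) < 1e-12
    # cosh(it) = cos t
    assert abs(cmath.cosh(1j*t) - math.cos(t)) < 1e-12
    # sinh(it) = i sin t
    assert abs(cmath.sinh(1j*t) - 1j*math.sin(t)) < 1e-12

# A truncated Taylor series for e^{it} converges to the same value.
t = 1.0
partial_sum = sum((1j*t)**n / math.factorial(n) for n in range(30))
assert abs(partial_sum - cmath.exp(1j*t)) < 1e-12
```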