6.5 Impulse Functions

which is the formal solution of the given problem. It is also possible to write y in the form

y = 0 for t < 5,  y = (2/√15) e^−(t−5)/4 sin(√15 (t − 5)/4) for t ≥ 5.

The graph of this solution is shown in Figure 6.5.3. Since the initial conditions at t = 0 are homogeneous and there is no external excitation until t = 5, there is no response on the interval 0 < t < 5. The impulse at t = 5 produces a decaying oscillation that persists indefinitely. The response is continuous at t = 5 despite the singularity in the forcing function at that point. However, the first derivative of the solution has a jump discontinuity at t = 5, and the second derivative has an infinite discontinuity there. This is required by the differential equation, since a singularity on one side of the equation must be balanced by a corresponding singularity on the other side.

In dealing with problems with impulsive forcing, the use of the delta function usually simplifies the mathematical calculations, often quite significantly. However, if the actual excitation extends over a short, but nonzero, time interval, then an error is introduced by modeling the excitation as taking place instantaneously. This error may be negligible, but in a practical problem it should not be dismissed without consideration. In Problem 16 you are asked to investigate this issue for a simple harmonic oscillator.

PROBLEMS

In each of Problems 1 through 12:
(a) Find the solution of the given initial value problem.
(b) Draw a graph of the solution.
1. y″ + 2y′ + 2y = δ(t − π); y(0) = 1, y′(0) = 0
2. y″ + 4y = δ(t − π) − δ(t − 2π); y(0) = 0, y′(0) = 0
3. y″ + 3y′ + 2y = δ(t − 5) + u₁₀(t); y(0) = 0, y′(0) = 1/2
4. y″ − y = −20δ(t − 3); y(0) = 1, y′(0) = 0
5. y″ + 2y′ + 3y = sin t + δ(t − 3π); y(0) = 0, y′(0) = 0
6. y″ + 4y = δ(t − 4π); y(0) = 1/2, y′(0) = 0
7. y″ + y = δ(t − 2π) cos t; y(0) = 0, y′(0) = 1
8. y″ + 4y = 2δ(t − π/4); y(0) = 0, y′(0) = 0
9. y″ + y = u_π/2(t) + 3δ(t − 3π/2) − u_2π(t); y(0) = 0, y′(0) = 0
10. 2y″ + y′ + 4y = δ(t − π/6) sin t; y(0) = 0, y′(0) = 0
11. y″ + 2y′ + 2y = cos t + δ(t − π/2); y(0) = 0, y′(0) = 0
12. y⁗ − y = δ(t − 1); y(0) = 0, y′(0) = 0, y″(0) = 0, y‴(0) = 0
13. Consider again the system in Example 1 of this section, in which an oscillation is excited by a unit impulse at t = 5. Suppose that it is desired to bring the system to rest again after exactly one cycle, that is, when the response first returns to equilibrium moving in the positive direction.
(a) Determine the impulse kδ(t − t₀) that should be applied to the system in order to accomplish this objective. Note that k is the magnitude of the impulse and t₀ is the time of its application.
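The continuity of the response and the jump in its first derivative described above can be checked numerically. A minimal sketch, assuming the example's equation is 2y″ + y′ + 2y = δ(t − 5) with zero initial conditions (the coefficients and the closed-form solution are reconstructions, not taken verbatim from this copy):

```python
import math

def y(t):
    """Assumed impulse response of 2y'' + y' + 2y = delta(t - 5), y(0) = y'(0) = 0."""
    if t < 5:
        return 0.0
    tau = t - 5
    return (2 / math.sqrt(15)) * math.exp(-tau / 4) * math.sin(math.sqrt(15) * tau / 4)

# The response itself is continuous at t = 5:
print(y(5.0))                              # 0.0

# But y' jumps there: the one-sided slope from the right is 1/2
# (the reciprocal of the leading coefficient 2), while it is 0 from the left.
h = 1e-6
slope_right = (y(5 + h) - y(5)) / h
print(round(slope_right, 4))               # 0.5
```

The jump of 1/(leading coefficient) in y′ is exactly the balancing singularity that the surrounding paragraph says the differential equation requires.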
7.1 Introduction

x₁′ = p₁₁(t)x₁ + ⋯ + p₁ₙ(t)xₙ + g₁(t),
x₂′ = p₂₁(t)x₁ + ⋯ + p₂ₙ(t)xₙ + g₂(t),
⋮
xₙ′ = pₙ₁(t)x₁ + ⋯ + pₙₙ(t)xₙ + gₙ(t).

If each of the functions g₁(t), …, gₙ(t) is zero for all t in the interval I, then the system is said to be homogeneous; otherwise, it is nonhomogeneous. Observe that the systems (1) and (2) are both linear. The system (1) is nonhomogeneous unless F₁(t) = F₂(t) = 0, while the system (2) is homogeneous.

For a linear system, the existence and uniqueness theorem is simpler and also has a stronger conclusion. It is analogous to Theorems 2.4.1 and 3.2.1.

Theorem 7.1.2 If the functions p₁₁, p₁₂, …, pₙₙ, g₁, …, gₙ are continuous on an open interval I: α < t < β, then there exists a unique solution x₁ = φ₁(t), …, xₙ = φₙ(t) of the linear system that also satisfies the initial conditions, where t₀ is any point in I and x₁⁰, …, xₙ⁰ are any prescribed numbers. Moreover, the solution exists throughout the interval I.

Note that, in contrast to the situation for a nonlinear system, the existence and uniqueness of the solution of a linear system are guaranteed throughout the interval in which the hypotheses are satisfied. Furthermore, for a linear system the initial values x₁⁰, …, xₙ⁰ at t = t₀ are completely arbitrary, whereas in the nonlinear case the initial point must lie in the region R defined in Theorem 7.1.1.

The rest of this chapter is devoted to systems of linear first order equations (nonlinear systems are included in the discussion in Chapters 8 and 9). Our presentation makes use of matrix notation and assumes that you have some familiarity with the properties of matrices. The basic facts about matrices are summarized in Sections 7.2 and 7.3, and some more advanced material is reviewed as needed in later sections.

PROBLEMS

In each of Problems 1 through 4 transform the given equation into a system of first order equations.
1. u″ + 0.5u′ + 2u = 0
2. u″ + 0.5u′ + 2u = 3 sin t
3. t²u″ + tu′ + (t² − 0.25)u = 0
4. u⁗ − u = 0
In each of Problems 5 and 6 transform the given initial value problem into an initial value problem for two first order equations.
5. u″ + 0.25u′ + 4u = 2 cos 3t, u(0) = 1, u′(0) = −2
6. u″ + p(t)u′ + q(t)u = g(t), u(0) = u₀, u′(0) = u₀′
7. Systems of first order equations can sometimes be transformed into a single equation of higher order. Consider the system
x₁′ = −2x₁ + x₂,  x₂′ = x₁ − 2x₂.
(a) Solve the first equation for x₂ and substitute into the second equation, thereby obtaining a second order equation for x₁. Solve this equation for x₁ and then determine x₂ also.
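The transformation asked for in these problems is easy to carry through numerically. Assuming Problem 1 reads u″ + 0.5u′ + 2u = 0, the substitution x₁ = u, x₂ = u′ gives the system x₁′ = x₂, x₂′ = −2x₁ − 0.5x₂. A pure-Python sketch integrating this system with a classical Runge-Kutta step (step size and horizon are arbitrary choices):

```python
def f(x1, x2):
    # x1 = u, x2 = u'; then u'' + 0.5 u' + 2 u = 0 becomes this first order system:
    return x2, -0.5 * x2 - 2.0 * x1

def rk4_step(x1, x2, h):
    """One classical fourth order Runge-Kutta step of size h."""
    k1 = f(x1, x2)
    k2 = f(x1 + h/2*k1[0], x2 + h/2*k1[1])
    k3 = f(x1 + h/2*k2[0], x2 + h/2*k2[1])
    k4 = f(x1 + h*k3[0], x2 + h*k3[1])
    return (x1 + h/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0]),
            x2 + h/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1]))

x1, x2, h = 1.0, 0.0, 0.01      # u(0) = 1, u'(0) = 0
for _ in range(2000):           # integrate out to t = 20
    x1, x2 = rk4_step(x1, x2, h)

# Damped oscillation: x1^2 + x2^2 has decayed far below its initial value 1.
print(x1*x1 + x2*x2 < 1e-2)     # True
```

The damping term 0.5u′ makes every solution decay like e^−t/4, which is what the final check observes.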
7.2 Review of Matrices

Many of the rules of elementary calculus extend easily to matrix functions; in particular,

d(CA)/dt = C (dA/dt), where C is a constant matrix; (28)
d(A + B)/dt = dA/dt + dB/dt; (29)
d(AB)/dt = A (dB/dt) + (dA/dt) B. (30)

In Eqs. (28) and (30), care must be taken in each term to avoid interchanging the order of multiplication. The definitions expressed by Eqs. (26) and (27) also apply as special cases to vectors.

PROBLEMS

1. If A and B are the given 3 × 3 matrices, find
(a) 2A + B  (b) A − 4B  (c) AB  (d) BA
2. If A and B are the given 2 × 2 complex matrices, find
(a) A − 2B  (b) 3A + B  (c) AB  (d) BA
3. If A and B are the given 3 × 3 matrices, find
(a) Aᵀ  (b) Bᵀ  (c) Aᵀ + Bᵀ  (d) (A + B)ᵀ
4. If A is the given 2 × 2 complex matrix, find
(a) Aᵀ  (b) Ā  (c) A*
5. If A and B are the given 2 × 2 matrices, verify that 2(A + B) = 2A + 2B.
6. If A, B, and C are the given 2 × 2 matrices, verify that
(a) (AB)C = A(BC)  (b) (A + B) + C = A + (B + C)  (c) A(B + C) = AB + AC
Chapter 7. Systems of First Order Linear Equations

7. Prove each of the following laws of matrix algebra:
(a) A + B = B + A
(b) A + (B + C) = (A + B) + C
(c) α(A + B) = αA + αB
(d) (α + β)A = αA + βA
(e) A(BC) = (AB)C
(f) A(B + C) = AB + AC
8. If x and y are the given complex vectors, find
(a) xᵀy  (b) yᵀy  (c) (x, y)  (d) (y, y)
9. If x and y are the given complex vectors, show that
(a) xᵀy = yᵀx  (b) (x, y) is the complex conjugate of (y, x)
In each of Problems 10 through 19 either compute the inverse of the given matrix, or else show that it is singular.
20. Prove that if there are two matrices B and C such that AB = I and AC = I, then B = C. This shows that a matrix A can have only one inverse.
21. If A(t) and B(t) are the given 3 × 3 matrices with exponential entries, find
(a) A + 3B  (b) AB  (c) dA/dt  (d) ∫₀¹ A(t) dt
In each of Problems 22 through 24 verify that the given vector satisfies the given differential equation.
In each of Problems 25 and 26 verify that the given matrix satisfies the given differential equation.

7.3 Systems of Linear Algebraic Equations; Linear Independence, Eigenvalues, Eigenvectors

In this section we review some results from linear algebra that are important for the solution of systems of linear differential equations. Some of these results are easily proved and others are not; since we are interested simply in summarizing some useful information in compact form, we give no indication of proofs in either case. All the results in this section depend on some basic facts about the solution of systems of linear algebraic equations.

Systems of Linear Algebraic Equations. A set of n simultaneous linear algebraic equations in n variables,

a₁₁x₁ + a₁₂x₂ + ⋯ + a₁ₙxₙ = b₁,
⋮
aₙ₁x₁ + aₙ₂x₂ + ⋯ + aₙₙxₙ = bₙ, (1)

can be written as

Ax = b, (2)
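A system written in the matrix form Ax = b can be solved, or shown to be singular, by row reduction. A minimal pure-Python sketch of Gaussian elimination with partial pivoting, applied to an illustrative 2 × 2 system that is not one of the numbered problems:

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting; returns x with A x = b,
    or None if A is singular (to working precision)."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]      # augmented matrix [A | b]
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))   # pivot row
        if abs(M[p][k]) < 1e-12:
            return None                                     # singular
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):                           # eliminate below pivot
            m = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= m * M[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):                          # back substitution
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

print(solve([[2.0, 1.0], [1.0, 3.0]], [3.0, 5.0]))   # approximately [0.8, 1.4]
print(solve([[1.0, 2.0], [2.0, 4.0]], [1.0, 2.0]))   # None: the matrix is singular
```

The singular case is exactly the det A = 0 situation discussed later in this section, where a solution may fail to exist or fail to be unique.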
as the eigenvectors associated with the repeated eigenvalue. These eigenvectors are orthogonal to each other as well as to the eigenvector x⁽¹⁾ that corresponds to the other eigenvalue.

PROBLEMS

In each of Problems 1 through 6 either solve the given system of equations, or else show that there is no solution.
In each of Problems 7 through 11 determine whether the members of the given set of vectors are linearly independent. If they are linearly dependent, find a linear relation among them. The vectors are written as row vectors to save space but may be considered as column vectors; that is, the transposes of the given vectors may be used instead of the vectors themselves.
7. x⁽¹⁾ = (1, 1, 0), x⁽²⁾ = (0, 1, 1), x⁽³⁾ = (1, 0, 1)
8. x⁽¹⁾ = (2, 1, 0), x⁽²⁾ = (0, 1, 0), x⁽³⁾ = (−1, 2, 0)
9. x⁽¹⁾ = (1, 2, 2, 3), x⁽²⁾ = (−1, 0, 3, 1), x⁽³⁾ = (−2, −1, 1, 0), x⁽⁴⁾ = (−3, 0, −1, 3)
10. x⁽¹⁾ = (1, 2, −1, 0), x⁽²⁾ = (2, 3, 1, −1), x⁽³⁾ = (−1, 0, 2, 2), x⁽⁴⁾ = (3, −1, 1, 3)
11. x⁽¹⁾ = (1, 2, −2), x⁽²⁾ = (3, 1, 0), x⁽³⁾ = (2, −1, 1), x⁽⁴⁾ = (4, 3, −2)
12. Suppose that each of the vectors x⁽¹⁾, …, x⁽ᵐ⁾ has n components, where n < m. Show that x⁽¹⁾, …, x⁽ᵐ⁾ are linearly dependent.
In each of Problems 13 and 14 determine whether the members of the given set of vectors are linearly independent for −∞ < t < ∞. If they are linearly dependent, find the linear relation among them. As in Problems 7 through 11, the vectors are written as row vectors to save space.
13. x⁽¹⁾(t) = (e^−t, 2e^−t), x⁽²⁾(t) = (e^−t, e^−t), x⁽³⁾(t) = (3e^−t, 0)
14. x⁽¹⁾(t) = (2 sin t, sin t), x⁽²⁾(t) = (sin t, 2 sin t)
15. Let
x⁽¹⁾(t) = (e^t, te^t)ᵀ,  x⁽²⁾(t) = (1, t)ᵀ.
Show that x⁽¹⁾(t) and x⁽²⁾(t) are linearly dependent at each point in the interval 0 ≤ t ≤ 1. Nevertheless, show that x⁽¹⁾(t) and x⁽²⁾(t) are linearly independent on 0 ≤ t ≤ 1.
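For vectors such as those in Problems 7 through 11, linear independence of n vectors in n dimensions can be tested with a determinant: the vectors are independent exactly when the matrix having them as columns has nonzero determinant. A sketch using three vectors read from Problem 7 (their entries are a reconstruction, since this copy is garbled):

```python
def det3(v1, v2, v3):
    """3x3 determinant of the matrix whose columns are v1, v2, v3
    (expansion along the first row)."""
    a, b, c = v1
    d, e, f = v2
    g, h, i = v3
    return a*(e*i - f*h) - d*(b*i - c*h) + g*(b*f - c*e)

# Vectors assumed for Problem 7:
x1, x2, x3 = (1, 1, 0), (0, 1, 1), (1, 0, 1)
print(det3(x1, x2, x3))   # 2: nonzero, so the vectors are linearly independent
```

A zero determinant would instead mean a nontrivial linear relation exists, which could then be found by row reduction as in Problem 12's setting.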
In each of Problems 16 through 25 find all eigenvalues and eigenvectors of the given matrix.

Problems 26 through 30 deal with the problem of solving Ax = b when det A = 0.
26. (a) Suppose that A is a real-valued n × n matrix. Show that (Ax, y) = (x, Aᵀy) for any vectors x and y.
Hint: You may find it simpler to consider first the case n = 2; then extend the result to an arbitrary value of n.
(b) If A is not necessarily real, show that (Ax, y) = (x, A*y) for any vectors x and y.
(c) If A is Hermitian, show that (Ax, y) = (x, Ay) for any vectors x and y.
27. Suppose that, for a given matrix A, there is a nonzero vector x such that Ax = 0. Show that there is also a nonzero vector y such that A*y = 0.
28. Suppose that det A = 0 and that Ax = b has solutions. Show that (b, y) = 0, where y is any solution of A*y = 0. Verify that this statement is true for the set of equations in the example.
Hint: Use the result of Problem 26(b).
29. Suppose that det A = 0 and that x = x⁽⁰⁾ is a solution of Ax = b. Show that if ξ is a solution of Aξ = 0 and α is any constant, then x = x⁽⁰⁾ + αξ is also a solution of Ax = b.
30. Suppose that det A = 0 and that y is a solution of A*y = 0. Show that if (b, y) = 0 for every such y, then Ax = b has solutions. Note that this is the converse of Problem 28; the form of the solution is given by Problem 29.
Hint: What does the relation A*y = 0 say about the rows of A? Again, it may be helpful to consider the case n = 2 first.
31. Prove that λ = 0 is an eigenvalue of A if and only if A is singular.
32. In this problem we show that the eigenvalues of a Hermitian matrix A are real. Let x be an eigenvector corresponding to the eigenvalue λ.
(a) Show that (Ax, x) = (x, Ax). Hint: See Problem 26(c).
(b) Show that λ(x, x) = λ̄(x, x). Hint: Recall that Ax = λx.
(c) Show that λ = λ̄; that is, the eigenvalue λ is real.
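For the 2 × 2 cases among Problems 16 through 25, the eigenvalues come from the characteristic equation det(A − λI) = λ² − (a + d)λ + (ad − bc) = 0. A short sketch, using a hypothetical matrix rather than one of the numbered problems:

```python
import math

def eig2(a, b, c, d):
    """Eigenvalues of [[a, b], [c, d]] via the quadratic formula applied to
    lambda^2 - (a + d) lambda + (ad - bc) = 0."""
    tr, det = a + d, a*d - b*c
    disc = tr*tr - 4*det
    s = math.sqrt(abs(disc))
    if disc >= 0:
        return (tr + s) / 2, (tr - s) / 2
    return complex(tr / 2, s / 2), complex(tr / 2, -s / 2)   # conjugate pair

# Hypothetical matrix A = [[1, 1], [4, 1]]:
l1, l2 = eig2(1, 1, 4, 1)
print(l1, l2)                     # 3.0 -1.0
# An eigenvector for lambda = 3 solves (A - 3I) xi = 0, e.g. xi = (1, 2).
# Check A xi = 3 xi componentwise:
print((1*1 + 1*2, 4*1 + 1*2))     # (3, 6), which is 3 times (1, 2)
```

For a real matrix, a negative discriminant produces the complex conjugate pair of eigenvalues discussed at the end of the section.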
different. Of course, the solutions arising from complex eigenvalues are complex-valued. However, as in Section 3.3, it is possible to obtain a full set of real-valued solutions. This is discussed in Section 7.6.

More serious difficulties can occur if an eigenvalue is repeated. In this event the number of corresponding linearly independent eigenvectors may be smaller than the algebraic multiplicity of the eigenvalue. If so, the number of linearly independent solutions of the form ξe^rt will be smaller than n. To construct a fundamental set of solutions, it is then necessary to seek additional solutions of another form. The situation is somewhat analogous to that for an nth order linear equation with constant coefficients; a repeated root of the characteristic equation gave rise to solutions of the form e^rt, te^rt, t²e^rt, …. The case of repeated eigenvalues is treated in Section 7.8.

Finally, if A is complex, then complex eigenvalues need not occur in conjugate pairs, and the eigenvectors are normally complex-valued even though the associated eigenvalue may be real. The solutions of the differential equation are still of the form ξe^rt, provided that the eigenvalues are distinct, but in general all the solutions are complex-valued.

PROBLEMS

In each of Problems 1 through 6:
(a) Find the general solution of the given system of equations and describe the behavior of the solution as t → ∞.
(b) Draw a direction field and plot a few trajectories of the system.
In each of Problems 7 and 8:
(a) Find the general solution of the given system of equations.
(b) Draw a direction field and a few of the trajectories. In each of these problems the coefficient matrix has a zero eigenvalue. As a result, the pattern of trajectories is different from those in the examples in the text.
In each of Problems 9 through 14 find the general solution of the given system of equations.
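The claim that x = ξe^rt solves x′ = Ax whenever (A − rI)ξ = 0 can be spot-checked numerically. A sketch with a hypothetical matrix and eigenpair, not drawn from the numbered problems:

```python
import math

# Hypothetical coefficient matrix with eigenvalue r = 3 and eigenvector xi = (1, 2):
A = [[1.0, 1.0], [4.0, 1.0]]
r, xi = 3.0, (1.0, 2.0)

def x(t):
    """Candidate solution x(t) = xi * e^{r t}."""
    return (xi[0] * math.exp(r * t), xi[1] * math.exp(r * t))

# Compare x'(t) (central difference) with A x(t) at an arbitrary time:
t, h = 0.7, 1e-6
lhs = tuple((x(t + h)[i] - x(t - h)[i]) / (2 * h) for i in range(2))   # x'(t)
rhs = tuple(A[i][0] * x(t)[0] + A[i][1] * x(t)[1] for i in range(2))   # A x(t)
print(all(abs(a - b) < 1e-3 for a, b in zip(lhs, rhs)))                # True
```

Because r = 3 > 0, this particular solution grows without bound as t → ∞, the kind of behavior part (a) of the problems above asks you to describe.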
7.6 Complex Eigenvalues

FIGURE 7.6.5 A solution of the system satisfying the given initial condition. (a) A plot of y₁ versus t. (b) The projection of the trajectory in the y₁y₃-plane. As stated in the text, the actual trajectory in four dimensions does not intersect itself.

PROBLEMS

In each of Problems 1 through 6:
(a) Express the general solution of the given system of equations in terms of real-valued functions.
(b) Also draw a direction field, sketch a few of the trajectories, and describe the behavior of the solutions as t → ∞.
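Taking the real and imaginary parts of ξe^λt, as this section describes, yields real-valued solutions. A sketch with a hypothetical system whose eigenvalue is purely imaginary (the matrix is an illustration, not one of the numbered problems):

```python
import cmath

# Hypothetical system x' = A x with A = [[0, 2], [-2, 0]];
# its eigenvalue lambda = 2i has eigenvector xi = (1, i).
A = [[0.0, 2.0], [-2.0, 0.0]]
lam, xi = 2j, (1.0 + 0j, 1j)

def u(t):
    """Real-valued solution: the real part of xi * e^{lambda t},
    which works out to (cos 2t, -sin 2t)."""
    z = cmath.exp(lam * t)
    return ((xi[0] * z).real, (xi[1] * z).real)

# Verify u'(t) = A u(t) numerically at an arbitrary time:
t, h = 0.3, 1e-6
du = tuple((u(t + h)[i] - u(t - h)[i]) / (2 * h) for i in range(2))
Au = tuple(A[i][0] * u(t)[0] + A[i][1] * u(t)[1] for i in range(2))
print(all(abs(a - b) < 1e-4 for a, b in zip(du, Au)))   # True
```

The imaginary part of ξe^λt gives a second, independent real solution; together they produce the closed circular trajectories typical of purely imaginary eigenvalues.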