MA222 Ordinary Differential Equations and Control, Semester 1: Lecture Notes on ODEs and Laplace Transform (Parts I and II)


MA222 Ordinary Differential Equations and Control, Semester 1

Lecture Notes on ODEs and Laplace Transform (Parts I and II)

Dr J D Evans

October 2012

Special thanks (in alphabetical order) are due to Kenneth Deeley, Martin Killian, Prof E P Ryan and Prof V P Smyshlaev, all of whom have contributed significantly to these notes.

Contents

Revision
1 Systems of Linear Autonomous Ordinary Differential Equations
  1.1 Writing linear ODEs as first-order systems
  1.2 Autonomous Homogeneous Systems
  1.3 Linearly Independent Solutions
  1.4 Fundamental Matrices
    1.4.1 Initial Value Problems Revisited
  1.5 Non-Homogeneous Systems
2 Laplace Transform
  2.1 Definition and Basic Properties
  2.2 Solving Differential Equations with the Laplace Transform
  2.3 The convolution integral
  2.4 The Dirac Delta function
  2.5 Final value theorem

Revision

In science and engineering, mathematical models often lead to an equation that contains an unknown function together with some of its derivatives. Such an equation is called a differential equation (DE). We will use the following notation for derivatives of a function x of a scalar argument t:

    ẋ := dx/dt = x′(t),  ẍ := d²x/dt² = x″(t),  etc.

Examples. 1. Free fall: consider free fall of a body of mass m > 0; the equation of motion is

    m d²h/dt² = −mg,

where h(t) is the height at time t and −mg is the force due to gravity. To find the solutions of this problem, rewrite the equation as ḧ(t) = −g and integrate twice to obtain

    ḣ(t) = −gt + c_1  and  h(t) = −(1/2)gt² + c_1 t + c_2,

for two constants c_1, c_2 ∈ R. An interpretation of the constants of integration c_1 and c_2 is as follows: at t = 0, the height is h(0) = c_2, so c_2 is the initial height; similarly, ḣ(0) = c_1 is the initial velocity.

2. Radioactive decay: the rate of decay is proportional to the amount x(t) of radioactive material present at time t:

    ẋ(t) = −k x(t),

for some constant k > 0. The solution of this equation is x(t) = Ce^{−kt}. The initial amount of radioactive substance at time t = 0 is x(0) = C.

Remark. Note that solutions to differential equations are not unique; for each choice of c_1 and c_2 in the first example there is a solution of the form h(t) = −(1/2)gt² + c_1 t + c_2. Likewise, for each choice of the constant C in x(t) = Ce^{−kt} there is a solution of the problem ẋ(t) = −kx(t). The integration constants are often found from initial conditions (IC). We hence often deal with initial value problems (IVPs), which consist of a differential equation together with some initial conditions.

Example. Solve the initial value problem

    ẏ(t) + ay(t) = 0,  y(0) = 2.

Solution. Rewriting the differential equation gives ẏ(t)/y(t) = −a. Since d/dt (ln y(t)) = ẏ(t)/y(t), it follows that ln y(t) = ∫ ẏ(t)/y(t) dt; hence

    y(t) = exp( ∫ ẏ(t)/y(t) dt ) = e^{∫(−a) dt} = e^{−at + c} = Ce^{−at}  (C := e^c).

The initial condition yields y(0) = 2 ⟹ C = 2. Therefore the solution of the initial value problem is the function y : R → R given by y(t) = 2e^{−at}, t ∈ R.
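This IVP can also be checked symbolically; the following sketch (not part of the original notes, and assuming SymPy is available) solves ẏ + ay = 0 with y(0) = 2:

```python
import sympy as sp

t = sp.symbols('t', real=True)
a = sp.symbols('a', positive=True)
y = sp.Function('y')

# Solve the IVP  y'(t) + a*y(t) = 0,  y(0) = 2
ode = sp.Eq(y(t).diff(t) + a * y(t), 0)
sol = sp.dsolve(ode, y(t), ics={y(0): 2})
print(sol.rhs)
```

The result agrees with the hand computation y(t) = 2e^{−at}.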

Chapter 1
Systems of Linear Autonomous Ordinary Differential Equations

1.1 Writing linear ODEs as first-order systems

Any linear ordinary differential equation, of any order, can be written in terms of a first-order system. This is best illustrated by an example.

Example. Consider the equation

    ẍ(t) + 2ẋ(t) + 3x(t) = t.    (1.1)

In this case, set X = (ẋ, x)ᵀ, i.e. X(t) is a two-dimensional vector-valued function of t. Then

    Ẋ = (ẍ, ẋ)ᵀ = (−3x − 2ẋ + t, ẋ)ᵀ = ( −2 −3 ; 1 0 )(ẋ, x)ᵀ + (t, 0)ᵀ.    (1.2)

So Ẋ = AX + g, where

    A = ( −2 −3 ; 1 0 ),  g(t) = (t, 0)ᵀ.

The system (1.2) and the equation (1.1) are equivalent in the sense that x(t) solves the equation if and only if X solves the system. (For more examples of the above reduction see Problem Sheet 1, QQ 1–3.)

This motivates studying first-order ODE systems. The most general first-order system is of the form

    B(t)ẋ(t) + C(t)x(t) = f(t),

where B, C : R → C^{n×n} and f, x : R → C^n. (Henceforth we do not employ any special notation for vectors and vector-valued functions, such as underlining or bold case, to simplify the notation and on the assumption that what is a vector will be clear from the context.) If B^{−1}(t) exists for all t ∈ R, the equation may be rewritten to obtain ẋ(t) = −B^{−1}(t)C(t)x(t) + B^{−1}(t)f(t), or

    ẋ(t) = A(t)x(t) + g(t),    (1.3)

where A(t) = −B^{−1}(t)C(t) and g(t) = B^{−1}(t)f(t).
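As a numerical sanity check of the reduction (not in the original notes; assumes NumPy and SciPy are available), one can integrate the first-order system for ẍ + 2ẋ + 3x = t starting on the particular solution x_p(t) = t/3 − 2/9, which satisfies the equation exactly, and confirm that the trajectory follows x_p:

```python
import numpy as np
from scipy.integrate import solve_ivp

# First-order form of  x'' + 2x' + 3x = t  with state X = (x', x)
A = np.array([[-2.0, -3.0],
              [1.0, 0.0]])

def rhs(t, X):
    return A @ X + np.array([t, 0.0])

# x_p(t) = t/3 - 2/9 solves the ODE, so start at (x_p'(0), x_p(0)) = (1/3, -2/9)
X0 = np.array([1.0 / 3.0, -2.0 / 9.0])
sol = solve_ivp(rhs, (0.0, 5.0), X0, rtol=1e-9, atol=1e-12)

x_end = sol.y[1, -1]          # x(5), the second state component
print(x_end, 5.0 / 3.0 - 2.0 / 9.0)
```

The two printed values should agree to high accuracy, confirming that solving the system is equivalent to solving the scalar equation.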

Definition. 1. Equation (1.3) (in fact, a system of equations) is called the standard form of a first-order system of ordinary differential equations. 2. If A(t) and g(t) do not depend on t, the system is called autonomous. 3. If g ≡ 0, the system (1.3) is called homogeneous; otherwise (g ≢ 0) it is called inhomogeneous.

1.2 Autonomous Homogeneous Systems

We consider initial-value problems for autonomous homogeneous systems, i.e. we assume A is a constant, generally complex-valued, n×n matrix.

Theorem (Existence and Uniqueness of IVPs). Let A ∈ C^{n×n} and x_0 ∈ C^n. Then the initial value problem

    ẋ(t) = Ax(t),  x(t_0) = x_0    (1.4)

has a unique solution.

Proof (Sketch). We first argue that a solution to (1.4) exists and is given by x(t) = exp((t − t_0)A)x_0, where exp((t − t_0)A) is an appropriately understood matrix exponential.

Definition (Matrix Exponential). For a matrix Y ∈ C^{n×n}, the function exp(·Y) : R → C^{n×n} is defined by

    exp(tY) = Σ_{k=0}^∞ (1/k!)(tY)^k = I + tY + (1/2)t²Y² + ⋯,  t ∈ R.    (1.5)

(We adopt the conventions 0! = 1 and Y⁰ = I, where I is the unit matrix.)

Remarks. The following hold true.
Fact 1: the series (1.5) converges for all t ∈ R (meaning the series of matrices converges for each component of the matrix).
Fact 2: the derivative satisfies d/dt(exp(tY)) = Y exp(tY) = exp(tY) Y.

We define x(t) = exp((t − t_0)A)x_0. Then

    x(t_0) = exp((t_0 − t_0)A)x_0 = exp(0·A)x_0 = Ix_0 = x_0.

Hence x(t_0) = x_0. Also ẋ(t) = A exp((t − t_0)A)x_0 = Ax(t), so x(t) is a solution.

To establish uniqueness, let x(t) and y(t) be two solutions of (1.4) and consider h(t) := x(t) − y(t). Then ḣ = ẋ − ẏ = Ax − Ay = A(x − y) = Ah, and h(t_0) = x(t_0) − y(t_0) = 0. We will show that h(t) ≡ 0. It suffices to show that |h(t)| ≡ 0, where |·| denotes the length of a vector. Now

    |h(t)| = | ∫_{t_0}^t Ah(s) ds | ≤ ∫_{t_0}^t |Ah(s)| ds ≤ ‖A‖ ∫_{t_0}^t |h(s)| ds,    (1.6)

where ‖A‖ := max_{x≠0} |Ax|/|x|, and we have used the fact that the length of an integral of a vector function is less than or equal to the integral of the length. The function

    f(t) := exp(−(t − t_0)‖A‖) ∫_{t_0}^t |h(s)| ds

satisfies f(t_0) = 0 and f(t) ≥ 0 for t ≥ t_0. On the other hand, evaluating f′ and using (1.6) we conclude that f′(t) ≤ 0 for t ≥ t_0 and f′(t) ≥ 0 for t ≤ t_0. [For example, for t ≥ t_0,

    f′(t) = exp(−(t − t_0)‖A‖) [ −‖A‖ ∫_{t_0}^t |h(s)| ds + |h(t)| ] ≤ 0

by (1.6).] Taken together this implies that f(t) ≡ 0, hence |h(t)| ≡ 0, implying h(t) ≡ 0 as required. □
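The series definition (1.5) can be checked numerically against SciPy's expm; this is an illustrative sketch (not in the notes), with an arbitrary 2×2 matrix:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[1.0, 1.0],
              [4.0, 1.0]])
t = 0.3

# Partial sums of exp(tA) = sum_{k>=0} (tA)^k / k!
S = np.eye(2)
term = np.eye(2)
for k in range(1, 30):
    term = term @ (t * A) / k   # now equals (tA)^k / k!
    S = S + term

err = np.max(np.abs(S - expm(t * A)))
print(err)
```

Thirty terms already agree with expm to machine precision, reflecting the factorial decay of the series terms (Fact 1).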

To compute exp(tA) explicitly is generally not so easy (this will be discussed in more detail later). We will aim instead at constructing an appropriate number of linearly independent solutions of the system ẋ = Ax. Hence, first:

Definition (Linear Independence of Functions). Let I ⊆ R be an interval. The vector functions y_i : I → C^n, i = 1, …, m, are said to be linearly independent on I if

    Σ_{i=1}^m c_i y_i(t) = 0 for all t ∈ I  implies  c_1 = c_2 = ⋯ = c_m = 0.

The functions y_i are said to be linearly dependent if they are not linearly independent, i.e. if there exist c_1, c_2, …, c_m, not all zero, such that Σ_{i=1}^m c_i y_i(t) = 0 for all t ∈ I.

Example. Let I = [0, 1] and y_1(t) = (e^t, te^t)ᵀ, y_2(t) = (1, t)ᵀ.

Claim: y_1 and y_2 are linearly independent.

Proof. Assume they are not; then there exist c_1, c_2, not both zero, such that c_1 y_1(t) + c_2 y_2(t) = 0 for all t ∈ I, i.e.

    c_1 e^t + c_2 = 0  and  c_1 te^t + c_2 t = 0,  t ∈ I.

If c_2 is non-zero then clearly c_1 is non-zero too (and the other way round), and hence in particular e^t = −c_2/c_1 is constant, which yields a contradiction. Thus y_1 and y_2 are linearly independent. □ (For more examples see Sheet 1, QQ 4–5.)

1.3 Linearly Independent Solutions

Consider the homogeneous autonomous system

    ẋ = Ax,    (1.7)

where A ∈ C^{n×n} and x = x(t) : R → C^n. We prove that there exist n, and only n, linearly independent solutions of (1.7).

Theorem (Existence of n and no more than n Linearly Independent Solutions). There exist n linearly independent solutions x_1(t), …, x_n(t) of system (1.7). Any other solution can be written as

    x(t) = Σ_{i=1}^n c_i x_i(t),  c_i ∈ C.    (1.8)

Proof. Let v_1, …, v_n ∈ C^n be n linearly independent vectors, and let x_i(t) be the unique solutions of the initial value problems

    ẋ_i = Ax_i,  x_i(t_0) = v_i,  i = 1, …, n.

The functions x_i(t) are linearly independent (see Problem 4(a), Sheet 1), so the existence of n linearly independent solutions is assured. Now let x(t) be an arbitrary solution of (1.7), with x(t_0) = x_0 ∈ C^n. Since the v_i, i = 1, …, n, are linearly independent, they form a basis for C^n, so in particular there exist c_1, …, c_n ∈ C such that

    x_0 = Σ_{i=1}^n c_i v_i.

Now define

    y(t) = Σ_{i=1}^n c_i x_i(t),  y : R → C^n.

Then

    ẏ(t) = Σ_{i=1}^n c_i ẋ_i(t) = Σ_{i=1}^n c_i Ax_i(t) = A Σ_{i=1}^n c_i x_i(t) = Ay(t).

Further,

    y(t_0) = Σ_{i=1}^n c_i x_i(t_0) = Σ_{i=1}^n c_i v_i = x_0.

Hence, by uniqueness of the solution to the initial value problem ẏ = Ay with y(t_0) = x_0, it follows that y(t) = x(t). □

A Method for Determining the Solutions

Some linearly independent solutions are found by seeking solutions of the form

    x(t) = e^{λt} v,    (1.9)

where v ∈ C^n is a non-zero constant vector. In this case ẋ(t) = λe^{λt}v, so to satisfy ẋ = Ax it is required that

    λe^{λt}v = Ae^{λt}v  ⟺  Av = λv,  v ≠ 0.

Hence x(t) = e^{λt}v is a solution of ẋ = Ax if and only if λ is an eigenvalue of A, with v ∈ C^n a corresponding eigenvector.

Example. Consider

    ẋ(t) = ( 1 1 ; 4 1 ) x(t),  hence  A = ( 1 1 ; 4 1 ).    (1.10)

The eigenvalues of A are found by solving

    det(A − λI) = det( 1−λ 1 ; 4 1−λ ) = (1 − λ)² − 4 = λ² − 2λ − 3 = (λ − 3)(λ + 1) = 0  ⟹  λ_1 = 3, λ_2 = −1.

The corresponding eigenvectors (denoting by v_1 and v_2 the components of the vector v):

λ_1 = 3: Av = 3v ⟺ (A − 3I)v = ( −2 1 ; 4 −2 )(v_1, v_2)ᵀ = 0 ⟺ −2v_1 + v_2 = 0, 4v_1 − 2v_2 = 0; one can take v = (1, 2)ᵀ as an eigenvector corresponding to λ_1 = 3.

λ_2 = −1: (A + I)v = ( 2 1 ; 4 2 )(v_1, v_2)ᵀ = 0 ⟺ 2v_1 + v_2 = 0, 4v_1 + 2v_2 = 0; take v = (1, −2)ᵀ.

So, in summary,

    x_1(t) = e^{3t}(1, 2)ᵀ,  x_2(t) = e^{−t}(1, −2)ᵀ

are solutions to ẋ = Ax. Further, since the eigenvectors (1, 2)ᵀ and (1, −2)ᵀ are linearly independent, the solutions x_1(t) and x_2(t) are linearly independent. Thus the general solution to (1.10) is

    x(t) = c_1 x_1(t) + c_2 x_2(t) = c_1 e^{3t}(1, 2)ᵀ + c_2 e^{−t}(1, −2)ᵀ,

where c_1, c_2 ∈ C are arbitrary constants.

Corollary. Let A ∈ C^{n×n}. If A has n linearly independent eigenvectors, then the general solution of ẋ = Ax is given by

    x(t) = Σ_{i=1}^n c_i e^{λ_i t} v_i,

where the v_i are the linearly independent eigenvectors with corresponding eigenvalues λ_i, and c_i ∈ C are arbitrary constants.

Definition (Notation). For A ∈ C^{n×n}, the characteristic polynomial will be denoted by π_A(λ) := det(A − λI). The set of eigenvalues of A (the spectrum of A) is denoted by Spec(A) = {λ ∈ C : π_A(λ) = 0}.

As is known from linear algebra, for a matrix A to have n linearly independent eigenvectors (equivalently, for A to be diagonalisable), it would suffice e.g. if A had n distinct eigenvalues or were symmetric. However, a matrix may have fewer than n linearly independent eigenvectors, as the following example illustrates.

Example (a 2×2 matrix which does not have two linearly independent eigenvectors). Let

    A = ( 2 1 ; 0 2 ),  hence  π_A(λ) = (2 − λ)² = 0  ⟹  λ = 2,

an eigenvalue with algebraic multiplicity 2. A corresponding eigenvector is found by solving

    (A − 2I)v = ( 0 1 ; 0 0 )(v_1, v_2)ᵀ = 0.

In the latter one can choose only one linearly independent eigenvector, e.g. v = (1, 0)ᵀ. Hence

    x_1(t) = e^{2t}(1, 0)ᵀ

solves ẋ = Ax. But a second linearly independent solution still remains to be found.

Finding the Second Linearly Independent Solution

Try x_2(t) = u(t)e^{λt}, for some function u : R → C². Substituting gives

    ẋ_2(t) = u̇(t)e^{λt} + λu(t)e^{λt} = Ax_2(t)  ⟺  u̇(t)e^{λt} + λu(t)e^{λt} = Au(t)e^{λt}  ⟺  u̇(t) + λu(t) = Au(t)  ⟺  u̇(t) = (A − λI)u(t).    (*)
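The eigenvalue computation in the example with A = ( 1 1 ; 4 1 ) can be confirmed numerically (a sketch assuming NumPy is available):

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [4.0, 1.0]])
evals = np.sort(np.linalg.eigvals(A).real)   # expect -1 and 3

# Verify that x(t) = e^{3t}(1, 2)^T solves x' = Ax, i.e. that A v = 3 v
v = np.array([1.0, 2.0])
print(evals, A @ v, 3 * v)
```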

Now try u(t) = a + tb, with a, b ∈ C² constant vectors. Then u̇(t) = b, so substituting into (*) gives

    b = (A − λI)(a + tb) = (A − λI)a + t(A − λI)b.

Comparing coefficients of 1 and t yields

    (A − λI)a = b,  (A − λI)b = 0.

Thus one can choose b as the eigenvector already found: b = (1, 0)ᵀ. This enables the equation to be solved for a:

    (A − λI)a = ( 0 1 ; 0 0 )(a_1, a_2)ᵀ = (1, 0)ᵀ,  with e.g. a = (0, 1)ᵀ.

This gives

    x_2(t) = ( (0, 1)ᵀ + t(1, 0)ᵀ ) e^{2t} = e^{2t}(t, 1)ᵀ.

Further, x_1(0) = (1, 0)ᵀ and x_2(0) = (0, 1)ᵀ, so x_1(t) and x_2(t) are linearly independent solutions of ẋ(t) = ( 2 1 ; 0 2 )x(t). The general solution is therefore given by

    x(t) = c_1 x_1(t) + c_2 x_2(t) = c_1 e^{2t}(1, 0)ᵀ + c_2 e^{2t}(t, 1)ᵀ,

where c_1, c_2 ∈ C are arbitrary constants.

We shall now consider the case when an n×n matrix has fewer than n linearly independent eigenvectors in greater generality. Notice that in the above example (A − λI)b = 0, and (A − λI)a = b while (A − λI)²a = (A − λI)b = 0. Hence, while b is an ordinary eigenvector, a may be viewed as a generalised eigenvector. This motivates the following generic

Definition (Generalised Eigenvectors). Let A ∈ C^{n×n} and λ ∈ Spec(A). Then a vector v ∈ C^n is called a generalised eigenvector of order m ∈ N, with respect to λ, if the following two conditions hold:
• (A − λI)^k v ≠ 0 for 0 ≤ k ≤ m − 1;
• (A − λI)^m v = 0.

In the above example, a is a generalised eigenvector of A with respect to λ = 2 of order m = 2. Notice that, in the light of the above definition, the ordinary eigenvectors are generalised eigenvectors of order 1.

The following lemma gives a way of constructing a sequence of generalised eigenvectors as long as we have one of order m ≥ 2:

Lemma (Constructing Generalised Eigenvectors). Let λ ∈ Spec(A) and let v be a generalised eigenvector of order m ≥ 2. Then, for k = 1, …, m − 1, the vector

    v_{m−k} := (A − λI)^k v

is a generalised eigenvector of order m − k.

Proof. It is required to show that
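One can verify symbolically that the second solution found above, x_2(t) = e^{2t}(t, 1)ᵀ, solves ẋ = Ax (a sketch assuming SymPy is available):

```python
import sympy as sp

t = sp.symbols('t', real=True)
A = sp.Matrix([[2, 1],
               [0, 2]])        # single eigenvalue 2, only one eigenvector

# Candidate second solution x2(t) = (a + t*b) e^{2t} with b = (1,0), a = (0,1)
x2 = sp.exp(2 * t) * sp.Matrix([t, 1])

# Substitute into x' = A x; the residual should vanish identically
residual = (x2.diff(t) - A * x2).applyfunc(sp.simplify)
print(residual.T)
```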

• (A − λI)^l v_{m−k} ≠ 0 for 0 ≤ l ≤ m − k − 1;
• (A − λI)^{m−k} v_{m−k} = 0.

Firstly,

    (A − λI)^l v_{m−k} = (A − λI)^l (A − λI)^k v = (A − λI)^{l+k} v ≠ 0,  since l + k ≤ m − 1,

whence (A − λI)^l v_{m−k} ≠ 0 for 0 ≤ l ≤ m − k − 1. Moreover,

    (A − λI)^{m−k} v_{m−k} = (A − λI)^{m−k} (A − λI)^k v = (A − λI)^m v = 0. □

The need for incorporating the generalised eigenvectors arises when there are not enough ordinary eigenvectors, more precisely when the geometric multiplicity is different from (i.e. strictly less than) the algebraic multiplicity, defined as follows.

Definition (Algebraic and Geometric Multiplicity). Let A ∈ C^{n×n} and λ ∈ Spec(A). Then λ has geometric multiplicity m ∈ N if m is the largest number for which m linearly independent eigenvectors exist. If m = 1, λ is said to be a simple eigenvalue. The algebraic multiplicity of λ is the multiplicity of λ as a root of π_A(λ).

Examples. (i) Consider A = ( 2 1 ; 0 2 ), so Spec(A) = {2}. Then, by an earlier example, λ = 2 has geometric multiplicity 1, but its algebraic multiplicity is 2.

(ii) Consider the matrix

    A = ( 1 0 0 ; 0 2 −1 ; 0 0 1 ).

Then

    π_A(λ) = det(A − λI) = (1 − λ)(2 − λ)(1 − λ) = (1 − λ)²(2 − λ).

Hence Spec(A) = {1, 2}, and λ = 1 has algebraic multiplicity 2 while λ = 2 has algebraic multiplicity 1. For the eigenvectors:

λ = 1: (A − I)v = 0 ⟹ v_2 − v_3 = 0, so

    (1, 0, 0)ᵀ  and  (0, 1, 1)ᵀ

are two linearly independent eigenvectors, so λ = 1 has geometric multiplicity 2.

λ = 2: (A − 2I)v = 0 ⟹ v_1 = v_3 = 0, so v = (0, 1, 0)ᵀ is an eigenvector. Hence λ = 2 is a simple eigenvalue.

The following theorem provides a recipe for constructing m linearly independent solutions as long as we have a generalised eigenvector of order m ≥ 2.

Theorem. Let A ∈ C^{n×n} and let v ∈ C^n be a generalised eigenvector of order m with respect to λ ∈ Spec(A). Define the following m vectors:

    v_1 = (A − λI)^{m−1} v
    v_2 = (A − λI)^{m−2} v
    ⋮
    v_{m−1} = (A − λI) v
    v_m = v.

Then:
• the vectors v_1, v_2, …, v_{m−1}, v_m are linearly independent;
• the functions

    x_k(t) = e^{λt} Σ_{i=0}^{k−1} (t^i / i!) v_{k−i},  k ∈ {1, …, m},

form a set of linearly independent solutions of ẋ = Ax.

Proof. • Seeking a contradiction, assume that the vectors are linearly dependent, so there exist c_1, …, c_m, not all zero, such that Σ_{i=1}^m c_i v_i = 0. Then there exists the last non-zero c_j: c_j ≠ 0 and either j = m or c_i = 0 for all j < i ≤ m. Hence

    Σ_{k=1}^j c_k v_k = Σ_{k=1}^j c_k (A − λI)^{m−k} v = 0.

Pre-multiplying by (A − λI)^{j−1} gives

    (A − λI)^{j−1} Σ_{k=1}^j c_k (A − λI)^{m−k} v = Σ_{k=1}^j c_k (A − λI)^{m−1+j−k} v = 0.

Now for k = 1, …, j − 1 it follows that m − 1 + j − k ≥ m, so (A − λI)^{m−1+j−k} v = 0 by definition of the generalised eigenvector. For k = j we have m − 1 + j − k = m − 1, so (A − λI)^{m−1} v ≠ 0; this implies that c_j = 0, which is a contradiction. Hence the vectors are in fact linearly independent, as required.

• Since v_1, …, v_m are linearly independent and x_k(0) = v_k, it follows that x_1(t), …, x_m(t) are linearly independent functions (cf. Sheet 1, Q 4(a)). It remains to show that ẋ_k(t) = Ax_k(t), 1 ≤ k ≤ m. Indeed,

    ẋ_k(t) = λe^{λt} Σ_{i=0}^{k−1} (t^i / i!) v_{k−i} + e^{λt} Σ_{i=0}^{k−2} (t^i / i!) v_{k−1−i}
           = e^{λt} [ λ Σ_{i=0}^{k−1} (t^i / i!) v_{k−i} + Σ_{i=0}^{k−2} (t^i / i!) v_{k−1−i} ]
           = e^{λt} [ (t^{k−1} / (k−1)!) λ v_1 + Σ_{i=0}^{k−2} (t^i / i!) (λ v_{k−i} + v_{k−1−i}) ].

Now for 2 ≤ j ≤ m,

    v_{j−1} = (A − λI)^{m−j+1} v = (A − λI)(A − λI)^{m−j} v = (A − λI) v_j = Av_j − λv_j.

Thus λv_j + v_{j−1} = Av_j, and hence (having also used λv_1 = Av_1)

    ẋ_k(t) = e^{λt} Σ_{i=0}^{k−1} (t^i / i!) Av_{k−i} = A e^{λt} Σ_{i=0}^{k−1} (t^i / i!) v_{k−i} = Ax_k(t),

so ẋ_k(t) = Ax_k(t) as required. □

Examples. 1. Consider the matrix

    A = ( 1 0 0 ; 1 3 0 ; 0 1 1 ),  π_A(λ) = (1 − λ)²(3 − λ).

For the eigenvalue λ = 1,

    (A − I)v = ( 0 0 0 ; 1 2 0 ; 0 1 0 )(v_1, v_2, v_3)ᵀ = 0  ⟹  v = (0, 0, 1)ᵀ

is an eigenvector with respect to λ = 1. (There is only one linearly independent eigenvector, i.e. the geometric multiplicity is 1 while the algebraic multiplicity is 2.) Hence seek v_2 such that (A − I)v_2 ≠ 0 and (A − I)²v_2 = 0. Notice that

    (A − I)² = ( 0 0 0 ; 2 4 0 ; 1 2 0 ),

so take v_2 = (−2, 1, 0)ᵀ as a generalised eigenvector of order 2. Then set

    v_1 := (A − I)v_2 = (0, 0, 1)ᵀ

(notice v_1 is an ordinary eigenvector, as expected). For the eigenvalue λ = 3,

    (A − 3I)v_3 = ( −2 0 0 ; 1 0 0 ; 0 1 −2 )v_3 = 0  ⟹  v_3 = (0, 2, 1)ᵀ,

so take v_3 = (0, 2, 1)ᵀ. Hence finally set

    x_1(t) = e^t v_1 = (0, 0, e^t)ᵀ,
    x_2(t) = e^t (v_2 + t v_1) = (−2e^t, e^t, te^t)ᵀ,
    x_3(t) = e^{3t} v_3 = (0, 2e^{3t}, e^{3t})ᵀ.

The general solution to ẋ = Ax is hence

    x(t) = c_1 x_1(t) + c_2 x_2(t) + c_3 x_3(t),

where c_1, c_2, c_3 ∈ C are arbitrary constants.

2. Find three linearly independent solutions of

    ẋ(t) = Ax(t) = ( 1 1 0 ; 0 1 1 ; 0 0 1 ) x(t).

In this case π_A(λ) = (1 − λ)³, and since

    (A − I) = ( 0 1 0 ; 0 0 1 ; 0 0 0 ),

there is only one (linearly independent) eigenvector, v = (1, 0, 0)ᵀ. Hence λ = 1 has algebraic multiplicity 3 and geometric multiplicity one, and a generalised eigenvector v_3 of order three is required, i.e. such that

    (A − I)v_3 ≠ 0,  (A − I)²v_3 ≠ 0  and  (A − I)³v_3 = 0

(here (A − I)³ = 0_{3×3}, the 3×3 zero matrix). We can choose as such v_3 = (0, 0, 1)ᵀ. Then set

    v_2 := (A − I)v_3 = (0, 1, 0)ᵀ,  v_1 := (A − I)²v_3 = (A − I)v_2 = (1, 0, 0)ᵀ.

Hence, using the main theorem on the existence of solutions,

    x_1(t) = e^t v_1,  x_2(t) = e^t (v_2 + t v_1),  x_3(t) = e^t (v_3 + t v_2 + (t²/2) v_1)

are three linearly independent solutions.

1.4 Fundamental Matrices

Assume that for A ∈ C^{n×n}, n linearly independent solutions x_1(t), …, x_n(t) of the system ẋ(t) = Ax(t) have been found. Such n vector functions constitute a fundamental system. Given such a fundamental system, there exist constants c_i ∈ C such that any other solution x(t) can be written as x(t) = Σ_{i=1}^n c_i x_i(t). Given a fundamental system, we can define a fundamental matrix as follows.

Definition (Fundamental Matrix). A fundamental matrix for the system ẋ(t) = Ax(t) is defined by

    Φ(t) = ( x_1(t)  x_2(t)  ⋯  x_n(t) ),

that is, Φ : R → C^{n×n}, where the ith column is x_i : R → C^n, and x_i, i = 1, …, n, are n linearly independent solutions (i.e. a fundamental system). Then

    Φ̇(t) = ( ẋ_1(t) ⋯ ẋ_n(t) ) = ( Ax_1(t) ⋯ Ax_n(t) ) = A( x_1(t) ⋯ x_n(t) ) = AΦ(t).

Also note that Φ^{−1}(t) exists for all t ∈ R, since the x_i(t) are linearly independent.

Example. For A = ( 1 1 ; 4 1 ), two linearly independent solutions of the system ẋ(t) = Ax(t) are (see the example in §1.3 above)

    x_1(t) = e^{3t}(1, 2)ᵀ,  x_2(t) = e^{−t}(1, −2)ᵀ.

The functions x_1(t), x_2(t) constitute a fundamental system, and the matrix

    Φ(t) = ( e^{3t}  e^{−t} ; 2e^{3t}  −2e^{−t} )

is a fundamental matrix for ẋ(t) = Ax(t).

Lemma. If Φ(t) and Ψ(t) are two fundamental matrices for ẋ(t) = Ax(t), then there exists a constant matrix C ∈ C^{n×n} such that Φ(t) = Ψ(t)C for all t ∈ R.

Proof. Write Φ(t) = ( x_1(t) ⋯ x_n(t) ) and Ψ(t) = ( y_1(t) ⋯ y_n(t) ). Since y_1(t), …, y_n(t) constitute a fundamental system, each x_j(t) can be written in terms of y_1(t), …, y_n(t): there exist constants c_{ij} ∈ C such that

    x_j(t) = Σ_{i=1}^n c_{ij} y_i(t),  1 ≤ j ≤ n.

The above vector identity, for the kth components, k = 1, …, n, reads Φ_{kj}(t) = Σ_{i=1}^n c_{ij} Ψ_{ki}(t). This is equivalent to Φ(t) = Ψ(t)C, with C = [c_{ij}]_{n×n}, the matrix with (i, j)th entry c_{ij}. □

We show next that the matrix exponential is a fundamental matrix.

Theorem. The matrix function Φ(t) = exp(tA) is a fundamental matrix for the system ẋ(t) = Ax(t).

Proof.

    d/dt (exp(tA)) = d/dt Σ_{k=0}^∞ (1/k!)(tA)^k = Σ_{k=1}^∞ (1/(k−1)!) t^{k−1} A^k = A Σ_{k=1}^∞ (1/(k−1)!)(tA)^{k−1} = A exp(tA).

Hence Φ̇(t) = AΦ(t). Write Φ(t) = ( x_1(t) ⋯ x_n(t) ), where x_i(t) is the ith column of Φ. For the standard basis vectors

    e_1 = (1, 0, …, 0)ᵀ,  e_2 = (0, 1, 0, …, 0)ᵀ,  …,  e_n = (0, …, 0, 1)ᵀ,

it follows that x_i(t) = Φ(t)e_i and thus

    ẋ_i(t) = Φ̇(t)e_i = AΦ(t)e_i = Ax_i(t).

Hence ẋ_i(t) = Ax_i(t), and x_1(t), …, x_n(t) are linearly independent functions because x_1(0) = e_1, …, x_n(0) = e_n are linearly independent vectors. Therefore Φ(t) = exp(tA) is a fundamental matrix, by definition. □
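For the example A = ( 1 1 ; 4 1 ), the normalised product Φ(t)Φ(0)^{−1} and exp(tA) are both fundamental matrices equal to I at t = 0, so they coincide; a numerical sketch (not in the notes) assuming NumPy and SciPy:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[1.0, 1.0],
              [4.0, 1.0]])

def Phi(t):
    # Columns are the solutions e^{3t}(1, 2) and e^{-t}(1, -2)
    return np.array([[np.exp(3 * t), np.exp(-t)],
                     [2 * np.exp(3 * t), -2 * np.exp(-t)]])

t = 0.7
err = np.max(np.abs(Phi(t) @ np.linalg.inv(Phi(0)) - expm(t * A)))
print(err)
```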

1.4.1 Initial Value Problems Revisited

Let A ∈ C^{n×n}, and seek Φ : R → C^{n×n} such that Φ̇(t) = AΦ(t) and Φ(t_0) = Φ_0, for some initial condition Φ_0 ∈ C^{n×n}. If Ψ(t) is a fundamental matrix and Ψ(t_0) = Ψ_0, set

    Φ(t) = Ψ(t)Ψ_0^{−1}Φ_0.

Claim: Φ(t) solves the above initial value problem.

Proof. Firstly, Φ̇(t) = Ψ̇(t)Ψ_0^{−1}Φ_0 = AΨ(t)Ψ_0^{−1}Φ_0 = AΦ(t). Moreover, Φ(t_0) = Ψ(t_0)Ψ_0^{−1}Φ_0 = Ψ_0Ψ_0^{−1}Φ_0 = Φ_0, so the initial condition is satisfied. □

Corollary. The unique solution to the above initial value problem is given by Φ(t) = exp((t − t_0)A)Φ_0. (Proof: exercise.)

Example. Solve the initial value problem

    Φ̇(t) = ( 1 1 ; 4 1 )Φ(t),  Φ(0) = I.

Solution. Firstly (see the example above),

    Ψ(t) = ( e^{3t}  e^{−t} ; 2e^{3t}  −2e^{−t} )

is a fundamental matrix. Then, using the result from above,

    Φ(t) = Ψ(t)Ψ_0^{−1} = ( e^{3t}  e^{−t} ; 2e^{3t}  −2e^{−t} )( 1/2  1/4 ; 1/2  −1/4 )
         = ( (1/2)e^{3t} + (1/2)e^{−t}   (1/4)e^{3t} − (1/4)e^{−t} ; e^{3t} − e^{−t}   (1/2)e^{3t} + (1/2)e^{−t} ).

Corollary. Let Φ(t) be a fundamental matrix of ẋ = Ax. Then exp(tA) = Φ(t)Φ^{−1}(0).

Proof. Since exp(tA) is a fundamental matrix, exp(tA) = Φ(t)C for some constant matrix C ∈ C^{n×n}. At t = 0, I = Φ(0)C, so C = Φ^{−1}(0) (recall that Φ(t) is invertible for all t ∈ R). □

Hence determining a fundamental matrix essentially determines exp(tA).

Example. From above, the system Φ̇(t) = ( 1 1 ; 4 1 )Φ(t) has a fundamental matrix given by Ψ(t) = ( e^{3t}  e^{−t} ; 2e^{3t}  −2e^{−t} ), and Ψ(t)Ψ^{−1}(0) is given by

    ( (1/2)e^{3t} + (1/2)e^{−t}   (1/4)e^{3t} − (1/4)e^{−t} ; e^{3t} − e^{−t}   (1/2)e^{3t} + (1/2)e^{−t} ) = exp( t ( 1 1 ; 4 1 ) ).

An alternative view (computing the matrix exponential via diagonalisation): with A = ( 1 1 ; 4 1 ), it follows that λ_1 = 3 and λ_2 = −1 are eigenvalues with corresponding eigenvectors v_1 = (1, 2)ᵀ, v_2 = (1, −2)ᵀ. Write Av_1 = λ_1 v_1 and Av_2 = λ_2 v_2 as one matrix equation:

    A( v_1  v_2 ) = ( v_1  v_2 )( λ_1  0 ; 0  λ_2 )  ⟺  AT = TD,

for T := ( v_1 v_2 ) and D := diag(λ_1, λ_2). Notice that D is diagonal and T is invertible since its columns are linearly independent, so A = TDT^{−1} and T^{−1}AT = D.

Generally, given A ∈ C^{n×n} and T ∈ C^{n×n} invertible such that

    T^{−1}AT = D = diag(λ_1, …, λ_n),

then

    exp(tA) = exp(tTDT^{−1}) = Σ_{k=0}^∞ (1/k!) t^k (TDT^{−1})^k = Σ_{k=0}^∞ (1/k!) t^k T D^k T^{−1} = T ( Σ_{k=0}^∞ (1/k!) t^k D^k ) T^{−1} = T exp(tD) T^{−1} = T diag(e^{λ_1 t}, …, e^{λ_n t}) T^{−1}.

Returning to the example and taking the above into account,

    exp( t ( 1 1 ; 4 1 ) ) = ( 1 1 ; 2 −2 )( e^{3t}  0 ; 0  e^{−t} )( 1 1 ; 2 −2 )^{−1} = ( (1/2)e^{3t} + (1/2)e^{−t}   (1/4)e^{3t} − (1/4)e^{−t} ; e^{3t} − e^{−t}   (1/2)e^{3t} + (1/2)e^{−t} ).

The above shows how exp(tA) can be computed when A is diagonalisable (equivalently, has n linearly independent eigenvectors, taking which as a basis in fact diagonalises the matrix). More generally, when A is not necessarily diagonalisable, we can still bring it to a standard form (Jordan normal form) by incorporating the generalised eigenvectors. By the corollary above, the columns of a fundamental matrix are of the form e^{At}v for some v ∈ C^n. Now

    e^{At}v = e^{λtI} e^{(A−λI)t} v = e^{λt} Σ_{k=0}^∞ (1/k!)(A − λI)^k t^k v.

Thus, to simplify the calculation, seek λ ∈ C and v ∈ C^n such that (A − λI)^k v = 0 for all k ≥ m, for some m ∈ N; in other words, seek generalised eigenvectors. The following fundamental theorem from linear algebra ensures that there always exist n linearly independent generalised eigenvectors. (Hence we are always able to construct n linearly independent solutions in terms of those, via the recipes developed earlier.)
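The diagonalisation formula exp(tA) = T exp(tD) T^{−1} is easy to check numerically (a sketch assuming NumPy and SciPy):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[1.0, 1.0],
              [4.0, 1.0]])
T = np.array([[1.0, 1.0],
              [2.0, -2.0]])    # columns: eigenvectors for lambda = 3 and -1
D = np.diag([3.0, -1.0])

t = 0.5
# T exp(tD) T^{-1}, with exp(tD) formed entrywise on the diagonal
E = T @ np.diag(np.exp(t * np.diag(D))) @ np.linalg.inv(T)
err = np.max(np.abs(E - expm(t * A)))
print(err)
```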

Theorem (Primary Decomposition Theorem). Let A ∈ C^{n×n}, and let

    π_A(λ) = Π_{j=1}^l (λ_j − λ)^{m_j},

with the λ_j the distinct eigenvalues of A and Σ_{j=1}^l m_j = n. Then, for each j = 1, …, l, there exist m_j linearly independent (generalised) eigenvectors with respect to λ_j, and the combined set of n generalised eigenvectors is linearly independent.

A proof will be given in Algebra II (Spring semester) and is not required at present.

Example. Consider the system

    ẋ(t) = ( 1 2 3 ; 0 0 4 ; 0 0 0 ) x(t) = Ax(t).

Now Spec(A) = {1, 0, 0} = {1, 0}, and the corresponding eigenvectors are found by solving:

λ = 1: (A − I)v = ( 0 2 3 ; 0 −1 4 ; 0 0 −1 )v = 0  ⟹  v = v_1 := (1, 0, 0)ᵀ;

λ = 0: Av = ( 1 2 3 ; 0 0 4 ; 0 0 0 )v = 0  ⟹  v = (−2, 1, 0)ᵀ

(i.e. for λ = 0 the geometric multiplicity is 1 vs algebraic multiplicity 2). Now examine

    A² = ( 1 2 11 ; 0 0 0 ; 0 0 0 )

and find v_3 ∈ C³ such that A²v_3 = 0 and Av_3 ≠ 0, that is, a generalised eigenvector of order 2. Take e.g. v_3 = (−11, 0, 1)ᵀ. Hence set

    v_2 := (A − λI)v_3 = Av_3 = (−8, 4, 0)ᵀ

(notice that v_2 = 4(−2, 1, 0)ᵀ is an ordinary eigenvector). Then v_1, v_2, v_3 are linearly independent. Hence the general solution is

    x(t) = c_1 e^t (1, 0, 0)ᵀ + c_2 (−8, 4, 0)ᵀ + c_3 [ (−11, 0, 1)ᵀ + t(−8, 4, 0)ᵀ ],

for constants c_1, c_2, c_3 ∈ C.

1.5 Non-Homogeneous Systems

Let A ∈ C^{n×n} and consider the system

    ẋ(t) = Ax(t) + g(t)    (1.11)

for x : R → C^n and g : R → C^n. Let Φ(t) be a fundamental matrix of the corresponding homogeneous system ẋ(t) = Ax(t). Seek a solution of (1.11) in the variation-of-parameters form, i.e. x(t) = Φ(t)u(t) for some function u : R → C^n. For x(t) = Φ(t)u(t),

    ẋ(t) = Φ̇(t)u(t) + Φ(t)u̇(t) = AΦ(t)u(t) + Φ(t)u̇(t) = Ax(t) + Φ(t)u̇(t) = Ax(t) + g(t).

So the condition on u is that Φ(t)u̇(t) = g(t). Hence

    u̇(t) = Φ^{−1}(t)g(t)  ⟹  u(t) = ∫_{t_0}^t Φ^{−1}(s)g(s) ds + C,

where C ∈ C^n. Thus the general solution of (1.11) is given by

    x(t) = Φ(t)u(t) = Φ(t)[ ∫_{t_0}^t Φ^{−1}(s)g(s) ds + C ],  i.e.  x(t) = Φ(t)C + Φ(t) ∫_{t_0}^t Φ^{−1}(s)g(s) ds,    (1.12)

which is known as the variation of parameters formula. Given the initial value problem

    ẋ(t) = Ax(t) + g(t),  x(t_0) = x_0 ∈ C^n,    (1.13)

C can be determined:

    x(t_0) = Φ(t_0)C  ⟹  C = Φ^{−1}(t_0)x_0,

so the solution of the initial value problem (1.13) is given by

    x(t) = Φ(t)Φ^{−1}(t_0)x_0 + Φ(t) ∫_{t_0}^t Φ^{−1}(s)g(s) ds.    (1.14)

The solutions in these two cases are said to have been obtained by variation of parameters.

Example. Solve the initial value problem

    ẋ_1 = x_1 − x_2 + e^{−t},  x_1(0) = 0,
    ẋ_2 = −3x_1 − x_2,  x_2(0) = 2;

that is,

    ẋ(t) = Ax(t) + g(t) = ( 1 −1 ; −3 −1 )x(t) + ( e^{−t}, 0 )ᵀ,  x(0) = (0, 2)ᵀ.

Solution. Firstly, Spec(A) = {−2, 2}, and

    λ_1 = 2: v_1 = (1, −1)ᵀ;  λ_2 = −2: v_2 = (1, 3)ᵀ.

Hence

    Φ(t) = ( e^{2t}  e^{−2t} ; −e^{2t}  3e^{−2t} )

is a fundamental matrix for the corresponding homogeneous system ẋ(t) = Ax(t). Finding the inverse gives

    Φ^{−1}(t) = (1/4)( 3e^{−2t}  −e^{−2t} ; e^{2t}  e^{2t} ),

and

    Φ^{−1}(0)x(0) = (1/4)( 3  −1 ; 1  1 )(0, 2)ᵀ = (−1/2, 1/2)ᵀ.

Thus, by variation of parameters,

    x(t) = ( e^{2t}  e^{−2t} ; −e^{2t}  3e^{−2t} )( −1/2 ; 1/2 ) + ( e^{2t}  e^{−2t} ; −e^{2t}  3e^{−2t} ) ∫_0^t (1/4)( 3e^{−2s}  −e^{−2s} ; e^{2s}  e^{2s} )( e^{−s} ; 0 ) ds.

Integrating then gives, via routine calculation,

    x(t) = (1/4)( −e^{2t} + e^{−2t} ; e^{2t} + 3e^{−2t} + 4e^{−t} ).
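The outcome of the worked example can be verified symbolically; this sketch (not part of the notes, SymPy assumed available) takes the system ẋ = ( 1 −1 ; −3 −1 )x + (e^{−t}, 0)ᵀ with x(0) = (0, 2)ᵀ, as in the example, and checks the candidate solution by direct substitution:

```python
import sympy as sp

t = sp.symbols('t', real=True)
A = sp.Matrix([[1, -1],
               [-3, -1]])
g = sp.Matrix([sp.exp(-t), 0])

# Candidate solution from the variation-of-parameters computation
x = sp.Matrix([(-sp.exp(2 * t) + sp.exp(-2 * t)) / 4,
               (sp.exp(2 * t) + 3 * sp.exp(-2 * t) + 4 * sp.exp(-t)) / 4])

# Residual of x' = A x + g should vanish, and x(0) should match the IC
residual = (x.diff(t) - A * x - g).applyfunc(sp.simplify)
print(residual.T, x.subs(t, 0).T)
```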

Chapter 2
Laplace Transform

2.1 Definition and Basic Properties

Definition (Laplace Transform). The Laplace transform of a function f : [0, ∞) → R is the complex-valued function of the complex variable s defined by

    L{f(t)}(s) = f̂(s) := ∫_0^∞ f(t) e^{−st} dt,    (2.1)

for those s ∈ C for which the integral on the right-hand side exists (in the improper sense).

Remarks. 1. If g : R → C is a function, then

    ∫_0^∞ g(t) dt := ∫_0^∞ Re(g(t)) dt + i ∫_0^∞ Im(g(t)) dt.

2. For z ∈ C,

    |e^z| = |e^{Re z + i Im z}| = |e^{Re z}| |e^{i Im z}| = e^{Re z},

so |e^z| = e^{Re z} for all z ∈ C.

3. Not all functions possess Laplace transforms. Further, the Laplace transform of a function may generally only be defined for certain values of the variable s ∈ C, for which the improper integral in (2.1) makes sense. The following theorem determines when the improper integral makes sense; for it we need:

Definition (Exponential Order). A function f : [0, ∞) → R is said to be of exponential order if there exist constants α, M ∈ R with M > 0 such that

    |f(t)| ≤ M e^{αt},  t ∈ [0, ∞).

We prove:

Theorem (Existence of the Laplace Transform). Suppose that a function f : [0, ∞) → R is piecewise continuous and of exponential order with constants α ∈ R and M > 0. Then the Laplace transform f̂(s) exists for all s ∈ C with Re(s) > α.

Proof. Fix $s\in\mathbb{C}$ with $\mathrm{Re}\,s>\alpha$. For all $T>0$,
\[
\int_0^T|f(t)e^{-st}|\,dt=\int_0^T|f(t)|\,|e^{-st}|\,dt\le M\int_0^T e^{\alpha t}e^{-\mathrm{Re}(s)t}\,dt=M\int_0^T e^{(\alpha-\mathrm{Re}\,s)t}\,dt
\]
\[
=\frac{M}{\alpha-\mathrm{Re}\,s}\left[e^{(\alpha-\mathrm{Re}\,s)t}\right]_{t=0}^{t=T}=\frac{M}{\mathrm{Re}\,s-\alpha}\left[1-e^{(\alpha-\mathrm{Re}\,s)T}\right].
\]
Since $\alpha-\mathrm{Re}\,s<0$, sending $T\to\infty$ gives
\[
\int_0^\infty|f(t)e^{-st}|\,dt\le\frac{M}{\mathrm{Re}\,s-\alpha}.
\]
Now define, for $t\in[0,\infty)$,
\[
\alpha(t)=\mathrm{Re}(f(t)e^{-st});\qquad\beta(t)=\mathrm{Im}(f(t)e^{-st});
\]
\[
\alpha_+(t)=\max\{\alpha(t),0\};\quad\alpha_-(t)=\max\{-\alpha(t),0\};\quad\beta_+(t)=\max\{\beta(t),0\};\quad\beta_-(t)=\max\{-\beta(t),0\}.
\]
Note that $\alpha(t)=\alpha_+(t)-\alpha_-(t)$ and $\beta(t)=\beta_+(t)-\beta_-(t)$. Further,
\[
\int_0^T\alpha_+(t)\,dt\le\int_0^T|\alpha(t)|\,dt\le\int_0^T|f(t)e^{-st}|\,dt\le\frac{M}{\mathrm{Re}\,s-\alpha}<\infty.
\]
Hence, the integral being nondecreasing in $T$,
\[
\lim_{T\to\infty}\int_0^T\alpha_+(t)\,dt=:\alpha_+\quad\text{exists.}
\]
Similarly,
\[
\lim_{T\to\infty}\int_0^T\alpha_-(t)\,dt=:\alpha_-\quad\text{and}\quad\lim_{T\to\infty}\int_0^T\beta_\pm(t)\,dt=:\beta_\pm\quad\text{exist.}
\]
As a result,
\[
\lim_{T\to\infty}\int_0^T f(t)e^{-st}\,dt=\lim_{T\to\infty}\int_0^T(\alpha(t)+i\beta(t))\,dt=(\alpha_+-\alpha_-)+i(\beta_+-\beta_-)
\]
exists, i.e. $\int_0^\infty f(t)e^{-st}\,dt=\hat f(s)$ exists. $\square$

Example. Consider $f(t)=e^{ct}$, where $c\in\mathbb{C}$. Then $f(t)$ is of exponential order with $M=1$, $\alpha=\mathrm{Re}(c)$:
\[
|f(t)|=|e^{ct}|=e^{\mathrm{Re}(c)t}.
\]

Then
\[
\hat f(s)=\int_0^\infty e^{ct}e^{-st}\,dt=\lim_{T\to\infty}\int_0^T e^{(c-s)t}\,dt=\lim_{T\to\infty}\left[\frac{1}{c-s}e^{(c-s)t}\right]_0^T=\lim_{T\to\infty}\frac{1}{c-s}\left(e^{(c-s)T}-1\right).
\]
If $\mathrm{Re}\,s>\mathrm{Re}\,c$, then $\lim_{T\to\infty}e^{(c-s)T}=0$. So for $s\in\mathbb{C}$ with $\mathrm{Re}(s)>\mathrm{Re}(c)$,
\[
\int_0^\infty e^{ct}e^{-st}\,dt=\frac{1}{s-c}.
\]
Hence
\[
\mathcal{L}\{e^{ct}\}(s)=\frac{1}{s-c},\qquad\mathrm{Re}\,s>\mathrm{Re}\,c.\tag{2.2}
\]
In particular, for $c=0$, $f(t)=1$ with Laplace transform $\mathcal{L}\{1\}=\dfrac1s$.

The following theorem establishes some basic properties of the Laplace transform.

Theorem. Suppose that $f(t)$ and $g(t)$ are of exponential order. Then for $\mathrm{Re}(s)$ sufficiently large, the following properties hold:

1. Linearity: for $a,b\in\mathbb{C}$,
\[
\mathcal{L}\{af(t)+bg(t)\}(s)=a\mathcal{L}\{f(t)\}(s)+b\mathcal{L}\{g(t)\}(s)=a\hat f(s)+b\hat g(s).
\]
2. Transform of a Derivative:
\[
\mathcal{L}\{f'(t)\}(s)=s\hat f(s)-f(0).\tag{2.3}
\]
3. Transform of an Integral:
\[
\mathcal{L}\left\{\int_0^t f(\tau)\,d\tau\right\}(s)=\frac1s\hat f(s).
\]
4. Damping Formula:
\[
\mathcal{L}\{e^{-at}f(t)\}(s)=\hat f(s+a).
\]
5. Delay Formula: for $T>0$,
\[
\mathcal{L}\{f(t-T)H(t-T)\}(s)=e^{-sT}\hat f(s),
\]
where
\[
H(t)=\begin{cases}0&t\le0\\1&t>0\end{cases}
\]
is the Heaviside step function.

Remarks.
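Formulas such as (2.2) are easy to spot-check numerically (a check of mine, not part of the notes): truncate the improper integral in (2.1) at a large $T$ and integrate by quadrature. For real $c$ and $\mathrm{Re}\,s>c$ the discarded tail is exponentially small. A minimal Python sketch, using a plain trapezoid rule:

```python
import math

def laplace_numeric(f, s, T=40.0, n=200000):
    """Trapezoid-rule approximation of the truncated transform
    \\int_0^T f(t) e^{-st} dt; for Re(s) > alpha the tail past T is negligible."""
    h = T / n
    total = 0.5 * (f(0.0) + f(T) * math.exp(-s * T))
    for k in range(1, n):
        t = k * h
        total += f(t) * math.exp(-s * t)
    return h * total

c, s = 1.0, 3.0
approx = laplace_numeric(lambda t: math.exp(c * t), s)
exact = 1.0 / (s - c)   # formula (2.2): L{e^{ct}}(s) = 1/(s - c) for Re s > Re c
assert abs(approx - exact) < 1e-4
```

The same helper can be pointed at any piecewise continuous function of exponential order, which makes it handy for checking transform-table entries later in the chapter.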

1. Replacing $f$ by $f'$ in (2.3) gives
\[
\mathcal{L}\{f''(t)\}(s)=s\mathcal{L}\{f'\}(s)-f'(0)=s^2\hat f(s)-sf(0)-f'(0),
\]
and more generally, for the $n$-th derivative, inductively
\[
\mathcal{L}\{f^{(n)}(t)\}(s)=s^n\hat f(s)-\left[s^{n-1}f(0)+s^{n-2}f'(0)+\cdots+f^{(n-1)}(0)\right],
\]
where
\[
f^{(i)}(t):=\frac{d^if}{dt^i}
\]
is the $i$th derivative of $f$ (exercise).
2. In property 5, if $f(t)$ is undefined for $t<0$, we set $f(t)=0$ for $t<0$. In any case, the delayed function $f(t-T)H(t-T)$ coincides with $f(t-T)$ for $t\ge T$ and vanishes for $t<T$.

Proof.
1. Follows from the linearity of integration.
2. Integration by parts gives
\[
\mathcal{L}\{f'(t)\}(s)=\int_0^\infty f'(t)e^{-st}\,dt=\left[f(t)e^{-st}\right]_0^\infty+s\int_0^\infty f(t)e^{-st}\,dt.
\]
Since $f(t)$ is of exponential order,
\[
\lim_{t\to\infty}f(t)e^{-st}=0
\]
for all $s$ with $\mathrm{Re}\,s$ sufficiently large. So
\[
\mathcal{L}\{f'(t)\}(s)=-f(0)+s\mathcal{L}\{f(t)\}(s)=s\mathcal{L}\{f(t)\}(s)-f(0).
\]
3. Define
\[
F(t)=\int_0^t f(\tau)\,d\tau,
\]
so that $F(0)=0$ and $F'(t)=f(t)$. Therefore, by property 2,
\[
\mathcal{L}\{F'(t)\}(s)=s\mathcal{L}\{F(t)\}(s)-F(0)=s\mathcal{L}\{F(t)\}(s).
\]
Hence
\[
\mathcal{L}\{F(t)\}(s)=\frac1s\mathcal{L}\{F'(t)\}(s),
\]
so
\[
\mathcal{L}\left\{\int_0^t f(\tau)\,d\tau\right\}(s)=\frac1s\mathcal{L}\{f(t)\}(s).
\]
4. For any $a\in\mathbb{C}$,
\[
\mathcal{L}\{e^{-at}f(t)\}(s)=\int_0^\infty e^{-at}e^{-st}f(t)\,dt=\int_0^\infty e^{-(a+s)t}f(t)\,dt=\hat f(a+s).
\]
5. In this case,
\[
\mathcal{L}\{f(t-T)H(t-T)\}(s)=\int_0^\infty f(t-T)H(t-T)e^{-st}\,dt=\int_T^\infty f(t-T)e^{-st}\,dt.
\]
Let $\tau=t-T$, so that $\frac{d\tau}{dt}=1$. Hence
\[
\mathcal{L}\{f(t-T)H(t-T)\}(s)=\int_0^\infty f(\tau)e^{-s(\tau+T)}\,d\tau=e^{-sT}\int_0^\infty f(\tau)e^{-s\tau}\,d\tau=e^{-sT}\hat f(s).\qquad\square
\]

Examples.
1. From the exponential definition of $\cos:\mathbb{R}\to\mathbb{R}$,
\[
\mathcal{L}\{\cos at\}(s)=\mathcal{L}\left\{\tfrac12e^{iat}+\tfrac12e^{-iat}\right\}(s)=\tfrac12\mathcal{L}\{e^{iat}\}(s)+\tfrac12\mathcal{L}\{e^{-iat}\}(s),
\]
by linearity of $\mathcal{L}$. Therefore, using (2.2), for $\mathrm{Re}\,s>0$,
\[
\mathcal{L}\{\cos at\}(s)=\frac12\cdot\frac{1}{s-ia}+\frac12\cdot\frac{1}{s+ia}=\frac{s}{s^2+a^2}.
\]
2. Since $\sin at=-\frac1a\frac{d}{dt}\cos(at)$, using the transform of a derivative,
\[
\mathcal{L}\{\sin at\}(s)=-\frac1a\left(s\,\mathcal{L}\{\cos at\}(s)-1\right)=-\frac1a\left(\frac{s^2}{s^2+a^2}-1\right)=\frac{a}{s^2+a^2}.
\]
3. Let $f(t)=t^n$, $n\in\mathbb{N}$. Then
\[
t^n=n\int_0^t\tau^{n-1}\,d\tau,
\]
so
\[
\mathcal{L}\{t^n\}(s)=\mathcal{L}\left\{n\int_0^t\tau^{n-1}\,d\tau\right\}(s)=\frac ns\mathcal{L}\{t^{n-1}\}(s),
\]
and by induction (Exercise: Sheet 5)
\[
\mathcal{L}\{t^n\}(s)=\frac{n!}{s^{n+1}}.
\]
4. From example 1,
\[
\mathcal{L}\{\cos at\}(s)=\frac{s}{s^2+a^2},
\]
so by the damping formula,
\[
\mathcal{L}\{e^{-\lambda t}\cos(at)\}(s)=\mathcal{L}\{\cos(at)\}(s+\lambda)=\frac{s+\lambda}{(s+\lambda)^2+a^2}.
\]
5. Consider the piecewise continuous function
\[
f(t)=\begin{cases}t,&0\le t\le T\\T,&T<t\le2T\\0,&t>2T.\end{cases}
\]
To find $\hat f(s)$, first write $f$ in terms of the Heaviside step function:
\[
f(t)=t(1-H(t-T))+T(H(t-T)-H(t-2T))=t-(t-T)H(t-T)-TH(t-2T),
\]
so
\[
\mathcal{L}\{f(t)\}(s)=\mathcal{L}\{t\}(s)-\mathcal{L}\{(t-T)H(t-T)\}(s)-T\mathcal{L}\{H(t-2T)\}(s),
\]
by linearity. Therefore, using the delay formula,
\[
\mathcal{L}\{f(t)\}(s)=\frac{1}{s^2}-\frac{e^{-sT}}{s^2}-\frac Ts e^{-2sT}.
\]
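Transform-table entries like these can be verified numerically (an aside of mine, not in the notes) by truncating the defining integral (2.1) at a large $T$ and applying a trapezoid rule, as sketched below in Python with only the standard library:

```python
import math

def laplace(f, s, T=40.0, n=200000):
    """Trapezoid-rule approximation of \\int_0^T f(t) e^{-st} dt."""
    h = T / n
    total = 0.5 * (f(0.0) + f(T) * math.exp(-s * T))
    for k in range(1, n):
        t = k * h
        total += f(t) * math.exp(-s * t)
    return h * total

s, a = 2.0, 3.0
# Example 1: L{cos at}(s) = s / (s^2 + a^2)
assert abs(laplace(lambda t: math.cos(a * t), s) - s / (s**2 + a**2)) < 1e-4
# Example 3: L{t^n}(s) = n! / s^(n+1), here with n = 3
assert abs(laplace(lambda t: t**3, s) - math.factorial(3) / s**4) < 1e-4
```

Both checks pass to four decimal places, consistent with the closed forms derived above.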

2.2 Solving Differential Equations with the Laplace Transform

The Laplace transform turns a differential equation into an algebraic equation, the solutions of which can be transformed back into solutions of the differential equation. Again, this is best illustrated by example.

Examples.
1. Consider the second order initial value problem
\[
x''(t)-3x'(t)+2x(t)=1,\qquad x(0)=x'(0)=0.
\]
Taking the Laplace transform of both sides gives
\[
\mathcal{L}\{x''-3x'+2x\}(s)=\mathcal{L}\{1\}(s)=\frac1s,
\]
whence by linearity
\[
\mathcal{L}\{x''\}(s)-3\mathcal{L}\{x'\}(s)+2\mathcal{L}\{x\}(s)=\frac1s.
\]
Therefore,
\[
s^2\hat x-sx(0)-x'(0)-3s\hat x+3x(0)+2\hat x=\frac1s,
\]
which, using the initial conditions, gives
\[
s^2\hat x-3s\hat x+2\hat x=\frac1s\quad\Longrightarrow\quad\hat x(s^2-3s+2)=\frac1s,
\]
so
\[
\hat x(s)=\frac{1}{s(s-1)(s-2)}.
\]
Using partial fractions and then inverting the Laplace transform gives
\[
\hat x(s)=\frac{1/2}{s}-\frac{1}{s-1}+\frac{1/2}{s-2}\quad\Longrightarrow\quad x(t)=\frac12-e^t+\frac12e^{2t}.
\]
2. To solve the initial value problem
\[
x''(t)+4x(t)=\cos3t,\qquad x(0)=1,\quad x'(0)=-3,
\]
taking Laplace transforms gives
\[
s^2\hat x-sx(0)-x'(0)+4\hat x=\frac{s}{s^2+9}\quad\Longrightarrow\quad s^2\hat x-s+3+4\hat x=\frac{s}{s^2+9},
\]
so
\[
(s^2+4)\hat x=\frac{s}{s^2+9}+s-3\quad\Longrightarrow\quad\hat x(s)=\frac{s-3}{s^2+4}+\frac{s}{(s^2+4)(s^2+9)}.
\]
Since
\[
\frac{s}{(s^2+4)(s^2+9)}=\frac15\left(\frac{s}{s^2+4}-\frac{s}{s^2+9}\right),
\]
we get
\[
\hat x(s)=\frac15\cdot\frac{6s-15}{s^2+4}-\frac15\cdot\frac{s}{s^2+9}.
\]
Inverting the Laplace transform gives
\[
x(t)=\frac15\left(6\cos2t-\frac{15}{2}\sin2t-\cos3t\right).
\]
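A quick way to trust a solution obtained this way (my own check, not part of the notes) is to substitute it back into the differential equation using finite differences. The sketch below checks the Example 2 solution $x(t)=\tfrac15(6\cos2t-\tfrac{15}{2}\sin2t-\cos3t)$ against $x''+4x=\cos3t$ and its initial conditions; the test point and step size are arbitrary choices.

```python
import math

def x(t):
    """Candidate solution of x'' + 4x = cos 3t, x(0) = 1, x'(0) = -3."""
    return (6 * math.cos(2 * t) - 7.5 * math.sin(2 * t) - math.cos(3 * t)) / 5

def d2(f, t, h=1e-4):
    """Central second difference, O(h^2) approximation of f''(t)."""
    return (f(t + h) - 2 * f(t) + f(t - h)) / h**2

t0 = 0.7
residual = d2(x, t0) + 4 * x(t0) - math.cos(3 * t0)
assert abs(residual) < 1e-5            # ODE satisfied at t0
assert abs(x(0.0) - 1.0) < 1e-12       # x(0) = 1
h = 1e-5
assert abs((x(h) - x(-h)) / (2 * h) + 3.0) < 1e-6   # x'(0) = -3
```

A residual near machine noise at several sample points is strong evidence that no sign or coefficient was lost in the partial fractions step.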

3. The Laplace transform is also useful for solving systems of linear ordinary differential equations. For example, consider
\[
\dot x_1(t)-2x_2(t)=4t,\qquad x_1(0)=2;
\]
\[
\dot x_2(t)+2x_2(t)-4x_1(t)=-4t-2,\qquad x_2(0)=-5.
\]
Taking Laplace transforms gives
\[
s\hat x_1-x_1(0)-2\hat x_2=\frac{4}{s^2},\qquad s\hat x_2-x_2(0)+2\hat x_2-4\hat x_1=-\frac{4}{s^2}-\frac2s,
\]
and using the initial conditions gives
\[
s\hat x_1-2-2\hat x_2=\frac{4}{s^2},\qquad s\hat x_2+5+2\hat x_2-4\hat x_1=-\frac{4}{s^2}-\frac2s.
\]
Thus
\[
s\hat x_1-2\hat x_2=\frac{4+2s^2}{s^2},\tag{1}
\]
\[
-4\hat x_1+(2+s)\hat x_2=-\frac{4+2s+5s^2}{s^2}.\tag{2}
\]
Now $(2+s)\times(1)+2\times(2)$ gives
\[
(s(s+2)-8)\hat x_1=\frac{(2s^2+4)(s+2)}{s^2}-\frac{2(5s^2+2s+4)}{s^2}=\frac{2s^3-6s^2}{s^2}=2s-6,
\]
so that
\[
\hat x_1=\frac{2s-6}{s^2+2s-8}=\frac{2s-6}{(s+4)(s-2)}=\frac{7/3}{s+4}-\frac{1/3}{s-2},
\]
and hence
\[
x_1(t)=-\frac13e^{2t}+\frac73e^{-4t},\qquad x_2(t)=\frac12\dot x_1(t)-2t=-\frac13e^{2t}-\frac{14}{3}e^{-4t}-2t.
\]

2.3 The convolution integral

Suppose $\hat f(s)$ and $\hat g(s)$ are known and consider the product $\hat f(s)\hat g(s)$. Of what function is this the Laplace transform? Equivalently, what is the inverse Laplace transform of $\hat f(s)\hat g(s)$, i.e. $\mathcal{L}^{-1}\{\hat f(s)\hat g(s)\}$?

Firstly,
\[
\hat f(s)\hat g(s)=\hat f(s)\int_0^\infty g(\tau)e^{-s\tau}\,d\tau=\int_0^\infty\hat f(s)g(\tau)e^{-s\tau}\,d\tau.
\]
By the delay formula,
\[
\hat f(s)e^{-s\tau}=\int_0^\infty H(t-\tau)f(t-\tau)e^{-st}\,dt.
\]
Therefore,
\[
\hat f(s)\hat g(s)=\int_0^\infty\left[\int_0^\infty H(t-\tau)f(t-\tau)e^{-st}\,dt\right]g(\tau)\,d\tau=\int_0^\infty\left[\int_0^\infty H(t-\tau)f(t-\tau)g(\tau)\,d\tau\right]e^{-st}\,dt,
\]
by switching the order of integration. This gives
\[
\hat f(s)\hat g(s)=\int_0^\infty\left[\int_0^t f(t-\tau)g(\tau)\,d\tau\right]e^{-st}\,dt.
\]

Hence $\hat f(s)\hat g(s)$ is the Laplace transform of $\int_0^t f(t-\tau)g(\tau)\,d\tau$. This motivates the following definition.

Definition (Convolution). The function
\[
(f*g)(t):=\int_0^t f(t-\tau)g(\tau)\,d\tau
\]
is called the convolution of $f$ and $g$.

Examples.
1. In order to compute $\mathcal{L}^{-1}\left\{\dfrac{s}{(s^2+1)^2}\right\}$, recall that
\[
\mathcal{L}^{-1}\left\{\frac{s}{s^2+1}\right\}=\cos t,\qquad\mathcal{L}^{-1}\left\{\frac{1}{s^2+1}\right\}=\sin t.
\]
Hence
\[
\mathcal{L}^{-1}\left\{\frac{s}{(s^2+1)^2}\right\}=\sin t*\cos t=\int_0^t\sin(t-\tau)\cos\tau\,d\tau=\sin t\int_0^t\cos^2\tau\,d\tau-\cos t\int_0^t\sin\tau\cos\tau\,d\tau
\]
\[
=\frac12\sin t\int_0^t(1+\cos2\tau)\,d\tau-\frac12\cos t\int_0^t\sin2\tau\,d\tau=\frac12\sin t\left[\tau+\frac12\sin2\tau\right]_0^t+\frac12\cos t\left[\frac12\cos2\tau\right]_0^t
\]
\[
=\frac12\sin t\left(t+\frac12\sin2t\right)+\frac12\cos t\left(\frac12\cos2t-\frac12\right)=\frac12t\sin t+\frac14(\sin2t\sin t+\cos t\cos2t)-\frac14\cos t
\]
\[
=\frac12t\sin t+\frac14\left(\cos(2t-t)-\cos t\right)=\frac12t\sin t.
\]
Hence
\[
\mathcal{L}^{-1}\left\{\frac{s}{(s^2+1)^2}\right\}=\frac12t\sin t,\qquad\text{i.e.}\qquad\mathcal{L}\left\{\frac12t\sin t\right\}(s)=\frac{s}{(s^2+1)^2}.
\]
2. Consider the initial value problem
\[
\ddot y(t)+y(t)=f(t),\qquad y(0)=0,\quad\dot y(0)=1,
\]
where
\[
f(t)=\begin{cases}1,&t\in[0,1]\\0,&t>1.\end{cases}
\]
Write $f(t)=H(t)-H(t-1)$. Taking Laplace transforms of both sides gives
\[
s^2\hat y-sy(0)-\dot y(0)+\hat y=\mathcal{L}\{H(t)\}(s)-\mathcal{L}\{H(t-1)\}(s),
\]

whence
\[
(s^2+1)\hat y=\frac1s-\frac{e^{-s}}{s}+1,
\]
and so
\[
\hat y=\frac1s\cdot\frac{1}{s^2+1}+\frac{1}{s^2+1}-e^{-s}\cdot\frac1s\cdot\frac{1}{s^2+1}.
\]
Inverting Laplace transforms,
\[
y(t)=(H*\sin)(t)+\sin t-(H(\cdot-1)*\sin)(t).
\]
Now
\[
(H*\sin)(t)=\int_0^t\sin(t-\tau)\,d\tau=\left[\cos(t-\tau)\right]_{\tau=0}^{\tau=t}=1-\cos t,
\]
and
\[
(H(\cdot-1)*\sin)(t)=\int_0^t\sin(t-\tau)H(\tau-1)\,d\tau=H(t-1)\int_1^t\sin(t-\tau)\,d\tau=H(t-1)\left[\cos(t-\tau)\right]_{\tau=1}^{\tau=t}=H(t-1)\left[1-\cos(t-1)\right].
\]
Thus
\[
y(t)=1-\cos t+\sin t+H(t-1)\left[\cos(t-1)-1\right].
\]
3. In order to solve
\[
\ddot x(t)+\omega^2x(t)=g(t),\qquad\dot x(0)=x(0)=0,
\]
for arbitrary $g(t)$, taking Laplace transforms gives
\[
(s^2+\omega^2)\hat x=\hat g\quad\Longrightarrow\quad\hat x=\frac{\hat g}{s^2+\omega^2},
\]
and
\[
\mathcal{L}^{-1}\left\{\frac{1}{s^2+\omega^2}\right\}=\frac1\omega\sin(\omega t).
\]
Hence the solution is
\[
x(t)=\mathcal{L}^{-1}\left\{\frac{\hat g}{s^2+\omega^2}\right\}=\frac1\omega\sin(\omega t)*g(t)=\frac1\omega\int_0^t\sin(\omega(t-\tau))g(\tau)\,d\tau.
\]
4. Consider
\[
\mathcal{L}^{-1}\left\{\frac{s+9}{s^2+6s+13}\right\}=\mathcal{L}^{-1}\left\{\frac{s+9}{(s+3)^2+4}\right\}=\mathcal{L}^{-1}\left\{\frac{s+3}{(s+3)^2+4}+\frac{6}{(s+3)^2+4}\right\}
\]
(completing the square in the denominator). Now
\[
\mathcal{L}^{-1}\left\{\frac{s+a}{(s+a)^2+b^2}\right\}=e^{-at}\cos bt,\qquad\mathcal{L}^{-1}\left\{\frac{b}{(s+a)^2+b^2}\right\}=e^{-at}\sin bt.
\]
This gives
\[
\mathcal{L}^{-1}\left\{\frac{s+9}{s^2+6s+13}\right\}=\mathcal{L}^{-1}\left\{\frac{s+3}{(s+3)^2+4}\right\}+3\,\mathcal{L}^{-1}\left\{\frac{2}{(s+3)^2+4}\right\}=e^{-3t}\cos2t+3e^{-3t}\sin2t=e^{-3t}(\cos2t+3\sin2t).
\]
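Convolution identities such as $\sin t*\cos t=\tfrac12t\sin t$ from Example 1 can also be checked numerically (a check of mine, not in the notes) by discretising the defining integral with a trapezoid rule:

```python
import math

def conv(f, g, t, n=4000):
    """Trapezoid approximation of (f*g)(t) = \\int_0^t f(t - tau) g(tau) d tau."""
    h = t / n
    total = 0.5 * (f(t) * g(0.0) + f(0.0) * g(t))
    for k in range(1, n):
        tau = k * h
        total += f(t - tau) * g(tau)
    return h * total

t = 2.0
# Example 1: (sin * cos)(t) = t sin(t) / 2
assert abs(conv(math.sin, math.cos, t) - 0.5 * t * math.sin(t)) < 1e-6
```

The same routine also confirms commutativity of the convolution, established just below for general $f$ and $g$.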

Note that the convolution is commutative:
\[
(f*g)(t)=\int_0^t f(t-\tau)g(\tau)\,d\tau=\int_0^t f(\sigma)g(t-\sigma)\,d\sigma\qquad(\sigma=t-\tau).
\]
Hence
\[
(f*g)(t)=\int_0^t f(\sigma)g(t-\sigma)\,d\sigma=(g*f)(t).
\]
The Laplace transform can be used for computing matrix exponentials $\exp(tA)$. Consider the initial value problem
\[
\dot\Phi(t)=A\Phi(t),\qquad\Phi(0)=I.
\]
The solution is $\Phi(t)=\exp(tA)$. Taking Laplace transforms gives
\[
\mathcal{L}\{\dot\Phi(t)\}(s)=s\hat\Phi(s)-\Phi(0)=\mathcal{L}\{A\Phi(t)\}(s)=A\mathcal{L}\{\Phi(t)\}(s)=A\hat\Phi,
\]
by linearity. Thus
\[
s\hat\Phi-I=A\hat\Phi\quad\Longrightarrow\quad(sI-A)\hat\Phi=I.
\]
Finally,
\[
\hat\Phi=(sI-A)^{-1}
\]
(provided that $\mathrm{Re}\,s$ is greater than the real part of each eigenvalue of $A$). Now
\[
\mathcal{L}\{\exp(tA)\}(s)=(sI-A)^{-1},
\]
so
\[
\exp(tA)=\mathcal{L}^{-1}\{(sI-A)^{-1}\}(t).
\]
Thus, in principle, $\exp(tA)$ can be computed using the Laplace transform as follows:
- compute $(sI-A)^{-1}$;
- invert the entries of $(sI-A)^{-1}$, that is, find functions which have Laplace transforms equal to the entries of $(sI-A)^{-1}$.

Example. Consider
\[
A=\begin{pmatrix}0&2\\4&-2\end{pmatrix}.
\]
Then
\[
(sI-A)^{-1}=\begin{pmatrix}s&-2\\-4&s+2\end{pmatrix}^{-1}=\frac{1}{s(s+2)-8}\begin{pmatrix}s+2&2\\4&s\end{pmatrix}=\frac{1}{(s+4)(s-2)}\begin{pmatrix}s+2&2\\4&s\end{pmatrix},
\]
since $s^2+2s-8=(s+4)(s-2)$. Continuing using partial fractions,
\[
(sI-A)^{-1}=\begin{pmatrix}\dfrac{2/3}{s-2}+\dfrac{1/3}{s+4}&\dfrac{1/3}{s-2}-\dfrac{1/3}{s+4}\\[2mm]\dfrac{2/3}{s-2}-\dfrac{2/3}{s+4}&\dfrac{1/3}{s-2}+\dfrac{2/3}{s+4}\end{pmatrix},
\]
so
\[
\mathcal{L}^{-1}\{(sI-A)^{-1}\}=\begin{pmatrix}\frac23e^{2t}+\frac13e^{-4t}&\frac13e^{2t}-\frac13e^{-4t}\\[1mm]\frac23e^{2t}-\frac23e^{-4t}&\frac13e^{2t}+\frac23e^{-4t}\end{pmatrix}=\exp(tA).
\]
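The closed form for $\exp(tA)$ found via the Laplace transform can be cross-checked (my check, not in the notes) against the defining power series $\exp(tA)=\sum_k(tA)^k/k!$, truncated once the terms are negligible. A small self-contained Python sketch for the $2\times2$ example above:

```python
import math

A = [[0.0, 2.0], [4.0, -2.0]]

def mat_mul(X, Y):
    """2x2 matrix product."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def expm(A, t, terms=60):
    """exp(tA) via the truncated power series sum_k (tA)^k / k!."""
    tA = [[t * a for a in row] for row in A]
    S = [[1.0, 0.0], [0.0, 1.0]]   # running sum, starts at I
    P = [[1.0, 0.0], [0.0, 1.0]]   # running term (tA)^k / k!
    for k in range(1, terms):
        P = mat_mul(P, tA)
        P = [[p / k for p in row] for row in P]
        S = [[S[i][j] + P[i][j] for j in range(2)] for i in range(2)]
    return S

def expm_exact(t):
    """Closed form obtained above by inverting (sI - A)^{-1} entrywise."""
    e2, em4 = math.exp(2 * t), math.exp(-4 * t)
    return [[(2 * e2 + em4) / 3, (e2 - em4) / 3],
            [(2 * e2 - 2 * em4) / 3, (e2 + 2 * em4) / 3]]

t = 0.5
E, X = expm(A, t), expm_exact(t)
assert all(abs(E[i][j] - X[i][j]) < 1e-9 for i in range(2) for j in range(2))
```

Agreement to nine decimal places confirms the partial-fraction inversion entry by entry.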

Let us derive the variation of parameters formula via the Laplace transform. Consider the initial value problem
\[
\dot x(t)=Ax(t)+g(t),\qquad x(0)=x_0.
\]
Taking Laplace transforms gives
\[
\mathcal{L}\{\dot x\}(s)=A\mathcal{L}\{x\}(s)+\mathcal{L}\{g\}(s),
\]
i.e.
\[
s\hat x-x_0=A\hat x+\hat g.
\]
Hence
\[
(sI-A)\hat x=\hat g+x_0,
\]
so that
\[
\hat x=(sI-A)^{-1}x_0+(sI-A)^{-1}\hat g=\mathcal{L}\{\exp(tA)\}(s)x_0+\mathcal{L}\{\exp(tA)\}(s)\hat g.
\]
Taking Laplace inverses on both sides yields
\[
x(t)=\exp(tA)x_0+(\exp(\cdot\,A)*g)(t)=\exp(tA)x_0+\int_0^t\exp((t-\tau)A)g(\tau)\,d\tau=\exp(tA)x_0+\exp(tA)\int_0^t\exp(-\tau A)g(\tau)\,d\tau,
\]
as required.

2.4 The Dirac Delta function

What is the inverse Laplace transform of $\hat f(s)=1$, i.e. $\mathcal{L}^{-1}\{1\}$? Let $\epsilon>0$ and consider the piecewise constant function
\[
\delta_\epsilon(t)=\begin{cases}\frac1\epsilon,&t\in(0,\epsilon)\\0,&\text{otherwise.}\end{cases}
\]
If $f:\mathbb{R}\to\mathbb{R}$ is a continuous function, then
\[
\int_{-\infty}^\infty f(t)\delta_\epsilon(t)\,dt=\frac1\epsilon\int_0^\epsilon f(t)\,dt.
\]
The Mean Value Theorem for integrals states that for continuous functions $f:[a,b]\to\mathbb{R}$, there exists $\xi\in(a,b)$ such that
\[
f(\xi)=\frac{1}{b-a}\int_a^b f(t)\,dt.
\]
Hence there exists $\xi\in(0,\epsilon)$ such that
\[
\int_{-\infty}^\infty f(t)\delta_\epsilon(t)\,dt=f(\xi)\cdot\frac{\epsilon-0}{\epsilon}=f(\xi).
\]
Therefore,
\[
\lim_{\epsilon\to0}\int_{-\infty}^\infty f(t)\delta_\epsilon(t)\,dt=f(0),
\]
by continuity of $f$. This motivates introducing the Dirac Delta function $\delta(t)$ as an appropriate limit of $\delta_\epsilon(t)$ as $\epsilon\to0$:

Definition (Dirac Delta Function). The Dirac Delta function $\delta(t)$ is characterised by the following two properties:
- $\delta(t)=0$ for $t\in\mathbb{R}\setminus\{0\}$;
- for any function $f:\mathbb{R}\to\mathbb{R}$ that is continuous on an open interval containing $0$,
\[
\int_{-\infty}^\infty f(t)\delta(t)\,dt=f(0).
\]
(In fact, $\delta(t)$ is not a usual function, and its rigorous definition would require more advanced mathematical tools known as distribution theory, or the theory of generalised functions, which is beyond the scope of this course.)

Immediate Consequences.
1. Setting $f(t)=1$,
\[
\int_{-\infty}^\infty\delta(t)\,dt=1.
\]
2. Let $f:\mathbb{R}\to\mathbb{R}$ be continuous in an interval containing $a\in\mathbb{R}$. Then
\[
\int_{-\infty}^\infty f(t)\delta(t-a)\,dt=f(a).
\]
3. For $f(t)=e^{-st}$, where $s\in\mathbb{R}$ is fixed,
\[
\int_{-\infty}^\infty e^{-st}\delta(t)\,dt=e^{-s\cdot0}=1.
\]
Hence, formally,
\[
\mathcal{L}\{\delta(t)\}(s)=\int_0^\infty\delta(t)e^{-st}\,dt=1,
\]
so that
\[
\mathcal{L}\{\delta(t)\}(s)=1,\qquad\mathcal{L}^{-1}\{1\}=\delta(t).
\]
4. Further,
\[
(f*\delta)(t)=\int_0^t f(t-\tau)\delta(\tau)\,d\tau=f(t).
\]
Also, by commutativity, $f*\delta=\delta*f=f$.
5. Moreover, for $T\ge0$,
\[
\mathcal{L}\{\delta(t-T)\}(s)=\int_0^\infty\delta(t-T)e^{-st}\,dt=e^{-sT}.
\]
Hence
\[
\mathcal{L}\{\delta(t-T)\}(s)=e^{-sT},\qquad\mathcal{L}^{-1}\{e^{-sT}\}=\delta(t-T).
\]
6. Finally,
\[
(\delta(\cdot-T)*f)(t)=\int_0^t\delta(t-T-\tau)f(\tau)\,d\tau=f(t-T)H(t-T)=\begin{cases}0&\text{if }t<T;\\f(t-T)&\text{if }t\ge T.\end{cases}
\]
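The limiting behaviour of the approximating family $\delta_\epsilon$ from the previous section is easy to see numerically (my illustration, not part of the notes): for a continuous $f$, the averages $\frac1\epsilon\int_0^\epsilon f(t)\,dt$ approach $f(0)$ as $\epsilon\to0$.

```python
import math

def delta_eps_integral(f, eps, n=100000):
    """Trapezoid approximation of \\int f(t) delta_eps(t) dt = (1/eps) \\int_0^eps f(t) dt."""
    h = eps / n
    total = 0.5 * (f(0.0) + f(eps))
    for k in range(1, n):
        total += f(k * h)
    return h * total / eps

f = math.cos   # any continuous test function; here f(0) = 1
vals = [delta_eps_integral(f, eps) for eps in (0.1, 0.01, 0.001)]
# the averages approach f(0) = 1 as eps shrinks
assert abs(vals[-1] - 1.0) < 1e-5
assert abs(vals[-1] - 1.0) < abs(vals[0] - 1.0)
```

For $f=\cos$ the average is exactly $\sin(\epsilon)/\epsilon$, so the observed error decays like $\epsilon^2/6$, in line with the Mean Value Theorem argument above.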

Example. Solve
\[
\ddot y+2\dot y+y=\delta(t-1),\qquad y(0)=2,\quad\dot y(0)=3.
\]
Solution. Taking the Laplace transform of both sides gives
\[
s^2\hat y-sy(0)-\dot y(0)+2s\hat y-2y(0)+\hat y=e^{-s},
\]
so using the initial conditions,
\[
s^2\hat y-2s-3+2s\hat y-4+\hat y=e^{-s}\quad\Longrightarrow\quad(s^2+2s+1)\hat y=e^{-s}+2s+7,
\]
so that
\[
\hat y=\frac{e^{-s}+2s+7}{s^2+2s+1}=\frac{e^{-s}}{(s+1)^2}+\frac{5}{(s+1)^2}+\frac{2}{s+1},
\]
since $2s+7=2(s+1)+5$. Hence, using $\mathcal{L}^{-1}\{1/(s+1)^2\}=te^{-t}$ together with the delay formula,
\[
y(t)=(t-1)e^{-(t-1)}H(t-1)+5te^{-t}+2e^{-t}.
\]

2.5 Final value theorem

Theorem (Final Value Theorem). Let $g:[0,\infty)\to\mathbb{R}$ satisfy $|g(t)|\le Me^{-\alpha t}$ for some $\alpha,M>0$ (such a function $g$ is said to be exponentially decaying). Then
\[
\mathcal{L}\{g\}(0):=\int_0^\infty g(t)\,dt=\lim_{t\to\infty}(g*H)(t).
\]
Proof.
\[
\mathcal{L}\{g\}(0)=\int_0^\infty g(\tau)\,d\tau=\lim_{t\to\infty}\int_0^t g(\tau)\,d\tau.
\]
On the other hand, setting $\sigma=t-\tau$,
\[
(g*H)(t)=\int_0^t g(t-\sigma)H(\sigma)\,d\sigma=\int_0^t g(t-\sigma)\,d\sigma=\int_0^t g(\tau)\,d\tau.
\]
Hence
\[
\lim_{t\to\infty}(g*H)(t)=\lim_{t\to\infty}\int_0^t g(\tau)\,d\tau=\mathcal{L}\{g\}(0).\qquad\square
\]
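The identity $\int_0^\infty g(t)\,dt=\mathcal{L}\{g\}(0)$ is straightforward to illustrate numerically for a concrete exponentially decaying $g$ (my example, not from the notes). Take $g(t)=e^{-2t}\cos t$; by the damping formula, $\mathcal{L}\{g\}(s)=(s+2)/((s+2)^2+1)$, so $\mathcal{L}\{g\}(0)=2/5$.

```python
import math

def g(t):
    """Exponentially decaying test function, |g(t)| <= e^{-2t}."""
    return math.exp(-2 * t) * math.cos(t)

def integral(f, T=30.0, n=300000):
    """Trapezoid approximation of \\int_0^T f(t) dt; the tail past T is negligible."""
    h = T / n
    total = 0.5 * (f(0.0) + f(T))
    for k in range(1, n):
        total += f(k * h)
    return h * total

# L{e^{-2t} cos t}(s) = (s + 2)/((s + 2)^2 + 1), so L{g}(0) = 2/5
assert abs(integral(g) - 2.0 / 5.0) < 1e-6
```

The numerical value of $\int_0^\infty g$ matches evaluating the transform at $s=0$, exactly as the Final Value Theorem asserts.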