MA222 Ordinary Differential Equations and Control, Semester 1: Lecture Notes on ODEs and the Laplace Transform (Parts I and II)


MA222 - Ordinary Differential Equations and Control, Semester 1

Lecture Notes on ODEs and the Laplace Transform (Parts I and II)

Dr J D Evans, October 22

Special thanks (in alphabetical order) are due to Kenneth Deeley, Martin Killian, Prof E P Ryan and Prof V P Smyshlaev, all of whom have contributed significantly to these notes.

Contents

Revision
1 Systems of Linear Autonomous Ordinary Differential Equations
  1.1 Writing linear ODEs as first-order systems
  1.2 Autonomous homogeneous systems
  1.3 Linearly independent solutions
  1.4 Fundamental matrices
    1.4.1 Initial value problems revisited
  1.5 Non-homogeneous systems
2 Laplace Transform
  2.1 Definition and basic properties
  2.2 Solving differential equations with the Laplace transform
  2.3 The convolution integral
  2.4 The Dirac delta function
  2.5 Final value theorem

Revision

In science and engineering, mathematical models often lead to an equation that contains an unknown function together with some of its derivatives. Such an equation is called a differential equation (DE). We will use the following notation for derivatives of a function x of a scalar argument t:
$\dot{x} := \frac{dx}{dt} = x'(t)$, $\ddot{x} := \frac{d^2x}{dt^2} = x''(t)$, etc.

Examples
1. Free fall: consider free fall of a body of mass $m > 0$; the equation of motion is
$m\frac{d^2h}{dt^2} = -mg$,
where $h(t)$ is the height at time t and $-mg$ is the force due to gravity. To find the solutions of this problem, rewrite the equation as $\ddot{h}(t) = -g$ and integrate twice to obtain $\dot{h}(t) = -gt + c_1$ and $h(t) = -\frac{1}{2}gt^2 + c_1t + c_2$, for two constants $c_1, c_2 \in \mathbb{R}$. An interpretation of the constants of integration $c_1$ and $c_2$ is as follows: at $t = 0$ the height is $h(0) = c_2$, so $c_2$ is the initial height, and similarly $\dot{h}(0) = c_1$ is the initial velocity.
2. Radioactive decay: the rate of decay is proportional to the amount $x(t)$ of radioactive material present at time t:
$\dot{x}(t) = -kx(t)$, for some constant $k > 0$.
The solution of this equation is $x(t) = Ce^{-kt}$. The initial amount of radioactive substance at time $t = 0$ is $x(0) = C$.

Remark. Note that the solutions to differential equations are not unique; for each choice of $c_1$ and $c_2$ in the first example there is a solution of the form $h(t) = -(1/2)gt^2 + c_1t + c_2$. Likewise, for each choice of constant C in $x(t) = Ce^{-kt}$ there exists a solution to the problem $\dot{x}(t) = -kx(t)$. These integration constants are often found from initial conditions (IC). We hence often deal with initial value problems (IVPs), which consist of a differential equation together with some initial conditions.

Example. Solve the initial value problem
$\dot{y}(t) + ay(t) = 0$, $y(0) = 2$.
Solution. Rewriting the differential equation gives
$\frac{\dot{y}(t)}{y(t)} = -a$.
Since $\frac{d}{dt}(\ln y(t)) = \dot{y}(t)/y(t)$, it follows that $\ln y(t) = \int \dot{y}(t)/y(t)\,dt$; hence
$y(t) = \exp\left(\int \frac{\dot{y}(t)}{y(t)}\,dt\right) = e^{\int -a\,dt}$.
Hence $y(t) = e^{-at+c} = Ce^{-at}$ ($C := \exp(c)$). The initial condition yields $y(0) = 2 \Rightarrow C = 2$. Therefore, the solution of the initial value problem is the function $y : \mathbb{R} \to \mathbb{R}$ given by $y(t) = 2e^{-at}$, $t \in \mathbb{R}$.

Chapter 1: Systems of Linear Autonomous Ordinary Differential Equations

1.1 Writing linear ODEs as first-order systems

Any linear ordinary differential equation, of any order, can be written in terms of a first-order system. This is best illustrated by an example.

Example. Consider the equation
$\ddot{x}(t) + 2\dot{x}(t) + 3x(t) = t$.  (1.1)
In this case, set $X = \begin{pmatrix}\dot{x}\\ x\end{pmatrix}$, i.e. $X(t)$ is a two-dimensional vector-valued function of t. Then
$\dot{X} = \begin{pmatrix}\ddot{x}\\ \dot{x}\end{pmatrix} = \begin{pmatrix}-3x - 2\dot{x} + t\\ \dot{x}\end{pmatrix} = \begin{pmatrix}-2 & -3\\ 1 & 0\end{pmatrix}\begin{pmatrix}\dot{x}\\ x\end{pmatrix} + \begin{pmatrix}t\\ 0\end{pmatrix}$.  (1.2)
So $\dot{X} = AX + g$, where
$A = \begin{pmatrix}-2 & -3\\ 1 & 0\end{pmatrix}$, $g(t) = \begin{pmatrix}t\\ 0\end{pmatrix}$.
The system (1.2) and the equation (1.1) are equivalent in the sense that $x(t)$ solves the equation if and only if X solves the system. (For more examples of the above reduction see the Problem Sheet, QQ 3.) This motivates studying first-order ODE systems. The most general first-order system is of the form
$B(t)\dot{x}(t) + C(t)x(t) = f(t)$,
where $B, C : \mathbb{R} \to \mathbb{C}^{n\times n}$ and $f, x : \mathbb{R} \to \mathbb{C}^n$. Henceforth, we do not employ any special notation for vectors and vector-valued functions (like underlining or bold face), to simplify the notation and on the assumption that what is a vector will be clear from the context. If $B^{-1}(t)$ exists for all $t \in \mathbb{R}$, the equation may be rewritten to obtain
$\dot{x}(t) = -B^{-1}(t)C(t)x(t) + B^{-1}(t)f(t)$, or
$\dot{x}(t) = A(t)x(t) + g(t)$,  (1.3)
where $A(t) = -B^{-1}(t)C(t)$ and $g(t) = B^{-1}(t)f(t)$.

Definition.
1. Equation (1.3) (in fact, a system of equations) is called the standard form of a first-order system of ordinary differential equations.
2. If $A(t)$ and $g(t)$ do not depend on t, the system is called autonomous.
3. If $g \equiv 0$, the system (1.3) is called homogeneous; otherwise ($g \not\equiv 0$) it is called inhomogeneous.

1.2 Autonomous Homogeneous Systems

We consider initial-value problems for autonomous homogeneous systems, i.e. we assume A is a constant, generally complex-valued, $n\times n$ matrix.

Theorem (Existence and Uniqueness of IVPs). Let $A \in \mathbb{C}^{n\times n}$ and $x_0 \in \mathbb{C}^n$. Then the initial value problem
$\dot{x}(t) = Ax(t)$, $x(t_0) = x_0$  (1.4)
has a unique solution.

Proof (Sketch). We first argue that a solution to (1.4) exists and is given by $x(t) = \exp((t-t_0)A)x_0$, where $\exp((t-t_0)A)$ is an appropriately understood matrix exponential.

Definition (Matrix Exponential). For a matrix $Y \in \mathbb{C}^{n\times n}$, the function $\exp(\cdot\,Y) : \mathbb{R} \to \mathbb{C}^{n\times n}$ is defined by
$\exp(tY) = \sum_{k=0}^{\infty}\frac{1}{k!}(tY)^k = I + tY + \frac{1}{2}t^2Y^2 + \cdots$, $t \in \mathbb{R}$.  (1.5)
(We adopt the conventions $0! = 1$ and $Y^0 = I$, where I is the unit matrix.)

Remarks. The following hold true.
Fact 1: The series (1.5) converges for all $t \in \mathbb{R}$ (meaning the series of matrices converges for each component of the matrix).
Fact 2: The derivative satisfies $\frac{d}{dt}\exp(tY) = Y\exp(tY) = \exp(tY)\,Y$.

We define $x(t) = \exp((t-t_0)A)x_0$. Then
$x(t_0) = \exp((t_0-t_0)A)x_0 = \exp(0\cdot A)x_0 = Ix_0 = x_0$.
Hence $x(t_0) = x_0$. Also $\dot{x}(t) = A\exp((t-t_0)A)x_0 = Ax(t)$, so $x(t)$ is a solution.

To establish uniqueness, let $x(t)$ and $y(t)$ be two solutions of (1.4) and consider $h(t) := x(t) - y(t)$. Then $\dot{h} = \dot{x} - \dot{y} = Ax - Ay = A(x-y) = Ah$, hence $\dot{h} = Ah$ and $h(t_0) = x(t_0) - y(t_0) = 0$. We will show that $h(t) \equiv 0$. It suffices to show that $|h(t)| \equiv 0$, where $|\cdot|$ denotes the length of a vector. For $t \ge t_0$,
$|h(t)| = \left|\int_{t_0}^t Ah(s)\,ds\right| \le \int_{t_0}^t |Ah(s)|\,ds \le \|A\|\int_{t_0}^t |h(s)|\,ds$,  (1.6)
where $\|A\| := \max_{x\ne 0}\frac{|Ax|}{|x|}$, and we have used the fact that the length of an integral of a vector function is less than or equal to the integral of the length. The function
$f(t) := \exp(-|t-t_0|\,\|A\|)\int_{t_0}^t |h(s)|\,ds$
satisfies $f(t_0) = 0$ and $f(t) \ge 0$ for $t \ge t_0$. On the other hand, differentiating f and using (1.6) we conclude: $f'(t) \le 0$ for $t \ge t_0$ and $f'(t) \ge 0$ for $t \le t_0$. [For example, for $t \ge t_0$,
$f'(t) = \exp(-(t-t_0)\|A\|)\left[-\|A\|\int_{t_0}^t |h(s)|\,ds + |h(t)|\right] \le 0$ by (1.6).]
Taken together this implies that $f(t) \equiv 0$, hence $\int_{t_0}^t|h(s)|\,ds \equiv 0$, implying $h(t) \equiv 0$ as required.
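A minimal numerical sketch of the statement just proved, assuming NumPy/SciPy are available; the matrix A, the time t0 and the initial vector x0 below are illustrative choices, not taken from the notes.

```python
# Check that x(t) = exp((t - t0) A) x0 satisfies x' = A x and x(t0) = x0.
import numpy as np
from scipy.linalg import expm

A = np.array([[1.0, 1.0],
              [4.0, 1.0]])          # illustrative constant matrix
t0, x0 = 0.0, np.array([1.0, 0.0])  # illustrative initial data

def x(t):
    """Candidate solution of the IVP x' = A x, x(t0) = x0."""
    return expm((t - t0) * A) @ x0

t, h = 0.7, 1e-6
dx_dt = (x(t + h) - x(t - h)) / (2 * h)   # central finite difference
print(np.allclose(x(t0), x0))             # True: initial condition holds
print(np.allclose(dx_dt, A @ x(t)))       # True: ODE holds (to FD accuracy)
```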

To compute $\exp(tA)$ explicitly is generally not so easy. (This will be discussed in more detail later.) We will aim at constructing an appropriate number of linearly independent solutions of the system $\dot{x} = Ax$. Hence first:

Definition (Linear Independence of Functions). Let $I \subseteq \mathbb{R}$ be an interval. The vector functions $y_i : I \to \mathbb{C}^n$, $i = 1,\dots,m$, are said to be linearly independent on I if
$\sum_{i=1}^m c_iy_i(t) = 0$ for all $t \in I$ implies $c_1 = c_2 = \cdots = c_m = 0$.
The functions $y_i$ are said to be linearly dependent if they are not linearly independent, i.e. if there exist $c_1, c_2, \dots, c_m$, not all zero, such that $\sum_{i=1}^m c_iy_i(t) = 0$ for all $t \in I$.

Example. Let $I = [0,1]$ and
$y_1(t) = \begin{pmatrix}e^t\\ te^t\end{pmatrix}$, $y_2(t) = \begin{pmatrix}1\\ t\end{pmatrix}$.
Claim: $y_1$ and $y_2$ are linearly independent.
Proof. Assume they are not; then there exist $c_1, c_2$, not both zero, such that $c_1y_1(t) + c_2y_2(t) = 0$ for all $t \in I$, i.e.
$c_1e^t + c_2 = 0$ and $c_1te^t + c_2t = 0$ for all $t \in I$.
If $c_2$ is non-zero then clearly $c_1$ is non-zero too (and the other way round), and hence in particular $e^t = -c_2/c_1$ is constant, which yields a contradiction. Thus $y_1$ and $y_2$ are linearly independent.
For more examples see the Problem Sheet, QQ.

1.3 Linearly Independent Solutions

Consider the homogeneous autonomous system
$\dot{x} = Ax$,  (1.7)
where $A \in \mathbb{C}^{n\times n}$ and $x = x(t) : \mathbb{R} \to \mathbb{C}^n$. We prove that there exist n, and only n, linearly independent solutions of (1.7).

Theorem (Existence of n and no more than n Linearly Independent Solutions). There exist n linearly independent solutions of system (1.7). Any other solution can be written as
$x(t) = \sum_{i=1}^n c_ix_i(t)$, $c_i \in \mathbb{C}$.  (1.8)

Proof. Let $v_1, \dots, v_n \in \mathbb{C}^n$ be n linearly independent vectors. Let $x_i(t)$ be the unique solutions of the initial value problems
$\dot{x}_i = Ax_i$, $x_i(t_0) = v_i$, $i = 1, \dots, n$.
Now the functions $x_i(t)$ are linearly independent (see Problem 4(a) on the Problem Sheet), so the existence of n linearly independent solutions is assured. Now let $x(t)$ be an arbitrary solution of (1.7), with $x(t_0) = x_0 \in \mathbb{C}^n$. Since the $v_i$, $i = 1,\dots,n$, are linearly independent, they form a basis for $\mathbb{C}^n$, so in particular there exist $c_1, \dots, c_n \in \mathbb{C}$ such that
$x_0 = \sum_{i=1}^n c_iv_i$.

Now define
$y(t) = \sum_{i=1}^n c_ix_i(t)$, $y : \mathbb{R} \to \mathbb{C}^n$.
Then
$\dot{y}(t) = \sum_{i=1}^n c_i\dot{x}_i(t) = \sum_{i=1}^n c_iAx_i(t) = A\sum_{i=1}^n c_ix_i(t) = Ay(t)$.
Further,
$y(t_0) = \sum_{i=1}^n c_ix_i(t_0) = \sum_{i=1}^n c_iv_i = x_0$.
Hence, by uniqueness of the solution to the initial value problem $\dot{y} = Ay$ with $y(t_0) = x_0$, it follows that $y(t) = x(t)$.

A Method for Determining the Solutions. Some linearly independent solutions are found by seeking solutions of the form
$x(t) = e^{\lambda t}v$,  (1.9)
where $v \in \mathbb{C}^n$ is a non-zero constant vector. In this case $\dot{x}(t) = \lambda e^{\lambda t}v$, so to satisfy $\dot{x} = Ax$ it is required that
$\lambda e^{\lambda t}v = Ae^{\lambda t}v \iff Av = \lambda v$, $v \ne 0$.
Hence $x(t) = e^{\lambda t}v$ is a solution of $\dot{x} = Ax$ if and only if $\lambda$ is an eigenvalue of A with $v \in \mathbb{C}^n$ a corresponding eigenvector.

Example. Consider
$\dot{x}(t) = \begin{pmatrix}1 & 1\\ 4 & 1\end{pmatrix}x(t)$, hence $A = \begin{pmatrix}1 & 1\\ 4 & 1\end{pmatrix}$.  (1.10)
The eigenvalues of A are found by solving
$\det(A - \lambda I) = \det\begin{pmatrix}1-\lambda & 1\\ 4 & 1-\lambda\end{pmatrix} = (1-\lambda)^2 - 4 = \lambda^2 - 2\lambda - 3 = (\lambda - 3)(\lambda + 1) = 0 \implies \lambda_1 = 3,\ \lambda_2 = -1$.
The corresponding eigenvectors (denoting by $v_1$ and $v_2$ the components of the vector v):
$\lambda_1 = 3$: $Av = 3v \iff (A - 3I)v = \begin{pmatrix}-2 & 1\\ 4 & -2\end{pmatrix}\begin{pmatrix}v_1\\ v_2\end{pmatrix} = 0 \iff \begin{cases}-2v_1 + v_2 = 0\\ 4v_1 - 2v_2 = 0\end{cases}$;
one can take $v = v^{(1)} := \begin{pmatrix}1\\ 2\end{pmatrix}$ as an eigenvector corresponding to $\lambda_1 = 3$.
$\lambda_2 = -1$: $(A + I)v = \begin{pmatrix}2 & 1\\ 4 & 2\end{pmatrix}\begin{pmatrix}v_1\\ v_2\end{pmatrix} = 0 \iff \begin{cases}2v_1 + v_2 = 0\\ 4v_1 + 2v_2 = 0\end{cases}$;
take $v^{(2)} = \begin{pmatrix}1\\ -2\end{pmatrix}$.
So, in summary,
$x_1(t) = e^{3t}\begin{pmatrix}1\\ 2\end{pmatrix}$, $x_2(t) = e^{-t}\begin{pmatrix}1\\ -2\end{pmatrix}$
are solutions to $\dot{x} = Ax$. Further, since $v^{(1)}$ and $v^{(2)}$ are linearly independent vectors, the solutions $x_1(t)$ and $x_2(t)$ are linearly independent. Thus the general solution to (1.10) is
$x(t) = c_1x_1(t) + c_2x_2(t) = c_1e^{3t}\begin{pmatrix}1\\ 2\end{pmatrix} + c_2e^{-t}\begin{pmatrix}1\\ -2\end{pmatrix}$,
where $c_1, c_2 \in \mathbb{C}$ are arbitrary constants.
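A short computational sketch of the eigenvalue recipe just used, assuming NumPy is available; the choice of constants c below is purely illustrative.

```python
# Recover the eigenpairs of A = [[1, 1], [4, 1]] numerically and check that
# x(t) = sum_i c_i exp(lambda_i t) v_i solves x' = A x.
import numpy as np

A = np.array([[1.0, 1.0],
              [4.0, 1.0]])
eigvals, eigvecs = np.linalg.eig(A)    # columns of eigvecs are eigenvectors
print(eigvals)                         # approximately [ 3., -1.]

def general_solution(t, c):
    """x(t) = sum_i c_i exp(lambda_i t) v_i for a diagonalisable A."""
    return sum(c[i] * np.exp(eigvals[i] * t) * eigvecs[:, i]
               for i in range(len(eigvals)))

t, h, c = 0.3, 1e-6, (1.0, 2.0)        # illustrative time and constants
dx = (general_solution(t + h, c) - general_solution(t - h, c)) / (2 * h)
print(np.allclose(dx, A @ general_solution(t, c)))   # True
```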

Corollary. Let $A \in \mathbb{C}^{n\times n}$. If A has n linearly independent eigenvectors, then the general solution of $\dot{x} = Ax$ is given by
$x(t) = \sum_{i=1}^n c_ie^{\lambda_it}v_i$,
where the $v_i$ are the linearly independent eigenvectors with corresponding eigenvalues $\lambda_i$, and $c_i \in \mathbb{C}$ are arbitrary constants.

Definition (Notation). For $A \in \mathbb{C}^{n\times n}$, the characteristic polynomial will be denoted by $\pi_A(\lambda) := \det(A - \lambda I)$. The set of eigenvalues of A (the spectrum of A) is denoted by $\mathrm{Spec}(A) = \{\lambda \in \mathbb{C} : \pi_A(\lambda) = 0\}$.

As is known from linear algebra, for a matrix A to have n linearly independent eigenvectors (equivalently, for A to be diagonalisable), it would suffice e.g. if A had n distinct eigenvalues or were symmetric. However, a matrix may have fewer than n linearly independent eigenvectors, as the following example illustrates.

Example. An example of a $2\times2$ matrix which does not have two linearly independent eigenvectors. Let
$A = \begin{pmatrix}2 & 1\\ 0 & 2\end{pmatrix}$, hence $\pi_A(\lambda) = (2-\lambda)^2 = 0 \implies \lambda = 2$, an eigenvalue with algebraic multiplicity 2.
A corresponding eigenvector is found by solving
$(A - 2I)v = \begin{pmatrix}0 & 1\\ 0 & 0\end{pmatrix}\begin{pmatrix}v_1\\ v_2\end{pmatrix} = 0$.
In the latter one can choose only one linearly independent eigenvector, e.g. $v = \begin{pmatrix}1\\ 0\end{pmatrix}$. Hence
$x_1(t) = e^{2t}\begin{pmatrix}1\\ 0\end{pmatrix}$
solves $\dot{x} = Ax$. But the second linearly independent solution still remains to be found.

Finding the Second Linearly Independent Solution. Try $x_2(t) = u(t)e^{\lambda t}$, for some function $u : \mathbb{R} \to \mathbb{C}^2$. Substituting gives
$\dot{x}_2(t) = \dot{u}(t)e^{\lambda t} + \lambda u(t)e^{\lambda t} = Ax_2(t) \iff \dot{u}(t)e^{\lambda t} + \lambda u(t)e^{\lambda t} = Au(t)e^{\lambda t} \iff \dot{u}(t) + \lambda u(t) = Au(t) \iff \dot{u}(t) = (A - \lambda I)u(t)$.  (*)

Now try $u(t) = a + tb$, with $a, b \in \mathbb{C}^2$ constant vectors. Then $\dot{u}(t) = b$, so substituting into (*) gives
$b = (A - \lambda I)(a + tb) = (A - \lambda I)a + t(A - \lambda I)b$.
Comparing coefficients of 1 and t yields
$(A - \lambda I)a = b$, $(A - \lambda I)b = 0$.
Thus one can choose b as the eigenvector already found: $b = \begin{pmatrix}1\\ 0\end{pmatrix}$. This enables the equation to be solved for a:
$(A - \lambda I)a = \begin{pmatrix}0 & 1\\ 0 & 0\end{pmatrix}\begin{pmatrix}a_1\\ a_2\end{pmatrix} = \begin{pmatrix}a_2\\ 0\end{pmatrix} = \begin{pmatrix}1\\ 0\end{pmatrix}$, with e.g. $a = \begin{pmatrix}0\\ 1\end{pmatrix}$.
This gives
$x_2(t) = \left(\begin{pmatrix}0\\ 1\end{pmatrix} + t\begin{pmatrix}1\\ 0\end{pmatrix}\right)e^{2t}$.
Further, $x_1(0) = \begin{pmatrix}1\\ 0\end{pmatrix}$, $x_2(0) = \begin{pmatrix}0\\ 1\end{pmatrix}$, so $x_1(t)$ and $x_2(t)$ are linearly independent solutions of
$\dot{x}(t) = \begin{pmatrix}2 & 1\\ 0 & 2\end{pmatrix}x(t)$.
The general solution is therefore given by
$x(t) = c_1x_1(t) + c_2x_2(t) = c_1e^{2t}\begin{pmatrix}1\\ 0\end{pmatrix} + c_2e^{2t}\begin{pmatrix}t\\ 1\end{pmatrix}$,
where $c_1, c_2 \in \mathbb{C}$ are arbitrary constants.

We shall now consider the case when an $n\times n$ matrix has fewer than n linearly independent eigenvectors in greater generality. Notice that in the above example $(A - \lambda I)b = 0$, and $(A - \lambda I)a = b$ while $(A - \lambda I)^2a = (A - \lambda I)b = 0$. Hence while b is an ordinary eigenvector, a may be viewed as a generalised eigenvector. This motivates the following generic

Definition (Generalised Eigenvectors). Let $A \in \mathbb{C}^{n\times n}$, $\lambda \in \mathrm{Spec}(A)$. Then a vector $v \in \mathbb{C}^n$ is called a generalised eigenvector of order $m \in \mathbb{N}$, with respect to $\lambda$, if the following two conditions hold:
- $(A - \lambda I)^kv \ne 0$, $1 \le k \le m-1$;
- $(A - \lambda I)^mv = 0$.

In the above example, a is a generalised eigenvector of A with respect to $\lambda = 2$ of order $m = 2$. Notice that, in the light of the above definition, the ordinary eigenvectors are generalised eigenvectors of order 1.
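A quick check of the relations just derived, assuming NumPy is available; A, λ, b and a are the values from the 2×2 example above.

```python
# Verify (A - lambda I) b = 0, (A - lambda I) a = b and (A - lambda I)^2 a = 0
# for A = [[2, 1], [0, 2]], lambda = 2, b = (1, 0), a = (0, 1).
import numpy as np

A, lam = np.array([[2.0, 1.0], [0.0, 2.0]]), 2.0
b, a = np.array([1.0, 0.0]), np.array([0.0, 1.0])
N = A - lam * np.eye(2)

print(np.allclose(N @ b, 0))        # b is an ordinary eigenvector
print(np.allclose(N @ a, b))        # (A - lambda I) a = b
print(np.allclose(N @ N @ a, 0))    # a is a generalised eigenvector of order 2
# The corresponding solution x2(t) = (a + t b) e^{lambda t} then solves x' = A x.
```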

The following lemma gives a way of constructing a sequence of generalised eigenvectors, as long as we have one of order $m \ge 2$:

Lemma (Constructing Generalised Eigenvectors). Let $\lambda \in \mathrm{Spec}(A)$ and v be a generalised eigenvector of order $m \ge 2$. Then, for $k = 1, \dots, m-1$, the vector
$v_{m-k} := (A - \lambda I)^kv$
is a generalised eigenvector of order $m - k$.

Proof. It is required to show that
- $(A - \lambda I)^lv_{m-k} \ne 0$, $1 \le l \le m-k-1$;
- $(A - \lambda I)^{m-k}v_{m-k} = 0$.
Firstly,
$(A - \lambda I)^lv_{m-k} = (A - \lambda I)^l(A - \lambda I)^kv = (A - \lambda I)^{l+k}v \ne 0$ for $l + k \le m - 1$,
whence $(A - \lambda I)^lv_{m-k} \ne 0$ for $1 \le l \le m-k-1$. Moreover,
$(A - \lambda I)^{m-k}v_{m-k} = (A - \lambda I)^{m-k}(A - \lambda I)^kv = (A - \lambda I)^mv = 0$.

The need for incorporating the generalised eigenvectors arises as long as there are not enough ordinary eigenvectors, more precisely when the geometric multiplicity is different from (i.e. strictly less than) the algebraic multiplicity, defined as follows.

Definition (Algebraic and Geometric Multiplicity). Let $A \in \mathbb{C}^{n\times n}$ and $\lambda \in \mathrm{Spec}(A)$. Then $\lambda$ has geometric multiplicity $m \in \mathbb{N}$ if m is the largest number for which m linearly independent eigenvectors exist. If $m = 1$, $\lambda$ is said to be a simple eigenvalue. The algebraic multiplicity of $\lambda$ is the multiplicity of $\lambda$ as a root of $\pi_A(\lambda)$.

Examples.
(i) Consider $A = \begin{pmatrix}2 & 1\\ 0 & 2\end{pmatrix}$, hence $\mathrm{Spec}(A) = \{2\}$. Then, by the earlier example, $\lambda = 2$ has geometric multiplicity 1, but its algebraic multiplicity is 2.
(ii) Consider a matrix $A \in \mathbb{C}^{3\times3}$ with
$\pi_A(\lambda) = \det(A - \lambda I) = (1-\lambda)^2(2-\lambda)$.
Hence $\mathrm{Spec}(A) = \{1, 2\}$, and $\lambda = 1$ has algebraic multiplicity 2 while $\lambda = 2$ has algebraic multiplicity 1. For the eigenvectors: solving $(A - I)v = 0$ gives (for this particular A) two linearly independent eigenvectors, so $\lambda = 1$ has geometric multiplicity 2. Further, solving $(A - 2I)v = 0$ gives one eigenvector, so $\lambda = 2$ is a simple eigenvalue.

The following theorem provides a recipe for constructing m linearly independent solutions, as long as we have a generalised eigenvector of order $m \ge 2$.

Theorem. Let $A \in \mathbb{C}^{n\times n}$ and $v \in \mathbb{C}^n$ be a generalised eigenvector of order m, with respect to $\lambda \in \mathrm{Spec}(A)$. Define the following m vectors:
$v_1 = (A - \lambda I)^{m-1}v$
$v_2 = (A - \lambda I)^{m-2}v$
$\quad\vdots$
$v_{m-1} = (A - \lambda I)v$
$v_m = v$.
Then:
- the vectors $v_1, v_2, \dots, v_{m-1}, v_m$ are linearly independent;
- the functions
$x_k(t) = e^{\lambda t}\sum_{i=0}^{k-1}\frac{t^i}{i!}v_{k-i}$, $k \in \{1, \dots, m\}$,
form a set of linearly independent solutions of $\dot{x} = Ax$.

Proof.
- Seeking a contradiction, assume that the vectors are linearly dependent, so there exist $c_1, \dots, c_m$, not all zero, such that
$\sum_{i=1}^m c_iv_i = 0$.
Then there exists the last non-zero $c_j$: $c_j \ne 0$ and either $j = m$ or $c_i = 0$ for all $j < i \le m$. Hence
$\sum_{k=1}^j c_kv_k = \sum_{k=1}^j c_k(A - \lambda I)^{m-k}v = 0$.
Hence, pre-multiplying by $(A - \lambda I)^{j-1}$ gives
$(A - \lambda I)^{j-1}\sum_{k=1}^j c_k(A - \lambda I)^{m-k}v = \sum_{k=1}^j c_k(A - \lambda I)^{m-1+j-k}v = 0$.
Now for $k = 1, \dots, j-1$ it follows that $m - 1 + j - k \ge m$, so $(A - \lambda I)^{m-1+j-k}v = 0$ by definition of the generalised eigenvector. For $k = j$, it follows that $m - 1 + j - k = m - 1$, so $(A - \lambda I)^{m-1}v \ne 0$; this implies that $c_j = 0$, which is a contradiction. Hence the vectors are in fact linearly independent, as required.
- Since $v_1, \dots, v_m$ are linearly independent and $x_k(0) = v_k$, it follows that $x_1(t), \dots, x_m(t)$ are linearly independent functions (cf. Sheet Q 4(a)). It remains to show that $\dot{x}_k(t) = Ax_k(t)$, $1 \le k \le m$. Indeed,
$\dot{x}_k(t) = \lambda e^{\lambda t}\sum_{i=0}^{k-1}\frac{t^i}{i!}v_{k-i} + e^{\lambda t}\sum_{i=0}^{k-2}\frac{t^i}{i!}v_{k-1-i} = e^{\lambda t}\left[\lambda\sum_{i=0}^{k-1}\frac{t^i}{i!}v_{k-i} + \sum_{i=0}^{k-2}\frac{t^i}{i!}v_{k-1-i}\right] = e^{\lambda t}\left[\frac{t^{k-1}}{(k-1)!}\lambda v_1 + \sum_{i=0}^{k-2}\frac{t^i}{i!}(\lambda v_{k-i} + v_{k-1-i})\right]$.
Now, for $2 \le j \le m$,
$v_{j-1} = (A - \lambda I)^{m-j+1}v = (A - \lambda I)(A - \lambda I)^{m-j}v = (A - \lambda I)v_j = Av_j - \lambda v_j$.

Thus $\lambda v_j + v_{j-1} = Av_j$, and hence (having also used $\lambda v_1 = Av_1$)
$\dot{x}_k(t) = e^{\lambda t}\sum_{i=0}^{k-1}\frac{t^i}{i!}Av_{k-i} = Ae^{\lambda t}\sum_{i=0}^{k-1}\frac{t^i}{i!}v_{k-i} = Ax_k(t)$,
so $\dot{x}_k(t) = Ax_k(t)$ as required.

Examples.
1. Consider a $3\times3$ matrix A with
$\pi_A(\lambda) = (1-\lambda)^2(3-\lambda)$.
For the eigenvalue $\lambda = 1$, solving $(A - I)v = 0$ yields only one linearly independent eigenvector, i.e. the geometric multiplicity is 1 while the algebraic multiplicity is 2. Hence seek $v_2$ such that $(A - I)v_2 \ne 0$ and $(A - I)^2v_2 = 0$; computing $(A - I)^2$ shows such a $v_2$ exists, and we take it as a generalised eigenvector of order 2. Then set $v_1 := (A - I)v_2$ (notice that $v_1$ is an ordinary eigenvector, as expected). For the eigenvalue $\lambda = 3$, solving $(A - 3I)v_3 = 0$ gives an eigenvector $v_3$. Hence finally set
$x_1(t) = e^tv_1$, $x_2(t) = e^t(v_2 + tv_1)$, $x_3(t) = e^{3t}v_3$.
The general solution to $\dot{x} = Ax$ is hence
$x(t) = c_1x_1(t) + c_2x_2(t) + c_3x_3(t)$,
where $c_1, c_2, c_3 \in \mathbb{C}$ are arbitrary constants.

2. Find three linearly independent solutions of $\dot{x}(t) = Ax(t)$, where A is a $3\times3$ matrix with
$\pi_A(\lambda) = (1-\lambda)^3$
and for which $(A - I)v = 0$ has only one linearly independent solution v. Hence $\lambda = 1$ has algebraic multiplicity 3 and geometric multiplicity one, and a generalised eigenvector $v_3$ of order three is required, i.e. such that
$(A - I)v_3 \ne 0$, $(A - I)^2v_3 \ne 0$ and $(A - I)^3v_3 = 0$.
Having chosen such a $v_3$, set
$v_2 := (A - I)v_3$, $v_1 := (A - I)^2v_3 = (A - I)v_2$.
Hence, using the main theorem on the existence of solutions,
$x_1(t) = e^tv_1$, $x_2(t) = e^t(v_2 + tv_1)$, $x_3(t) = e^t\left(v_3 + tv_2 + \frac{t^2}{2}v_1\right)$
are three linearly independent solutions.

1.4 Fundamental Matrices

Assume that for $A \in \mathbb{C}^{n\times n}$, n linearly independent solutions $x_1(t), \dots, x_n(t)$ of the system $\dot{x}(t) = Ax(t)$ have been found. Such n vector functions constitute a fundamental system. Given such a fundamental system, there exist constants $c_i \in \mathbb{C}$ such that any other solution $x(t)$ can be written as $x(t) = \sum_{i=1}^n c_ix_i(t)$. Given a fundamental system, we can define a fundamental matrix as follows.

Definition (Fundamental Matrix). A fundamental matrix for the system $\dot{x}(t) = Ax(t)$ is defined by
$\Phi(t) = \big(x_1(t)\ \ x_2(t)\ \ \cdots\ \ x_n(t)\big)$,
that is, $\Phi : \mathbb{R} \to \mathbb{C}^{n\times n}$, where the ith column is $x_i : \mathbb{R} \to \mathbb{C}^n$, and $x_i$, $i = 1,\dots,n$, are n linearly independent solutions (i.e. a fundamental system). Then
$\dot{\Phi}(t) = (\dot{x}_1(t)\ \cdots\ \dot{x}_n(t)) = (Ax_1(t)\ \cdots\ Ax_n(t)) = A(x_1(t)\ \cdots\ x_n(t)) = A\Phi(t)$.
Also note that $\Phi^{-1}(t)$ exists for all $t \in \mathbb{R}$, since the $x_i(t)$ are linearly independent.

Example. For $A = \begin{pmatrix}1 & 1\\ 4 & 1\end{pmatrix}$, two linearly independent solutions of the system $\dot{x}(t) = Ax(t)$ are (see the Example in Section 1.3 above)
$x_1(t) = e^{3t}\begin{pmatrix}1\\ 2\end{pmatrix}$, $x_2(t) = e^{-t}\begin{pmatrix}1\\ -2\end{pmatrix}$.
The functions $x_1(t), x_2(t)$ constitute a fundamental system, and the matrix
$\Phi(t) = \begin{pmatrix}e^{3t} & e^{-t}\\ 2e^{3t} & -2e^{-t}\end{pmatrix}$
is a fundamental matrix for $\dot{x}(t) = Ax(t)$.

Lemma. If $\Phi(t)$ and $\Psi(t)$ are two fundamental matrices for $\dot{x}(t) = Ax(t)$, then there exists $C \in \mathbb{C}^{n\times n}$ (a constant matrix) such that $\Phi(t) = \Psi(t)C$ for all $t \in \mathbb{R}$.
Proof. Write $\Phi(t) = (x_1(t)\ \cdots\ x_n(t))$ and $\Psi(t) = (y_1(t)\ \cdots\ y_n(t))$. Since $y_1(t), \dots, y_n(t)$ constitute a fundamental system, each $x_j(t)$ can be written in terms of $y_1(t), \dots, y_n(t)$, so there exist constants $c_{ij} \in \mathbb{C}$ such that
$x_j(t) = \sum_{i=1}^n c_{ij}y_i(t)$, $1 \le j \le n$.
The above vector identity, for the kth components, $k = 1,\dots,n$, reads $\Phi_{kj}(t) = \sum_{i=1}^n c_{ij}\Psi_{ki}(t)$. This is equivalent to $\Phi(t) = \Psi(t)C$, with $C = [c_{ij}]_{n\times n}$, the matrix with (i,j)th entry $c_{ij}$.

We show next that the matrix exponential is a fundamental matrix.

Theorem. The matrix function $\Phi(t) = \exp(tA)$ is a fundamental matrix for the system $\dot{x}(t) = Ax(t)$.
Proof.
$\frac{d}{dt}\exp(tA) = \frac{d}{dt}\sum_{k=0}^{\infty}\frac{1}{k!}(tA)^k = \sum_{k=0}^{\infty}\frac{1}{k!}t^kA^{k+1} = A\sum_{k=0}^{\infty}\frac{1}{k!}(tA)^k = A\exp(tA)$.
Hence $\dot{\Phi}(t) = A\Phi(t)$. Write $\Phi(t) = (x_1(t)\ \cdots\ x_n(t))$, where $x_i(t)$ is the ith column of $\Phi$. For
$e_1 = \begin{pmatrix}1\\0\\\vdots\\0\end{pmatrix},\ e_2 = \begin{pmatrix}0\\1\\\vdots\\0\end{pmatrix},\ \dots,\ e_n = \begin{pmatrix}0\\\vdots\\0\\1\end{pmatrix}$,
it follows that $x_i(t) = \Phi(t)e_i$ and thus
$\dot{x}_i(t) = \dot{\Phi}(t)e_i = A\Phi(t)e_i = Ax_i(t)$.
Hence $\dot{x}_i(t) = Ax_i(t)$, and $x_1(t), \dots, x_n(t)$ are linearly independent functions because $x_1(0) = e_1, \dots, x_n(0) = e_n$ are linearly independent vectors. Therefore $\Phi(t) = \exp(tA)$ is a fundamental matrix, by definition.

1.4.1 Initial Value Problems Revisited

Let $A \in \mathbb{C}^{n\times n}$, and seek $\Phi : \mathbb{R} \to \mathbb{C}^{n\times n}$ such that $\dot{\Phi}(t) = A\Phi(t)$ and $\Phi(t_0) = \Phi_0$, for some initial condition $\Phi_0 \in \mathbb{C}^{n\times n}$. If $\Psi(t)$ is a fundamental matrix and $\Psi(t_0) = \Psi_0$, set $\Phi(t) = \Psi(t)\Psi_0^{-1}\Phi_0$.
Claim: $\Phi(t)$ solves the above initial value problem.
Proof.
- Firstly, $\dot{\Phi}(t) = \dot{\Psi}(t)\Psi_0^{-1}\Phi_0 = A\Psi(t)\Psi_0^{-1}\Phi_0 = A\Phi(t)$.
- Moreover, $\Phi(t_0) = \Psi(t_0)\Psi_0^{-1}\Phi_0 = \Psi_0\Psi_0^{-1}\Phi_0 = \Phi_0$, so the initial condition is satisfied.

Corollary. The unique solution to the above initial value problem is given by $\Phi(t) = \exp((t-t_0)A)\Phi_0$.
Proof: Exercise.

Example. Solve the initial value problem
$\dot{\Phi}(t) = \begin{pmatrix}1 & 1\\ 4 & 1\end{pmatrix}\Phi(t)$, $\Phi(0) = I$.
Solution. Firstly (see the example above),
$\Psi(t) = \begin{pmatrix}e^{3t} & e^{-t}\\ 2e^{3t} & -2e^{-t}\end{pmatrix}$
is a fundamental matrix. Then, using the result from above,
$\Phi(t) = \Psi(t)\Psi^{-1}(0) = \begin{pmatrix}e^{3t} & e^{-t}\\ 2e^{3t} & -2e^{-t}\end{pmatrix}\begin{pmatrix}1/2 & 1/4\\ 1/2 & -1/4\end{pmatrix} = \begin{pmatrix}\frac12e^{3t}+\frac12e^{-t} & \frac14e^{3t}-\frac14e^{-t}\\ e^{3t}-e^{-t} & \frac12e^{3t}+\frac12e^{-t}\end{pmatrix}$.

Corollary. Let $\Phi(t)$ be a fundamental matrix of $\dot{x} = Ax$. Then $\exp(tA) = \Phi(t)\Phi^{-1}(0)$.
Proof. Since $\exp(tA)$ is a fundamental matrix, $\exp(tA) = \Phi(t)C$ for some constant matrix $C \in \mathbb{C}^{n\times n}$. At $t = 0$, $I = \Phi(0)C$, so $C = \Phi^{-1}(0)$ (recall that $\Phi(t)$ is invertible for all $t \in \mathbb{R}$).
Hence determining a fundamental matrix essentially determines $\exp(tA)$.

Example. From above, the system
$\dot{\Phi}(t) = \begin{pmatrix}1 & 1\\ 4 & 1\end{pmatrix}\Phi(t)$
has a fundamental matrix given by
$\Psi(t) = \begin{pmatrix}e^{3t} & e^{-t}\\ 2e^{3t} & -2e^{-t}\end{pmatrix}$.
Moreover, $\Psi(t)\Psi^{-1}(0)$ is given by
$\begin{pmatrix}\frac12e^{3t}+\frac12e^{-t} & \frac14e^{3t}-\frac14e^{-t}\\ e^{3t}-e^{-t} & \frac12e^{3t}+\frac12e^{-t}\end{pmatrix} = \exp\left(t\begin{pmatrix}1 & 1\\ 4 & 1\end{pmatrix}\right)$.
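A short numerical sketch of this last identity, assuming NumPy/SciPy are available; the fundamental matrix Ψ is the one from the worked example, and the evaluation time is an arbitrary illustrative value.

```python
# Check that Psi(t) Psi(0)^{-1} reproduces exp(tA) for A = [[1, 1], [4, 1]].
import numpy as np
from scipy.linalg import expm

A = np.array([[1.0, 1.0], [4.0, 1.0]])

def Psi(t):
    """Fundamental matrix with columns e^{3t}(1, 2) and e^{-t}(1, -2)."""
    return np.array([[np.exp(3 * t),       np.exp(-t)],
                     [2 * np.exp(3 * t), -2 * np.exp(-t)]])

t = 0.4
Phi = Psi(t) @ np.linalg.inv(Psi(0.0))
print(np.allclose(Phi, expm(t * A)))   # True: both solve Phi' = A Phi, Phi(0) = I
```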

An alternative view (computing the matrix exponential via diagonalisation): with $A = \begin{pmatrix}1 & 1\\ 4 & 1\end{pmatrix}$, it follows that $\lambda_1 = 3$ and $\lambda_2 = -1$ are eigenvalues with corresponding eigenvectors
$v_1 = \begin{pmatrix}1\\ 2\end{pmatrix}$, $v_2 = \begin{pmatrix}1\\ -2\end{pmatrix}$.
Write $Av_1 = \lambda_1v_1$ and $Av_2 = \lambda_2v_2$ as one matrix equation:
$A(v_1\ v_2) = (v_1\ v_2)\begin{pmatrix}\lambda_1 & 0\\ 0 & \lambda_2\end{pmatrix} \iff AT = TD$,
for matrices T and D as defined above. Notice that D is diagonal and T is invertible since its columns are linearly independent, so $A = TDT^{-1}$ and $T^{-1}AT = D$.

Generally, given $A \in \mathbb{C}^{n\times n}$ and $T \in \mathbb{C}^{n\times n}$ invertible such that
$T^{-1}AT = D = \mathrm{diag}(\lambda_1, \dots, \lambda_n)$,
then
$\exp(tA) = \exp(tTDT^{-1}) = \sum_{k=0}^{\infty}\frac{1}{k!}t^k(TDT^{-1})^k = \sum_{k=0}^{\infty}\frac{1}{k!}t^kTD^kT^{-1} = T\left(\sum_{k=0}^{\infty}\frac{1}{k!}t^kD^k\right)T^{-1} = T\exp(tD)T^{-1} = T\,\mathrm{diag}(e^{\lambda_1t}, \dots, e^{\lambda_nt})\,T^{-1}$.

Returning to the example and taking the above into account,
$\exp\left(t\begin{pmatrix}1 & 1\\ 4 & 1\end{pmatrix}\right) = \begin{pmatrix}1 & 1\\ 2 & -2\end{pmatrix}\begin{pmatrix}e^{3t} & 0\\ 0 & e^{-t}\end{pmatrix}\begin{pmatrix}1 & 1\\ 2 & -2\end{pmatrix}^{-1} = \begin{pmatrix}\frac12e^{3t}+\frac12e^{-t} & \frac14e^{3t}-\frac14e^{-t}\\ e^{3t}-e^{-t} & \frac12e^{3t}+\frac12e^{-t}\end{pmatrix}$.

The above shows how $\exp(tA)$ can be computed when A is diagonalisable (equivalently, has n linearly independent eigenvectors, taking which as a basis in fact diagonalises the matrix). More generally, when A is not necessarily diagonalisable, we can still present it in some standard form (called Jordan's normal form) by incorporating the generalised eigenvectors.
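A brief computational sketch of the identity $\exp(tA) = T\exp(tD)T^{-1}$, assuming NumPy/SciPy are available; the eigenvector scaling chosen by the library differs from the notes, but the product is the same.

```python
# exp(tA) via diagonalisation, compared with the series-based expm.
import numpy as np
from scipy.linalg import expm

A = np.array([[1.0, 1.0], [4.0, 1.0]])
lam, T = np.linalg.eig(A)              # A T = T diag(lam), T invertible here

t = 0.25
via_diag = T @ np.diag(np.exp(lam * t)) @ np.linalg.inv(T)
print(np.allclose(via_diag, expm(t * A)))   # True
```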

By the Corollary, the columns of a fundamental matrix are of the form $e^{At}v$, for some $v \in \mathbb{C}^n$. Now
$e^{At}v = e^{\lambda tI}e^{(A-\lambda I)t}v = e^{\lambda t}\left[\sum_{k=0}^{\infty}\frac{1}{k!}(A-\lambda I)^kt^k\right]v$.
Thus, to simplify any calculation, seek $\lambda \in \mathbb{C}$ and $v \in \mathbb{C}^n$ such that $(A-\lambda I)^kv = 0$ for all $k \ge m$, for some $m \in \mathbb{N}$; in other words, seek generalised eigenvectors. The following fundamental theorem from linear algebra ensures that there always exist n linearly independent generalised eigenvectors. (Hence, we are always able to construct n linearly independent solutions in terms of those, via the earlier developed recipes.)

Theorem (Primary Decomposition Theorem). Let $A \in \mathbb{C}^{n\times n}$, and let
$\pi_A(\lambda) = \prod_{j=1}^{l}(\lambda_j - \lambda)^{m_j}$
with the $\lambda_j$ distinct eigenvalues of A and $\sum_{j=1}^{l}m_j = n$. Then, for each $j = 1, \dots, l$, there exist $m_j$ linearly independent (generalised) eigenvectors with respect to $\lambda_j$, and the combined set of n generalised eigenvectors is linearly independent.
A proof will be given in Algebra II (Spring semester), and is not required at present.

Example. Consider the system $\dot{x}(t) = Ax(t)$ for a $3\times3$ matrix A with $\mathrm{Spec}(A) = \{1, 0, 0\} = \{1, 0\}$. The corresponding eigenvectors are found by solving:
$\lambda = 1$: $(A - I)v = 0$, giving an eigenvector $v_1$;
$\lambda = 0$: $Av = 0$, giving (for this A) only one linearly independent eigenvector, i.e. the geometric multiplicity is 1 vs algebraic multiplicity 2.
Now examine $A^2$ and find $v_3 \in \mathbb{C}^3$ such that $A^2v_3 = 0$ and $Av_3 \ne 0$, that is, find a generalised eigenvector of order 2 with respect to $\lambda = 0$. Hence set $v_2 := (A - 0\cdot I)v_3 = Av_3$ (notice that $v_2$ is an ordinary eigenvector, as expected). Then $v_1, v_2, v_3$ are linearly independent. Hence the general solution is
$x(t) = c_1e^tv_1 + c_2v_2 + c_3(v_3 + tv_2)$,
for constants $c_1, c_2, c_3 \in \mathbb{C}$.

1.5 Non-Homogeneous Systems

Let $A \in \mathbb{C}^{n\times n}$ and consider the system
$\dot{x}(t) = Ax(t) + g(t)$  (1.11)

for $x : \mathbb{R} \to \mathbb{C}^n$ and $g : \mathbb{R} \to \mathbb{C}^n$. Let $\Phi(t)$ be a fundamental matrix of the corresponding homogeneous system $\dot{x}(t) = Ax(t)$. Seek a solution of (1.11) in the variation of parameters form, i.e. $x(t) = \Phi(t)u(t)$ for some function $u : \mathbb{R} \to \mathbb{C}^n$. For $x(t) = \Phi(t)u(t)$,
$\dot{x}(t) = \dot{\Phi}(t)u(t) + \Phi(t)\dot{u}(t) = A\Phi(t)u(t) + \Phi(t)\dot{u}(t) = Ax(t) + \Phi(t)\dot{u}(t) = Ax(t) + g(t)$.
So the condition on u is that
$\Phi(t)\dot{u}(t) = g(t)$.
Hence
$\dot{u}(t) = \Phi^{-1}(t)g(t) \implies u(t) = \int_{t_0}^t\Phi^{-1}(s)g(s)\,ds + C$,
where $C \in \mathbb{C}^n$. Thus the general solution of (1.11) is given by
$x(t) = \Phi(t)u(t) = \Phi(t)\left[\int_{t_0}^t\Phi^{-1}(s)g(s)\,ds + C\right]$,
i.e.
$x(t) = \Phi(t)C + \Phi(t)\int_{t_0}^t\Phi^{-1}(s)g(s)\,ds$,  (1.12)
which is known as the variation of parameters formula. Given the initial value problem
$\dot{x}(t) = Ax(t) + g(t)$, $x(t_0) = x_0 \in \mathbb{C}^n$,  (1.13)
C can be determined:
$x(t_0) = \Phi(t_0)C + \Phi(t_0)\int_{t_0}^{t_0}\Phi^{-1}(s)g(s)\,ds = \Phi(t_0)C \implies C = \Phi^{-1}(t_0)x_0$.
Thus the solution of the initial value problem (1.13) is given by
$x(t) = \Phi(t)\Phi^{-1}(t_0)x_0 + \Phi(t)\int_{t_0}^t\Phi^{-1}(s)g(s)\,ds$.  (1.14)
The solutions in these two cases are said to have been obtained by variation of parameters.

Example. Solve the initial value problem
$\dot{x}_1 = x_1 - x_2 + e^{-t}$, $x_1(0) = 0$;
$\dot{x}_2 = -3x_1 - x_2$, $x_2(0) = 2$.
Solution.
$\dot{x}(t) = Ax(t) + g(t) = \begin{pmatrix}1 & -1\\ -3 & -1\end{pmatrix}x(t) + \begin{pmatrix}e^{-t}\\ 0\end{pmatrix}$, $x(0) = \begin{pmatrix}0\\ 2\end{pmatrix}$.
Firstly, $\mathrm{Spec}(A) = \{-2, 2\}$ and
$\lambda = -2$: $v_1 = \begin{pmatrix}1\\ 3\end{pmatrix}$; $\lambda = 2$: $v_2 = \begin{pmatrix}1\\ -1\end{pmatrix}$.
Hence
$\Phi(t) = \begin{pmatrix}e^{-2t} & e^{2t}\\ 3e^{-2t} & -e^{2t}\end{pmatrix}$
is a fundamental matrix for the corresponding homogeneous system $\dot{x}(t) = Ax(t)$. Finding the inverse gives
$\Phi^{-1}(t) = \frac14\begin{pmatrix}e^{2t} & e^{2t}\\ 3e^{-2t} & -e^{-2t}\end{pmatrix}$,

and
$\Phi^{-1}(0)x(0) = \frac14\begin{pmatrix}1 & 1\\ 3 & -1\end{pmatrix}\begin{pmatrix}0\\ 2\end{pmatrix} = \begin{pmatrix}1/2\\ -1/2\end{pmatrix}$.
Thus, by variation of parameters,
$x(t) = \begin{pmatrix}e^{-2t} & e^{2t}\\ 3e^{-2t} & -e^{2t}\end{pmatrix}\begin{pmatrix}1/2\\ -1/2\end{pmatrix} + \begin{pmatrix}e^{-2t} & e^{2t}\\ 3e^{-2t} & -e^{2t}\end{pmatrix}\int_0^t\frac14\begin{pmatrix}e^{2s} & e^{2s}\\ 3e^{-2s} & -e^{-2s}\end{pmatrix}\begin{pmatrix}e^{-s}\\ 0\end{pmatrix}ds$.
Integrating then gives, via routine calculation,
$x(t) = \frac12\begin{pmatrix}e^{-2t} - e^{2t}\\ 3e^{-2t} + e^{2t}\end{pmatrix} + \frac14\begin{pmatrix}e^{2t} - e^{-2t}\\ 4e^{-t} - 3e^{-2t} - e^{2t}\end{pmatrix} = \begin{pmatrix}-\frac14e^{2t} + \frac14e^{-2t}\\ \frac14e^{2t} + \frac34e^{-2t} + e^{-t}\end{pmatrix} = \frac14\begin{pmatrix}e^{-2t} - e^{2t}\\ e^{2t} + 3e^{-2t} + 4e^{-t}\end{pmatrix}$.
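A numerical sketch of formula (1.14) for this example, assuming NumPy/SciPy are available and taking $\exp(tA)$ as the fundamental matrix; the quadrature-based evaluation is checked against the closed form obtained above.

```python
# Variation of parameters: x(t) = exp(tA) x0 + int_0^t exp((t-s)A) g(s) ds.
import numpy as np
from scipy.linalg import expm
from scipy.integrate import quad

A = np.array([[1.0, -1.0], [-3.0, -1.0]])
x0 = np.array([0.0, 2.0])
g = lambda t: np.array([np.exp(-t), 0.0])

def x_vop(t):
    integrand = lambda s, i: (expm((t - s) * A) @ g(s))[i]
    integral = np.array([quad(integrand, 0.0, t, args=(i,))[0] for i in range(2)])
    return expm(t * A) @ x0 + integral

def x_exact(t):
    return 0.25 * np.array([np.exp(-2*t) - np.exp(2*t),
                            np.exp(2*t) + 3*np.exp(-2*t) + 4*np.exp(-t)])

print(np.allclose(x_vop(1.0), x_exact(1.0)))   # True
```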

Chapter 2: Laplace Transform

2.1 Definition and Basic Properties

Definition (Laplace Transform). The Laplace transform of a function $f : [0,\infty) \to \mathbb{R}$ is the complex-valued function of the complex variable s defined by
$\mathcal{L}\{f(t)\}(s) = \hat{f}(s) := \int_0^{\infty} f(t)e^{-st}\,dt$,  (2.1)
for such $s \in \mathbb{C}$ that the integral on the right-hand side exists (in the improper sense).

Remarks.
1. If $g : \mathbb{R} \to \mathbb{C}$ is a function, then $\int g(t)\,dt := \int\mathrm{Re}(g(t))\,dt + i\int\mathrm{Im}(g(t))\,dt$.
2. For $z \in \mathbb{C}$,
$|e^z| = |e^{\mathrm{Re}\,z + i\,\mathrm{Im}\,z}| = |e^{\mathrm{Re}\,z}|\,|e^{i\,\mathrm{Im}\,z}| = e^{\mathrm{Re}\,z}$,
so $|e^z| = e^{\mathrm{Re}\,z}$ for all $z \in \mathbb{C}$.
3. Not all functions possess Laplace transforms. Further, the Laplace transform of a function may generally only be defined for certain values of the variable $s \in \mathbb{C}$, for the improper integral in (2.1) to make sense. The following theorem determines when the improper integral makes sense.

For the improper integral in (2.1) to make sense, we need:

Definition (Exponential Order). A function $f : [0,\infty) \to \mathbb{R}$ is said to be of exponential order if there exist constants $\alpha, M \in \mathbb{R}$ with $M > 0$ such that $|f(t)| \le Me^{\alpha t}$ for all $t \in [0,\infty)$.

We prove:

Theorem (Existence of the Laplace Transform). Suppose that a function $f : [0,\infty) \to \mathbb{R}$ is piecewise continuous and of exponential order with constants $\alpha \in \mathbb{R}$ and $M > 0$. Then the Laplace transform $\hat{f}(s)$ exists for all $s \in \mathbb{C}$ with $\mathrm{Re}(s) > \alpha$.

Proof. Fix $s \in \mathbb{C}$ with $\mathrm{Re}\,s > \alpha$. For all $T > 0$,
$\int_0^T|f(t)e^{-st}|\,dt = \int_0^T|f(t)|\,|e^{-st}|\,dt \le \int_0^T Me^{\alpha t}e^{-\mathrm{Re}(s)t}\,dt = M\int_0^T e^{(\alpha - \mathrm{Re}\,s)t}\,dt = \frac{M}{\alpha - \mathrm{Re}\,s}\left[e^{(\alpha - \mathrm{Re}\,s)t}\right]_{t=0}^{t=T} = \frac{M}{\mathrm{Re}\,s - \alpha}\left[1 - e^{(\alpha - \mathrm{Re}\,s)T}\right]$.
Since $\alpha - \mathrm{Re}\,s < 0$, sending $T \to \infty$ gives $e^{(\alpha - \mathrm{Re}(s))T} \to 0$. Hence
$\int_0^{\infty}|f(t)e^{-st}|\,dt \le \frac{M}{\mathrm{Re}\,s - \alpha}$.
Now define, for $t \in [0,\infty)$,
$\alpha(t) = \mathrm{Re}(f(t)e^{-st})$; $\beta(t) = \mathrm{Im}(f(t)e^{-st})$;
$\alpha_+(t) = \max\{\alpha(t), 0\}$; $\alpha_-(t) = \max\{-\alpha(t), 0\}$;
$\beta_+(t) = \max\{\beta(t), 0\}$; $\beta_-(t) = \max\{-\beta(t), 0\}$.
Note that $\alpha(t) = \alpha_+(t) - \alpha_-(t)$ and $\beta(t) = \beta_+(t) - \beta_-(t)$. Further,
$\int_0^T\alpha_+(t)\,dt \le \int_0^T|\alpha(t)|\,dt \le \int_0^T|f(t)e^{-st}|\,dt \le \frac{M}{\mathrm{Re}\,s - \alpha} < \infty$.
Hence
$\lim_{T\to\infty}\int_0^T\alpha_+(t)\,dt =: \alpha_+$ exists.
Similarly,
$\lim_{T\to\infty}\int_0^T\alpha_-(t)\,dt =: \alpha_-$ exists, and $\lim_{T\to\infty}\int_0^T\beta_{\pm}(t)\,dt =: \beta_{\pm}$ exist.
As a result,
$\int_0^{\infty}(\alpha(t) + i\beta(t))\,dt = (\alpha_+ - \alpha_-) + i(\beta_+ - \beta_-) = \lim_{T\to\infty}\int_0^T f(t)e^{-st}\,dt = \hat{f}(s)$ exists.

Example. Consider $f(t) = e^{ct}$, where $c \in \mathbb{C}$. Then $f(t)$ is of exponential order with $M = 1$, $\alpha = \mathrm{Re}(c)$:
$|f(t)| = |e^{ct}| = e^{\mathrm{Re}(c)t}$.

Then
$\hat{f}(s) = \int_0^{\infty} e^{ct}e^{-st}\,dt = \lim_{T\to\infty}\int_0^T e^{(c-s)t}\,dt = \lim_{T\to\infty}\left[\frac{1}{c-s}e^{(c-s)t}\right]_0^T = \lim_{T\to\infty}\left(\frac{1}{c-s}e^{(c-s)T} - \frac{1}{c-s}\right)$.
If $\mathrm{Re}\,s > \mathrm{Re}\,c$, then $\lim_{T\to\infty}e^{(c-s)T} = 0$. So for $s \in \mathbb{C}$ with $\mathrm{Re}(s) > \mathrm{Re}(c)$,
$\int_0^{\infty} e^{ct}e^{-st}\,dt = \frac{1}{s-c}$.
Hence
$\mathcal{L}\{e^{ct}\}(s) = \frac{1}{s-c}$, $\mathrm{Re}\,s > \mathrm{Re}\,c$.  (2.2)
In particular for $c = 0$, $f(t) \equiv 1$ with Laplace transform $\mathcal{L}\{1\} = \frac{1}{s}$.

The following theorem establishes some basic properties of the Laplace transform.

Theorem. Suppose that $f(t)$ and $g(t)$ are of exponential order. Then for $\mathrm{Re}(s)$ sufficiently large, the following properties hold:
1. Linearity: For $a, b \in \mathbb{C}$,
$\mathcal{L}\{af(t) + bg(t)\}(s) = a\mathcal{L}\{f(t)\}(s) + b\mathcal{L}\{g(t)\}(s) = a\hat{f}(s) + b\hat{g}(s)$.
2. Transform of a Derivative:
$\mathcal{L}\{f'(t)\}(s) = s\hat{f}(s) - f(0)$.  (2.3)
3. Transform of an Integral:
$\mathcal{L}\left\{\int_0^t f(\tau)\,d\tau\right\}(s) = \frac{1}{s}\hat{f}(s)$.
4. Damping Formula:
$\mathcal{L}\{e^{-at}f(t)\}(s) = \hat{f}(s+a)$.
5. Delay Formula: For $T > 0$,
$\mathcal{L}\{f(t-T)H(t-T)\}(s) = e^{-sT}\hat{f}(s)$,
where
$H(t) = \begin{cases}0, & t \le 0\\ 1, & t > 0\end{cases}$
is the Heaviside step function.

Remarks.
- Replacing f by $f'$ in (2.3) gives
$\mathcal{L}\{f''(t)\}(s) = s\mathcal{L}\{f'\}(s) - f'(0) = s^2\hat{f}(s) - sf(0) - f'(0)$,
and more generally, for the nth derivative, inductively
$\mathcal{L}\{f^{(n)}(t)\}(s) = s^n\hat{f}(s) - \left[s^{n-1}f(0) + s^{n-2}f'(0) + \cdots + f^{(n-1)}(0)\right]$,
where $f^{(i)}(t) := \frac{d^if}{dt^i}$ is the ith derivative of f (exercise).
- In property 5, if $f(t)$ is undefined for $t < 0$, we set $f(t) = 0$ for $t < 0$. In any case, the delayed function $f(t-T)H(t-T)$ coincides with $f(t-T)$ for $t \ge T$ and vanishes for $t < T$.

Proof.
1. Follows from the linearity of integration.
2. Integration by parts gives
$\mathcal{L}\{f'(t)\}(s) = \int_0^{\infty} f'(t)e^{-st}\,dt = \left[f(t)e^{-st}\right]_0^{\infty} + s\int_0^{\infty} f(t)e^{-st}\,dt$.
Since $f(t)$ is of exponential order, $\lim_{t\to\infty}f(t)e^{-st} = 0$ for all s with $\mathrm{Re}\,s$ sufficiently large. So
$\mathcal{L}\{f'(t)\}(s) = -f(0) + s\mathcal{L}\{f(t)\}(s) = s\mathcal{L}\{f(t)\}(s) - f(0)$.
3. Define
$F(t) = \int_0^t f(\tau)\,d\tau$, so that $F(0) = 0$, $F'(t) = f(t)$.
Therefore, by property 2,
$\mathcal{L}\{F'(t)\}(s) = s\mathcal{L}\{F(t)\}(s) - F(0)$.
Hence $\mathcal{L}\{F(t)\}(s) = \frac{1}{s}\mathcal{L}\{F'(t)\}(s)$, so
$\mathcal{L}\left\{\int_0^t f(\tau)\,d\tau\right\}(s) = \frac{\mathcal{L}\{f(t)\}(s)}{s}$.
4. For any $a \in \mathbb{C}$,
$\mathcal{L}\{e^{-at}f(t)\}(s) = \int_0^{\infty} e^{-at}e^{-st}f(t)\,dt = \int_0^{\infty} e^{-(a+s)t}f(t)\,dt = \hat{f}(a+s)$.
5. In this case,
$\mathcal{L}\{f(t-T)H(t-T)\}(s) = \int_0^{\infty} f(t-T)H(t-T)e^{-st}\,dt = \int_T^{\infty} f(t-T)e^{-st}\,dt$.
Let $\tau = t - T$, so that $\frac{d\tau}{dt} = 1$. Hence
$\int_T^{\infty} f(t-T)e^{-st}\,dt = \int_0^{\infty} f(\tau)e^{-s(\tau+T)}\,d\tau = e^{-sT}\int_0^{\infty} f(\tau)e^{-s\tau}\,d\tau = e^{-sT}\hat{f}(s)$.

Examples.
1. From the exponential definition of $\cos : \mathbb{R} \to \mathbb{R}$,
$\mathcal{L}\{\cos at\}(s) = \mathcal{L}\left\{\tfrac12e^{iat} + \tfrac12e^{-iat}\right\}(s) = \tfrac12\mathcal{L}\{e^{iat}\}(s) + \tfrac12\mathcal{L}\{e^{-iat}\}(s)$,
by linearity of $\mathcal{L}$. Therefore, using (2.2), for $\mathrm{Re}\,s > 0$,
$\mathcal{L}\{\cos at\}(s) = \frac12\frac{1}{s-ia} + \frac12\frac{1}{s+ia} = \frac{s}{s^2+a^2}$.
2. Using the transform of a derivative,
$\mathcal{L}\{\sin at\}(s) = \mathcal{L}\left\{-\frac1a\frac{d}{dt}\cos(at)\right\}(s) = -\frac1a\left(s\frac{s}{s^2+a^2} - 1\right) = \frac{a}{s^2+a^2}$.
3. Let $f(t) = t^n$, $n \in \mathbb{N}$. Then
$t^n = n\int_0^t\tau^{n-1}\,d\tau$,
so
$\mathcal{L}\{t^n\}(s) = \mathcal{L}\left\{n\int_0^t\tau^{n-1}\,d\tau\right\}(s) = \frac{n}{s}\mathcal{L}\{t^{n-1}\}(s)$,
so by induction (Exercise: Sheet 5 Q 4)
$\mathcal{L}\{t^n\}(s) = \frac{n!}{s^{n+1}}$.
4. From Example 1,
$\mathcal{L}\{\cos at\}(s) = \frac{s}{s^2+a^2}$,
so by the damping formula,
$\mathcal{L}\{e^{-\lambda t}\cos(at)\}(s) = \mathcal{L}\{\cos(at)\}(s+\lambda) = \frac{s+\lambda}{(s+\lambda)^2+a^2}$.
5. Consider the piecewise continuous function
$f(t) = \begin{cases}t, & 0 \le t \le T\\ T, & T < t \le 2T\\ 0, & t > 2T\end{cases}$.
To find $\hat{f}(s)$, first write f in terms of the Heaviside step function:
$f(t) = t(1 - H(t-T)) + T(H(t-T) - H(t-2T)) = t - (t-T)H(t-T) - TH(t-2T)$,
so
$\mathcal{L}\{f(t)\}(s) = \mathcal{L}\{t\}(s) - \mathcal{L}\{(t-T)H(t-T)\}(s) - T\mathcal{L}\{H(t-2T)\}(s)$,
by linearity. Therefore, using the delay formula,
$\mathcal{L}\{f(t)\}(s) = \frac{1}{s^2} - \frac{e^{-sT}}{s^2} - \frac{T}{s}e^{-2sT}$.
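A small symbolic sketch of two of the transforms above, assuming SymPy is available; the positivity assumptions on the symbols are made only so the library returns the transforms without convergence conditions.

```python
# Reproduce L{exp(c t)} = 1/(s - c) and L{cos(a t)} = s/(s^2 + a^2).
import sympy as sp

t = sp.symbols('t', positive=True)
s = sp.symbols('s')
c, a = sp.symbols('c a', positive=True)

print(sp.laplace_transform(sp.exp(c * t), t, s, noconds=True))   # 1/(s - c)
print(sp.laplace_transform(sp.cos(a * t), t, s, noconds=True))   # s/(a**2 + s**2)
```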

2.2 Solving Differential Equations with the Laplace Transform

The Laplace transform turns a differential equation into an algebraic equation, the solutions of which can be transformed back into solutions of the differential equation. Again, this is best illustrated by example.

Examples.
1. Consider the second-order initial value problem
$x''(t) - 3x'(t) + 2x(t) = 1$, $x(0) = x'(0) = 0$.
Taking the Laplace transform of both sides gives
$\mathcal{L}\{x'' - 3x' + 2x\}(s) = \mathcal{L}\{1\}(s) = \frac1s$,
whence by linearity
$\mathcal{L}\{x''\}(s) - 3\mathcal{L}\{x'\}(s) + 2\mathcal{L}\{x\}(s) = \frac1s$.
Therefore,
$s^2\hat{x} - sx(0) - x'(0) - 3s\hat{x} + 3x(0) + 2\hat{x} = \frac1s$,
which gives
$s^2\hat{x} - 3s\hat{x} + 2\hat{x} = \frac1s \iff \hat{x}(s^2 - 3s + 2) = \frac1s$,
so
$\hat{x}(s) = \frac{1}{s(s-1)(s-2)}$.
Using partial fractions and then inverting the Laplace transform gives
$\hat{x}(s) = \frac{1/2}{s} - \frac{1}{s-1} + \frac{1/2}{s-2} \implies x(t) = \frac12 - e^t + \frac12e^{2t}$.
2. To solve the initial value problem
$x''(t) + 4x(t) = \cos 3t$, $x(0) = 1$, $x'(0) = 3$,
taking Laplace transforms gives
$s^2\hat{x} - sx(0) - x'(0) + 4\hat{x} = \frac{s}{s^2+9}$,
so
$(s^2+4)\hat{x} = \frac{s}{s^2+9} + s + 3 \implies \hat{x}(s) = \frac{s}{(s^2+4)(s^2+9)} + \frac{s+3}{s^2+4} = \frac15\left(\frac{s}{s^2+4} - \frac{s}{s^2+9}\right) + \frac{s+3}{s^2+4} = \frac{6}{5}\frac{s}{s^2+4} + \frac{3}{s^2+4} - \frac15\frac{s}{s^2+9}$.
Inverting the Laplace transform gives
$x(t) = \frac15\left(6\cos 2t + \frac{15}{2}\sin 2t - \cos 3t\right)$.
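Before moving on to systems in Example 3 below, here is a symbolic sketch of Example 1, assuming SymPy is available: transform, solve algebraically for $\hat{x}$, and invert.

```python
# Example 1 via the Laplace transform: x'' - 3x' + 2x = 1, x(0) = x'(0) = 0.
import sympy as sp

t = sp.symbols('t', positive=True)
s = sp.symbols('s')
xhat = sp.Symbol('xhat')

# Zero initial conditions have already been substituted in the transformed ODE.
eq = sp.Eq(s**2 * xhat - 3 * s * xhat + 2 * xhat, 1 / s)
Xs = sp.solve(eq, xhat)[0]               # 1/(s*(s**2 - 3*s + 2))
print(sp.apart(Xs, s))                   # 1/(2*s) - 1/(s - 1) + 1/(2*(s - 2))
x = sp.inverse_laplace_transform(Xs, s, t)
print(sp.simplify(x))                    # 1/2 - exp(t) + exp(2*t)/2 for t > 0
                                         # (SymPy may carry a Heaviside(t) factor)
```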

3. The Laplace transform is also useful for solving systems of linear ordinary differential equations. For example, consider
$\dot{x}_1(t) - 2x_2(t) = 4t$, $x_1(0) = 2$;
$\dot{x}_2(t) + 2x_2(t) - 4x_1(t) = -4t - 2$, $x_2(0) = -5$.
Taking Laplace transforms gives
$s\hat{x}_1 - x_1(0) - 2\hat{x}_2 = \frac{4}{s^2}$, $s\hat{x}_2 - x_2(0) + 2\hat{x}_2 - 4\hat{x}_1 = -\frac{4}{s^2} - \frac{2}{s}$,
and using the initial conditions gives
$s\hat{x}_1 - 2 - 2\hat{x}_2 = \frac{4}{s^2}$, $(2+s)\hat{x}_2 + 5 - 4\hat{x}_1 = -\frac{4}{s^2} - \frac{2}{s}$.
Thus
$s\hat{x}_1 - 2\hat{x}_2 = \frac{4 + 2s^2}{s^2}$,  (1)
$-4\hat{x}_1 + (2+s)\hat{x}_2 = -\frac{4 + 2s + 5s^2}{s^2}$.  (2)
Now $(2+s)\times(1) + 2\times(2)$ gives
$(s^2 + 2s - 8)\hat{x}_1 = \frac{(2+s)(4+2s^2) - 2(4+2s+5s^2)}{s^2} = \frac{2s^3 - 6s^2}{s^2} = 2s - 6$,
so that
$\hat{x}_1 = \frac{2s-6}{s^2+2s-8} = \frac{2s-6}{(s+4)(s-2)} = \frac{7/3}{s+4} - \frac{1/3}{s-2}$,
and hence
$x_1(t) = -\frac13e^{2t} + \frac73e^{-4t}$, $x_2(t) = \frac12\left(\dot{x}_1(t) - 4t\right) = -\frac13e^{2t} - \frac{14}{3}e^{-4t} - 2t$.

2.3 The convolution integral

Suppose $\hat{f}(s)$ and $\hat{g}(s)$ are known and consider the product $\hat{f}(s)\hat{g}(s)$. Of what function is this the Laplace transform? Equivalently, what is the inverse Laplace transform of $\hat{f}(s)\hat{g}(s)$, $\mathcal{L}^{-1}\{\hat{f}(s)\hat{g}(s)\}$?
Firstly, by the delay formula,
$\hat{f}(s)\hat{g}(s) = \hat{f}(s)\int_0^{\infty} g(\tau)e^{-s\tau}\,d\tau = \int_0^{\infty}\hat{f}(s)g(\tau)e^{-s\tau}\,d\tau = \int_0^{\infty}\left[\int_0^{\infty} H(t-\tau)f(t-\tau)e^{-st}\,dt\right]g(\tau)\,d\tau$.
Therefore,
$\hat{f}(s)\hat{g}(s) = \int_0^{\infty}\left[\int_0^{\infty} H(t-\tau)f(t-\tau)g(\tau)\,d\tau\right]e^{-st}\,dt = \int_0^{\infty}\left[\int_0^t f(t-\tau)g(\tau)\,d\tau\right]e^{-st}\,dt$,
by switching the order of integration.

Hence $\hat{f}(s)\hat{g}(s)$ is the Laplace transform of $\int_0^t f(t-\tau)g(\tau)\,d\tau$. This motivates the following definition.

Definition (Convolution). The function
$(f*g)(t) := \int_0^t f(t-\tau)g(\tau)\,d\tau$
is called the convolution of f and g.

Examples.
1. In order to compute $\mathcal{L}^{-1}\left\{\frac{s}{(s^2+1)^2}\right\}$, recall that
$\mathcal{L}^{-1}\left\{\frac{s}{s^2+1}\right\} = \cos t$, $\mathcal{L}^{-1}\left\{\frac{1}{s^2+1}\right\} = \sin t$.
Hence
$\mathcal{L}^{-1}\left\{\frac{s}{(s^2+1)^2}\right\} = \mathcal{L}^{-1}\left\{\frac{1}{s^2+1}\cdot\frac{s}{s^2+1}\right\} = \sin t * \cos t = \int_0^t\sin(t-\tau)\cos\tau\,d\tau$.
Now
$\int_0^t\sin(t-\tau)\cos\tau\,d\tau = \int_0^t(\sin t\cos\tau - \cos t\sin\tau)\cos\tau\,d\tau = \sin t\int_0^t\cos^2\tau\,d\tau - \cos t\int_0^t\sin\tau\cos\tau\,d\tau$
$= \frac12\sin t\int_0^t(1+\cos 2\tau)\,d\tau - \frac12\cos t\int_0^t\sin 2\tau\,d\tau = \frac12\sin t\left[\tau + \frac12\sin 2\tau\right]_0^t + \frac12\cos t\left[\frac12\cos 2\tau\right]_0^t$
$= \frac12\sin t\left(t + \frac12\sin 2t\right) + \frac12\cos t\left(\frac12\cos 2t - \frac12\right) = \frac12t\sin t + \frac14(\sin 2t\sin t + \cos t\cos 2t) - \frac14\cos t$
$= \frac12t\sin t + \frac14\cos(2t-t) - \frac14\cos t = \frac{t\sin t}{2}$.
Hence
$\mathcal{L}^{-1}\left\{\frac{s}{(s^2+1)^2}\right\} = \frac12t\sin t \iff \mathcal{L}\left\{\frac12t\sin t\right\}(s) = \frac{s}{(s^2+1)^2}$.
2. Consider the initial value problem
$\ddot{y}(t) + y(t) = f(t)$, $y(0) = 0$, $\dot{y}(0) = 1$,
where
$f(t) = \begin{cases}1, & t \in [0,1]\\ 0, & t > 1\end{cases}$.
Write $f(t) = H(t) - H(t-1)$. Taking Laplace transforms of both sides gives
$s^2\hat{y} - sy(0) - \dot{y}(0) + \hat{y} = \mathcal{L}\{H(t)\}(s) - \mathcal{L}\{H(t-1)\}(s)$,

whence
$(s^2+1)\hat{y} = \frac1s - \frac{e^{-s}}{s} + 1$,
and so
$\hat{y} = \frac{1}{s(s^2+1)} - \frac{e^{-s}}{s(s^2+1)} + \frac{1}{s^2+1} = \frac1s - \frac{s}{s^2+1} + \frac{1}{s^2+1} - \frac{e^{-s}}{s}\cdot\frac{1}{s^2+1}$.
Inverting Laplace transforms,
$y(t) = 1 - \cos t + \sin t - H(t-1)*\sin t$.
Now
$H(t-1)*\sin t = \int_0^t\sin(t-\tau)H(\tau-1)\,d\tau = H(t-1)\int_1^t\sin(t-\tau)\,d\tau = H(t-1)\left[\cos(t-\tau)\right]_{\tau=1}^{\tau=t} = H(t-1)\left[1 - \cos(t-1)\right]$.
Thus
$y(t) = 1 - \cos t + \sin t - H(t-1)\left[1 - \cos(t-1)\right]$.
3. In order to solve
$\ddot{x}(t) + \omega^2x(t) = g(t)$, $\dot{x}(0) = x(0) = 0$,
taking Laplace transforms gives
$(s^2+\omega^2)\hat{x} = \hat{g} \implies \hat{x} = \frac{\hat{g}}{s^2+\omega^2}$,
and
$\mathcal{L}^{-1}\left\{\frac{1}{s^2+\omega^2}\right\} = \frac1\omega\sin(\omega t)$.
Hence the solution for arbitrary $g(t)$ is
$x(t) = \mathcal{L}^{-1}\left\{\frac{\hat{g}}{s^2+\omega^2}\right\} = \frac1\omega\sin(\omega t)*g(t) = \frac1\omega\int_0^t\sin(\omega(t-\tau))g(\tau)\,d\tau$.
4. Consider
$\mathcal{L}^{-1}\left\{\frac{s+9}{s^2+6s+13}\right\} = \mathcal{L}^{-1}\left\{\frac{s+9}{(s+3)^2+4}\right\} = \mathcal{L}^{-1}\left\{\frac{s+3}{(s+3)^2+4}\right\} + \mathcal{L}^{-1}\left\{\frac{6}{(s+3)^2+4}\right\}$
(completing the square in the denominator). Now
$\mathcal{L}^{-1}\left\{\frac{s+a}{(s+a)^2+b^2}\right\} = e^{-at}\cos bt$, $\mathcal{L}^{-1}\left\{\frac{b}{(s+a)^2+b^2}\right\} = e^{-at}\sin bt$.
This gives
$\mathcal{L}^{-1}\left\{\frac{s+9}{s^2+6s+13}\right\} = e^{-3t}\cos 2t + 3e^{-3t}\sin 2t = e^{-3t}(\cos 2t + 3\sin 2t)$.
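A symbolic sketch of Example 1 of this section, assuming SymPy is available: compute the convolution integral directly and take its transform.

```python
# Check sin * cos = (t sin t)/2 and L{(t sin t)/2} = s/(s^2 + 1)^2.
import sympy as sp

t, tau = sp.symbols('t tau', positive=True)
s = sp.symbols('s')

conv = sp.integrate(sp.sin(t - tau) * sp.cos(tau), (tau, 0, t))
print(sp.simplify(conv - t * sp.sin(t) / 2))                        # 0
print(sp.laplace_transform(t * sp.sin(t) / 2, t, s, noconds=True))  # s/(s**2 + 1)**2
```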

Note that the convolution is commutative:
$(f*g)(t) = \int_0^t f(t-\tau)g(\tau)\,d\tau = \int_0^t f(\sigma)g(t-\sigma)\,d\sigma$ (for $\sigma = t - \tau$).
Hence
$(f*g)(t) = \int_0^t f(\sigma)g(t-\sigma)\,d\sigma = (g*f)(t)$.

The Laplace transform can be used for computing matrix exponentials $\exp(tA)$. Consider the initial value problem
$\dot{\Phi}(t) = A\Phi(t)$, $\Phi(0) = I$.
The solution is $\Phi(t) = \exp(tA)$. Taking Laplace transforms gives
$\mathcal{L}\{\dot{\Phi}(t)\}(s) = s\hat{\Phi}(s) - \Phi(0) = \mathcal{L}\{A\Phi(t)\}(s) = A\mathcal{L}\{\Phi(t)\}(s) = A\hat{\Phi}$,
by linearity. Thus
$s\hat{\Phi} - I = A\hat{\Phi} \iff (sI - A)\hat{\Phi} = I$.
Finally,
$\hat{\Phi} = (sI - A)^{-1}$
(provided that $\mathrm{Re}\,s$ is greater than the real part of each eigenvalue of A). Now
$\mathcal{L}\{\exp(tA)\}(s) = (sI - A)^{-1}$,
so
$\exp(tA) = \mathcal{L}^{-1}\{(sI - A)^{-1}\}(t)$.
Thus, in principle, $\exp(tA)$ can be computed using the Laplace transform as follows:
- compute $(sI - A)^{-1}$;
- invert the entries of $(sI - A)^{-1}$, that is, find functions which have Laplace transforms equal to the entries of $(sI - A)^{-1}$.

Example. Consider
$A = \begin{pmatrix}0 & 2\\ 4 & -2\end{pmatrix}$.
Then
$(sI - A)^{-1} = \begin{pmatrix}s & -2\\ -4 & s+2\end{pmatrix}^{-1} = \frac{1}{s^2+2s-8}\begin{pmatrix}s+2 & 2\\ 4 & s\end{pmatrix} = \frac{1}{(s+4)(s-2)}\begin{pmatrix}s+2 & 2\\ 4 & s\end{pmatrix}$.
Continuing using partial fractions,
$(sI - A)^{-1} = \begin{pmatrix}\frac{2/3}{s-2}+\frac{1/3}{s+4} & \frac{1/3}{s-2}-\frac{1/3}{s+4}\\ \frac{2/3}{s-2}-\frac{2/3}{s+4} & \frac{1/3}{s-2}+\frac{2/3}{s+4}\end{pmatrix}$,
so
$\mathcal{L}^{-1}\{(sI - A)^{-1}\} = \begin{pmatrix}\frac23e^{2t}+\frac13e^{-4t} & \frac13e^{2t}-\frac13e^{-4t}\\ \frac23e^{2t}-\frac23e^{-4t} & \frac13e^{2t}+\frac23e^{-4t}\end{pmatrix} = \exp(tA)$.
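A symbolic sketch of this recipe for the same matrix, assuming SymPy is available; the result is compared with SymPy's own matrix exponential.

```python
# exp(tA) by inverting the entries of (sI - A)^{-1}, for A = [[0, 2], [4, -2]].
import sympy as sp

t = sp.symbols('t', positive=True)
s = sp.symbols('s')

A = sp.Matrix([[0, 2], [4, -2]])
resolvent = (s * sp.eye(2) - A).inv()
expA = resolvent.applyfunc(lambda entry: sp.inverse_laplace_transform(entry, s, t))
print(sp.simplify(expA - (t * A).exp()))   # expected: the zero matrix for t > 0
```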

Let us derive the variation of parameters formula via the Laplace transform. Consider the initial value problem
$\dot{x}(t) = Ax(t) + g(t)$, $x(0) = x_0$.
Taking Laplace transforms gives
$\mathcal{L}\{\dot{x}\}(s) = A\mathcal{L}\{x\}(s) + \mathcal{L}\{g\}(s)$, i.e. $s\hat{x} - x_0 = A\hat{x} + \hat{g}$.
Hence
$(sI - A)\hat{x} = \hat{g} + x_0$,
so that
$\hat{x} = (sI - A)^{-1}x_0 + (sI - A)^{-1}\hat{g} = \mathcal{L}\{\exp(tA)\}(s)x_0 + \mathcal{L}\{\exp(tA)\}(s)\hat{g}$.
Taking inverse Laplace transforms on both sides yields
$x(t) = \exp(tA)x_0 + \exp(tA)*g = \exp(tA)x_0 + \int_0^t\exp((t-\tau)A)g(\tau)\,d\tau = \exp(tA)x_0 + \exp(tA)\int_0^t\exp(-\tau A)g(\tau)\,d\tau$,
as required.

2.4 The Dirac Delta function

What is the inverse Laplace transform of $\hat{f}(s) \equiv 1$, i.e. $\mathcal{L}^{-1}\{1\}$? Let $\epsilon > 0$ and consider the piecewise constant function
$\delta_{\epsilon}(t) = \begin{cases}\frac1\epsilon, & t \in (0,\epsilon)\\ 0, & \text{otherwise}\end{cases}$.
If $f : \mathbb{R} \to \mathbb{R}$ is a continuous function, then
$\int_{-\infty}^{\infty} f(t)\delta_{\epsilon}(t)\,dt = \frac1\epsilon\int_0^{\epsilon} f(t)\,dt$.
The Mean Value Theorem for integrals states that for continuous functions $f : [a,b] \to \mathbb{R}$, there exists $\xi \in (a,b)$ such that
$f(\xi) = \frac{1}{b-a}\int_a^b f(t)\,dt$.
Hence there exists $\xi \in (0,\epsilon)$ such that
$\int_{-\infty}^{\infty} f(t)\delta_{\epsilon}(t)\,dt = f(\xi)\frac1\epsilon(\epsilon - 0) = f(\xi)$.
Therefore,
$\lim_{\epsilon\to 0}\int_{-\infty}^{\infty} f(t)\delta_{\epsilon}(t)\,dt = f(0)$,
by continuity of f. This motivates introducing the Dirac delta function $\delta(t)$ as an appropriate limit of $\delta_{\epsilon}(t)$ as $\epsilon \to 0$:

Definition (Dirac Delta Function). The Dirac delta function $\delta(t)$ is characterised by the following two properties:
- $\delta(t) = 0$, $t \in \mathbb{R}\setminus\{0\}$;
- for any function $f : \mathbb{R} \to \mathbb{R}$ that is continuous on an open interval containing 0,
$\int_{-\infty}^{\infty} f(t)\delta(t)\,dt = f(0)$.
(In fact, $\delta(t)$ is not a usual function, and its rigorous definition would require using more advanced mathematical tools known as distribution theory, or the theory of generalised functions, which is beyond the scope of this course.)

Immediate Consequences.
1. Setting $f(t) \equiv 1$, $\int_{-\infty}^{\infty}\delta(t)\,dt = 1$.
2. Let $f : \mathbb{R} \to \mathbb{R}$ be continuous in an interval containing $a \in \mathbb{R}$. Then
$\int_{-\infty}^{\infty} f(t)\delta(t-a)\,dt = f(a)$.
3. For $f(t) = e^{-st}$, where $s \in \mathbb{R}$ is fixed,
$\mathcal{L}\{\delta(t)\}(s) = \int_0^{\infty}\delta(t)e^{-st}\,dt = e^{-s\cdot 0} = 1$.
Hence, formally, $\mathcal{L}\{\delta(t)\}(s) = 1$, so that $\mathcal{L}^{-1}\{1\} = \delta(t)$.
4. Further,
$(f*\delta)(t) = \int_0^t f(t-\tau)\delta(\tau)\,d\tau = f(t)$.
Also, by commutativity, $f(t)*\delta(t) = \delta(t)*f(t) = f(t)$.
5. Moreover, for $T \ge 0$,
$\mathcal{L}\{\delta(t-T)\}(s) = \int_0^{\infty}\delta(t-T)e^{-st}\,dt = e^{-sT}$.
Hence $\mathcal{L}\{\delta(t-T)\}(s) = e^{-sT}$, so $\mathcal{L}^{-1}\{e^{-sT}\} = \delta(t-T)$.
6. Finally,
$\delta(t-T)*f(t) = \int_0^t\delta(t-T-\tau)f(\tau)\,d\tau = f(t-T)H(t-T)$.
Thus
$\delta(t-T)*f(t) = f(t-T)H(t-T) = \begin{cases}0, & \text{if } t < T;\\ f(t-T), & \text{if } t \ge T.\end{cases}$

Example. Solve
$\ddot{y} + 2\dot{y} + y = \delta(t-1)$, $y(0) = 2$, $\dot{y}(0) = 3$.
Solution. Taking the Laplace transform of both sides gives
$s^2\hat{y} - sy(0) - \dot{y}(0) + 2s\hat{y} - 2y(0) + \hat{y} = e^{-s}$,
so, using the initial conditions,
$s^2\hat{y} - 2s - 3 + 2s\hat{y} - 4 + \hat{y} = e^{-s} \iff (s^2 + 2s + 1)\hat{y} = e^{-s} + 2s + 7$,
so that
$\hat{y} = \frac{e^{-s} + 2s + 7}{s^2 + 2s + 1} = \frac{e^{-s}}{(s+1)^2} + \frac{5}{(s+1)^2} + \frac{2}{s+1}$.
Hence
$y(t) = \delta(t-1)*te^{-t} + 5te^{-t} + 2e^{-t} = (t-1)e^{-(t-1)}H(t-1) + 5te^{-t} + 2e^{-t}$.

2.5 Final value theorem

Theorem (Final Value Theorem). Let $g : [0,\infty) \to \mathbb{R}$ satisfy $|g(t)| \le Me^{-\alpha t}$, for some $\alpha, M > 0$ (the function g is said to be exponentially decaying). Then
$\mathcal{L}\{g\}(0) := \int_0^{\infty} g(t)\,dt = \lim_{t\to\infty}(g*H)(t)$.
Proof.
$\mathcal{L}\{g\}(0) = \int_0^{\infty} g(\tau)\,d\tau = \lim_{t\to\infty}\int_0^t g(\tau)\,d\tau$.
Hence, by setting $\sigma = t - \tau$,
$\mathcal{L}\{g\}(0) = \lim_{t\to\infty}\int_0^t g(t-\sigma)H(\sigma)\,d\sigma = \lim_{t\to\infty}(g*H)(t)$.
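A short symbolic illustration of the theorem, assuming SymPy is available; the exponentially decaying function $g(t) = e^{-2t}$ is an illustrative choice.

```python
# Final value theorem check: L{g}(0) equals the integral of g over [0, oo).
import sympy as sp

t = sp.symbols('t', positive=True)
s = sp.symbols('s')

g = sp.exp(-2 * t)
ghat = sp.laplace_transform(g, t, s, noconds=True)   # 1/(s + 2)
print(ghat.subs(s, 0))                                # 1/2
print(sp.integrate(g, (t, 0, sp.oo)))                 # 1/2
```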


More information

Module 09 From s-domain to time-domain From ODEs, TFs to State-Space Modern Control

Module 09 From s-domain to time-domain From ODEs, TFs to State-Space Modern Control Module 09 From s-domain to time-domain From ODEs, TFs to State-Space Modern Control Ahmad F. Taha EE 3413: Analysis and Desgin of Control Systems Email: ahmad.taha@utsa.edu Webpage: http://engineering.utsa.edu/

More information

Math 3313: Differential Equations Laplace transforms

Math 3313: Differential Equations Laplace transforms Math 3313: Differential Equations Laplace transforms Thomas W. Carr Department of Mathematics Southern Methodist University Dallas, TX Outline Introduction Inverse Laplace transform Solving ODEs with Laplace

More information

Math Matrix Algebra

Math Matrix Algebra Math 44 - Matrix Algebra Review notes - 4 (Alberto Bressan, Spring 27) Review of complex numbers In this chapter we shall need to work with complex numbers z C These can be written in the form z = a+ib,

More information

Linear ODEs. Existence of solutions to linear IVPs. Resolvent matrix. Autonomous linear systems

Linear ODEs. Existence of solutions to linear IVPs. Resolvent matrix. Autonomous linear systems Linear ODEs p. 1 Linear ODEs Existence of solutions to linear IVPs Resolvent matrix Autonomous linear systems Linear ODEs Definition (Linear ODE) A linear ODE is a differential equation taking the form

More information

= AΦ, Φ(0) = I, = k! tk A k = I + ta t2 A t3 A t4 A 4 +, (3) e ta 1

= AΦ, Φ(0) = I, = k! tk A k = I + ta t2 A t3 A t4 A 4 +, (3) e ta 1 Matrix Exponentials Math 26, Fall 200, Professor David Levermore We now consider the homogeneous constant coefficient, vector-valued initial-value problem dx (1) = Ax, x(t I) = x I, where A is a constant

More information

Control Systems. Laplace domain analysis

Control Systems. Laplace domain analysis Control Systems Laplace domain analysis L. Lanari outline introduce the Laplace unilateral transform define its properties show its advantages in turning ODEs to algebraic equations define an Input/Output

More information

Ordinary Differential Equations. Session 7

Ordinary Differential Equations. Session 7 Ordinary Differential Equations. Session 7 Dr. Marco A Roque Sol 11/16/2018 Laplace Transform Among the tools that are very useful for solving linear differential equations are integral transforms. An

More information

Ordinary Differential Equation Theory

Ordinary Differential Equation Theory Part I Ordinary Differential Equation Theory 1 Introductory Theory An n th order ODE for y = y(t) has the form Usually it can be written F (t, y, y,.., y (n) ) = y (n) = f(t, y, y,.., y (n 1) ) (Implicit

More information

A Second Course in Elementary Differential Equations

A Second Course in Elementary Differential Equations A Second Course in Elementary Differential Equations Marcel B Finan Arkansas Tech University c All Rights Reserved August 3, 23 Contents 28 Calculus of Matrix-Valued Functions of a Real Variable 4 29 nth

More information

ORDINARY DIFFERENTIAL EQUATIONS

ORDINARY DIFFERENTIAL EQUATIONS ORDINARY DIFFERENTIAL EQUATIONS GABRIEL NAGY Mathematics Department, Michigan State University, East Lansing, MI, 4884 NOVEMBER 9, 7 Summary This is an introduction to ordinary differential equations We

More information

= A(λ, t)x. In this chapter we will focus on the case that the matrix A does not depend on time (so that the ODE is autonomous):

= A(λ, t)x. In this chapter we will focus on the case that the matrix A does not depend on time (so that the ODE is autonomous): Chapter 2 Linear autonomous ODEs 2 Linearity Linear ODEs form an important class of ODEs They are characterized by the fact that the vector field f : R m R p R R m is linear at constant value of the parameters

More information

1. The Transition Matrix (Hint: Recall that the solution to the linear equation ẋ = Ax + Bu is

1. The Transition Matrix (Hint: Recall that the solution to the linear equation ẋ = Ax + Bu is ECE 55, Fall 2007 Problem Set #4 Solution The Transition Matrix (Hint: Recall that the solution to the linear equation ẋ Ax + Bu is x(t) e A(t ) x( ) + e A(t τ) Bu(τ)dτ () This formula is extremely important

More information

9.5 The Transfer Function

9.5 The Transfer Function Lecture Notes on Control Systems/D. Ghose/2012 0 9.5 The Transfer Function Consider the n-th order linear, time-invariant dynamical system. dy a 0 y + a 1 dt + a d 2 y 2 dt + + a d n y 2 n dt b du 0u +

More information

Practice Problems For Test 3

Practice Problems For Test 3 Practice Problems For Test 3 Power Series Preliminary Material. Find the interval of convergence of the following. Be sure to determine the convergence at the endpoints. (a) ( ) k (x ) k (x 3) k= k (b)

More information

A brief introduction to ordinary differential equations

A brief introduction to ordinary differential equations Chapter 1 A brief introduction to ordinary differential equations 1.1 Introduction An ordinary differential equation (ode) is an equation that relates a function of one variable, y(t), with its derivative(s)

More information

Math 353 Lecture Notes Week 6 Laplace Transform: Fundamentals

Math 353 Lecture Notes Week 6 Laplace Transform: Fundamentals Math 353 Lecture Notes Week 6 Laplace Transform: Fundamentals J. Wong (Fall 217) October 7, 217 What did we cover this week? Introduction to the Laplace transform Basic theory Domain and range of L Key

More information

Modal Decomposition and the Time-Domain Response of Linear Systems 1

Modal Decomposition and the Time-Domain Response of Linear Systems 1 MASSACHUSETTS INSTITUTE OF TECHNOLOGY DEPARTMENT OF MECHANICAL ENGINEERING.151 Advanced System Dynamics and Control Modal Decomposition and the Time-Domain Response of Linear Systems 1 In a previous handout

More information

Lecture 29. Convolution Integrals and Their Applications

Lecture 29. Convolution Integrals and Their Applications Math 245 - Mathematics of Physics and Engineering I Lecture 29. Convolution Integrals and Their Applications March 3, 212 Konstantin Zuev (USC) Math 245, Lecture 29 March 3, 212 1 / 13 Agenda Convolution

More information

Linear Algebra Review (Course Notes for Math 308H - Spring 2016)

Linear Algebra Review (Course Notes for Math 308H - Spring 2016) Linear Algebra Review (Course Notes for Math 308H - Spring 2016) Dr. Michael S. Pilant February 12, 2016 1 Background: We begin with one of the most fundamental notions in R 2, distance. Letting (x 1,

More information

Chapter 6: The Laplace Transform 6.3 Step Functions and

Chapter 6: The Laplace Transform 6.3 Step Functions and Chapter 6: The Laplace Transform 6.3 Step Functions and Dirac δ 2 April 2018 Step Function Definition: Suppose c is a fixed real number. The unit step function u c is defined as follows: u c (t) = { 0

More information

Solution via Laplace transform and matrix exponential

Solution via Laplace transform and matrix exponential EE263 Autumn 2015 S. Boyd and S. Lall Solution via Laplace transform and matrix exponential Laplace transform solving ẋ = Ax via Laplace transform state transition matrix matrix exponential qualitative

More information

Understand the existence and uniqueness theorems and what they tell you about solutions to initial value problems.

Understand the existence and uniqueness theorems and what they tell you about solutions to initial value problems. Review Outline To review for the final, look over the following outline and look at problems from the book and on the old exam s and exam reviews to find problems about each of the following topics.. Basics

More information

ENGIN 211, Engineering Math. Laplace Transforms

ENGIN 211, Engineering Math. Laplace Transforms ENGIN 211, Engineering Math Laplace Transforms 1 Why Laplace Transform? Laplace transform converts a function in the time domain to its frequency domain. It is a powerful, systematic method in solving

More information

Practice Problems For Test 3

Practice Problems For Test 3 Practice Problems For Test 3 Power Series Preliminary Material. Find the interval of convergence of the following. Be sure to determine the convergence at the endpoints. (a) ( ) k (x ) k (x 3) k= k (b)

More information

Numerical Linear Algebra Homework Assignment - Week 2

Numerical Linear Algebra Homework Assignment - Week 2 Numerical Linear Algebra Homework Assignment - Week 2 Đoàn Trần Nguyên Tùng Student ID: 1411352 8th October 2016 Exercise 2.1: Show that if a matrix A is both triangular and unitary, then it is diagonal.

More information

2.161 Signal Processing: Continuous and Discrete Fall 2008

2.161 Signal Processing: Continuous and Discrete Fall 2008 MIT OpenCourseWare http://ocw.mit.edu.6 Signal Processing: Continuous and Discrete Fall 008 For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms. MASSACHUSETTS

More information

HIGHER-ORDER LINEAR ORDINARY DIFFERENTIAL EQUATIONS IV: Laplace Transform Method David Levermore Department of Mathematics University of Maryland

HIGHER-ORDER LINEAR ORDINARY DIFFERENTIAL EQUATIONS IV: Laplace Transform Method David Levermore Department of Mathematics University of Maryland HIGHER-ORDER LINEAR ORDINARY DIFFERENTIAL EQUATIONS IV: Laplace Transform Method David Levermore Department of Mathematics University of Maryland 9 December Because the presentation of this material in

More information

Math 308 Exam II Practice Problems

Math 308 Exam II Practice Problems Math 38 Exam II Practice Problems This review should not be used as your sole source for preparation for the exam. You should also re-work all examples given in lecture and all suggested homework problems..

More information

Nonlinear Systems and Elements of Control

Nonlinear Systems and Elements of Control Nonlinear Systems and Elements of Control Rafal Goebel April 3, 3 Brief review of basic differential equations A differential equation is an equation involving a function and one or more of the function

More information

HIGHER-ORDER LINEAR ORDINARY DIFFERENTIAL EQUATIONS IV: Laplace Transform Method. David Levermore Department of Mathematics University of Maryland

HIGHER-ORDER LINEAR ORDINARY DIFFERENTIAL EQUATIONS IV: Laplace Transform Method. David Levermore Department of Mathematics University of Maryland HIGHER-ORDER LINEAR ORDINARY DIFFERENTIAL EQUATIONS IV: Laplace Transform Method David Levermore Department of Mathematics University of Maryland 6 April Because the presentation of this material in lecture

More information

Notes on the matrix exponential

Notes on the matrix exponential Notes on the matrix exponential Erik Wahlén erik.wahlen@math.lu.se February 14, 212 1 Introduction The purpose of these notes is to describe how one can compute the matrix exponential e A when A is not

More information

Jordan Canonical Form

Jordan Canonical Form October 26, 2005 15-1 Jdan Canonical Fm Suppose that A is an n n matrix with characteristic polynomial. p(λ) = (λ λ 1 ) m 1... (λ λ s ) ms and generalized eigenspaces V j = ker(a λ j I) m j. Let L be the

More information

Computing inverse Laplace Transforms.

Computing inverse Laplace Transforms. Review Exam 3. Sections 4.-4.5 in Lecture Notes. 60 minutes. 7 problems. 70 grade attempts. (0 attempts per problem. No partial grading. (Exceptions allowed, ask you TA. Integration table included. Complete

More information