Variation of Parameters

James K. Peterson
Department of Biological Sciences and Department of Mathematical Sciences, Clemson University
April 13, 2018

Outline: Variation of Parameters; Example One

We eventually want to look at a simple linear partial differential equation called the cable equation, which arises in information transmission such as in a model of a neuron. There are many places you can get the requisite background for this kind of model, but here we will concentrate on the use of analysis in physical modeling. First, we need to talk about a technique in differential equations called Variation of Parameters.

We can use the ideas of the linear independence of functions in the solution of nonhomogeneous differential equations. Here the vector space is $C^2[a, b]$ for some interval $[a, b]$, and we use the linear independence of the two solutions of the homogeneous linear second order model to build a solution to the nonhomogeneous model. Rather than doing this in general, we will focus on a specific model:

$$\beta^2 \frac{d^2 x}{dt^2} - x(t) = f(t), \qquad x(0) = 1, \; x(5) = 4,$$

where $\beta$ is a nonzero number. The homogeneous solution $x_h$ solves

$$\beta^2 \frac{d^2 x_h}{dt^2} - x_h(t) = 0$$

and has the form

$$x_h(t) = A e^{t/\beta} + B e^{-t/\beta}.$$

We want to find a particular solution, called $x_p$, to the model. Hence, we want $x_p$ to satisfy

$$\beta^2 x_p''(t) - x_p(t) = f(t).$$

Since we don't know the explicit function $f$ we wish to use in the nonhomogeneous equation, a common technique to find the particular solution is the one called Variation of Parameters, or VoP. In this technique, we take the homogeneous solution and replace the constants $A$ and $B$ by unknown functions $u_1(t)$ and $u_2(t)$. Then we see if we can derive conditions that the unknown functions $u_1$ and $u_2$ must satisfy in order for this to work. So we start by assuming

$$x_p(t) = u_1(t) e^{t/\beta} + u_2(t) e^{-t/\beta}.$$

Differentiating,

$$x_p'(t) = u_1'(t) e^{t/\beta} + \frac{1}{\beta} u_1(t) e^{t/\beta} + u_2'(t) e^{-t/\beta} - \frac{1}{\beta} u_2(t) e^{-t/\beta}
= \Bigl( u_1'(t) e^{t/\beta} + u_2'(t) e^{-t/\beta} \Bigr) + \Bigl( \frac{1}{\beta} u_1(t) e^{t/\beta} - \frac{1}{\beta} u_2(t) e^{-t/\beta} \Bigr).$$
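Before going on, it can help to check the homogeneous piece with a computer algebra system. The short Python/SymPy sketch below is only a verification aid (Python and the symbol name beta are our choices here, not part of the notes); it confirms that $e^{t/\beta}$ and $e^{-t/\beta}$ both solve $\beta^2 x'' - x = 0$ and that their Wronskian is $-2/\beta$, which never vanishes, so the two solutions are linearly independent.

```python
import sympy as sp

t = sp.symbols('t', real=True)
beta = sp.symbols('beta', nonzero=True)   # the nonzero model parameter

x1 = sp.exp(t / beta)     # first homogeneous solution
x2 = sp.exp(-t / beta)    # second homogeneous solution

# residual of beta^2 x'' - x = 0 for each candidate solution
for x in (x1, x2):
    print(sp.simplify(beta**2 * sp.diff(x, t, 2) - x))   # prints 0 twice

# Wronskian x1*x2' - x2*x1'; it equals -2/beta and is never zero
W = sp.simplify(x1 * sp.diff(x2, t) - x2 * sp.diff(x1, t))
print(W)                                                 # prints -2/beta
```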

We know there are two solutions to the homogeneous model, $e^{t/\beta}$ and $e^{-t/\beta}$, which are linearly independent functions. Letting $x_1(t) = e^{t/\beta}$ and $x_2(t) = e^{-t/\beta}$, consider the system

$$\begin{bmatrix} x_1(t) & x_2(t) \\ x_1'(t) & x_2'(t) \end{bmatrix} \begin{bmatrix} \phi(t) \\ \psi(t) \end{bmatrix} = \begin{bmatrix} 0 \\ f(t) \end{bmatrix}.$$

For each fixed $t$, the determinant of this system is not zero, and so there is a unique solution for the values of $\phi(t)$ and $\psi(t)$ for each $t$. Hence, the first row gives us a condition we must impose on our unknown functions $u_1'(t)$ and $u_2'(t)$. We must have

$$u_1'(t) e^{t/\beta} + u_2'(t) e^{-t/\beta} = 0.$$

This simplifies the first derivative of $x_p$ to

$$x_p'(t) = \frac{1}{\beta} u_1(t) e^{t/\beta} - \frac{1}{\beta} u_2(t) e^{-t/\beta}.$$

Now take the second derivative to get

$$x_p''(t) = \frac{1}{\beta} u_1'(t) e^{t/\beta} - \frac{1}{\beta} u_2'(t) e^{-t/\beta} + \frac{1}{\beta^2} u_1(t) e^{t/\beta} + \frac{1}{\beta^2} u_2(t) e^{-t/\beta}.$$

Now plug these derivative expressions into the nonhomogeneous equation to find

$$f(t) = \beta^2 \Bigl( \frac{1}{\beta} u_1'(t) e^{t/\beta} - \frac{1}{\beta} u_2'(t) e^{-t/\beta} \Bigr) + \beta^2 \Bigl( \frac{1}{\beta^2} u_1(t) e^{t/\beta} + \frac{1}{\beta^2} u_2(t) e^{-t/\beta} \Bigr) - \Bigl( u_1(t) e^{t/\beta} + u_2(t) e^{-t/\beta} \Bigr).$$

Now factor out the common $u_1(t)$ and $u_2(t)$ terms to find, after a bit of simplifying, that

$$f(t) = \beta^2 \Bigl( \frac{1}{\beta} u_1'(t) e^{t/\beta} - \frac{1}{\beta} u_2'(t) e^{-t/\beta} \Bigr) + u_1(t)\bigl( e^{t/\beta} - e^{t/\beta} \bigr) + u_2(t)\bigl( e^{-t/\beta} - e^{-t/\beta} \bigr).$$

We see the functions $u_1$ and $u_2$ must satisfy

$$\frac{1}{\beta} u_1'(t) e^{t/\beta} - \frac{1}{\beta} u_2'(t) e^{-t/\beta} = \frac{f(t)}{\beta^2}.$$

This gives us a second condition on the unknown functions $u_1$ and $u_2$. Combining, we have

$$u_1'(t) e^{t/\beta} + u_2'(t) e^{-t/\beta} = 0,$$
$$\frac{1}{\beta} u_1'(t) e^{t/\beta} - \frac{1}{\beta} u_2'(t) e^{-t/\beta} = \frac{f(t)}{\beta^2}.$$

This can be rewritten in matrix form:

$$\begin{bmatrix} e^{t/\beta} & e^{-t/\beta} \\ \frac{1}{\beta} e^{t/\beta} & -\frac{1}{\beta} e^{-t/\beta} \end{bmatrix} \begin{bmatrix} u_1'(t) \\ u_2'(t) \end{bmatrix} = \begin{bmatrix} 0 \\ \frac{f(t)}{\beta^2} \end{bmatrix}.$$

We then use Cramer's Rule to solve for the unknown functions $u_1'$ and $u_2'$. Let $W$ denote the matrix

$$W = \begin{bmatrix} e^{t/\beta} & e^{-t/\beta} \\ \frac{1}{\beta} e^{t/\beta} & -\frac{1}{\beta} e^{-t/\beta} \end{bmatrix}.$$

Then the determinant of $W$ is $\det(W) = -\frac{2}{\beta}$, and by Cramer's Rule

$$u_1'(t) = \frac{\det \begin{bmatrix} 0 & e^{-t/\beta} \\ \frac{f(t)}{\beta^2} & -\frac{1}{\beta} e^{-t/\beta} \end{bmatrix}}{\det(W)} = \frac{1}{2\beta}\, f(t)\, e^{-t/\beta}$$

and

$$u_2'(t) = \frac{\det \begin{bmatrix} e^{t/\beta} & 0 \\ \frac{1}{\beta} e^{t/\beta} & \frac{f(t)}{\beta^2} \end{bmatrix}}{\det(W)} = -\frac{1}{2\beta}\, f(t)\, e^{t/\beta}.$$

Thus, integrating, we have

$$u_1(t) = \frac{1}{2\beta} \int_0^t f(u)\, e^{-u/\beta}\, du, \qquad u_2(t) = -\frac{1}{2\beta} \int_0^t f(u)\, e^{u/\beta}\, du,$$

where the lower limit $0$ is a convenient starting point for our integration.
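The same Cramer's Rule computation can be reproduced symbolically. The sketch below is only a check of the algebra above; $f$ is left as an arbitrary function and the parameter is again written as beta.

```python
import sympy as sp

t = sp.symbols('t', real=True)
beta = sp.symbols('beta', nonzero=True)
f = sp.Function('f')

# coefficient matrix W and right-hand side of the two VoP conditions
W = sp.Matrix([[sp.exp(t / beta),              sp.exp(-t / beta)],
               [sp.exp(t / beta) / beta, -sp.exp(-t / beta) / beta]])
rhs = sp.Matrix([0, f(t) / beta**2])

print(sp.simplify(W.det()))    # -2/beta, matching det(W) above

sol = W.LUsolve(rhs)           # solves W * [u1', u2']^T = rhs
print(sp.simplify(sol[0]))     # f(t)*exp(-t/beta)/(2*beta)  = u1'(t)
print(sp.simplify(sol[1]))     # -f(t)*exp(t/beta)/(2*beta)  = u2'(t)
```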

Hence, the particular solution to the nonhomogeneous time-independent equation is

$$x_p(t) = u_1(t)\, e^{t/\beta} + u_2(t)\, e^{-t/\beta} = \Bigl( \frac{1}{2\beta} \int_0^t f(u)\, e^{-u/\beta}\, du \Bigr) e^{t/\beta} - \Bigl( \frac{1}{2\beta} \int_0^t f(u)\, e^{u/\beta}\, du \Bigr) e^{-t/\beta}.$$

The general solution is thus

$$x(t) = x_h(t) + x_p(t) = A_1 e^{t/\beta} + A_2 e^{-t/\beta} + \frac{1}{2\beta}\, e^{t/\beta} \int_0^t f(u)\, e^{-u/\beta}\, du - \frac{1}{2\beta}\, e^{-t/\beta} \int_0^t f(u)\, e^{u/\beta}\, du$$

for any real constants $A_1$ and $A_2$. Finally, note we can rewrite these equations as

$$x(t) = A_1 e^{t/\beta} + A_2 e^{-t/\beta} + \frac{1}{2\beta} \int_0^t f(u)\, e^{(t-u)/\beta}\, du - \frac{1}{2\beta} \int_0^t f(u)\, e^{(u-t)/\beta}\, du$$

or

$$x(t) = A_1 e^{t/\beta} + A_2 e^{-t/\beta} - \frac{1}{\beta} \int_0^t f(u)\, \frac{e^{(u-t)/\beta} - e^{-(u-t)/\beta}}{2}\, du.$$

In applied modeling work, the function $\frac{e^w - e^{-w}}{2}$ arises frequently enough to be given a name. It is called the hyperbolic sine function and is denoted by the symbol $\sinh(w)$. Hence, we can rewrite once more to see

$$x(t) = A_1 e^{t/\beta} + A_2 e^{-t/\beta} - \frac{1}{\beta} \int_0^t f(u)\, \sinh\Bigl(\frac{u-t}{\beta}\Bigr)\, du.$$

Finally, $\sinh$ is an odd function, so we can pull the minus sign inside by reversing the argument of the $\sinh$ function. This gives

$$x(t) = A_1 e^{t/\beta} + A_2 e^{-t/\beta} + \frac{1}{\beta} \int_0^t f(u)\, \sinh\Bigl(\frac{t-u}{\beta}\Bigr)\, du.$$
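As a concrete check of the $\sinh$ form, the sketch below picks the sample forcing $f(t) = t$ (an assumption made only for this check, not part of the model above), evaluates the integral in closed form with SymPy, and verifies that the formula satisfies $\beta^2 x'' - x = f(t)$.

```python
import sympy as sp

t, u = sp.symbols('t u', positive=True)
beta = sp.symbols('beta', positive=True)
A1, A2 = sp.symbols('A1 A2', real=True)

f = lambda s: s    # sample forcing f(t) = t, used only for this check

# x(t) = A1 e^{t/beta} + A2 e^{-t/beta} + (1/beta) int_0^t f(u) sinh((t-u)/beta) du
integrand = f(u) * sp.sinh((t - u) / beta).rewrite(sp.exp)
x = (A1 * sp.exp(t / beta) + A2 * sp.exp(-t / beta)
     + sp.integrate(integrand, (u, 0, t)) / beta)

residual = sp.simplify(beta**2 * sp.diff(x, t, 2) - x - f(t))
print(residual)    # prints 0: the sinh formula solves the nonhomogeneous model
```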

Since $\sinh(t/\beta)$ and $\cosh(t/\beta)$ are just linear combinations of $e^{t/\beta}$ and $e^{-t/\beta}$, we can also rewrite the general solution as

$$x(t) = A_1 \cosh\Bigl(\frac{t}{\beta}\Bigr) + A_2 \sinh\Bigl(\frac{t}{\beta}\Bigr) + \frac{1}{\beta} \int_0^t f(u)\, \sinh\Bigl(\frac{t-u}{\beta}\Bigr)\, du.$$

Now let's use the boundary conditions $x(0) = 1$ and $x(5) = 4$. Hence,

$$1 = x(0) = A_1,$$
$$4 = x(5) = A_1 \cosh\Bigl(\frac{5}{\beta}\Bigr) + A_2 \sinh\Bigl(\frac{5}{\beta}\Bigr) + \frac{1}{\beta} \int_0^5 f(u)\, \sinh\Bigl(\frac{5-u}{\beta}\Bigr)\, du,$$

which is solvable for $A_1$ and $A_2$. We find

$$A_1 = 1, \qquad A_2 = \frac{1}{\sinh(5/\beta)} \Bigl( 4 - \cosh\Bigl(\frac{5}{\beta}\Bigr) - \frac{1}{\beta} \int_0^5 f(u)\, \sinh\Bigl(\frac{5-u}{\beta}\Bigr)\, du \Bigr).$$

We can use this technique on lots of models and generate similar solutions. Here is another example. Consider

$$u''(t) + 9 u(t) = 2t, \qquad u(0) = 1, \; u'(0) = 1.$$

The characteristic equation is $r^2 + 9 = 0$, which has the complex roots $\pm 3i$. Hence, the two linearly independent solutions to the homogeneous equation are $u_1(t) = \cos(3t)$ and $u_2(t) = \sin(3t)$. We set the homogeneous solution to be

$$u_h(t) = A \cos(3t) + B \sin(3t),$$

where $A$ and $B$ are arbitrary constants. The particular solution to the nonhomogeneous equation is of the form

$$u_p(t) = \phi(t) \cos(3t) + \psi(t) \sin(3t).$$

We know the functions $\phi$ and $\psi$ must then satisfy

$$\begin{bmatrix} \cos(3t) & \sin(3t) \\ -3\sin(3t) & 3\cos(3t) \end{bmatrix} \begin{bmatrix} \phi'(t) \\ \psi'(t) \end{bmatrix} = \begin{bmatrix} 0 \\ 2t \end{bmatrix}.$$

Applying Cramer's rule, we have

$$\phi'(t) = \frac{1}{3} \det \begin{bmatrix} 0 & \sin(3t) \\ 2t & 3\cos(3t) \end{bmatrix} = -\frac{2}{3}\, t \sin(3t)$$

and

$$\psi'(t) = \frac{1}{3} \det \begin{bmatrix} \cos(3t) & 0 \\ -3\sin(3t) & 2t \end{bmatrix} = \frac{2}{3}\, t \cos(3t).$$

Thus, integrating, we have

$$\phi(t) = -\frac{2}{3} \int_0^t u \sin(3u)\, du, \qquad \psi(t) = \frac{2}{3} \int_0^t u \cos(3u)\, du.$$

The general solution is therefore

$$u(t) = A\cos(3t) + B\sin(3t) - \Bigl( \frac{2}{3} \int_0^t u \sin(3u)\, du \Bigr) \cos(3t) + \Bigl( \frac{2}{3} \int_0^t u \cos(3u)\, du \Bigr) \sin(3t).$$

This can be simplified to

$$u(t) = A\cos(3t) + B\sin(3t) + \frac{2}{3} \int_0^t u \{\sin(3t)\cos(3u) - \sin(3u)\cos(3t)\}\, du = A\cos(3t) + B\sin(3t) + \frac{2}{3} \int_0^t u \sin(3t - 3u)\, du.$$

Applying Leibniz's rule, we find

$$u'(t) = -3A\sin(3t) + 3B\cos(3t) + \frac{2}{3}\, t \sin(3t - 3t) + \frac{2}{3} \int_0^t 3u \cos(3t - 3u)\, du = -3A\sin(3t) + 3B\cos(3t) + 2 \int_0^t u \cos(3t - 3u)\, du.$$

Applying the initial conditions, we find $1 = u(0) = A$ and $1 = u'(0) = 3B$. Thus, $A = 1$ and $B = \frac{1}{3}$, giving the solution

$$u(t) = \cos(3t) + \frac{1}{3}\sin(3t) + \frac{2}{3} \int_0^t u \sin(3t - 3u)\, du.$$
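We can verify this answer directly. The sketch below evaluates the VoP integral in closed form with SymPy and confirms that the result satisfies $u'' + 9u = 2t$ together with $u(0) = 1$ and $u'(0) = 1$; nothing here is new, it simply re-checks the algebra above.

```python
import sympy as sp

t, s = sp.symbols('t s', real=True)

# u(t) = cos(3t) + (1/3) sin(3t) + (2/3) * int_0^t s sin(3t - 3s) ds
u = (sp.cos(3*t) + sp.Rational(1, 3) * sp.sin(3*t)
     + sp.Rational(2, 3) * sp.integrate(s * sp.sin(3*t - 3*s), (s, 0, t)))

print(sp.simplify(u))                        # 2*t/9 + cos(3*t) + 7*sin(3*t)/27
print(sp.simplify(sp.diff(u, t, 2) + 9*u))   # 2*t : the equation is satisfied
print(u.subs(t, 0))                          # 1   : u(0) = 1
print(sp.diff(u, t).subs(t, 0))              # 1   : u'(0) = 1
```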

Consider

$$u''(t) + 9u(t) = 2t, \qquad u(0) = 1, \; u(4) = -1.$$

The model is the same as the previous example except for the boundary conditions. We have

$$1 = u(0) = A,$$
$$-1 = u(4) = A\cos(12) + B\sin(12) + \frac{2}{3} \int_0^4 u \sin(12 - 3u)\, du.$$

Thus, since $A = 1$, we have

$$B\sin(12) = -1 - \cos(12) - \frac{2}{3} \int_0^4 u \sin(12 - 3u)\, du$$

and so

$$B = -\frac{1 + \cos(12) + \frac{2}{3} \int_0^4 u \sin(12 - 3u)\, du}{\sin(12)}.$$

We can then assemble the solution using these constants; a short numerical check appears after the homework below.

Homework 34

34.1 Use VoP to solve

$$u''(t) - 4u(t) = f(t), \qquad u(0) = 1, \; u(8) = 1,$$

where $f$ is a continuous function.

34.2 Consider

$$u''(t) + 4u(t) = f(t), \qquad u(0) = 1, \; u(6) = 1,$$

where $f$ is a continuous function.
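Here is a small numerical sketch that assembles the constants for the boundary value version of Example One, using the data $u(0) = 1$ and $u(4) = -1$ and evaluating the VoP integral with SciPy's quad. It is only a check that the formulas for $A$ and $B$ above reproduce the boundary values; the numerical tools are our choice, not part of the notes.

```python
import numpy as np
from scipy.integrate import quad

u0, u4 = 1.0, -1.0     # boundary data u(0) = 1 and u(4) = -1 from the example

# A and B from the two boundary conditions
A = u0
integral, _ = quad(lambda s: s * np.sin(12.0 - 3.0 * s), 0.0, 4.0)
B = (u4 - A * np.cos(12.0) - (2.0 / 3.0) * integral) / np.sin(12.0)

def u(t):
    """u(t) = A cos(3t) + B sin(3t) + (2/3) int_0^t s sin(3t - 3s) ds."""
    val, _ = quad(lambda s: s * np.sin(3.0 * t - 3.0 * s), 0.0, t)
    return A * np.cos(3.0 * t) + B * np.sin(3.0 * t) + (2.0 / 3.0) * val

print(u(0.0), u(4.0))  # approximately 1.0 and -1.0
```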