Chapter 6. Systems of First Order Linear Differential Equations


We will only discuss first order systems. However, higher order systems may be made into first order systems by a trick shown below.

We will have a slight change in our notation for DE's. Before, in Chapters 1-4, we used the letter x for the independent variable, and y for the dependent variable. For example, y'' = sin x, or x^2 dy/dx + 2xy = sin x. Now we will use t for the independent variable, and x, y, z, or x1, x2, x3, x4, and so on, for the dependent variables. For example: x'' = sin t, x' = cos t. And when we write x', for example, we will henceforth mean dx/dt.

The first order systems (of ODE's) that we shall be looking at are systems of equations of the form

  x1' = expression in x1, x2, ..., xn, t,
  x2' = expression in x1, x2, ..., xn, t,
  ...
  xn' = expression in x1, x2, ..., xn, t,

valid for t in an interval I. The expressions on the right sides contain no derivatives. A first order IVP system would be the same, but now we also have initial conditions x1(a) = c1, x2(a) = c2, ..., xn(a) = cn. Here a is a fixed number in I, and c1, c2, ..., cn are fixed constants.

Example 1.

  x' = y
  y' = -x + t        (Where is t?)

Example 2.

  x1' = x1 x2 x3 - sin(t) x3
  x2' = 3 x1 x3 + t^2
  x3' = e^(t x2)

Example 3.

  x' = y
  y' = (3/t^2) x + (1/t) y - 9,    x(1) = 3, y(1) = 6.

These are called first order systems, because the highest derivative is a first derivative. Example 3 is a first order IVP system; the initial conditions are x(1) = 3, y(1) = 6.

A solution to such a system is several functions x1 = f1(t), x2 = f2(t), ..., xn = fn(t) which satisfy all the equations in the system simultaneously. A solution to a first order IVP system also has to satisfy the initial conditions.
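Checking a candidate solution to a system like Example 1 is purely mechanical, so it can also be done by machine. The sketch below (not part of the original notes; it assumes NumPy is available) verifies on a grid of t values that the pair x = t + sin t, y = 1 + cos t satisfies both equations of Example 1:

```python
import numpy as np

# Example 1 is the system  x' = y,  y' = -x + t.
# Candidate solution: x = t + sin t, y = 1 + cos t.
t = np.linspace(0.0, 10.0, 201)
x = t + np.sin(t)
y = 1 + np.cos(t)

# Derivatives computed by hand: x' = 1 + cos t, y' = -sin t.
xp = 1 + np.cos(t)
yp = -np.sin(t)

# Both equations of the system should hold identically on the grid.
assert np.allclose(xp, y)
assert np.allclose(yp, -x + t)
print("x = t + sin t, y = 1 + cos t solves Example 1")
```

The same pattern (plug in, compare left and right sides) works for any of the systems in this chapter.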

For example, a solution to Ex 1 above is x = t + sin t, y = 1 + cos t. To check this, notice that if x = t + sin t and y = 1 + cos t, then clearly x' = (t + sin t)' = 1 + cos t = y, and y' = -sin t = -(t + sin t) + t = -x + t. So both equations are satisfied simultaneously. Similarly, a solution to the first order IVP system in Ex 3 above is x = 3t^2, y = 6t. (Check it!)

Just as in Chapter 1, under a mild condition there always exist solutions to a first order IVP system, and the solution will be unique, but local (that is, it may only exist in a small interval surrounding a). The proof is almost identical to the one in Chapter 1.

Trick to change higher order ODE's (or systems) into first order systems: For example, consider the ODE

  y''' - sin(x) y'' + y' - xy = cos x.

Let t = x, x1 = y, x2 = y', x3 = y''. Then y''' = x3'. (We do not introduce a variable for the highest derivative.) We then obtain the following first order system:

  x1' = x2
  x2' = x3
  x3' = cos t + t x1 - x2 + sin(t) x3

Strategy: solve the latter system; and if x1 = f(t) then the solution to the original ODE is y = f(x). So for example if x1 = 3 cos t then y = 3 cos x.

Similarly, a higher order IVP like y''' - sin(x) y'' + y' - xy = cos x, y(0) = 1, y'(0) = 2, y''(0) = 3, is changed into a 1st order IVP system (the one in the last paragraph), with initial conditions x1(0) = 1, x2(0) = 2, x3(0) = 3.

Using the same trick, any nth order system may be changed into a first order system. Combining the existence and uniqueness result a few bullets above with the trick just discussed, we see that every nth order IVP has a unique local solution under a mild condition.

Linear systems. A first order linear system is a first order system of the form

  x1' = a11(t) x1 + a12(t) x2 + ... + a1n(t) xn + b1(t)
  x2' = a21(t) x1 + a22(t) x2 + ... + a2n(t) xn + b2(t)
  ...
  xn' = an1(t) x1 + an2(t) x2 + ... + ann(t) xn + bn(t)

Examples like

  x' = y,  y' = (3/t^2) x + (1/t) xy - 9,

or

  x' = y^2,  y' = (3/t^2) x + (1/t) y - 9,

are not linear (on the right sides the dependent variables, in this case x and y, are only allowed to be multiplied by constants or functions of t). We will see some more examples momentarily.

Matrix formulation of linear systems. The coefficient matrix of the last system is A(t) = [a_ij(t)]. That is,

  A(t) = [ a11(t)  a12(t)  ...  a1n(t) ]        b(t) = [ b1(t) ]
         [ a21(t)  a22(t)  ...  a2n(t) ]               [ b2(t) ]
         [  ...     ...    ...   ...   ]               [  ...  ]
         [ an1(t)  an2(t)  ...  ann(t) ]               [ bn(t) ]

Then the system may be rewritten as a single matrix equation

  x' = A(t) x + b(t),    x = [ x1 ]    x' = [ x1' ]
                             [ x2 ]         [ x2' ]
                             [ .. ]         [ ... ]
                             [ xn ]         [ xn' ]

Example 1. Consider the system x1' = t^2 x1 - e^t x2 + t, x2' = 2 x1 + cos(t) x2. The coefficient matrix of this system is

  A(t) = [ t^2   -e^t  ]
         [  2    cos t ]

and b(t) = (t, 0)^T. If we write x for the column vector with entries x1, x2, and x' for the column vector with entries x1', x2', then the system may be rewritten as the single matrix equation x' = A(t) x + b(t).

Example 2. The IVP system in Example 3 above may be rewritten as a single matrix equation

  x' = [   0     1  ] x + [  0 ] ,    x(1) = [ 3 ]
       [ 3/t^2  1/t ]     [ -9 ]             [ 6 ]

Thus a first order linear system is one that can be written in the form x' = A(t) x + b(t). Here A(t) is a matrix whose entries depend only on t, and b(t) is a column vector whose entries depend only on t. Linear first order IVP systems always have (unique) solutions if A(t) and b(t) are continuous; in fact we will give formulae later for the solution in the constant coefficient case (that is, when A(t) is constant, does not depend on t).

Vector functions: The vector x above depends on t. Thus it is a vector function. Similarly, b(t) above is a vector function. We call it an n-component vector function if it has n entries, that is, if it lives in R^n.
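The conversion trick above also matters in practice: standard numerical solvers integrate first order systems, so a higher order ODE is fed to them in exactly this converted form. A sketch using SciPy's solve_ivp on the system obtained from y''' - sin(x) y'' + y' - xy = cos x (assuming SciPy is available; the time span and tolerances are arbitrary choices):

```python
import numpy as np
from scipy.integrate import solve_ivp

# The substitution x1 = y, x2 = y', x3 = y'' turns the third order ODE into
#   x1' = x2,  x2' = x3,  x3' = cos t + t x1 - x2 + sin(t) x3.
def rhs(t, x):
    x1, x2, x3 = x
    return [x2, x3, np.cos(t) + t * x1 - x2 + np.sin(t) * x3]

# Initial conditions y(0) = 1, y'(0) = 2, y''(0) = 3 become x(0) = (1, 2, 3).
sol = solve_ivp(rhs, (0.0, 2.0), [1.0, 2.0, 3.0], rtol=1e-9, atol=1e-9)

assert sol.success
assert np.allclose(sol.y[:, 0], [1.0, 2.0, 3.0])
print("y(2) is approximately", sol.y[0, -1])
```

The first component sol.y[0] of the computed vector solution is the approximation to y itself, per the strategy described above.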

You should think of a solution to the matrix DE above as a vector function. For example, you can check that x = 3t^2, y = 6t is a solution to Example 2 above (which was Example 3 before). We write this solution as the vector function u(t) = (3t^2, 6t)^T. One can check that indeed

  u' = [   0     1  ] u + [  0 ]
       [ 3/t^2  1/t ]     [ -9 ]

Do it! (We did it in class.)

The system above is called homogeneous if b(t) = 0. If

  x' = A(t) x + b(t),    (N)

is not homogeneous, then the associated homogeneous equation or reduced equation is the equation x' = A(t) x. We can rewrite (N) as x' - A(t) x = b(t), or (D - A(t)) x = b(t) where D x = x', or simply as

  L x = b(t)    (N)

where L = D - A(t). It is easy to see as before that L = D - A(t) is linear, that is:

  L(c1 u1 + c2 u2) = c1 L u1 + c2 L u2.

Thus the main results in Chapters 3 and 5 carry over to give variants valid for first order linear systems, with essentially the same proofs. We state some of these results below. First we discuss homogeneous first order linear systems.

6.2 Homogeneous first order systems

Here we are looking at

  x' = A(t) x,    (H)

for t in an interval I. Thus (H) is just L x = 0, where L = D - A(t) as above. We will fix a number n throughout this section and the next, and assume that we have n variables x1, ..., xn, each a function of t. So A(t) is an n x n matrix. We will then refer to (H) sometimes as (H)_n, reminding us of this fixed number n, so that e.g. A(t) is n x n, etc.

The proofs of the next several results are similar (usually almost identical) to the matching proofs in Chapters 3 and 5.

Of course the zero vector is a solution of (H). As before, this solution is called the trivial solution.
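The "Do it!" above is a one-line computation per sample point once the system is in matrix form. A NumPy sketch of that check (not part of the original notes):

```python
import numpy as np

# Check that u(t) = (3t^2, 6t) satisfies u' = A(t) u + b(t), where
# A(t) = [[0, 1], [3/t^2, 1/t]] and b(t) = (0, -9).
for t in np.linspace(1.0, 5.0, 9):
    A = np.array([[0.0, 1.0], [3.0 / t**2, 1.0 / t]])
    b = np.array([0.0, -9.0])
    u = np.array([3 * t**2, 6 * t])
    up = np.array([6 * t, 6.0])   # derivative of u, computed by hand
    assert np.allclose(up, A @ u + b)
print("u(t) = (3t^2, 6t) satisfies the matrix equation")
```

Note that the second component of A(t) u + b(t) is (3/t^2)(3t^2) + (1/t)(6t) - 9 = 9 + 6 - 9 = 6, matching u'.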

Theorem. If u1 and u2 are solutions to (H) on I, then so are u1 + u2 and c u1, for any constant c. So the sum of any two solutions of (H) is also a solution of (H). Also, any constant multiple of a solution of (H) is also a solution of (H).

Again, a linear combination of u1, u2, ..., un is an expression

  c1 u1 + c2 u2 + ... + cn un,

for constants c1, ..., cn. The trivial linear combination is the one where all the constants ck are zero. This of course is zero.

Theorem. Any linear combination of solutions to (H) is also a solution of (H).

Two vector functions u and v whose domain includes the interval I are said to be linearly dependent on I if u is a constant times v, or v is a constant times u. If they are not linearly dependent they are called linearly independent. Another way to say it: u and v are linearly independent if the only linear combination of u and v which equals zero is the trivial one.

More generally, u1, u2, ..., uk are linearly independent if no one of u1, u2, ..., uk is a linear combination of the others (not including itself). Equivalently: u1, u2, ..., uk are linearly independent if the only way

  c1 u1(t) + c2 u2(t) + ... + ck uk(t) = 0 for all t in I,

for constants c1, ..., ck, is when all of these constants c1, ..., ck are zero.

The Wronskian of n n-component vector functions u1, u2, ..., un, written W(u1, u2, ..., un)(t) or W(t) or W(u1, u2, ..., un), is the determinant of the matrix

  [ u1 : u2 : ... : un ].

This last matrix is the matrix whose jth column is uj(t).

Proposition. If u1, u2, ..., un are linearly dependent on an interval I, then for all t in I

  W(u1, u2, ..., un)(t) = 0.

Proof. This follows from the equivalence of (1) and (8) in the multi-part theorem proved in Homework.

Corollary. If W(u1, u2, ..., un)(t0) ≠ 0 at some point t0 in I, then u1, u2, ..., un are linearly independent.

Theorem. There exist n solutions u1, u2, ..., un to (H)_n which are linearly independent.

Proof. Similar to the matching proof in Chapter 3; or, we will give a formula for the solution later.

n solutions u1, u2, ..., un to (H)_n which are linearly independent are called a fundamental set of solutions to (H). Then the matrix

  X(t) = [ u1 : u2 : ... : un ]

met above is called the fundamental matrix.

Theorem. If u1, u2, ..., un are solutions to (H)_n on an open interval I, then either W(u1, u2, ..., un)(t) = 0 for all t in I, or W(u1, u2, ..., un)(t) ≠ 0 for all t in I.

This means that an n x n matrix X(t) whose columns are solutions to (H)_n is a fundamental matrix if and only if X(t) is invertible for all t in I, and if and only if X(t) is invertible for some t in I. By the multi-part theorem proved in Homework this can be phrased in many equivalent ways.

If u1, u2, ..., un are solutions to (H)_n on I, and if every solution to (H)_n on I is of the form c1 u1 + c2 u2 + ... + cn un, for constants c1, ..., cn, then we say that c1 u1 + c2 u2 + ... + cn un is the general solution to (H)_n.

Theorem. Suppose that u1, u2, ..., un are solutions to (H)_n on an open interval I. The following are equivalent: (i) u1, u2, ..., un are a fundamental set of solutions to (H) on I; (ii) W(u1, u2, ..., un)(t) ≠ 0 for some (or all) t in I; (iii) c1 u1 + c2 u2 + ... + cn un is the general solution to (H).

Example. Show that u = (t^3, 3t^2)^T and v = (1/t, -1/t^2)^T are a fundamental set for the linear system

  x' = y
  y' = (3/t^2) x + (1/t) y

on the interval (0, ∞). Also, find the general solution to this system.

Solution. This is the system x' = A(t) x, where

  A(t) = [   0     1  ]
         [ 3/t^2  1/t ]

Check that u' = A(t) u (we checked this in class), and v' = A(t) v. Then note that

  W(u, v) = det [ t^3    1/t   ] = t^3 (-1/t^2) - 3t^2 (1/t) = -t - 3t = -4t ≠ 0.
                [ 3t^2  -1/t^2 ]

So by the last theorem u, v is a fundamental set of solutions, and the general solution to this system is C u + D v. That is, the general solution is x = C t^3 + D/t and y = 3C t^2 - D/t^2. (Explained in more detail in class.)
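Both claims in the Example (that u and v solve the system, and that their Wronskian is -4t) can be confirmed numerically; a NumPy sketch, not part of the original notes:

```python
import numpy as np

# Check the Example: u = (t^3, 3t^2) and v = (1/t, -1/t^2) both solve
# x' = A(t) x, and their Wronskian is -4t (nonzero on (0, inf)).
for t in [0.5, 1.0, 2.0, 3.0]:
    A = np.array([[0.0, 1.0], [3.0 / t**2, 1.0 / t]])
    u, up = np.array([t**3, 3 * t**2]), np.array([3 * t**2, 6 * t])
    v, vp = np.array([1 / t, -1 / t**2]), np.array([-1 / t**2, 2 / t**3])
    assert np.allclose(up, A @ u) and np.allclose(vp, A @ v)
    # Wronskian = determinant of the matrix with columns u, v.
    W = np.linalg.det(np.column_stack([u, v]))
    assert np.isclose(W, -4 * t)
print("u, v form a fundamental set; W(u, v)(t) = -4t")
```

Since W(u, v)(t) ≠ 0 at (for instance) t = 1, the Corollary above already gives linear independence.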

6.3/6.4 Homogeneous first order systems with constant coefficients I-II

Here we are looking at

  x' = A x,    (H)

for t in an interval I. Here A is an n x n matrix with constants (numbers) as entries. Again we are fixing a number n throughout, and assume that we have n variables x1, ..., xn, each a function of t. We will again refer to (H) sometimes as (H)_n, reminding us of this fixed number n, so that e.g. A is n x n, etc.

If λ is an eigenvalue of A with eigenvector v, set x = e^(λt) v. Note that x' = λ e^(λt) v, whereas A x = e^(λt) A v = λ e^(λt) v. So x' = A x; that is, e^(λt) v is a solution to (H).

First suppose that the n x n matrix A has n linearly independent eigenvectors v1, v2, ..., vn, and that λk is the eigenvalue associated with the eigenvector vk. Then the general solution is

  x = c1 e^(λ1 t) v1 + c2 e^(λ2 t) v2 + ... + cn e^(λn t) vn,

where c1, c2, ... are arbitrary constants. That is, a fundamental set is: e^(λ1 t) v1, e^(λ2 t) v2, ..., e^(λn t) vn. The fundamental matrix X(t) is the matrix with these as columns.

Proof: The Wronskian W(0) = det(X(0)) ≠ 0 by (1) ⇔ (8) in the multi-part theorem in Homework, since X(0) is the matrix with columns v1, v2, ..., vn, which are linearly independent.

Of course, if the n x n matrix A has n distinct eigenvalues, then if we find one eigenvector for each eigenvalue, these will be linearly independent by the theorem in Section 5.8. Also, from more advanced linear algebra it is known that if A = A^T, that is if A is symmetric, then there will exist n linearly independent eigenvectors. So the method above will work.

Example 1. Solve

  x' = x - 3y
  y' = -2x + 2y

Solution. This is just x' = A x, where

  A = [  1  -3 ]
      [ -2   2 ]

We first find the eigenvalues and corresponding eigenvectors of A. We have

  det(A - λI) = det [ 1-λ   -3  ] = (1-λ)(2-λ) - 6 = λ^2 - 3λ - 4 = (λ+1)(λ-4).
                    [ -2   2-λ ]

Thus the eigenvalues are λ = -1 and 4. To find an eigenvector corresponding to λ = 4 we need to solve A x = 4 x. We solve

  [  1  -3 ] [ 1 ] = 4 [ 1 ]
  [ -2   2 ] [ ? ]     [ ? ]

From the first row, we see that 1 - 3? = 4, so that ? = -1. Thus an eigenvector corresponding to λ = 4 is (1, -1)^T. Similarly, to find an eigenvector corresponding to λ = -1 we solve

  [  1  -3 ] [ 1 ] = - [ 1 ]
  [ -2   2 ] [ ? ]     [ ? ]

Thus 1 - 3? = -1, so that ? = 2/3. Thus an eigenvector corresponding to λ = -1 is (1, 2/3)^T, or, multiplying by 3, we get the eigenvector (3, 2)^T. A general solution to the system is then

  x = c1 e^(4t) [  1 ] + c2 e^(-t) [ 3 ]
                [ -1 ]             [ 2 ]

where c1, c2 are arbitrary constants. We can rewrite this as

  x = [  c1 e^(4t) + 3 c2 e^(-t) ]
      [ -c1 e^(4t) + 2 c2 e^(-t) ]

This corresponds to the solution x = c1 e^(4t) + 3 c2 e^(-t), y = -c1 e^(4t) + 2 c2 e^(-t).

Example 2. Find the general solution to

  x1' = x1 + x2 - x3
  x2' = -x1 + 3 x2 - x3
  x3' = -x1 + x2 + x3

Also solve the IVP consisting of this system with initial conditions x1(0) = 1, x2(0) = 2, x3(0) = 2.

Solution. This is just x' = A x, where

  A = [  1  1  -1 ]
      [ -1  3  -1 ]
      [ -1  1   1 ]

One can show that this matrix A has eigenvalues 1, 2, 2. (I will skip the work; it is just as in the previous chapter.) For the eigenvalue 1 we only want one eigenvector, and you can check that (1, 1, 1)^T is an eigenvector (using the usual method as in the previous chapter, similar to what follows). To find eigenvectors for the eigenvalue 2 we must solve (A - 2I) x = 0, which is

  [ -1  1  -1 ]
  [ -1  1  -1 ] x = 0.
  [ -1  1  -1 ]

This has solution (after Gauss elimination):

  x1 = s - t
  x2 = s
  x3 = t

Setting s = 1, t = 0, then t = 1, s = 0, we get linearly independent eigenvectors

  [ 1 ]     [ -1 ]
  [ 1 ]  ;  [  0 ]
  [ 0 ]     [  1 ]

Thus the general solution to the system is

  x = c1 e^t [ 1 ] + c2 e^(2t) [ 1 ] + c3 e^(2t) [ -1 ]
             [ 1 ]             [ 1 ]             [  0 ]
             [ 1 ]             [ 0 ]             [  1 ]

Reading this row by row, we can rewrite this as

  x1 = c1 e^t + c2 e^(2t) - c3 e^(2t)
  x2 = c1 e^t + c2 e^(2t)
  x3 = c1 e^t + c3 e^(2t)

To solve the IVP we must solve

  c1 [ 1 ] + c2 [ 1 ] + c3 [ -1 ] = [ 1 ]
     [ 1 ]      [ 1 ]      [  0 ]   [ 2 ]
     [ 1 ]      [ 0 ]      [  1 ]   [ 2 ]

That is,

  c1 + c2 - c3 = 1
  c1 + c2 = 2
  c1 + c3 = 2

Using the solution method of your choice (row reduction, inverse, Cramer's rule), the solution is: c1 = 1, c2 = 1, c3 = 1. The solution of the initial-value problem is

  x = e^t [ 1 ] + e^(2t) [ 1 ] + e^(2t) [ -1 ]
          [ 1 ]          [ 1 ]          [  0 ]
          [ 1 ]          [ 0 ]          [  1 ]

Reading this row by row, we can rewrite this as

  x1 = e^t
  x2 = e^t + e^(2t)
  x3 = e^t + e^(2t)

What if some of the eigenvalues are complex? If A is a real matrix, then its complex eigenvalues occur in pairs α ± iβ, with α, β real. Suppose that v is an eigenvector corresponding to an eigenvalue λ = α + iβ. Write v = r + i s, where r and s have only real entries. Then in the general solution to the linear system include the terms

  C e^(αt) (r cos(βt) - s sin(βt)) + D e^(αt) (r sin(βt) + s cos(βt)).

Here C, D are constants. (This also takes care of the α - iβ case, so ignore that case.)

Example 3. Solve the linear system

  x' = [ -3  -1 ] x
       [  2  -1 ]

Solution. We have

  det(A - λI) = det [ -3-λ   -1  ] = (3+λ)(1+λ) + 2 = λ^2 + 4λ + 5.
                    [   2   -1-λ ]

Using the quadratic formula (-b ± sqrt(b^2 - 4ac))/(2a) you can easily show that λ = -2 ± i are the two eigenvalues. To find an eigenvector corresponding to λ = -2 + i we need to solve A x = (-2 + i) x. So we solve

  [ -3  -1 ] [ 1 ] = (-2 + i) [ 1 ]
  [  2  -1 ] [ ? ]            [ ? ]

From the first row, we see that -3 - ? = -2 + i, so that ? = -1 - i. Thus an eigenvector is (1, -1 - i)^T. We write this as r + i s as above:

  [  1   ] = [  1 ] + i [  0 ]
  [ -1-i ]   [ -1 ]     [ -1 ]

Thus, by the discussion above the Example, the general solution is

  x = A e^(-2t) ( [  1 ] cos t - [  0 ] sin t ) + B e^(-2t) ( [  1 ] sin t + [  0 ] cos t ),
                 [ -1 ]         [ -1 ]                       [ -1 ]         [ -1 ]

where A, B are arbitrary constants. This may be rewritten. Reading row by row we get:

  x1 = A e^(-2t) cos t + B e^(-2t) sin t
  x2 = A e^(-2t) (-cos t + sin t) + B e^(-2t) (-sin t - cos t)

What if A is n x n, but you cannot find n linearly independent eigenvectors? In this case, if λ is an eigenvalue of A of multiplicity k, but there are fewer than k linearly independent eigenvectors for A, then use generalized eigenvectors for λ.

For example, if λ is an eigenvalue of multiplicity 2, but it has at most one linearly independent eigenvector v, solve the equation (A - λI) w = v. Then a linearly independent pair of solution vectors corresponding to λ are e^(λt) v (as before) and e^(λt) w + t e^(λt) v. So as part of the general solution to (H) we will have

  C e^(λt) v + D e^(λt) (w + t v).

If λ is an eigenvalue of multiplicity 3 and one can only find two linearly independent eigenvectors v and z, then a third solution vector corresponding to λ is e^(λt) w + t e^(λt) v, where w is a solution to the equation (A - λI) w = v. So as part of the general solution to (H) we will have

  C e^(λt) v + D e^(λt) z + E e^(λt) (w + t v).

If λ is an eigenvalue of multiplicity 3, but it has at most one linearly independent eigenvector v, solve the equations (A - λI) w = v, (A - λI) z = w. Then three linearly independent solution vectors corresponding to λ are e^(λt) v (as before), e^(λt) w + t e^(λt) v, and e^(λt) z + t e^(λt) w + (t^2/2) e^(λt) v. So as part of the general solution to (H) we will have

  C e^(λt) v + D e^(λt) (w + t v) + E e^(λt) (z + t w + (t^2/2) v).

Examples worked in class (p. 298 and following in the Text).

(a) Find the general solution to the system

  x' = [ 1  -1 ] x
       [ 1   3 ]

(b) Find the general solution to the system x' = A x, for the 3 x 3 matrix A of the matching example in the Text.

Solution. (a) Here

  A = [ 1  -1 ]
      [ 1   3 ]

We first find the eigenvalues and corresponding eigenvectors of A. We have

  det(A - λI) = det [ 1-λ   -1  ] = (1-λ)(3-λ) + 1 = λ^2 - 4λ + 4 = (λ - 2)^2.
                    [  1   3-λ ]

Thus the only eigenvalue is λ = 2 (multiplicity 2). To find an eigenvector corresponding to λ = 2 we need to solve (A - 2I) x = 0. The usual Gauss elimination (done in class) yields x1 = s, x2 = -s, so we are only able to find at most one linearly independent eigenvector

  v = [  1 ]
      [ -1 ]

The recipe above now tells us to solve (A - 2I) x = v, that is,

  [ -1  -1 ] x = [  1 ]
  [  1   1 ]     [ -1 ]

The usual Gauss elimination (done in class) yields a solution to this of x1 = -1 - s, x2 = s. We can take s to be anything, say s = 0, giving a solution vector

  w = [ -1 ]
      [  0 ]

A general solution to the system is then C e^(λt) v + D e^(λt) (w + t v), or

  x = C e^(2t) [  1 ] + D e^(2t) ( [ -1 ] + t [  1 ] ),
               [ -1 ]             [  0 ]     [ -1 ]

where C, D are arbitrary constants. We can rewrite this as

  x1 = C e^(2t) + D e^(2t) (t - 1)
  x2 = -C e^(2t) - D t e^(2t)

(b) is very similar to (a) and is solved on page 299 of the online textbook. Or at least one can find there, in the second last line of that solution, a fundamental set of three solutions x1, x2, x3. The general solution is x = c1 x1 + c2 x2 + c3 x3.
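The eigenvalue computations in Examples 1 and 3 and in worked example (a) can all be confirmed with numpy.linalg. A sketch (not part of the original notes, assuming NumPy is available), including a check that the generalized-eigenvector solution e^(2t)(w + t v) really solves x' = A x:

```python
import numpy as np

# Example 1: A = [[1, -3], [-2, 2]] has eigenvalues -1 and 4.
A1 = np.array([[1.0, -3.0], [-2.0, 2.0]])
assert np.allclose(sorted(np.linalg.eigvals(A1)), [-1.0, 4.0])

# Example 3: A = [[-3, -1], [2, -1]] has eigenvalues -2 +/- i.
A3 = np.array([[-3.0, -1.0], [2.0, -1.0]])
lam3 = sorted(np.linalg.eigvals(A3), key=lambda z: z.imag)
assert np.allclose(lam3, [-2.0 - 1.0j, -2.0 + 1.0j])

# Worked example (a): A = [[1, -1], [1, 3]], lambda = 2 (twice), with
# eigenvector v = (1, -1) and generalized eigenvector w = (-1, 0).
A = np.array([[1.0, -1.0], [1.0, 3.0]])
v = np.array([1.0, -1.0])
w = np.array([-1.0, 0.0])
assert np.allclose(A @ v, 2 * v)                 # A v = 2 v
assert np.allclose((A - 2 * np.eye(2)) @ w, v)   # (A - 2I) w = v

# The second solution x(t) = e^{2t}(w + t v) satisfies x' = A x:
for t in np.linspace(0.0, 2.0, 5):
    x = np.exp(2 * t) * (w + t * v)
    xp = 2 * x + np.exp(2 * t) * v               # product rule, by hand
    assert np.allclose(xp, A @ x)
print("eigenvalue computations verified")
```

The last loop is exactly the algebra behind the recipe: x' = λ e^(λt)(w + t v) + e^(λt) v, while A x = e^(λt)(λ w + v + t λ v), and the two agree because (A - λI) w = v.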

There is another way to solve any linear system

  x' = A x,    (H)

without using eigenvalues and eigenvectors at all! But we will need

Matrix exponentials: if A is an n x n matrix, consider the sum

  I + A + (1/2!) A^2 + (1/3!) A^3 + (1/4!) A^4 + ...

This is the power series formula for e^A from Calculus II. It is not hard to show that this converges, in the sense of Calculus, to an n x n matrix which we write as exp(A). Or, simply get a computer to add the first 50 terms of this sum to approximate it with great precision.

Theorem. Let Ψ(t) = exp(tA). Then Ψ(t) is a fundamental matrix for (H). The n columns of this matrix are a fundamental set for (H). (Note that the jth column is Ψ(t) e_j.) So the general solution to (H) is

  x = Ψ(t) c,    c = [ c1 ]
                     [ c2 ]
                     [ .. ]
                     [ cn ]

Indeed, for any vector a, a solution to the IVP system which is (H) together with the initial condition x(0) = a, is Ψ(t) a.

Proof. One can show easily that d/dt (exp(tA)) = A exp(tA) = exp(tA) A. Thus if a is any vector, and if v = Ψ(t) a, then

  dv/dt = d/dt (exp(tA)) a = A exp(tA) a = A v.

Thus v = Ψ(t) a is a solution to the linear system x' = A x. Setting a = e_j, we see that the columns of Ψ(t) are solutions to (H). As we said earlier, to check that Ψ(t) is a fundamental matrix for (H) we need only check that Ψ(t) is invertible. However Ψ(t)^(-1) = Ψ(-t), since exp(tA) exp(-tA) = exp(tA - tA) = exp(0) = I.

Since Ψ(0) = I, if v = Ψ(t) a then v(0) = I a = a. Thus v is a solution to the linear IVP system x' = A x, x(0) = a. Similarly, a solution to the linear IVP system x' = A x, x(t0) = a is exp((t - t0) A) a = exp(tA) exp(-t0 A) a.

6.5 Solving nonhomogeneous systems

Here we are looking at

  x' = A x + b(t),    (N)

for t in an interval I. Here A is an n x n matrix with constants (numbers) as entries. Again we are fixing a number n throughout, and assume that we have n variables x1, ..., xn, each a function of t. The reduced equation or associated homogeneous equation for (N) is

  x' = A x.    (H)

Key fact: As in Chapters 3 and 5, finding the general solution to x' = A(t) x + b(t) breaks into two steps:

Step 1: find the general solution to the associated homogeneous equation (H).
Step 2: find one solution (called a particular solution) to (N).

Then add what you get in Steps 1 and 2. (The proof of this is the same as before.)

The proof is the same as we saw in Chapters 3 and 5: let L x = x' - A x. This is linear. So if x_p is a particular solution to (N) and x_H is a solution to (H), then L(x_p + x_H) = L x_p + L x_H = b + 0 = b. If z is any solution to (N), then z - x_p is a solution to (H), since L(z - x_p) = b - b = 0. So if x_H is this solution to (H), z = x_p + x_H.

The basic formula in this section is that a particular solution to the linear system x' = A(t) x + b(t) is given by the formula

  x = X(t) ∫_0^t X(s)^(-1) b(s) ds,

where X(t) is the fundamental matrix. This is variation of parameters, and a proof can be found on page 305 of the textbook. Let's see what this means, and how to use it in examples.

Example. Find the general solution to the linear system x' = A x + b(t), where

  A = [ 4  -3 ]   and   b(t) = [  t  ]
      [ 2  -1 ]               [ e^t ]

Note that this is the same as asking you to find the general solution to the linear system

  x1' = 4 x1 - 3 x2 + t
  x2' = 2 x1 - x2 + e^t

You need to be able to jump between both ways of writing the system.

Solution: First we solve the associated homogeneous system x' = A x, using the methods of 6.3. I will skip the working (see 6.3), and just write down the answer:

  x = c1 e^t [ 1 ] + c2 e^(2t) [ 3 ]
             [ 1 ]             [ 2 ]

Thus the fundamental matrix is

  X(t) = [ e^t  3 e^(2t) ]
         [ e^t  2 e^(2t) ]

(see the discussion in Section 6.2 above). To find a particular solution to the original system, by the basic formula above, we need to compute X(t) ∫_0^t X(s)^(-1) b(s) ds.

We have

  X(s) = [ e^s  3 e^(2s) ]
         [ e^s  2 e^(2s) ]

The determinant of this is 2 e^(3s) - 3 e^(3s) = -e^(3s). Thus

  X(s)^(-1) = -e^(-3s) [ 2 e^(2s)  -3 e^(2s) ] = [ -2 e^(-s)   3 e^(-s) ]
                       [  -e^s        e^s    ]   [  e^(-2s)   -e^(-2s)  ]

Hence

  X(s)^(-1) b(s) = [ -2 e^(-s)   3 e^(-s) ] [  s  ] = [ -2s e^(-s) + 3     ]
                   [  e^(-2s)   -e^(-2s)  ] [ e^s ]   [ s e^(-2s) - e^(-s) ]

Now integrate this:

  ∫_0^t (-2s e^(-s) + 3) ds = 2 e^(-t) (t + 1) + 3t - 2,

and

  ∫_0^t (s e^(-2s) - e^(-s)) ds = -e^(-2t) (t/2 + 1/4) + e^(-t) - 3/4.

(Check: this just uses Calculus integrals.) Thus

  ∫_0^t X(s)^(-1) b(s) ds = [ 2 e^(-t)(t + 1) + 3t - 2            ]
                            [ -e^(-2t)(t/2 + 1/4) + e^(-t) - 3/4  ]

Multiplying this by X(t) we get the particular solution

  [ t/2 + 5/4 + (3t + 1) e^t - (9/4) e^(2t) ]
  [ t + 3/2 + 3t e^t - (3/2) e^(2t)         ]

The last step here looks complicated, but it is just algebra. (Check it!) On the test, the algebra won't work out quite as ugly.

Thus by the Key Fact (mentioned at the start of the section above), the general solution to our original system is

  x = [ t/2 + 5/4 + (3t + 1) e^t - (9/4) e^(2t) ] + c1 e^t [ 1 ] + c2 e^(2t) [ 3 ]
      [ t + 3/2 + 3t e^t - (3/2) e^(2t)         ]          [ 1 ]             [ 2 ]

The general solution to equation (N) is

  x = X(t) c + X(t) ∫_0^t X(s)^(-1) b(s) ds,    c = (c1, c2, ..., cn)^T,

since the general solution to the reduced equation (H) is X(t) c.

It follows that the solution to the IVP which is equation (N) with initial condition x(a) = x0 is

  x = X(t) ∫_a^t X(s)^(-1) b(s) ds + X(t) X(a)^(-1) x0,

where X(t) is the fundamental matrix.

Proof: Setting t = a in the last formula, we get

  x(a) = X(a) ∫_a^a X(s)^(-1) b(s) ds + X(a) X(a)^(-1) x0 = 0 + I x0 = x0.

So the initial condition is satisfied. Also, that last formula equals

  X(t) ∫_0^t X(s)^(-1) b(s) ds + X(t) w + X(t) y = X(t) ∫_0^t X(s)^(-1) b(s) ds + X(t) c,

where

  w = -∫_0^a X(s)^(-1) b(s) ds,    y = X(a)^(-1) x0,    c = w + y.

But X(t) ∫_0^t X(s)^(-1) b(s) ds + X(t) c is of the form of the general solution to (N) in the second last bullet.
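Both of the main tools of these last sections lend themselves to machine checks. The sketch below (not part of the original notes; it assumes NumPy and SciPy are available) first confirms the matrix-exponential properties used in the Theorem of 6.3/6.4, using SciPy's expm, and then confirms that the particular solution computed in the worked Example really satisfies x' = A x + b(t):

```python
import numpy as np
from scipy.linalg import expm

# (i) Psi(t) = exp(tA) behaves as the Theorem says, for A = [[1, -3], [-2, 2]].
A = np.array([[1.0, -3.0], [-2.0, 2.0]])
t = 0.5
Psi = expm(t * A)
assert np.allclose(expm(0 * A), np.eye(2))              # Psi(0) = I
assert np.allclose(np.linalg.inv(Psi), expm(-t * A))    # Psi(t)^{-1} = Psi(-t)
h = 1e-6                                                # d/dt exp(tA) = A exp(tA)
dPsi = (expm((t + h) * A) - expm((t - h) * A)) / (2 * h)
assert np.allclose(dPsi, A @ Psi, atol=1e-4)

# (ii) The particular solution from the Example satisfies x' = B x + b(t),
# with B = [[4, -3], [2, -1]] and b(t) = (t, e^t).
B = np.array([[4.0, -3.0], [2.0, -1.0]])

def xp_fun(t):
    return np.array([t / 2 + 5 / 4 + (3 * t + 1) * np.exp(t) - 9 / 4 * np.exp(2 * t),
                     t + 3 / 2 + 3 * t * np.exp(t) - 3 / 2 * np.exp(2 * t)])

def xp_deriv(t):   # derivative of the particular solution, computed by hand
    return np.array([1 / 2 + (3 * t + 4) * np.exp(t) - 9 / 2 * np.exp(2 * t),
                     1 + (3 * t + 3) * np.exp(t) - 3 * np.exp(2 * t)])

for s in np.linspace(0.0, 1.0, 5):
    b = np.array([s, np.exp(s)])
    assert np.allclose(xp_deriv(s), B @ xp_fun(s) + b)
print("matrix exponential and variation-of-parameters checks pass")
```

If the ugly algebra on a test does not check out under a substitution like (ii), the error is almost always in the integration step or in the final multiplication by X(t).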