MIDTERM REVIEW AND SAMPLE EXAM


Abstract. These notes outline the material for the upcoming exam. Note that the review is divided into the two main topics we have covered thus far, namely, ordinary differential equations and linear algebra with applications to solving linear systems.

Contents
1. Ordinary Differential Equations
1.1. Reduction to a first-order system
1.2. First-order systems of ODEs: existence and uniqueness theory
1.3. Autonomous first-order systems: critical points, linearization, and stability analysis
1.4. Solving ODEs: analytic methods (and numerical methods, to be covered later)
2. Linear Algebra
2.1. Vectors and Matrices
2.2. Vector Spaces
2.3. Linear systems of equations
3. Midterm Sample Exam

1. Ordinary Differential Equations

An nth-order ordinary differential equation in implicit form reads

(1)  F(t, y, y', ..., y^(n)) = 0,   F : Ω_t × Ω → R^m,

where y ∈ R^m, Ω_t ⊆ R, Ω ⊆ R^(m(n+1)). Componentwise, the ODE system can be written as

  F_1(t, y, y', ..., y^(n)) = 0
  F_2(t, y, y', ..., y^(n)) = 0
  ...
  F_m(t, y, y', ..., y^(n)) = 0.

Date: October 18, 2012.

Equivalently, in explicit form the system y^(n) = f(t, y, y', ..., y^(n-1)) can be written in its component form as

(2)  y_1^(n) = f_1(t, y, y', ..., y^(n-1))
     y_2^(n) = f_2(t, y, y', ..., y^(n-1))
     ...
     y_m^(n) = f_m(t, y, y', ..., y^(n-1)),

where now f : Ω_t × Ω → R^m with y ∈ R^m, Ω_t ⊆ R, Ω ⊆ R^(mn). As an example, take m = n = 2; then

  y_1'' = f_1(t, y_1, y_2, y_1', y_2')
  y_2'' = f_2(t, y_1, y_2, y_1', y_2').

Definition 1.1 (Solutions of ODEs). A solution of the system of ODEs (2) is a function u : Ω_t → R^m such that u^(n) = f(t, u, u', ..., u^(n-1)).

Remark 1.2 (Initial conditions). Generally, the system (2) admits infinitely many solutions, depending on the initial conditions. To obtain a unique solution, the initial conditions must be specified.

Definition 1.3 (Initial Value Problem). An initial value problem (IVP) is a system of ODEs (2) together with appropriate initial conditions, y^(i)(t_0) = c_i ∈ R^m, i = 0, ..., n - 1.

1.1. Reduction to a first-order system. Here, we show that general nth-order systems of ODEs are equivalent to first-order systems of larger dimension. This, in turn, implies that we can work with first-order systems almost exclusively in analyzing and solving general nonlinear problems. An nth-order system of ODEs (2) can be written as a first-order nonlinear system as follows. Let z_1 = y and z_i := z_{i-1}', i = 2, ..., n. Then, the equivalent first-order system is

  z_1' = z_2
  z_2' = z_3
  ...
  z_{n-1}' = z_n
  z_n' = f(t, z_1, ..., z_n).

Note that z_i ∈ R^m for i = 1, ..., n. If we write the components of z_i as z_ij, j = 1, ..., m, then we can write the first-order system for z in terms of its m·n components as follows:

  z_ij' = z_{i+1,j},  i = 1, ..., n - 1,  j = 1, ..., m,
  z_nj' = f_j(t, z_11, ..., z_1m, ..., z_n1, ..., z_nm),  j = 1, ..., m.
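This reduction is exactly the form numerical ODE solvers expect as input. A minimal sketch in Python for the scalar case n = 2, m = 1 (the helper name `to_first_order` and the sample right-hand side are illustrative, not from the notes):

```python
import numpy as np

def to_first_order(f):
    """Given f(t, y, y') for the scalar second-order ODE y'' = f(t, y, y'),
    return the right-hand side F(t, z) of the equivalent first-order system
    z' = F(t, z), where z = (z1, z2) = (y, y')."""
    def F(t, z):
        z1, z2 = z
        return np.array([z2, f(t, z1, z2)])  # z1' = z2, z2' = f(t, z1, z2)
    return F

# illustrative right-hand side: y'' = t - y
F = to_first_order(lambda t, y, yp: t - y)
print(F(0.5, np.array([1.0, 0.0])))  # -> [ 0.  -0.5]
```

The same wrapper pattern generalizes to m > 1 by letting z1 and z2 be vectors.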

Example 1.4. Take n = 2 and m = 1; then we have the scalar second-order equation

  y'' = f(t, y, y'),   f : Ω_t × Ω → R,  Ω ⊆ R².

Note that, if we set f(t, y, y') = -(k/m_1) y, then we have the equation of a mass-spring system, m_1 y'' + k y = 0. To write the equation as a first-order system, we set z_1 = y and z_2 = z_1' = y', such that z_2' = y'' and the second-order ODE becomes

  z_1' = z_2
  z_2' = f(t, z_1, z_2) = -(k/m_1) z_1.

Thus, we have the equivalent linear first-order system

  z' = Az,   A = [ 0       1 ]
                 [ -k/m_1  0 ],   z = (z_1, z_2)^T = (y, y')^T.

Example 1.5. Take n = m = 2. Then, as mentioned above, we have a system of two second-order ODEs

  y_1'' = f_1(t, y_1, y_2, y_1', y_2')
  y_2'' = f_2(t, y_1, y_2, y_1', y_2').

We thus let z_11 = y_1, z_12 = y_2, z_21 = y_1', and z_22 = y_2' and arrive at the first-order system

  z_11' = z_21,
  z_12' = z_22,
  z_21' = f_1(t, z_11, z_12, z_21, z_22),
  z_22' = f_2(t, z_11, z_12, z_21, z_22).

An example of such a system is given by the mass-spring system of ODEs from the homework:

  m_1 y_1'' = -k_1 y_1 + k_2 (y_2 - y_1),
  m_2 y_2'' = -k_2 (y_2 - y_1),

or

  y_1'' = -(k_1/m_1) y_1 + (k_2/m_1)(y_2 - y_1),
  y_2'' = -(k_2/m_2)(y_2 - y_1).
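As a quick check of the single mass-spring example above, the matrix of the equivalent first-order system has purely imaginary eigenvalues ±i√(k/m_1), matching the undamped oscillatory solutions. A sketch (the values k = 2, m_1 = 1 are illustrative; numpy assumed):

```python
import numpy as np

# First-order form z' = A z of the mass-spring equation m1*y'' + k*y = 0.
k, m1 = 2.0, 1.0
A = np.array([[0.0, 1.0],
              [-k / m1, 0.0]])
lam = np.linalg.eigvals(A)

# Undamped oscillation: eigenvalues are +/- i*sqrt(k/m1), purely imaginary.
print(np.allclose(lam.real, 0.0))                                  # -> True
print(np.allclose(np.sort(lam.imag), [-np.sqrt(2.0), np.sqrt(2.0)]))  # -> True
```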

If we let m_1 = m_2 = 1, k_1 = 3, and k_2 = 2, we have

  y_1'' = -5y_1 + 2y_2,
  y_2'' = 2y_1 - 2y_2,

giving

  f_1(t, y_1, y_2, y_1', y_2') = -5y_1 + 2y_2,
  f_2(t, y_1, y_2, y_1', y_2') = 2y_1 - 2y_2.

Thus, the first-order system of ODEs for z becomes

  z_11' = z_21,
  z_12' = z_22,
  z_21' = -5z_11 + 2z_12,
  z_22' = 2z_11 - 2z_12.

Remark 1.6 (Reduction to a first-order system). To summarize, we have shown that in general an nth-order system of ODEs is equivalent to a first-order system of larger dimension. This, in turn, implies that we can work with first-order systems almost exclusively in analyzing and solving ODEs.¹ While in general we are only able to solve a small class of first-order systems analytically, numerical methods have been developed for a variety of first-order systems and provide tools for treating more general nonlinear problems.

1.2. First-order systems of ODEs: existence and uniqueness theory. Before reviewing various solution techniques, we give some definitions and state a general existence and uniqueness result for first-order systems of ODEs. We begin with the local existence and uniqueness result.

Theorem 1.7 (Local Existence and Uniqueness). Consider a first-order IVP

  y_1' = f_1(t, y_1, ..., y_n)
  ...
  y_n' = f_n(t, y_1, ..., y_n),   y(t_0) = y_0.

Assume that the f_i, i = 1, ..., n, are continuous functions with continuous partial derivatives ∂f_i/∂y_j, i, j = 1, ..., n, in some domain Ω_t × Ω such that (t_0, y_0) ∈ Ω_t × Ω, where Ω_t ⊆ R and Ω ⊆ R^n. Then, there exists a unique solution to the IVP for t_0 - α < t < t_0 + α, where α > 0.

¹ We note that in some special cases (e.g., second-order ODEs) it is preferable to work directly with the given equation. As we have seen in the homework (see also the sample exam), the solution to second-order linear systems with constant coefficients can be found by substituting y = x e^(ωt) into the system of ODEs, which then reduces solving the system to solving an eigenvalue problem.

Note that the result of the theorem is referred to as a local result since it only holds in some neighborhood of t_0.

Example 1.8. Consider the first-order IVP y' = f(t, y), y(t_0) = y_0. Then, we have the following three possibilities:

- The IVP has a unique solution near (t_0, y_0) if f, ∂f/∂t, and ∂f/∂y are continuous near (t_0, y_0). As an example, consider the equation y' = y², y(0) = 1, which has the unique solution y = 1/(1 - t) for t < 1, but blows up as t → 1.

- The IVP has a solution but it is not unique if only f is continuous near (t_0, y_0), and not ∂f/∂t and ∂f/∂y. As an example, consider the equation y' = √y, y(0) = 0, which has the solutions y = 0 and y = t²/4.

- The IVP does not have a solution. As an example, consider the equation |y'| + |y| = 0, y(0) = c, which has no solution when c ≠ 0.

1.2.1. Linear first-order systems of ODEs. Linear first-order systems are an important class of ODEs since they are well understood theoretically. Moreover, the established techniques that have been developed for first-order linear problems can be used to analyze and solve more general nonlinear and higher-order problems.

Definition 1.9 (Linear ODE systems). A first-order system of ODEs is linear if it can be written as

  y' = A(t)y + g(t),

where A(t) = [a_ij(t)]. Equivalently, we can write this system as

  y_1' = a_11(t)y_1 + ... + a_1n(t)y_n + g_1(t)
  ...
  y_n' = a_n1(t)y_1 + ... + a_nn(t)y_n + g_n(t).

Recall that the system is homogeneous if g(t) = 0.

Definition 1.10 (General solution of the nonhomogeneous problem). The general solution to a nonhomogeneous system of linear ODEs is y = y^(h) + y^(p), where y^(h) is a solution to the homogeneous problem and y^(p) is any solution to the nonhomogeneous problem.
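The blow-up example above can be verified directly: a short sketch that checks y(t) = 1/(1 - t) satisfies y' = y² on t < 1 using a centered finite difference (the tolerance is illustrative):

```python
def y(t):
    # claimed solution of y' = y^2, y(0) = 1; valid only for t < 1
    return 1.0 / (1.0 - t)

def residual(t, h=1e-6):
    yprime = (y(t + h) - y(t - h)) / (2.0 * h)  # centered difference
    return abs(yprime - y(t) ** 2)

print(y(0.0))                 # -> 1.0 (initial condition)
print(residual(0.5) < 1e-4)   # -> True: the ODE holds away from t = 1
```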

Remark 1.11 (Principle of Superposition). The set of all solutions to the homogeneous system, y' - Ay = 0, forms a vector space, which yields the so-called principle of superposition. That is, (1) y = 0 is a solution to the homogeneous system, and (2) if y^(1), y^(2) are solutions to the homogeneous system, then

  (αy^(1) + βy^(2))' - A(αy^(1) + βy^(2)) = α((y^(1))' - Ay^(1)) + β((y^(2))' - Ay^(2)) = α·0 + β·0 = 0,

where α, β ∈ R are arbitrary constants. Thus, applying this result n - 1 times gives the principle of superposition: if y^(1), ..., y^(n) are solutions to the linear homogeneous system, then so is

  y = c_1 y^(1) + c_2 y^(2) + ... + c_n y^(n),

where c_1, ..., c_n are arbitrary constants.

Definition 1.12 (General solution of the homogeneous problem). The general solution to a homogeneous system of linear ODEs is y = c_1 y^(1) + c_2 y^(2) + ... + c_n y^(n), where y^(1), ..., y^(n) form a basis (fundamental solution set) of the system.

Definition 1.13 (Linear independence). The solutions (functions of t) y^(1), ..., y^(n) are linearly independent if and only if

  W(t) = det [ y_1^(1)(t)  y_1^(2)(t)  ...  y_1^(n)(t) ]
             [ y_2^(1)(t)  y_2^(2)(t)  ...  y_2^(n)(t) ]
             [    ...         ...             ...      ]
             [ y_n^(1)(t)  y_n^(2)(t)  ...  y_n^(n)(t) ]  ≠ 0

for some t ∈ I.

Example 1.14. For a 2 × 2 system,

  W(t) = det [ y_1^(1)(t)  y_1^(2)(t) ]
             [ y_2^(1)(t)  y_2^(2)(t) ],

which is the same as the Wronskian for a second-order linear ODE since, as seen in Example 1.4, this equation can be reduced to a 2 × 2 system. This follows since the components of each solution of the reduced 2 × 2 system are y and y'.

Remark 1.15. Given the superposition principle for homogeneous problems, it follows that y = y^(h) + y^(p) satisfies the nonhomogeneous problem:

  (y^(h) + y^(p))' - A(y^(h) + y^(p)) = ((y^(h))' - Ay^(h)) + ((y^(p))' - Ay^(p)) = 0 + g.
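The Wronskian test for linear independence can be illustrated on a familiar pair: cos t and sin t, the fundamental solutions of y'' + y = 0 written as a 2 × 2 system (this particular example is an illustrative choice, not from the notes):

```python
import math

def wronskian(t):
    # y^(1) = (cos t, -sin t) and y^(2) = (sin t, cos t) solve the 2x2
    # system equivalent to y'' + y = 0
    y1, y1p = math.cos(t), -math.sin(t)
    y2, y2p = math.sin(t), math.cos(t)
    return y1 * y2p - y2 * y1p   # = cos^2 t + sin^2 t = 1

print(wronskian(0.0), wronskian(0.7))  # nonzero for every t: a fundamental set
```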

Remark 1.16 (Existence and Uniqueness: Linear ODE Systems). The existence and uniqueness theorem simplifies in the case of a first-order linear system of ODEs. Here we note that ∂f_i/∂y_j = a_ij(t), i, j = 1, ..., n. The theorem thus reduces to the following result.

Theorem 1.17. Let a_ij(t), i, j = 1, ..., n, be continuous functions of t on some interval I = (α, β), with t_0 ∈ I. Then there exists a unique solution to the IVP on the interval I.

Example 1.18. Consider the scalar first-order linear IVP: y' + p(t)y = q(t), y(t_0) = c_0. By the theorem, if p(t) and q(t) are continuous on an interval I containing t_0, then there exists a unique solution to the IVP on I.

Example 1.19. Consider the scalar second-order linear IVP: y'' + p(t)y' + q(t)y = g(t), y(t_0) = c_1, y'(t_0) = c_2. This problem can be reduced to a first-order system, and thus by the theorem, if p(t), q(t), and g(t) are continuous on an interval I containing t_0, then there exists a unique solution to the IVP on I.

Remark 1.20 (Linear ODE). There are various interpretations of the equation y' + p(t)y = q(t), y(t_0) = y_0, that lead to some additional useful insights:

- Physically, we can interpret the equation as prescribing the velocity of a point particle moving on a line in time, t.

- Geometrically, we can interpret the equation as specifying the slope of the graph of a function y(t). If the slope is plotted pointwise as a vector field (direction field), then the solution curves must be tangent to the direction field. Note that the slope is constant along curves f(t, y) = c, called the isoclines.

- We can write the solution to the equation explicitly as y = y^(h) + y^(p), where y^(h) solves the homogeneous problem y' + p(t)y = 0 and y^(p) is any solution to the nonhomogeneous problem y' + p(t)y = q(t).

This follows because the equation is linear:

  y' + p(t)y = (y^(h) + y^(p))' + p(t)(y^(h) + y^(p)) = ((y^(h))' + p(t)y^(h)) + ((y^(p))' + p(t)y^(p)) = 0 + q(t).

We solve the homogeneous problem using separation of variables, which gives the general solution

  y^(h)(t) = c e^(-∫p(t)dt).

To find a particular solution of the nonhomogeneous equation we use an integrating factor µ = e^(∫p(t)dt) and obtain

  y^(p) = e^(-∫p(t)dt) ∫ e^(∫p(t)dt) q(t) dt.

Thus, the general solution of the nonhomogeneous problem is

  y = y^(h) + y^(p) = c e^(-∫p(t)dt) + e^(-∫p(t)dt) ∫ e^(∫p(t)dt) q(t) dt.

1.3. Autonomous first-order systems: critical points, linearization, and stability analysis. An autonomous first-order system of nonlinear ODEs can be linearized using Taylor expansion, and under appropriate assumptions the type and stability of the critical points of the nonlinear system can be analyzed using the resulting linear system. The following discussion summarizes this result.

Definition 1.21 (Autonomous first-order system). An autonomous nonlinear first-order system is given by y' = f(y), where the right-hand side f does not depend explicitly on t.

1.3.1. Critical points via the phase plane method and linearization.

Definition 1.22 (Critical points). The critical points, y_c, of the autonomous nonlinear system y' = f(y) are points for which f is undefined or that satisfy f(y_c) = 0.

As shown in the sample exam, we can assume that y_c = 0. Applying Taylor expansion near the critical point we have

  f(y) = f(0) + J_f(0)(y - 0) + h.o.t.,   J_f(0) = [ ∂f_1/∂y_1 ... ∂f_1/∂y_n ]
                                                   [    ...          ...     ]
                                                   [ ∂f_n/∂y_1 ... ∂f_n/∂y_n ].

Now, since f(0) = 0, if we let A = J_f(0) and let h(y) denote the higher-order terms, then we can write the autonomous system as

  y' = f(y) = Ay + h(y).
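The linearization step A = J_f(0) can be sketched numerically with a finite-difference Jacobian. The pendulum-style right-hand side below is an illustrative choice, not an example from the notes:

```python
import numpy as np

def jacobian(f, yc, h=1e-6):
    """Finite-difference approximation of the Jacobian J_f(yc), used to
    linearize y' = f(y) about a critical point yc: y' ~ J_f(yc)(y - yc)."""
    n = len(yc)
    J = np.zeros((n, n))
    for j in range(n):
        e = np.zeros(n)
        e[j] = h
        J[:, j] = (f(yc + e) - f(yc - e)) / (2.0 * h)
    return J

# illustrative autonomous system: f(y) = (y2, -sin y1), critical point at 0
f = lambda y: np.array([y[1], -np.sin(y[0])])
A = jacobian(f, np.zeros(2))
print(np.allclose(A, [[0.0, 1.0], [-1.0, 0.0]], atol=1e-6))  # -> True
```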

If we drop the function h(y), we obtain the linearized system y' = Ay, valid near the origin. We have the following result concerning the use of this approximation.

Theorem 1.23 (Linearization). Consider the autonomous first-order system y' = f(y). If the f_i, i = 1, ..., n, are continuous and have continuous partial derivatives in a neighborhood of the critical point y_c, and det A ≠ 0, then the kind and stability of the critical points of the nonlinear system are the same as those of the system y' = Ay obtained by linearization. We note that exceptions occur when the eigenvalues of A are purely imaginary.

This result requires analysis of the critical points of the linearized system y' = Ay, which we review next for n = 2. We note that a similar analysis can be conducted for general n × n systems.

Remark 1.24. We note that in our analysis we use the phase plane method, in which we consider the components of the solution, y_1(t) and y_2(t), as defining parametric curves in the y_1 y_2-plane (the phase plane). If we plot all such trajectories for a given ODE system, then we obtain the phase portrait. Note that y_1 = y_2 = 0 is a critical point of the system since the slope of the trajectory at the critical point is undefined:

  dy_2/dy_1 = y_2'/y_1' = (a_21 y_1 + a_22 y_2)/(a_11 y_1 + a_12 y_2) = 0/0.

1.3.2. Classification of critical points. To determine the type of each critical point we compute the eigenpairs (λ, x) of A to find the general solution to the homogeneous system y' = Ay, and then study the behavior as t → ±∞. There are a total of 4 cases to consider. Examples of finding the solution to such systems for the case of a center and a saddle point are provided in the sample exam.

1) Node: λ_1, λ_2 ∈ R and λ_1 λ_2 > 0. We call the node proper if all the trajectories have a distinct tangent at the origin; in this case we have λ_1 = λ_2. The node is improper if all trajectories have the same tangent at the origin, except for two of them; in this case, λ_1 ≠ λ_2. The node is degenerate if A has only a single eigenvector. In this case, we solve for the first eigenvector x and then solve for the generalized eigenvector u by solving the system (A - λI)u = x. Note that the eigenvectors are linearly independent provided A is symmetric or skew-symmetric.

2) Saddle point: λ_1, λ_2 ∈ R and λ_1 λ_2 < 0. In this case, we have two incoming and two outgoing trajectories; all others miss the origin.

3) Center: λ_1, λ_2 ∈ C with λ_1 = iµ, λ_2 = -iµ. The trajectories are closed curves around the origin.

4) Spiral: λ_1, λ_2 ∈ C and Re(λ_i) ≠ 0, i = 1, 2. Here the trajectories spiral to or from the origin.
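These eigenvalue cases translate directly into a small classifier. A sketch for the generic cases with det A ≠ 0 (the function name and tolerance are illustrative):

```python
import numpy as np

def classify(A, tol=1e-12):
    """Type of the critical point at the origin of y' = A y for a 2x2 A,
    following the four eigenvalue cases; assumes det(A) != 0."""
    l1, l2 = np.linalg.eigvals(np.asarray(A, dtype=float))
    if abs(l1.imag) > tol:                      # complex conjugate pair
        return "center" if abs(l1.real) <= tol else "spiral"
    if l1.real * l2.real > 0:
        return "node"
    return "saddle point"

print(classify([[0, 1], [-2, 0]]))   # eigenvalues +/- i*sqrt(2): center
print(classify([[1, 0], [0, -1]]))   # real, opposite signs: saddle point
print(classify([[-2, 0], [0, -1]]))  # real, same sign: node
```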

Remark 1.25. The eigenvalues are the roots of the characteristic polynomial of

  A = [ a_11  a_12 ]
      [ a_21  a_22 ].

That is, the eigenvalues λ satisfy

  det(A - λI) = det [ a_11 - λ  a_12     ]
                    [ a_21      a_22 - λ ]
              = (a_11 - λ)(a_22 - λ) - a_21 a_12
              = λ² - trace(A)λ + det(A) = 0,

where trace(A) = a_11 + a_22 and det(A) = a_11 a_22 - a_21 a_12. Now, the roots λ_1 and λ_2 satisfy

  (λ - λ_1)(λ - λ_2) = λ² - (λ_1 + λ_2)λ + λ_1 λ_2 = 0.

Hence, we have that trace(A) = λ_1 + λ_2 and det(A) = λ_1 λ_2, where

  λ_1 = (p + √Δ)/2   and   λ_2 = (p - √Δ)/2,

with p = trace(A), q = det(A), and Δ = p² - 4q. The type of the critical point can be categorized according to the quantities p, q, and Δ. The following table summarizes this classification.

  Type          | p = λ_1 + λ_2 | q = λ_1 λ_2 | Δ = (λ_1 - λ_2)² | Eigenvalues
  Node          |               | q > 0       | Δ ≥ 0            | real, same sign
  Saddle point  |               | q < 0       |                  | real, opposite sign
  Center        | p = 0         | q > 0       |                  | purely imaginary
  Spiral        | p ≠ 0         |             | Δ < 0            | complex, not purely imaginary

  Table 1. Eigenvalue criteria for critical points.

1.3.3. Stability analysis for 2 × 2 autonomous systems. The stability of critical (fixed) points of a system of constant-coefficient linear autonomous differential equations of first order can be analyzed using the eigenvalues of the corresponding matrix A.

Definition 1.26. The autonomous system has a constant solution, an equilibrium point of the corresponding dynamical system. This solution is

1) asymptotically stable as t → ∞ ("in the future") if and only if for all eigenvalues λ of A, Re(λ) < 0;

2) asymptotically stable as t → -∞ ("in the past") if and only if for all eigenvalues λ of A, Re(λ) > 0;

Figure 1. Classification of equilibrium points of a linear autonomous system. These profiles also arise for nonlinear autonomous systems in linearized approximations.

3) unstable if there exists an eigenvalue λ of A with Re(λ) > 0 as t → ∞.

The stability of a critical point can also be categorized according to the values of p = trace(A), q = det(A), and Δ = p² - 4q. The following table summarizes the classification.

  Type of Stability     | p = λ_1 + λ_2  | q = λ_1 λ_2
  Asymptotically stable | p < 0          | q > 0
  Stable                | p ≤ 0          | q > 0
  Unstable              | p > 0 or q < 0 |

  Table 2. Criteria for stability.
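The trace/determinant stability criteria translate directly into code. A minimal sketch (names illustrative), with p = trace(A) and q = det(A):

```python
def stability(p, q):
    """Stability of the equilibrium at the origin of y' = A y, from
    p = trace(A) and q = det(A)."""
    if p < 0 and q > 0:
        return "asymptotically stable"
    if p <= 0 and q > 0:
        return "stable"
    return "unstable"

print(stability(-3.0, 2.0))  # both eigenvalues in the left half-plane
print(stability(0.0, 2.0))   # a center: stable but not asymptotically stable
print(stability(1.0, 2.0))   # an eigenvalue with positive real part
```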

1.4. Solving ODEs: analytic methods (and numerical methods, to be covered later). Here, we describe various techniques for solving ODEs. We begin by reviewing methods for solving a single ODE, or scalar differential equation. We then proceed to solving systems of ODEs. A list of the techniques we covered in this course is as follows:

- Linear first-order ODEs: separation of variables.
- Nonlinear first-order ODEs: exact equations and integrating factors, linearization, and reduction to linear form.
- Linear first-order constant-coefficient (systems of) ODEs: the general solution of the nonhomogeneous problem, the homogeneous solution and the eigenproblem, and the particular solution via the methods of undetermined coefficients and variation of parameters.
- Numerical methods for solving ODEs: Euler's method as a simple example.

1.4.1. Nonlinear first-order ODEs. Some nonlinear ODEs can be reduced to a linear ODE, for example, the first-order Bernoulli equation

  y' + p(t)y = g(t)y^α,   α ∈ R.

If we take u(t) = [y(t)]^(1-α), then u'(t) = (1 - α)y(t)^(-α) y'. Substituting into the ODE gives

  u'(t) = (1 - α)y(t)^(-α)(g y^α - p y) = (1 - α)(g - p u),

or

  u' + (1 - α)p u = (1 - α)g,

which is a first-order linear equation for u. An important example of the Bernoulli equation results when we set α = 2, p(t) = -A, and g(t) = -B, in which case we have

  y' = Ay - By².

The equation for u is then

  u'(t) + Au = B,

which has solution

  u(t) = c e^(-At) + B/A,

implying

  y = 1/u = 1/(c e^(-At) + B/A).
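The closed-form solution just derived can be checked numerically against the ODE y' = Ay - By² (the constants A, B, c below are illustrative):

```python
import math

A, B, c = 1.0, 2.0, 3.0

def y(t):
    # solution of y' = A*y - B*y^2 obtained via the substitution u = 1/y
    return 1.0 / (c * math.exp(-A * t) + B / A)

def residual(t, h=1e-6):
    yprime = (y(t + h) - y(t - h)) / (2.0 * h)   # centered difference
    return abs(yprime - (A * y(t) - B * y(t) ** 2))

print(residual(0.4) < 1e-6)  # -> True: the ODE is satisfied
```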

1.4.2. Linear second-order ODEs: the general solution. Here, we consider the linear second-order IVP

  y'' + p(t)y' + q(t)y = g(t),   y(t_0) = c_1,  y'(t_0) = c_2.

We assume that p, q, g are continuous functions on some interval I containing t_0, such that there exists a unique solution y.

Remark 1.27. The solution set of the homogeneous problem forms a vector space; that is, (1) y = 0 is a solution and, (2) if α, β ∈ R and x and y are solutions to the homogeneous problem, then αx + βy is a solution:

  (αx + βy)'' + p(αx + βy)' + q(αx + βy) = α(x'' + px' + qx) + β(y'' + py' + qy) = α·0 + β·0 = 0.

Remark 1.28 (General solution). All solutions of the homogeneous problem can be written as y = c_1 y^(1) + c_2 y^(2), where c_1, c_2 ∈ R are arbitrary constants that are uniquely determined by the initial conditions, provided that the solutions y^(1) and y^(2) form a basis (or fundamental system). This, in turn, holds true if and only if y^(1) and y^(2) are linearly independent, i.e., the Wronskian

  W(t) = y^(1)(t)(y^(2))'(t) - y^(2)(t)(y^(1))'(t) ≠ 0.

We note that it is sufficient to check that this condition holds for any single value of t.

Example 1.29 (Constant coefficients). Consider the case of a homogeneous second-order linear constant-coefficient ODE: y'' + ay' + by = 0. Then, we derive the solution by substituting y = e^(λt) into the ODE, which gives (after canceling the common exponential term) the characteristic (quadratic) polynomial

  λ² + aλ + b = 0,

whose two roots, λ_1, λ_2, give us a solution of the form y = c_1 e^(λ_1 t) + c_2 e^(λ_2 t). There are three possible cases for the roots

  λ_1 = -a/2 + √(a² - 4b)/2,   λ_2 = -a/2 - √(a² - 4b)/2.

1) Two distinct real roots, λ_1 ≠ λ_2 ∈ R. In this case, the solution is y = c_1 e^(λ_1 t) + c_2 e^(λ_2 t).

2) A double real root, λ_1 = λ_2 = -a/2 ∈ R. In this case, the solution is y = c_1 e^(-at/2) + c_2 t e^(-at/2).

3) Two complex conjugate roots, λ_1 = -a/2 + iµ, λ_2 = -a/2 - iµ. In this case, µ = √(4b - a²)/2 > 0 and the solution is

  y = e^(-at/2)(c_1 cos(µt) + c_2 sin(µt)).

Here, Euler's formula was used: e^(a+ib) = e^a e^(ib) = e^a (cos b + i sin b).

Example 1.30 (Euler-Cauchy equation). Another important second-order linear ODE is the Euler-Cauchy equation:

  t²y'' + aty' + by = 0.

This equation can be reduced to a constant-coefficient problem by substituting y = t^r, since then y' = rt^(r-1) and y'' = r(r-1)t^(r-2), implying that

  r(r - 1) + ar + b = r² + (a - 1)r + b = 0.

The roots r_1 and r_2 of this quadratic polynomial give the solutions to the equation:

  y = c_1 t^(r_1) + c_2 t^(r_2).

Given the solution to the homogeneous problem, one can find the general solution to the corresponding nonhomogeneous ODE using various techniques, for example the methods of undetermined coefficients and variation of parameters. We review these techniques for ODE systems in the next section, noting that they can also be applied in the case of a scalar equation in a similar way.

1.4.3. Systems of linear constant-coefficient ODEs. Here, we consider solving constant-coefficient linear ODE systems

  y' = Ay + g(t),   A ∈ R^(n×n),  y ∈ R^n.

As discussed in Section 1.2, we solve for the general solution of the nonhomogeneous system, y = y^(h) + y^(p), by first computing y^(h), the solution of the homogeneous problem, and then using the methods of undetermined coefficients or variation of parameters to find y^(p), the particular solution. Examples of how to use the latter methods to find y^(p) are found in the sample exam. The general solution of the homogeneous system is given by (see Definition 1.12)

  y = c_1 x^(1) e^(λ_1 t) + c_2 x^(2) e^(λ_2 t) + ... + c_n x^(n) e^(λ_n t),

where λ_1, λ_2, ..., λ_n are the eigenvalues of A, i.e., the roots of the characteristic polynomial det(A - λI) (a polynomial of degree n in λ), and x^(1), ..., x^(n) are the corresponding eigenvectors. We note that if the λ_i, i = 1, ..., n, are distinct, then one can show that the eigenvectors are linearly independent.
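The eigenpair formula for the homogeneous solution is easy to exercise: build y(t) = Σ_i c_i x^(i) e^(λ_i t) from an eigendecomposition and check y' = Ay by a finite difference. The 2 × 2 matrix below is an illustrative choice:

```python
import numpy as np

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])        # eigenvalues -1 and -2
lam, X = np.linalg.eig(A)           # columns of X are the eigenvectors x^(i)

def y(t, c):
    # general homogeneous solution: sum_i c_i * x^(i) * exp(lam_i * t)
    return (X * np.exp(lam * t)) @ c

c = np.array([1.0, -2.0])
t, h = 0.3, 1e-6
deriv = (y(t + h, c) - y(t - h, c)) / (2.0 * h)
print(np.allclose(deriv, A @ y(t, c), atol=1e-4))  # -> True
```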

2. Linear Algebra

We first state basic definitions and axioms for vectors and matrices. Then, we review some related concepts from linear algebra and apply these ideas to the solution of linear systems.

2.1. Vectors and Matrices. We consider rectangular matrices

  A = [ a_11 ... a_1n ]
      [  ...      ... ]
      [ a_m1 ... a_mn ]  ∈ R^(m×n).

Note that when n = 1 we obtain a column vector

  x = (x_1, ..., x_m)^T ∈ R^m,

and when m = 1 we obtain a row vector

  y = (y_1, ..., y_m).

The basic operations of addition and multiplication among constants α ∈ R, vectors x ∈ R^n, and matrices A ∈ R^(m×n) are as follows:

- Addition of two matrices, C = A + B ∈ R^(m×n), is defined elementwise, c_ij = a_ij + b_ij, and results in the matrix

  A + B = [ a_11 + b_11 ... a_1n + b_1n ]
          [     ...            ...      ]
          [ a_m1 + b_m1 ... a_mn + b_mn ].

- Multiplication of a matrix, A, by a constant, α, is defined elementwise and results in the matrix αA = [αa_ij], i = 1, ..., m, j = 1, ..., n.

- Multiplication of a vector, x, by a matrix, A, results in the vector

  Ax = ( Σ_{j=1}^n a_1j x_j,  Σ_{j=1}^n a_2j x_j,  ...,  Σ_{j=1}^n a_mj x_j )^T ∈ R^m.

- Multiplication of a matrix A ∈ R^(m×n) by a matrix B ∈ R^(n×k) results in the matrix

  AB = [ Σ_{l=1}^n a_1l b_l1 ... Σ_{l=1}^n a_1l b_lk ]
       [        ...                    ...           ]
       [ Σ_{l=1}^n a_ml b_l1 ... Σ_{l=1}^n a_ml b_lk ]  ∈ R^(m×k).

Note that in general AB ≠ BA.
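The non-commutativity note is worth seeing concretely (a throwaway numpy check; the matrices are illustrative):

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])
B = np.array([[0, 1],
              [1, 0]])   # permutation matrix: swaps coordinates

print(A @ B)   # swaps the columns of A
print(B @ A)   # swaps the rows of A
print(np.array_equal(A @ B, B @ A))  # -> False: in general AB != BA
```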

Given the matrix

  A = [ a_11 a_12 ... a_1n ]
      [ a_21 a_22 ... a_2n ]
      [  ...           ... ]
      [ a_m1 a_m2 ... a_mn ],

its transpose is defined as

  A^T = [ a_11 a_21 ... a_m1 ]
        [ a_12 a_22 ... a_m2 ]
        [  ...           ... ]
        [ a_1n a_2n ... a_mn ].

Note that (AB)^T = B^T A^T. There are several classes of matrices that arise often in practice:

1) A square matrix, D, is diagonal if d_ij = 0 for i ≠ j:

  D = [ d_11  0   ...  0   ]
      [  0   d_22 ...  0   ]
      [ ...            ... ]
      [  0    0   ... d_nn ].

2) The identity matrix, I, is a diagonal matrix where all the diagonal elements are equal to one.

3) A square matrix is symmetric if A^T = A, i.e., a_ij = a_ji, i, j = 1, ..., n.

4) A square matrix is skew-symmetric if A^T = -A, i.e., a_ij = -a_ji, i, j = 1, ..., n. Note that skew-symmetric matrices have a zero diagonal, since a_ii = -a_ii = 0, i = 1, ..., n.

5) An upper triangular matrix, U, is defined by u_ij = 0 for i > j:

  U = [ u_11 u_12 ... u_1n ]
      [  0   u_22 ... u_2n ]
      [ ...            ... ]
      [  0    0   ... u_nn ].
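The transpose identity (AB)^T = B^T A^T can be spot-checked on non-square factors (an illustrative sketch):

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])      # 2x3
B = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])           # 3x2

# Note the reversed order: (AB)^T equals B^T A^T, not A^T B^T.
print(np.array_equal((A @ B).T, B.T @ A.T))  # -> True
```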

6) A lower triangular matrix, L, is defined by l_ij = 0 for j > i:

  L = [ l_11  0   ...  0   ]
      [ l_21 l_22 ...  0   ]
      [ ...            ... ]
      [ l_n1 l_n2 ... l_nn ].

Definition 2.1. The inverse of a square matrix, A, is denoted by A^(-1) and satisfies A^(-1)A = AA^(-1) = I.

Example 2.2. Consider the case where n = 2, such that

  A = [ a_11  a_12 ]
      [ a_21  a_22 ].

The inverse of A is

  A^(-1) = (1/det(A)) [ a_22  -a_12 ]
                      [ -a_21  a_11 ],

where det(A) = a_11 a_22 - a_21 a_12 is the determinant of the matrix. This implies that A^(-1) exists if and only if det(A) ≠ 0. To check that this is indeed the inverse of A, we compute

  A^(-1)A = (1/det(A)) [ a_22  -a_12 ] [ a_11  a_12 ]
                       [ -a_21  a_11 ] [ a_21  a_22 ]
          = (1/det(A)) [ a_22 a_11 - a_12 a_21     a_22 a_12 - a_12 a_22   ]
                       [ -a_21 a_11 + a_11 a_21   -a_21 a_12 + a_11 a_22   ]
          = [ 1  0 ]
            [ 0  1 ].

Theorem 2.3. The result in the example for n = 2 also holds for general matrices A ∈ R^(n×n); that is, A^(-1) exists if and only if det(A) ≠ 0.

2.2. Vector Spaces.

Definition 2.4. A vector space, V, is a mathematical structure formed by a collection of elements called vectors, which may be added together and multiplied ("scaled") by numbers, called scalars. Note that the elements of a vector space need not be vectors v ∈ R^m; they can also be functions, matrices, etc. The operations of vector addition and scalar multiplication must satisfy certain requirements, called axioms, listed below. Let u, v, w ∈ V be arbitrary vectors and let α, β ∈ K be arbitrary scalars.

1) Associativity of addition: u + (v + w) = (u + v) + w.

2) Commutativity of addition: u + v = v + u.

3) Identity element of addition: there exists an element 0 ∈ V, called the zero vector, such that v + 0 = v for all v ∈ V.

4) Inverse elements of addition: for every v \in V, there exists an element -v \in V, called the additive inverse of v, such that v + (-v) = 0.
5) Distributivity of scalar multiplication with respect to vector addition: \alpha(u + v) = \alpha u + \alpha v.
6) Distributivity of scalar multiplication with respect to field addition: (\alpha + \beta)v = \alpha v + \beta v.
7) Compatibility of scalar multiplication: \alpha(\beta v) = (\alpha\beta)v.
8) Identity element of scalar multiplication: 1v = v, where 1 denotes the multiplicative identity in the field K.

The requirement that vector addition and scalar multiplication be (external) binary operations includes (by definition of binary operations) a property called closure: u + v and \alpha v are in V for all \alpha \in K and u, v \in V. This follows since a binary operation on a set is a calculation involving two elements of the set (called operands) that produces another element of the set, in this case V. For the field K, the notion of an external binary operation is needed, which is defined as a binary function from K \times S to S.

Examples of vector spaces are V = R^m and V = \{p(x) : p(x) = \sum_i \alpha_i x^i\}. The latter polynomial space is infinite dimensional, whereas the former Euclidean space is finite dimensional.

Definition 2.5. A nonempty subset W of a vector space V that is closed under addition and scalar multiplication (and therefore contains the 0-vector of V) is called a subspace of V.

Remark 2.6. To prove that W is a subspace, it is sufficient to prove that 1) 0 \in W, and 2) for any u, v \in W and any \alpha, \beta \in K, \alpha u + \beta v \in W.

Definition 2.7. If S = \{v_1, ..., v_n\} is a finite subset of elements of a vector space V, then the span is
    span(S) = \{u : u = \sum_{i=1}^n \alpha_i v_i, \ \alpha_i \in K\}.
The span of S may also be defined as the set of all linear combinations of the elements of S, which follows from the above definition.

Definition 2.8. A basis B of a vector space V over a field K is a linearly independent subset of V that spans V. In more detail, suppose that B = \{v_1, ..., v_n\} is a finite subset of a vector space V over a field K.
Then, B is a basis if it satisfies the following conditions:
1) The linear independence property: if \alpha_1 v_1 + \cdots + \alpha_n v_n = 0, then \alpha_1 = \cdots = \alpha_n = 0; and
2) The spanning property: for every v \in V it is possible to choose \alpha_1, ..., \alpha_n \in K such that v = \alpha_1 v_1 + \cdots + \alpha_n v_n.
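The two defining properties of a basis can be checked numerically. The sketch below (using NumPy; the vectors are an illustrative choice, not taken from the notes) verifies linear independence via the matrix rank and recovers the coordinates \alpha_i of a vector by solving a linear system.

```python
import numpy as np

# Columns are the candidate basis vectors v1, v2, v3 of R^3 (illustrative)
B = np.array([[1.0, 0.0, 1.0],
              [1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])

# Linear independence: alpha_1 v_1 + ... + alpha_n v_n = 0 only for alpha = 0,
# which for a square matrix is equivalent to full rank
assert np.linalg.matrix_rank(B) == 3

# Spanning property / coordinates: any v in R^3 equals B @ alpha for a
# unique coefficient vector alpha
v = np.array([2.0, 3.0, 5.0])
alpha = np.linalg.solve(B, v)        # coordinates of v with respect to B
assert np.allclose(B @ alpha, v)     # v = alpha_1 v_1 + alpha_2 v_2 + alpha_3 v_3
```

By the first basis property, the coordinate vector `alpha` returned by the solve is unique.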

Definition 2.9. The dimension of a vector space V is \dim V = |B|, where B is a basis for V.

Remark 2.10. We proved the following useful results in class:
1) The coefficients \alpha_i are called the coordinates of the vector v with respect to the basis B, and by the first property they are uniquely determined.
2) Given a vector space V with \dim V = n, any linearly independent set of n vectors forms a basis of V, and any collection of n + 1 vectors is linearly dependent. Thus, the dimension is the maximum number of linearly independent vectors in V.

The above results all follow from the following basic result.

Lemma 2.11. Let V be a vector space. Assume that the set of vectors V = \{v_1, ..., v_n\} spans V and that the set of vectors W = \{w_1, ..., w_m\} is linearly independent. Then, m \leq n and a set of the form
    \{w_1, ..., w_m, v_{i_1}, ..., v_{i_{n-m}}\}
spans V.

Proof. Assume v_i \neq 0 for some i, so that V \neq \{0\}; otherwise the w_i \in V could not be linearly independent for any m \geq 1. Since V spans V, it follows that w_1 = \sum_i \alpha_i v_i, and since w_1 \neq 0 we have \alpha_j \neq 0 for some j. Thus
    v_j = \frac{1}{\alpha_j}\Big(w_1 - \sum_{i \neq j} \alpha_i v_i\Big),
implying that the set \{w_1, v_1, ..., v_{j-1}, v_{j+1}, ..., v_n\} spans V. Repeating this argument: since this updated set spans V, we can write w_2 = \beta w_1 + \sum_{i \neq j} \alpha_i v_i \neq 0 (with new coefficients), so the set
    \{w_1, w_2, v_1, ..., v_{j-1}, v_{j+1}, ..., v_n\}
must be linearly dependent. Now, since w_1 and w_2 are linearly independent, it must be that \alpha_k \neq 0 for some k \neq j (else w_2 = \beta w_1). Thus, the set
    \{w_1, w_2, v_1, ..., v_{j-1}, v_{j+1}, ..., v_{k-1}, v_{k+1}, ..., v_n\}
spans V. Repeating the same argument another m - 2 times gives that \{w_1, ..., w_m, v_{i_1}, ..., v_{i_{n-m}}\} spans V. Next, assume m > n. Then after n steps
    \{w_1, ..., w_n\}

spans V, and after n + 1 steps \{w_1, ..., w_n, w_{n+1}\} would be linearly dependent, a contradiction. Thus, m \leq n.

2.3. Linear systems of equations. Here, we consider the matrix equation
    Ax = b,  A \in R^{m \times n},  b \in R^m,  x \in R^n,
which represents a system of m linear equations in n unknowns:
    a_{1,1} x_1 + a_{1,2} x_2 + \cdots + a_{1,n} x_n = b_1,
    a_{2,1} x_1 + a_{2,2} x_2 + \cdots + a_{2,n} x_n = b_2,
    \vdots
    a_{m,1} x_1 + a_{m,2} x_2 + \cdots + a_{m,n} x_n = b_m,
where x_1, ..., x_n are the unknowns, the a_{i,j} are given coefficients, and b_1, ..., b_m are given constants imposing the constraints that these equations must satisfy. Note that the unknowns appear in each of the equations, and as such we must find their values so that all equations are satisfied simultaneously. There are n column vectors of A, denoted
    a^{(j)} = (a_{1j}, a_{2j}, ..., a_{mj})^T,  j = 1, ..., n,
and m row vectors
    a_{(i)} = (a_{i1}, a_{i2}, ..., a_{in}),  i = 1, ..., m.

Definition 2.12. The column space, a vector space, is defined as colsp(A) = span(a^{(1)}, ..., a^{(n)}), and the row space, also a vector space, is defined as rowsp(A) = span(a_{(1)}, ..., a_{(m)}).

Definition 2.13. The maximum number of linearly independent rows in a matrix A is called the row rank of A, which is equal to the maximum number of linearly independent columns in A, referred to as the column rank. Since the two are equal, we will not distinguish between them and write rank(A) to denote both.

Remark 2.14. Note that from our discussion of vector spaces we have that rank(A) = \dim colsp(A) = \dim rowsp(A).

Finally, we proved the following existence and uniqueness theorem for this linear system.

Theorem 2.15. Consider the linear system Ax = b, with A \in R^{m \times n}, b \in R^m, and x \in R^n, and let \tilde{A} = [A \; b]. Then, the system
1) is consistent (a solution exists) if and only if rank(\tilde{A}) = rank(A);
2) has a unique solution if and only if rank(\tilde{A}) = rank(A) = n;

3) has infinitely many solutions if and only if rank(\tilde{A}) = rank(A) < n.

Proof. The first two parts are proved in the sample exam. Thus, we prove only the last statement here. Assume rank(\tilde{A}) = rank(A) = r < n. Since the system is consistent, we can write
    b = \sum_{i=1}^n x_i a^{(i)},
where there are r linearly independent columns in A and n - r columns that are linear combinations of these r columns. We reorder the columns of the matrix A to obtain \hat{A}, which has as its first r columns those columns of A that are linearly independent:
    b = \sum_{i=1}^r \hat{x}_i \hat{a}^{(i)} + \sum_{i=r+1}^n \hat{x}_i \hat{a}^{(i)}.
Note that these first r columns form a basis B = \{\hat{a}^{(1)}, ..., \hat{a}^{(r)}\} for colsp(A) = colsp(\hat{A}). Now, since the last n - r columns of \hat{A} can be written as linear combinations of elements of B, we have
    \hat{a}^{(i)} = \sum_{j=1}^r \alpha_{ij} \hat{a}^{(j)},  i = r + 1, ..., n,
which gives the system
    b = \sum_{i=1}^r \hat{x}_i \hat{a}^{(i)} + \sum_{i=r+1}^n \hat{x}_i \sum_{j=1}^r \alpha_{ij} \hat{a}^{(j)}.
Collecting terms, we arrive at a reduced system:
    b = \sum_{i=1}^r y_i \hat{a}^{(i)},  where  y_i = \hat{x}_i + \beta_i,  \beta_i = \sum_{l=r+1}^n \hat{x}_l \alpha_{li}.
Now, by the second result of the theorem, i.e., 2), the y_i, i = 1, ..., r, are uniquely determined. Thus, once we fix the values of \hat{x}_i, i = r + 1, ..., n, we can solve for the values of \hat{x}_i, i = 1, ..., r. Since the \hat{x}_i, i = r + 1, ..., n, can be chosen freely, there are infinitely many solutions.

Example 2.16. Consider the matrix
    A = \begin{pmatrix} 1 & 2 & 0 \\ 1 & 2 & 1 \\ 0 & 0 & 0 \end{pmatrix}
and the right hand side b = (b_1, b_2, b_3)^T.

Note that rank(A) = 2, and in order for a solution to exist we must have b_3 = 0, so that rank(\tilde{A}) = 2. Now, we show that there exist infinitely many solutions to Ax = b. Our approach follows the proof above. Let
    \hat{A} = \begin{pmatrix} 1 & 0 & 2 \\ 1 & 1 & 2 \\ 0 & 0 & 0 \end{pmatrix}.
Then,
    b = \hat{x}_1 \begin{pmatrix} 1 \\ 1 \\ 0 \end{pmatrix} + \hat{x}_2 \begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix} + \hat{x}_3 \begin{pmatrix} 2 \\ 2 \\ 0 \end{pmatrix}
      = \hat{x}_1 \begin{pmatrix} 1 \\ 1 \\ 0 \end{pmatrix} + \hat{x}_2 \begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix} + 2\hat{x}_3 \begin{pmatrix} 1 \\ 1 \\ 0 \end{pmatrix}
      = (\hat{x}_1 + 2\hat{x}_3) \begin{pmatrix} 1 \\ 1 \\ 0 \end{pmatrix} + \hat{x}_2 \begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix}
      = y_1 \begin{pmatrix} 1 \\ 1 \\ 0 \end{pmatrix} + y_2 \begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix},
with y_1 = \hat{x}_1 + 2\hat{x}_3 and y_2 = \hat{x}_2. Now, the solution to the system in y is y_1 = b_1 and y_2 = b_2 - b_1. Thus, b_1 = \hat{x}_1 + 2\hat{x}_3 and \hat{x}_2 = b_2 - b_1, giving
    \hat{x}_1 = b_1 - 2\hat{x}_3,  \hat{x}_2 = b_2 - b_1,  \hat{x}_3 \in R.
Note that \hat{x}_3 is a free variable and can be chosen arbitrarily.
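The rank conditions of Theorem 2.15 and the free-variable family of solutions in Example 2.16 can be verified numerically. The NumPy sketch below is not part of the original notes; b_1 and b_2 are arbitrary sample values. Note the column ordering: the matrix A has columns (1,1,0), (2,2,0), (0,1,0), so in the notation of the reordered system the solution vector is x = (\hat{x}_1, \hat{x}_3, \hat{x}_2).

```python
import numpy as np

# The matrix from Example 2.16 and a consistent right-hand side (b3 = 0)
A = np.array([[1.0, 2.0, 0.0],
              [1.0, 2.0, 1.0],
              [0.0, 0.0, 0.0]])
b1, b2 = 5.0, 3.0
b = np.array([b1, b2, 0.0])

# Theorem 2.15: rank(A~) = rank(A) = 2 < n = 3, so infinitely many solutions
assert np.linalg.matrix_rank(A) == 2
assert np.linalg.matrix_rank(np.column_stack([A, b])) == 2

# If b3 != 0, the rank of the augmented matrix jumps: no solution exists
b_bad = np.array([b1, b2, 1.0])
assert np.linalg.matrix_rank(np.column_stack([A, b_bad])) == 3

# One solution for every value t of the free variable:
# x = (b1 - 2t, t, b2 - b1), matching Example 2.16 after column reordering
for t in [-1.0, 0.0, 2.5]:
    x = np.array([b1 - 2.0 * t, t, b2 - b1])
    assert np.allclose(A @ x, b)
```

Each choice of the free parameter t gives a distinct exact solution, which is what part 3) of the theorem asserts.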

3. Midterm Sample Exam

1) (4.1 #14) Undamped motions of an elastic spring are governed by the equation my'' + ky = 0, or my'' = -ky, where m is the mass, k the spring constant, and y(t) the displacement of the mass from its equilibrium position. Modeling the masses on the two springs, we obtain the following system of ODEs:
    m_1 y_1'' = -k_1 y_1 + k_2 (y_2 - y_1),
    m_2 y_2'' = -k_2 (y_2 - y_1),
for unknown displacements y_1(t) of the first mass m_1 and y_2(t) of the second mass m_2. The forces acting on the first mass give the first equation, and the forces acting on the second mass give the second ODE. Let m_1 = m_2 = 1, k_1 = 3, and k_2 = 2, which gives the system
    y'' = \begin{pmatrix} y_1'' \\ y_2'' \end{pmatrix} = \begin{pmatrix} -5 & 2 \\ 2 & -2 \end{pmatrix} \begin{pmatrix} y_1 \\ y_2 \end{pmatrix}.
Solve the equation by substituting the function y = x e^{\omega t} into the ODE.

Solution. Substituting gives y'' = \omega^2 x e^{\omega t} = A x e^{\omega t}. Setting \omega^2 = \lambda and dividing by e^{\omega t} gives the eigenproblem Ax = \lambda x, which we solve for x and \lambda to find the solution. Now,
    \det(A - \lambda I) = (-5 - \lambda)(-2 - \lambda) - 4 = \lambda^2 + 7\lambda + 6 = (\lambda + 1)(\lambda + 6) = 0.
Thus, the eigenvalues are \lambda_1 = -1 and \lambda_2 = -6. The eigenvector for \lambda_1 = -1 is obtained by solving
    \begin{pmatrix} -4 & 2 \\ 2 & -1 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix},
which gives the eigenvector x^{(1)} = (1, 2)^T. Similarly, the eigenvector for \lambda_2 = -6 is x^{(2)} = (2, -1)^T. Notice that \omega = \pm\sqrt{\lambda}, with \sqrt{-1} = \pm i and \sqrt{-6} = \pm i\sqrt{6}. Thus,
    y = x^{(1)} (c_1 e^{it} + c_2 e^{-it}) + x^{(2)} (c_3 e^{i\sqrt{6}t} + c_4 e^{-i\sqrt{6}t}).
Now, using Euler's formula e^{it} = \cos(t) + i\sin(t), it follows that
    y = a_1 x^{(1)} \cos(t) + b_1 x^{(1)} \sin(t) + a_2 x^{(2)} \cos(\sqrt{6}t) + b_2 x^{(2)} \sin(\sqrt{6}t),

where a_1 = c_1 + c_2, b_1 = i(c_1 - c_2), a_2 = c_3 + c_4, and b_2 = i(c_3 - c_4). These four arbitrary constants are specified by the initial conditions.

Remarks: in components, the solution reads
    y_1 = a_1 \cos(t) + b_1 \sin(t) + 2a_2 \cos(\sqrt{6}t) + 2b_2 \sin(\sqrt{6}t),
    y_2 = 2a_1 \cos(t) + 2b_1 \sin(t) - a_2 \cos(\sqrt{6}t) - b_2 \sin(\sqrt{6}t).
The first two terms in y_1 and y_2 give a slow harmonic motion, and the last two a fast motion. The slow motion occurs if both masses are moving in the same direction, for example, if a_1 = 1 and the other three constants are zero. The fast motion occurs if at each instant the two masses are moving in opposite directions, so that one spring is extended and the other compressed, for example, if a_2 = 1 and the other constants are zero. Depending on the initial conditions, one or the other of these motions, or a superposition of both, will result.

2) (Review #28) Find the location of the critical points of the system:
    y_1' = \cos(y_2),
    y_2' = 3y_1.
Can the type of the critical points be determined from the linearized system? Be sure to justify this claim. If your answer is yes, then find the type of the critical points.

Solution. The critical points are given by the points (y_1, y_2) such that f_1(y_2) = \cos(y_2) = 0 and f_2(y_1) = 3y_1 = 0. Thus, there are infinitely many critical points, given by
    \Big(0, (2n + 1)\frac{\pi}{2}\Big),
where n is any integer. The transformation y \to \tilde{y} that maps a critical point to the origin is given by \tilde{y}_1 = y_1 and \tilde{y}_2 = y_2 - (2n + 1)\frac{\pi}{2}. Thus, we obtain a system in \tilde{y} by substituting y_1 = \tilde{y}_1 and y_2 = \tilde{y}_2 + (2n + 1)\frac{\pi}{2} into the system for y. Letting n = 0, we have the critical point (0, \frac{\pi}{2}), for which the system in \tilde{y} reads
    \tilde{y}_1' = \cos(\tilde{y}_2 + \frac{\pi}{2}) = -\sin(\tilde{y}_2),
    \tilde{y}_2' = 3\tilde{y}_1.

To determine the type of this critical point we linearize the system: \tilde{y}' \approx J_f(0)\tilde{y}, where J_f(0) is the Jacobian matrix
    J_f(0) = \begin{pmatrix} 0 & -\cos(0) \\ 3 & 0 \end{pmatrix} = \begin{pmatrix} 0 & -1 \\ 3 & 0 \end{pmatrix}.
Note that f_1(\tilde{y}_2) = -\sin(\tilde{y}_2) \in C^1, f_2(\tilde{y}_1) = 3\tilde{y}_1 \in C^1, and \det J_f(0) \neq 0, so that the type and stability of the critical points of the nonlinear system coincide with those of the linearized system. Now, the eigenvalues satisfy \lambda^2 + 3 = 0, so that \lambda_\pm = \pm i\sqrt{3}. Thus, p = trace(J_f(0)) = \lambda_+ + \lambda_- = 0 and q = \det(J_f(0)) = \lambda_+ \lambda_- = 3, and, thus, the critical point is a center. Note that by periodicity of \cos(\cdot), all the critical points (0, (4n + 1)\frac{\pi}{2}) are centers as well, since in that case
    \cos\Big(\tilde{y}_2 + (4n + 1)\frac{\pi}{2}\Big) = \cos\Big(\tilde{y}_2 + 2n\pi + \frac{\pi}{2}\Big) = \cos\Big(\tilde{y}_2 + \frac{\pi}{2}\Big).
Next, consider n = -1, so that the critical point is (0, -\frac{\pi}{2}). Then,
    \tilde{y}_1' = \cos(\tilde{y}_2 - \frac{\pi}{2}) = \sin(\tilde{y}_2),
    \tilde{y}_2' = 3\tilde{y}_1,
implying
    J_f(0) = \begin{pmatrix} 0 & \cos(0) \\ 3 & 0 \end{pmatrix} = \begin{pmatrix} 0 & 1 \\ 3 & 0 \end{pmatrix}.
Here, the eigenvalues satisfy \lambda^2 - 3 = 0 and, thus, \lambda_\pm = \pm\sqrt{3}. This gives p = 0 and q = -3, which implies the critical point is a saddle point. Note that due to periodicity, all of the critical points of the form (0, (4n - 1)\frac{\pi}{2}) are saddle points.

3) Consider the nonhomogeneous system of ODEs:
    y'(t) = Ay(t) + g(t),  A = \begin{pmatrix} -3 & 1 \\ 1 & -3 \end{pmatrix},  g(t) = \begin{pmatrix} -6 \\ 2 \end{pmatrix} e^{-2t}.
Show that a unique solution exists for initial conditions y(0) = s. Then compute the solution.

Solution. To show that a unique solution exists we consider the right hand side of the equation:
    f(t, y_1, y_2) = \begin{pmatrix} -3y_1 + y_2 - 6e^{-2t} \\ y_1 - 3y_2 + 2e^{-2t} \end{pmatrix}.
Since the equation is a constant-coefficient linear system, it is sufficient to note that the exponential terms on the right hand side, e^{-2t}, are continuously differentiable at t = 0 (indeed, for any t).
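The hand computations in this problem can be cross-checked numerically. The NumPy sketch below (not part of the original notes) confirms the eigenvalues of A and that the particular solution derived in this problem, here with the free choice v_1 = -2, satisfies y' = Ay + g.

```python
import numpy as np

A = np.array([[-3.0, 1.0],
              [1.0, -3.0]])

# Eigenvalues of A are -2 and -4
lam, X = np.linalg.eig(A)
assert np.allclose(sorted(lam), [-4.0, -2.0])

# Particular solution y_p = (-2, -2) t e^{-2t} + (v1, v1 + 4) e^{-2t},
# evaluated here with v1 = -2
u = np.array([-2.0, -2.0])
v = np.array([-2.0, 2.0])

def g(t):
    return np.array([-6.0, 2.0]) * np.exp(-2.0 * t)

def y_p(t):
    return (u * t + v) * np.exp(-2.0 * t)

def y_p_prime(t):  # derivative of (u t + v) e^{-2t}
    return (u - 2.0 * (u * t + v)) * np.exp(-2.0 * t)

# Check y_p' = A y_p + g at several times
for t in np.linspace(0.0, 2.0, 5):
    assert np.allclose(y_p_prime(t), A @ y_p(t) + g(t))
```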

Next, we compute the solution to the homogeneous system (g(t) = 0). The eigenvalues satisfy (\lambda + 2)(\lambda + 4) = 0, and so \lambda_1 = -2 and \lambda_2 = -4. The corresponding eigenvectors are
    x^{(1)} = \begin{pmatrix} 1 \\ 1 \end{pmatrix}  and  x^{(2)} = \begin{pmatrix} 1 \\ -1 \end{pmatrix},
giving the homogeneous solution
    y^{(h)} = c_1 e^{-2t} \begin{pmatrix} 1 \\ 1 \end{pmatrix} + c_2 e^{-4t} \begin{pmatrix} 1 \\ -1 \end{pmatrix}.
Now, to find a particular solution, we can use 1) the method of undetermined coefficients or 2) variation of parameters.

For the undetermined coefficients approach, we consider the particular solution
    y^{(p)} = u t e^{-2t} + v e^{-2t},
since the e^{-2t} in g(t) is a solution to the homogeneous problem. Our task is then to determine the coefficient vectors u and v. This can be done by plugging y^{(p)} into the system of ODEs:
    (y^{(p)})' = u e^{-2t} - 2u t e^{-2t} - 2v e^{-2t} = A u t e^{-2t} + A v e^{-2t} + \begin{pmatrix} -6 \\ 2 \end{pmatrix} e^{-2t}.
Now, equating coefficients of the t e^{-2t} terms gives -2u = Au. Thus, u is a solution to the homogeneous system
    \begin{pmatrix} -3 + 2 & 1 \\ 1 & -3 + 2 \end{pmatrix} \begin{pmatrix} u_1 \\ u_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix},
so u_1 = u_2 = c for any c \in R, i.e., u = (c, c)^T. Equating coefficients of the e^{-2t} terms gives
    u - 2v = Av + w,  w = \begin{pmatrix} -6 \\ 2 \end{pmatrix}.
This gives the system
    \begin{pmatrix} -1 & 1 \\ 1 & -1 \end{pmatrix} \begin{pmatrix} v_1 \\ v_2 \end{pmatrix} = \begin{pmatrix} 6 + c \\ -2 + c \end{pmatrix},
which is consistent if c = -2, since then the right hand side, (4, -4)^T, is an element of the column space of the matrix on the left:
    \begin{pmatrix} 4 \\ -4 \end{pmatrix} \in span\left\{ \begin{pmatrix} 1 \\ -1 \end{pmatrix} \right\}.

Note that since the rank of this matrix is 1 < n = 2, there exist infinitely many solutions to the linear system for v:
    \begin{pmatrix} 4 \\ -4 \end{pmatrix} = v_1 \begin{pmatrix} -1 \\ 1 \end{pmatrix} + v_2 \begin{pmatrix} 1 \\ -1 \end{pmatrix},
implying v_2 - v_1 = 4, or v_2 = v_1 + 4, where v_1 is the free variable. Finally, plugging u and v into the definition of y^{(p)} and adding this to y^{(h)} gives the general solution to the nonhomogeneous problem:
    y = y^{(h)} + y^{(p)} = c_1 e^{-2t} \begin{pmatrix} 1 \\ 1 \end{pmatrix} + c_2 e^{-4t} \begin{pmatrix} 1 \\ -1 \end{pmatrix} + \begin{pmatrix} -2 \\ -2 \end{pmatrix} t e^{-2t} + \begin{pmatrix} v_1 \\ v_1 + 4 \end{pmatrix} e^{-2t}.
Note that the solution is valid for any v_1 \in R (a change in v_1 is absorbed into the c_1 term). Once v_1 has been selected, the constants c_1 and c_2 are uniquely determined by the initial conditions y_1(0) = s_1 and y_2(0) = s_2.

Next, given a solution to the homogeneous system, we find a particular solution using the method of variation of parameters. This approach is easy to understand if we write the solution y^{(h)} of the homogeneous system y' = Ay in terms of the fundamental matrix. Let
    y^{(1)} = e^{-2t} \begin{pmatrix} 1 \\ 1 \end{pmatrix},  y^{(2)} = e^{-4t} \begin{pmatrix} 1 \\ -1 \end{pmatrix}.
Then,
    y^{(h)}(t) = Y(t)c,
where
    Y = (y^{(1)} \; y^{(2)}) = \begin{pmatrix} e^{-2t} & e^{-4t} \\ e^{-2t} & -e^{-4t} \end{pmatrix}  and  c = \begin{pmatrix} c_1 \\ c_2 \end{pmatrix}.
The idea is then to write the particular solution in the form y^{(p)} = Y(t)u(t) and plug this into the nonhomogeneous system to find u(t). Substituting into the equation gives
    Y'u + Yu' = AYu + g.
Noting that Y' = AY, it follows that Y'u = AYu, which gives
    Yu' = g,  i.e.,  u' = Y^{-1}g,
where
    Y^{-1} = \frac{1}{-2e^{-6t}} \begin{pmatrix} -e^{-4t} & -e^{-4t} \\ -e^{-2t} & e^{-2t} \end{pmatrix} = \frac{1}{2} \begin{pmatrix} e^{2t} & e^{2t} \\ e^{4t} & -e^{4t} \end{pmatrix}.
Thus,
    u' = \frac{1}{2} \begin{pmatrix} e^{2t} & e^{2t} \\ e^{4t} & -e^{4t} \end{pmatrix} \begin{pmatrix} -6e^{-2t} \\ 2e^{-2t} \end{pmatrix} = \begin{pmatrix} -2 \\ -4e^{2t} \end{pmatrix}.

Integrating (and dropping the constants of integration, which only reproduce the homogeneous solution), we get
    u(t) = \begin{pmatrix} -2t \\ -2e^{2t} \end{pmatrix},
and
    y^{(p)} = Y(t)u(t) = \begin{pmatrix} e^{-2t} & e^{-4t} \\ e^{-2t} & -e^{-4t} \end{pmatrix} \begin{pmatrix} -2t \\ -2e^{2t} \end{pmatrix} = \begin{pmatrix} -2 \\ -2 \end{pmatrix} t e^{-2t} + \begin{pmatrix} -2 \\ 2 \end{pmatrix} e^{-2t},
which agrees with the particular solution found by undetermined coefficients for v_1 = -2.

Linear Algebra

4) Show that the set of all 3 \times 3 skew-symmetric matrices, i.e., V = \{A \in R^{3 \times 3} : A^T = -A\}, is a vector space, and find the dimension and a basis for V.

Solution. Note that if A \in R^{3 \times 3} satisfies A^T = -A, then we can write A as
    A = \begin{pmatrix} 0 & a_1 & a_2 \\ -a_1 & 0 & a_3 \\ -a_2 & -a_3 & 0 \end{pmatrix},
so that at most three of the nine entries of the matrix A can be chosen independently. Now, taking
    B = \begin{pmatrix} 0 & b_1 & b_2 \\ -b_1 & 0 & b_3 \\ -b_2 & -b_3 & 0 \end{pmatrix},
we have
    \alpha A + \beta B = \begin{pmatrix} 0 & \alpha a_1 + \beta b_1 & \alpha a_2 + \beta b_2 \\ -(\alpha a_1 + \beta b_1) & 0 & \alpha a_3 + \beta b_3 \\ -(\alpha a_2 + \beta b_2) & -(\alpha a_3 + \beta b_3) & 0 \end{pmatrix},
which is again a skew-symmetric matrix. Moreover, the 3 \times 3 zero matrix is skew-symmetric, since then a_{ij} = -a_{ji} = 0, as required. Thus, V is a vector space. A basis of V is
    \begin{pmatrix} 0 & 1 & 0 \\ -1 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix},  \begin{pmatrix} 0 & 0 & 1 \\ 0 & 0 & 0 \\ -1 & 0 & 0 \end{pmatrix},  and  \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & -1 & 0 \end{pmatrix},
so that \dim V = 3.

5) Consider the linear system of equations
    Ax = b,  A \in R^{m \times n},  x \in R^n,  b \in R^m.
Define the augmented matrix \tilde{A} = [A \; b] \in R^{m \times (n+1)}.
a) Show that the linear system is consistent iff rank(\tilde{A}) = rank(A).
b) Show that the system has a unique solution iff this common rank is equal to n.

Solution. To prove a), note that a solution exists iff b = \sum_{i=1}^n x_i a^{(i)} for some x, i.e., iff b \in colsp(A). Since \dim colsp(A) = rank(A), appending b to the columns of A leaves the rank unchanged iff b \in colsp(A); hence the system is consistent iff rank(\tilde{A}) = rank(A).

Next, to prove b), first assume rank(A) = n and that there exist two solutions, say x, y \in R^n, such that Ax = Ay = b. Then,
    Ax - Ay = A(x - y) = \sum_{i=1}^n (x_i - y_i) a^{(i)} = 0.
Now, since rank(A) = n, the columns of A are linearly independent, implying x_i - y_i = 0 for i = 1, ..., n, i.e., x = y.

In the other direction, we assume that Ax = b has a unique solution and pick the particular right hand side b = 0. Then x = 0 is a solution, and by assumption it is unique. This implies that
    Ax = \sum_{i=1}^n x_i a^{(i)} = 0
has only the trivial solution x = 0. Thus, the columns a^{(i)}, i = 1, ..., n, are linearly independent and rank(A) = n. By the assumption that a solution exists and part a), it follows that rank(\tilde{A}) = rank(A).
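Part b) can be illustrated numerically: when rank(A) = n the solution is pinned down uniquely (here recovered via the normal equations A^T A x = A^T b, a standard device not used in the notes), and when rank(A) < n a nontrivial null vector shows that solutions cannot be unique. The matrices below are illustrative choices.

```python
import numpy as np

# rank(A) = n: columns independent, so the solution of Ax = b is unique
A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [0.0, 2.0]])
n = A.shape[1]
assert np.linalg.matrix_rank(A) == n

# Build a consistent b and recover the unique solution via the normal
# equations; A^T A is invertible exactly when rank(A) = n
b = A @ np.array([2.0, -1.0])
x = np.linalg.solve(A.T @ A, A.T @ b)
assert np.allclose(x, [2.0, -1.0])

# rank(B) < n: a nontrivial null vector z exists, so if x solves Bx = b,
# so does x + t*z for every t -- uniqueness fails
B = np.array([[1.0, 2.0],
              [2.0, 4.0]])
assert np.linalg.matrix_rank(B) < 2
z = np.array([2.0, -1.0])          # B z = 0, a nontrivial null vector
assert np.allclose(B @ z, 0.0)
```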