How to solve linear and quasilinear first order partial differential equations

This text contains a sketch of how to solve linear and quasilinear first order PDEs and should prepare you for the general case of first order PDEs.

Notation: We use the symbols ∇ and D interchangeably for the gradient. Let U ⊂ IR^n be an open set and let v(x) = (v_1(x), ..., v_n(x)) be a smooth vector field defined on U. With this vector field two problems are associated:

a) The PDE

    v(x) · Du(x) = 0 in U,    (1)

and b) the system of ODEs

    dx/dt = v(x) in U.    (2)

The philosophy is to reduce problem (1) to problem (2). Recall that under the stated conditions the initial value problem for (2) always has a unique solution; i.e., for any given x_0 ∈ U there exists a unique function x(t) satisfying (2) and x(0) = x_0. Mind you that the solution may not exist for arbitrary times, but that is of no concern at this moment. It is convenient to introduce the notion of a flow Ψ(t, y). This map is defined by y ↦ x(t), where x(t) is the solution of (2) with initial condition y.

In general problem (2) is very difficult to solve, but the point of these notes is to convince you that there is really no difference between (1) and (2): if you can solve (2) you can solve (1), and conversely. The following fact already points in that direction. First we need the following definition.

Definition 1: First Integral
A first integral of (2) is a function f, differentiable in U, that stays constant along any solution of the system (2).

Lemma 1: First Integrals and Solutions are the same
Any first integral of (2) is a solution of (1), and conversely.

If f is a first integral we have that

    0 = (d/dt) f(x(t)) = (dx(t)/dt) · Df(x(t)) = v(x(t)) · Df(x(t)),
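Lemma 1 can be illustrated numerically: along any trajectory of (2), a first integral should stay constant up to integration error. The following Python sketch (not part of the original notes; the rotation field v(x, y) = (y, −x) with first integral f = x² + y², and the helper rk4_step, are illustrative choices) checks this with a classical Runge-Kutta integrator:

```python
def rk4_step(v, x, h):
    """One classical Runge-Kutta step for dx/dt = v(x)."""
    k1 = v(x)
    k2 = v([xi + h/2*ki for xi, ki in zip(x, k1)])
    k3 = v([xi + h/2*ki for xi, ki in zip(x, k2)])
    k4 = v([xi + h*ki for xi, ki in zip(x, k3)])
    return [xi + h/6*(a + 2*b + 2*c + d)
            for xi, a, b, c, d in zip(x, k1, k2, k3, k4)]

# Rotation field v(x, y) = (y, -x); f(x, y) = x^2 + y^2 is a first integral,
# since v . Df = y*2x + (-x)*2y = 0.
v = lambda x: [x[1], -x[0]]
f = lambda x: x[0]**2 + x[1]**2

x = [1.0, 0.5]
f0 = f(x)
for _ in range(1000):          # integrate to t = 1
    x = rk4_step(v, x, 0.001)
print(abs(f(x) - f0))          # tiny: f is constant along the flow
```

The printed deviation is at the level of the integrator's truncation error, not exactly zero; it is the numerical counterpart of d/dt f(x(t)) = 0.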

and since this holds for any solution, f satisfies (1) in all of U. Conversely, the same differentiation argument shows that any solution of (1) in U must be a first integral of (2).

This lemma is a very elementary observation, but it already suggests how to find solutions.

Example 1
Consider the PDE

    x u_x + y u_y + xy(1 + z²) u_z = 0

in all of IR³. The associated system of ODEs is given by

    x′ = x,  y′ = y,  z′ = xy(1 + z²),

whose solution is given by

    x(t) = e^t x_0,  y(t) = e^t y_0,  z(t) = tan(C + e^{2t} x_0 y_0 / 2).

Now, by eliminating the time t, one finds two first integrals, x/y and tan⁻¹(z) − xy/2. It is easy to check that these two functions are solutions of the PDE.

Example 2
The following PDE in IR^n is completely elementary but very important:

    ∂u/∂x_1 = 0.

Obviously any solution is of the form g(x_2, ..., x_n). Analyzing this example from the ODE point of view leads to the system

    dx_1/dt = 1,  dx_2/dt = 0,  ...,  dx_n/dt = 0,

where, obviously, x_2, ..., x_n are first integrals. Thus, any function of these first integrals is a solution of our PDE.

Here is a further observation.

Lemma 2
If f_1, ..., f_k are first integrals, so is F(f_1, ..., f_k), where F is any differentiable function of k variables.
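The claims of Example 1 can be checked in the same spirit: integrate the characteristic system numerically and watch the two first integrals along the trajectory. A sketch (pure Python; the initial point, step size, and integration time are arbitrary illustrative choices, and the helper rk4 is not part of the notes):

```python
import math

def rk4(v, x, h, n):
    """Integrate dx/dt = v(x) with n classical Runge-Kutta steps of size h."""
    for _ in range(n):
        k1 = v(x)
        k2 = v([a + h/2*b for a, b in zip(x, k1)])
        k3 = v([a + h/2*b for a, b in zip(x, k2)])
        k4 = v([a + h*b for a, b in zip(x, k3)])
        x = [a + h/6*(p + 2*q + 2*r + s)
             for a, p, q, r, s in zip(x, k1, k2, k3, k4)]
    return x

# Example 1: x' = x, y' = y, z' = x*y*(1 + z^2)
v = lambda X: [X[0], X[1], X[0]*X[1]*(1 + X[2]**2)]
f1 = lambda X: X[0]/X[1]                           # first integral x/y
f2 = lambda X: math.atan(X[2]) - X[0]*X[1]/2       # first integral arctan(z) - xy/2

X0 = [0.5, 0.4, 0.0]
X1 = rk4(v, X0, 0.001, 1000)                       # flow to t = 1
print(abs(f1(X1) - f1(X0)), abs(f2(X1) - f2(X0)))  # both near 0
```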

This is immediate: since the functions f_1, ..., f_k are constant along the solutions of (2), so is F(f_1, ..., f_k).

Our aim is to get our hands on the general solution, i.e., we would like to find all solutions of (1). In Example 2 this problem was solved completely. The coordinate functions x_2, ..., x_n are independent: none of them can be expressed in terms of the others. The following definition is therefore reasonable.

Definition 2: Independence of Functions
The k functions f_1, ..., f_k are independent on a domain U if for every point x ∈ U the gradients Df_j(x) are linearly independent.

Intuitively, one would like to say that in a set of independent functions none can be expressed as a function of the others. Definition 2 gives us a computational way of expressing what we mean by this. E.g., suppose that f_1, f_2 and f_3 are dependent in the sense that there is a function F such that f_1 = F(f_2, f_3), perhaps only in some small neighborhood of a point x_0. Then

    Df_1 = F_1 Df_2 + F_2 Df_3,

with F_1, F_2 denoting the partial derivatives of F. Thus the gradients are linearly dependent.

Now one can formulate the following theorem. Recall that a critical point of a vector field is a point x such that v(x) = 0.

Theorem 1
Let f_1, ..., f_{n-1} be n − 1 independent first integrals of the system (2). Assume that v(x) has no critical point in U, and let g be any solution of (1). Then for any point x_0 in U there is a neighborhood V and a differentiable function F so that

    g = F(f_1, ..., f_{n-1}).

Having these independent first integrals gives a complete solution of the PDE in the neighborhood V. We call f_1, ..., f_{n-1} a complete set of integrals.

If g is a solution of (1), then it is a first integral. This first integral cannot be independent of f_1, ..., f_{n-1}. If it were, then, by the inverse function theorem, one could express, at least locally, the coordinates x_1, ..., x_n in terms of the integrals f_1, ..., f_{n-1} and g. Since these are constant along solutions of (2), this would mean that x_1, ..., x_n are constant along solutions as well. In other words, the point (x_1, ..., x_n) would be a stationary, i.e. critical, point of v(x), which contradicts our assumption.

Now, it remains to construct the function F. Pick a point x_0 ∈ U and look at the hypersurfaces defined by the equations

    f_j(x) = f_j(x_0),  j = 1, ..., n − 1.
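Lemma 2 together with Theorem 1 says that every expression F(f_1, ..., f_{n-1}) in the first integrals solves (1). For Example 1 this is easy to test by finite differences: the derivative of u along the field v should vanish. A sketch (the particular F and the sample point are arbitrary illustrative choices):

```python
import math

# Example 1's field; u = F(f1, f2) for an arbitrary smooth F of the
# first integrals f1 = x/y and f2 = arctan(z) - xy/2.
v = lambda x, y, z: (x, y, x*y*(1 + z*z))
u = lambda x, y, z: math.sin(x/y) + (math.atan(z) - x*y/2)**3

def directional_derivative(x, y, z, h=1e-6):
    """Central-difference approximation of v . Du at (x, y, z)."""
    vx, vy, vz = v(x, y, z)
    return (u(x + h*vx, y + h*vy, z + h*vz)
          - u(x - h*vx, y - h*vy, z - h*vz)) / (2*h)

print(abs(directional_derivative(0.7, 1.3, 0.2)))   # near 0: u solves (1)
```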

Since the functions f_j are independent, the intersection of all these hypersurfaces forms a curve. This is in fact the curve along which the solution of (2) starting at x_0 moves. Denote by e a unit vector in the direction of v(x_0). In the vicinity of x_0 consider the new coordinates

    y_1 = f_1(x),  y_2 = f_2(x),  ...,  y_{n-1} = f_{n-1}(x),  y_n = e · x.    (3)

We claim that this is a coordinate transformation, i.e., invertible. The Jacobi matrix of this transformation at the point x_0 is given by

    [Df_1, Df_2, ..., Df_{n-1}, e]^T (x_0),

where the superscript T stands for transposition. (We think of the components of the Df_k as arranged in columns.) Since the vector e is perpendicular to Df_j for all j, and since the functions f_j are independent, this matrix is invertible. By the inverse function theorem, in a vicinity of x_0 there exist functions h_1, ..., h_n so that

    x_j = h_j(y),  j = 1, ..., n.

Define the function F := g ∘ h, i.e.,

    F(y_1, ..., y_n) = g(h_1(y), ..., h_n(y)).

Returning to the x variables, we see that

    F(f_1(x), ..., f_{n-1}(x), e · x) = g(x),

and since g(x) is a first integral, so is the left side of the above equation. Thus,

    0 = Σ_{k=1}^{n-1} (∂F/∂y_k) v(x) · Df_k(x) + (∂F/∂y_n) v(x) · e,

and since the functions f_k are first integrals we must have that

    (∂F/∂y_n) v(x) · e = 0

in a vicinity of x_0. Since v(x) does not vanish in U and is parallel to e at x_0, it follows that F does not depend on y_n, and hence g(x) = F(f_1, ..., f_{n-1}) in a vicinity of x_0.

The previous theorem raises the question whether we always have n − 1 independent first integrals. This is in fact true, provided v(x) does not have a critical point.
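The invertibility of the Jacobi matrix [Df_1, ..., Df_{n-1}, e]^T can be made concrete for Example 1. The sketch below (the base point x_0 = (1, 2, 0) is an illustrative choice) assembles the matrix from the gradients of the first integrals x/y and tan⁻¹(z) − xy/2 and the unit vector e along v(x_0), and evaluates its determinant:

```python
import math

# Base point and the field v = (x, y, xy(1 + z^2)) of Example 1.
x0, y0, z0 = 1.0, 2.0, 0.0
vx, vy, vz = x0, y0, x0*y0*(1 + z0*z0)
n = math.sqrt(vx*vx + vy*vy + vz*vz)
e = (vx/n, vy/n, vz/n)                   # unit vector along v(x0)

Df1 = (1/y0, -x0/y0**2, 0.0)             # gradient of f1 = x/y
Df2 = (-y0/2, -x0/2, 1/(1 + z0*z0))      # gradient of f2 = arctan(z) - xy/2

def det3(a, b, c):
    """Determinant of the 3x3 matrix with rows a, b, c."""
    return (a[0]*(b[1]*c[2] - b[2]*c[1])
          - a[1]*(b[0]*c[2] - b[2]*c[0])
          + a[2]*(b[0]*c[1] - b[1]*c[0]))

# e is orthogonal to both gradients (f1, f2 are first integrals) ...
print(sum(a*b for a, b in zip(Df1, e)), sum(a*b for a, b in zip(Df2, e)))
# ... yet the matrix is invertible, so (3) is a valid local coordinate change:
print(det3(Df1, Df2, e))                 # nonzero (about -0.75)
```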

Theorem 2: Existence of First Integrals
Let x_0 be a point in U with v(x_0) ≠ 0. Then in a vicinity V of x_0 there exist n − 1 independent first integrals.

Without restriction of generality we may assume that x_0 is the origin and that v(0) = (0, ..., 0, 1). By the existence and uniqueness theorem for ODEs, for any x sufficiently close to the origin there exist an initial condition y on the hyperplane x_n = 0 and a time s so that the solution satisfies Ψ(s, y) = x. The map Ψ : (s, y) ↦ x is differentiable and has a differentiable inverse, at least for x sufficiently close to the origin: just run the time backwards until one hits the hyperplane. As a consequence, the vectors

    ∂Ψ/∂s,  ∂Ψ/∂y_k,  k = 1, ..., n − 1,    (4)

are linearly independent close to the origin. Thus, there exists a vicinity V of the origin where we may choose (s, y) as a new coordinate system.

How does the differential equation (2) look in this new coordinate system? Let x(t) be a solution that lives for small times in V. We would like to find s(t), y(t), with y(t) on the hyperplane, so that x(t) = Ψ(s(t), y(t)). Taking the derivative with respect to t we get

    v(x(t)) = dx/dt = (∂Ψ/∂s) ds/dt + Σ_{k=1}^{n-1} (∂Ψ/∂y_k) dy_k/dt.

Recall that by the definition of the flow Ψ,

    (∂Ψ/∂s)(s(t), y(t)) = v(x(t)),

and hence we have that

    (∂Ψ/∂s)(ds/dt − 1) + Σ_{k=1}^{n-1} (∂Ψ/∂y_k) dy_k/dt = 0.

By the linear independence noted in (4) we must have that

    ds/dt = 1,  dy_k/dt = 0,  k = 1, ..., n − 1.

The functions y_j(x), j = 1, ..., n − 1, are precisely the first integrals. They are independent since the coordinate functions y_j are independent. Note that the choice of the hyperplane was not important: any hypersurface passing through the origin would have done the job, as long as v(x) is not tangent to this hypersurface at the origin.

Definition 3: Characteristics
The equation (2) is called the characteristic equation, and its solutions are called the characteristics associated with the PDE (1). We have seen that the solutions of the PDE (1) are constant along characteristics.
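The proof of Theorem 2 is constructive, and the construction can be imitated numerically: flow a point backwards until it hits the hyperplane {x_n = 0}; the coordinates of the hitting point are first integrals. The sketch below uses the illustrative field v(x_1, x_2) = (x_1, 1) (not one of the examples in these notes), for which the backward flow is also known in closed form, y_1(x) = x_1 e^{−x_2}:

```python
import math

# Field v(x1, x2) = (x1, 1), so v_n = 1 near the origin.  Flowing a point
# backwards until it hits {x2 = 0} takes exactly time s = x2, and the
# hitting coordinate y1(x) = x1 * exp(-x2) is a first integral:
# v . Dy1 = x1 * e^{-x2} + 1 * (-x1 e^{-x2}) = 0.
def flow_back_to_hyperplane(x1, x2, steps=1000):
    """Integrate dx/dt = -v(x) by explicit Euler for time x2."""
    h = x2 / steps
    for _ in range(steps):
        x1, x2 = x1 - h*x1, x2 - h*1.0
    return x1                      # the coordinate y1 on the hyperplane

x1, x2 = 0.8, 0.6
y1 = flow_back_to_hyperplane(x1, x2)
print(abs(y1 - x1*math.exp(-x2)))  # small (Euler error): matches closed form
```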

What is clear so far is that if the PDE (1) has solutions at all, it has infinitely many. Certainly one is not interested in just the general solution, but in those solutions that satisfy certain initial conditions.

Definition 4: Cauchy Problem
The PDE

    v(x) · Du(x) = 0 in U,

together with the condition u = g on Γ, where Γ is a hypersurface in U, is called the Cauchy problem.

The Cauchy problem is not always well posed. For suppose that a characteristic intersects Γ twice, in the points x_1 and x_2, with g(x_1) ≠ g(x_2); then no solution exists to this problem. Of course, in a small neighborhood around x_1 this problem may not manifest itself. Another problem is that the characteristic curves may be tangent to the surface Γ. This is quite serious; in fact there can then be no solution, not even locally. Say the characteristic curve touches Γ at the point x_1. This means that v(x_1) is tangent to the surface Γ. Picking an initial condition such that v(x_1) · Dg(x_1) ≠ 0 immediately contradicts the PDE. Thus we make the following

Definition 5
The hypersurface Γ is called noncharacteristic at a point x_0 if v(x_0) is not tangent to the surface.

Any definition should be the hypothesis of a good theorem, and hence we have

Theorem 3
Assume that Γ is noncharacteristic at the point x_0. Then there exists a neighborhood of x_0 in which the Cauchy problem has a unique solution.

This can be understood as follows. Pick a point x and solve the system (2) with x as the initial condition. If x is close enough to x_0, then, by moving either forward or backward in time, the solution will hit the surface Γ at precisely one point y(x), at a time t(x). Since the solution of the PDE (1) is constant along characteristics (here we use that the surface Γ is noncharacteristic), we have that

    u(x) = g(y(x)).

This theorem in principle tells us how to calculate the solution. Certainly, if the solutions of the ODE can be calculated explicitly, one can use a complete set of first integrals to find the solution of the Cauchy problem. Otherwise one has to resort to numerical methods.

Example 3
Consider the PDE

    y u_x − x u_y = 0

subject to the initial condition u(x, x²) = g(x) for all x > 0. The characteristic equations are

    x′ = y,  y′ = −x.

The solution curves are concentric circles, and the first integrals are of the form f(x² + y²). The initial condition says that f(x² + x⁴) = g(x), and hence

    f(s) = g(((s + 1/4)^{1/2} − 1/2)^{1/2})

and

    u(x, y) = g(((x² + y² + 1/4)^{1/2} − 1/2)^{1/2}).

Example 4
Example 1 revisited. Consider, in all of IR³,

    x u_x + y u_y + xy(1 + z²) u_z = 0

subject to the initial condition

    u(x, y, 0) = g(x, y)

for some given function g. The associated system of ODEs has been solved before; it is convenient here to use the first integrals x/y and xy − 2 tan⁻¹(z). Thus the solution must be of the form

    u(x, y, z) = f(x/y, xy − 2 tan⁻¹(z)).

The initial condition requires that

    f(x/y, xy) = g(x, y),

which leads to

    f(s, t) = g(±(st)^{1/2}, ±(t/s)^{1/2}),

where any combination of the signs is allowed. Therefore the solution is given by

    u(x, y, z) = g(±(x² − 2(x/y) tan⁻¹(z))^{1/2}, ±(y² − 2(y/x) tan⁻¹(z))^{1/2}),

where the signs are chosen to match the signs of the variables x and y. Note that the solution does not exist for all values of x, y, z: certainly, the expressions under the square roots must be nonnegative.
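The closed-form solution of Example 3 can be sanity-checked numerically: it should reproduce the initial data on the parabola y = x² and satisfy y u_x − x u_y = 0. A sketch with an illustrative choice of g (the same kind of check applies to Example 4 on the region where the square roots are defined):

```python
import math

g = lambda s: math.sin(s)          # an arbitrary illustrative initial datum

def u(x, y):
    """u(x, y) = g(((x^2 + y^2 + 1/4)^(1/2) - 1/2)^(1/2)) from Example 3."""
    return g(math.sqrt(math.sqrt(x*x + y*y + 0.25) - 0.5))

# Initial condition u(x, x^2) = g(x) for x > 0:
x = 0.7
print(abs(u(x, x*x) - g(x)))       # 0 up to rounding

# PDE y*u_x - x*u_y = 0, checked by central differences:
h = 1e-6
x0, y0 = 0.8, 0.3
ux = (u(x0 + h, y0) - u(x0 - h, y0)) / (2*h)
uy = (u(x0, y0 + h) - u(x0, y0 - h)) / (2*h)
print(abs(y0*ux - x0*uy))          # near 0
```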