How to solve quasi-linear first order PDE


A quasi-linear PDE is an equation of the form
$$v(u, x) \cdot Du(x) = f(u, x) \quad \text{on } U \subset \mathbb{R}^n, \qquad (1)$$
subject to the initial condition
$$u = g \quad \text{on } \Gamma, \qquad (2)$$
where $\Gamma$ is a hypersurface in $U$. We assume again that the field $v(g(x_0), x_0)$ is not zero for $x_0 \in \Gamma$. The only difference compared with a linear first order PDE is that the vector field $v$ and the inhomogeneity $f$ may depend on the unknown function $u$. We prove the following:

Theorem. Assume that there exists a point $x_0 \in \Gamma$ such that the surface $\Gamma$ is not characteristic at that point, i.e., the vector $v(g(x_0), x_0)$ is not tangent to $\Gamma$ at $x_0$. Then, in a vicinity of the point $x_0$, the PDE (1) with the initial condition (2) has a unique solution.

One can look at this problem in many ways, and one of them is the following. The graph of the solution forms an $n$-dimensional surface in $\mathbb{R}^{n+1}$. This surface can be described locally around the point $x_0$ as the solution set of an equation of the form $U(u(x), x) = 0$ for some function $U = U(x^0, x)$ of $n+1$ variables. A sufficient condition for solving for $u(x)$ is that $U_{x^0}(g(x_0), x_0)$ does not vanish; the implicit function theorem then guarantees the existence of a solution $u(x)$ in a vicinity of the point $x_0$. This function $U$ should satisfy the initial condition $U(g(x), x) = 0$ for all $x \in \Gamma$ in a vicinity of the point $x_0$. Note that this amounts to an initial condition on a surface of dimension $n-1$ in $\mathbb{R}^{n+1}$.

Next, one writes an equation for $U$; we shall show later the connection with the original equation (1). Introduce the new vector field $V(x^0, x) = (f(x^0, x), v(x^0, x))$ and denote the gradient with respect to all variables $(x^0, x)$ by $\nabla$. Abbreviate $(x^0, x)$ by $y$, and consider the PDE
$$V(y) \cdot \nabla U = 0 \quad \text{in } U \times \mathbb{R} \subset \mathbb{R}^{n+1} \qquad (3)$$
subject to the initial condition
$$U(g(x), x) = 0 \quad \text{for } x \in \Gamma. \qquad (4)$$
We want to add as another condition that
$$U_{x^0}(g(x_0), x_0) \neq 0. \qquad (5)$$
Recall that this condition is important for solving for $u(x)$.
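To fix ideas, here is how (1) and (3) look when written out for $n = 2$, with $x = (x_1, x_2)$ and $v = (v_1, v_2)$; this spelled-out form is only an unpacking of the definitions above:
$$v_1(u, x)\, u_{x_1} + v_2(u, x)\, u_{x_2} = f(u, x),$$
$$f(x^0, x)\, U_{x^0} + v_1(x^0, x)\, U_{x_1} + v_2(x^0, x)\, U_{x_2} = 0.$$
In particular, $U$ is constant along the flow of $V$ exactly as in the linear theory, with $x^0$ treated as an additional independent variable.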

Notice that there exist $n$ independent integrals for the equation
$$\dot{y} = V(y), \qquad (6)$$
but the initial condition only requires that the function $U$ be constant (equal to zero) on a surface of dimension $n-1$. Thus the solution of the problem (3),(4) will, in general, not be unique. E.g., the zero function is certainly a solution satisfying (3) and (4), but it does not deliver any reasonable candidate for a solution to (1) and (2). Certainly, one can extend the initial condition, in a vicinity of $x_0$, to an $n$-dimensional surface, keeping of course the condition (4). This can be done by extending the condition (4) to the condition
$$U(x^0, x) = x^0 - g(x) \quad \text{for } x \in \Gamma. \qquad (7)$$
This is now an initial condition on the hypersurface $\gamma$ that is parallel to the $x^0$-axis and passes through $\Gamma$. Another way to express this: the projection of $\gamma$ along the $x^0$-axis yields $\Gamma$. This surface $\gamma$ also contains the set $\{(g(x), x) : x \in \Gamma\}$; in particular, the point $(g(x_0), x_0)$ is in $\gamma$. Note that $U(g(x), x) = 0$ there, so (4) is indeed preserved.

First we show that the initial data of this new, extended problem are non-characteristic. The vectors tangent to $\gamma$ at the point $(g(x_0), x_0)$ are of the form
$$\begin{bmatrix} a \\ \tau \end{bmatrix},$$
where $a \in \mathbb{R}$ and $\tau$ is any vector tangent to $\Gamma$ at the point $x_0$. Clearly, the vector $V$ at the point $(g(x_0), x_0)$ is not tangent to the surface $\gamma$ there, since its $x$-component $v$ is not tangent to $\Gamma$ and hence not parallel to any of the vectors $\tau$. Thus, the initial conditions are non-characteristic and there exists a unique solution of the extended problem.

Next, we note that $U_{x^0}(g(x_0), x_0) = 1$ by (7); this shows (5). Thus, since $U_{x^0}(g(x_0), x_0)$ does not vanish, the implicit function theorem guarantees the existence of a unique solution of the equation $U(u(x), x) = 0$ in a vicinity of $x_0$, and this solution satisfies $u(x) = g(x)$ for $x \in \Gamma$.

Next we check that the function $u(x)$ defined implicitly by the equation $U(u(x), x) = 0$ solves our original PDE. Certainly, in a vicinity of $x_0$,
$$Du = -\frac{DU}{U_{x^0}},$$
where $DU$ denotes the part of $\nabla U$ coming from the $x$ variables. Thus,
$$v \cdot Du = -\frac{v \cdot DU}{U_{x^0}} = f,$$
using (3).
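For completeness, here is the implicit differentiation behind the formula $Du = -DU/U_{x^0}$, written out component by component (a small step left implicit above): differentiating $U(u(x), x) = 0$ with respect to $x_j$ gives
$$0 = \frac{\partial}{\partial x_j}\, U(u(x), x) = U_{x^0}(u(x), x)\, u_{x_j}(x) + U_{x_j}(u(x), x), \qquad j = 1, \dots, n,$$
so $u_{x_j} = -U_{x_j}/U_{x^0}$, which is exactly the $j$-th component of $Du = -DU/U_{x^0}$.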

Note that in this picture, solving the quasi-linear equation looks quite simple. The vector field $V$ has $n$ independent first integrals, call them $I_1(y), \ldots, I_n(y)$, recalling that $y = (x^0, x)$. Do not forget that these integrals are functions of $n+1$ variables. Since the surface $\Gamma$ is non-characteristic we can, at least in a small neighborhood of $x_0$, choose these integrals as coordinates on $\gamma$. In other words, every point $y = (x^0, x)$ of $\gamma$ in this neighborhood can be written in terms of the integrals, i.e., $x^0 = f_0(I)$ and $x = f(I)$ for some functions $f_0$ and $f$ (not to be confused with the inhomogeneity in (1)). Clearly,
$$U(y) = f_0(I(y)) - g(f(I(y)))$$
is the solution we are looking for, since on the surface $\gamma$ it reduces to $x^0 - g(x)$. The function $u(x)$ is then found by solving the equation $f_0(I(u(x), x)) - g(f(I(u(x), x))) = 0$. In other words, the graph of $u$ is the set of all points $y$ where $f_0(I(y)) - g(f(I(y)))$ vanishes.

Example 1.
$$u_x + u\, u_y = 0, \qquad u(0, y) = g(y).$$
Extend this problem to
$$U_x + z\, U_y = 0, \qquad U(0, y, z) = z - g(y),$$
where $z$ plays the role of the extra variable $x^0$. The characteristic equations are $\dot{x} = 1$, $\dot{y} = z$, $\dot{z} = 0$, with solutions $x = t + x_0$, $z = z_0$, $y = z_0 t + y_0$. The two independent integrals are $z$ and $y - zx$. Thus $U(x, y, z) = F(z, y - zx)$ for some function $F$, and the initial condition gives $F(z, y) = z - g(y)$. Therefore, our solution $U$ is given by
$$U(x, y, z) = z - g(y - zx).$$
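As a quick sanity check of this formula, one can verify with a computer algebra system that $U$ solves the extended problem; a minimal sketch in Python/sympy (the symbol names are just for illustration):

    import sympy as sp

    x, y, z = sp.symbols('x y z')
    g = sp.Function('g')                   # arbitrary initial profile

    # candidate solution of the extended problem from Example 1
    U = z - g(y - z*x)

    # the extended PDE U_x + z*U_y = 0 ...
    print(sp.simplify(sp.diff(U, x) + z*sp.diff(U, y)))    # prints 0

    # ... and the initial condition U(0, y, z) = z - g(y)
    print(sp.simplify(U.subs(x, 0) - (z - g(y))))          # prints 0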

Now, $U_z(x, y, z) = 1 + x\, g'(y - zx)$, and hence for $x = 0$ we get $U_z(0, y, z) = 1$; by the implicit function theorem there is always a solution of our original initial value problem, at least locally.

Pick as an example $g(y) = y^2$. The equation $U(x, y, z) = 0$ then reads $z = (y - zx)^2$; solving this quadratic equation for $z$ (and taking the root compatible with the initial data) yields the solution
$$u(x, y) = \frac{1 + 2xy - \sqrt{1 + 4xy}}{2x^2}$$
for $x > 0$ and $1 + 4xy > 0$. Of course,
$$\lim_{x \to 0} u(x, y) = y^2.$$

In this picture, uniqueness is not so easy to see: after all, the process of finding the solution depends on an extension of the problem that is quite arbitrary. The following point of view makes up for that. Implicitly, in solving (3) the characteristic equation (6) has been used. All the first integrals refer to that equation, which, written in detail, is
$$\frac{d}{dt} x^0 = f(x^0, x), \qquad \frac{d}{dt} x = v(x^0, x). \qquad (8)$$
Let $u$ be a solution of (1) and consider a point on the graph of $u$, i.e., a point of the form $(u(y), y)$ with $y \in U$. Solving equation (8) with this initial condition yields a solution $x^0(t), x(t)$. The claim is that this solution stays on the graph of $u$, i.e., $x^0(t) = u(x(t))$. To see this, define $z(t)$ by solving the differential equation
$$\frac{d}{dt} z = v(u(z), z), \qquad z(0) = y, \qquad (9)$$
and set $z^0(t) := u(z(t))$. Certainly, $z^0(0) = u(y)$, so $(z^0(t), z(t))$ has the same initial condition as $(x^0(t), x(t))$. Next,
$$\frac{d}{dt} z^0 = Du(z) \cdot \frac{d}{dt} z = v(u(z), z) \cdot Du(z) = f(u(z), z) = f(z^0, z),$$
using (1) in the third equality. Hence $(z^0(t), z(t))$ satisfies the same ODE (8), and therefore, by uniqueness of the solution, $x^0(t) = z^0(t) = u(z(t))$ and $x(t) = z(t)$, which proves our claim.

The solution $u(x)$ of the initial value problem (1),(2) can therefore be understood in the following way. Start with any initial condition of the form $(g(y), y)$, $y \in \Gamma$, which is on the graph of the solution, and solve the characteristic equation. The solution curve will lie on the graph of $u$. By considering all initial conditions of the form $(g(x), x)$, $x \in \Gamma$, the corresponding solution curves trace out a surface that coincides with the graph of $u$ in the vicinity of the point $(g(x_0), x_0)$. For that it is important that the initial condition is non-characteristic! This procedure produces a unique surface, and hence the solution of the initial value problem must be unique, at least in a vicinity of $x_0$.
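To illustrate this numerically on Example 1 (where there is no inhomogeneity, so the solution value itself is constant along characteristics and the projected characteristics are straight lines), here is a small Python check using the explicit formula for $g(y) = y^2$; the starting point is an arbitrary illustrative choice:

    import numpy as np

    def u(x, y):
        # explicit solution of Example 1 with g(y) = y**2 (valid for x > 0, 1 + 4*x*y > 0)
        return (1 + 2*x*y - np.sqrt(1 + 4*x*y)) / (2*x**2)

    # pick a point on the graph of u and follow the projected characteristic
    # line (x, y) = (x_a + t, y_a + u_a*t) through it
    x_a, y_a = 0.5, 1.0
    u_a = u(x_a, y_a)
    for t in np.linspace(0.0, 1.0, 5):
        print(u(x_a + t, y_a + u_a*t))   # stays equal to u_a up to rounding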

This procedure, however, does not deliver the solution explicitly; for that purpose the one using first integrals is more useful. Often, however, there is no way that the solution can be given explicitly, and then one has to rely on qualitative reasoning, for which the second picture is very well suited. We describe this for the same equation as in Example 1.

Example 2. Consider again the equation
$$u_t + u\, u_x = 0 \qquad (10)$$
with the initial condition $u(0, x) = g(x)$. The characteristic equations are $\dot{t} = 1$, $\dot{x} = z$, $\dot{z} = 0$. Solving them with the initial conditions $t(0) = 0$, $x(0) = x_0$, and $z(0) = g(x_0)$ yields
$$t(s) = s, \qquad z(s) = g(x_0), \qquad x(s) = g(x_0)\, s + x_0.$$
Recall that the geometric curve defined by these equations sits on the graph of the solution $u(t, x)$. Hence we learn that the solution is constant along the line defined by the equation $x = g(x_0)\, t + x_0$. From this one can see immediately where problems arise. Pick an initial condition $g$ so that $g(1) = 2$ and $g(2) = 1$. Obviously the (projected) characteristic lines $x = 2t + 1$ and $x = t + 2$ intersect at the point $x = 3$, $t = 1$. Hence, the solution cannot exist beyond the time $t = 1$.

[Figure: the two projected characteristic lines in the $(x, t)$ plane, crossing at $x = 3$, $t = 1$.]
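The crossing time can be quantified; the following short computation is a standard supplement to the argument above. Characteristics starting at $x_0$ and at $x_0 + h$ intersect where
$$x_0 + g(x_0)\, t = x_0 + h + g(x_0 + h)\, t, \qquad \text{i.e.} \qquad t = \frac{-h}{g(x_0 + h) - g(x_0)} \;\longrightarrow\; -\frac{1}{g'(x_0)} \quad (h \to 0).$$
So whenever $g$ is decreasing somewhere, neighbouring characteristics cross after a finite time, and the first crossing occurs at $t^* = 1/\max_x(-g'(x))$. For the data above ($g(1) = 2$, $g(2) = 1$) the two displayed lines indeed cross at $t = 1$, $x = 3$.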

If one thinks a bit about how equation (10) can be derived from physics, the loss of existence of solutions does not come as a great surprise. Consider a gas of particles in one dimension. These particles do not interact with each other, and hence their individual momenta and energies are conserved. Now, suppose that the only quantity of interest is the distribution of their velocities as a function of space. Thus, $u(t, x)$ is the velocity of the particles that sit at $x$ at time $t$, and the initial condition $g(x)$ is the velocity distribution at time zero.

How does $u(t, x)$ evolve in time? At the point $x$ at time $t$ sits the particle that at time zero was located at $x - t\, u(t, x)$, since the particles move with constant velocity. At time $t = 0$ the velocity of that particle was $g(x - t\, u(t, x))$, and this is the same as its velocity at the point $x$ at time $t$. Thus,
$$u(t, x) = g(x - t\, u(t, x)).$$
Now,
$$u_t(t, x) = -g'(x - t\, u(t, x))\,[\,u(t, x) + t\, u_t(t, x)\,], \qquad u_x(t, x) = g'(x - t\, u(t, x))\,[\,1 - t\, u_x(t, x)\,].$$
Combining these two equations yields
$$u_t + u\, u_x = -g'(x - tu)\,[\,u + t\, u_t - u + t\, u\, u_x\,] = -t\, g'(x - tu)\,[\,u_t + u\, u_x\,],$$
or
$$[\,u_t + u\, u_x\,]\,[\,1 + t\, g'(x - tu)\,] = 0,$$
and hence, for $t$ small (so that $1 + t\, g'(x - tu) \neq 0$), equation (10) results.

Armed with this physical insight it is now obvious what is happening: if the initial velocity distribution is decreasing on some interval, then in the course of the motion some of the faster particles will overtake the slower ones, and the velocity profile will tilt over, i.e., it ceases to be the graph of a function. One can take the attitude that this is a bad model and that it should be discarded. However, the assumption that the particles are non-interacting is for many practical purposes not such a bad one. Equation (10) can be viewed as a good approximation in a regime where the interactions can be neglected, but it has to be modified when the density gets large and hence the interactions become important. This modification leads to shock wave theory, and we pick up that subject at some later moment.
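For small $t$ the implicit relation $u = g(x - tu)$ can actually be used to compute the solution numerically. Below is a minimal Python sketch (the profile $g$ and the sample point are illustrative choices); simple fixed-point iteration converges as long as $t\,|g'| < 1$, i.e., before the tilting-over happens.

    import numpy as np

    def solve_implicit(t, x, g, n_iter=200):
        # solve u = g(x - t*u) by fixed-point iteration, starting from u = g(x);
        # this converges while t*|g'| < 1, i.e. before characteristics cross
        u = g(x)
        for _ in range(n_iter):
            u = g(x - t*u)
        return u

    g = lambda s: 1.0/(1.0 + s**2)       # a sample initial velocity profile
    t, x = 0.3, 0.5
    u = solve_implicit(t, x, g)
    print(u, g(x - t*u))                  # the two values agree once converged

The computed value is just $g$ evaluated at the foot $x - t\,u$ of the characteristic through $(t, x)$, in line with the particle picture above.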