An Introduction to Matched Asymptotic Expansions (A Draft) Peicheng Zhu


Basque Center for Applied Mathematics and Ikerbasque Foundation for Science, Nov., 2009

Preface

The Basque Center for Applied Mathematics (BCAM) is a newly founded institute and one of the Basque Excellence Research Centers (BERC). BCAM hosts a series of training courses on advanced aspects of applied and computational mathematics, delivered in English, mainly to graduate students and postdoctoral researchers, from BCAM and elsewhere. The series starts in October 2009 and finishes in June. Each of the 13 courses is taught in one week and has a duration of 10 hours.

These are the lecture notes of the course on Asymptotic Analysis that I gave at BCAM from Nov. 9 to Nov. 13, 2009, as the second of this series of BCAM courses; some preliminary material has been added. In this course we introduce some basic ideas in the theory of asymptotic analysis. Asymptotic analysis is an important branch of applied mathematics with a broad range of contents; thus, due to the time limitation, I concentrate mainly on the method of matched asymptotic expansions. Firstly, some simple examples, ranging from algebraic equations to partial differential equations, are discussed to give the reader a picture of the method of asymptotic expansions. Then we end with an application of this method to an optimal control problem, which concerns the vanishing viscosity method and the alternating descent method for the optimal control of scalar conservation laws in the presence of non-interacting shocks.

Peicheng Zhu
Bilbao, Spain

Jan., 2010

Contents

Preface

1 Introduction

2 Algebraic equations
   2.1 Regular perturbation
   2.2 Iterative method
   2.3 Singular perturbation
       2.3.1 Rescaling
   2.4 Non-integral powers

3 Ordinary differential equations
   3.1 First order ODEs
       3.1.1 Regular
       3.1.2 Singular
             Outer expansions
             Inner expansions
             Matched asymptotic expansions
   3.2 Second order ODEs and boundary layers
       3.2.1 Outer expansions
       3.2.2 Inner expansions
             Rescaling
       3.2.3 Matching conditions
             Matching by expansions
             Van Dyke's rule for matching
       3.2.4 Matched asymptotic expansions
       3.2.5 Examples

4 Partial differential equations
   4.1 Regular problem
   4.2 Conservation laws and vanishing viscosity method
       Construction of approximate solutions

       Outer and inner expansions
       Matching conditions and approximations
       Convergence

5 An application to optimal control theory
       Introduction
       Sensitivity analysis: the inviscid case
             Linearization of the inviscid equation
             Sensitivity in presence of shocks
       The method of alternating descent directions: inviscid case
       Matched asymptotic expansions and approximate solutions
             Outer expansions
             Derivation of the interface equations
             Inner expansions
             Approximate solutions
       Convergence of the approximate solutions
             The equations are satisfied asymptotically
             Proof of the convergence
       The method of alternating descent directions: viscous case
       Appendix

Bibliography

Index

Chapter 1

Introduction

In the real world, many problems (which arise in applied mathematics, physics, engineering sciences, and also in pure mathematics, like the theory of numbers) do not have a solution which can be written as a simple, exact, explicit formula. Some of them have a complicated formula, but we do not know much about such a formula. We now consider some examples.

i) The Stirling formula:
$$n! \sim \sqrt{2n\pi}\, e^{-n} n^n \left(1 + O\!\left(\frac{1}{n}\right)\right). \qquad (1.0.1)$$
Here the Landau symbol (the big Oh $O$) and the Du Bois-Reymond symbol $\sim$ are used. Note that $n!$ grows very quickly as $n \to \infty$ and becomes so large that one cannot have any idea of how big it is, but formula (1.0.1) gives us a good estimate of $n!$.

ii) From algebra we know that, in general, there is no explicit formula for the solutions of an algebraic equation of degree $n \ge 5$.

iii) Most problems in the theory of nonlinear ordinary or partial differential equations do not have an exact solution.

And there are many others. In practice, however, an approximation of a solution to such problems is usually enough. Thus the approaches to finding such approximations are important. There are two main methods. One is numerical approximation, which became especially powerful after the invention of the computer and is now regarded as the third most important method for scientific research (after the two traditional ones: the theoretical and experimental methods). The other is analytical approximation with an error which is understandable and controllable; in particular, the error can be made smaller by some rational procedure. The term "analytical approximate solution" means that an analytic formula for an approximate solution is found and its difference from the exact solution is estimated.
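The quality of Stirling's approximation (1.0.1) is easy to check numerically. The following small script is an illustration added here (the function name `stirling` is mine, not from the notes):

```python
import math

def stirling(n):
    # Leading-order Stirling approximation: sqrt(2*pi*n) * n^n * e^(-n)
    return math.sqrt(2 * math.pi * n) * n ** n * math.exp(-n)

for n in (5, 10, 20):
    ratio = math.factorial(n) / stirling(n)
    # The ratio tends to 1; the relative error is of size O(1/n),
    # consistent with the correction factor 1 + O(1/n) in (1.0.1)
    print(n, ratio)
```

Already at n = 5 the relative error is below two percent, and it shrinks roughly like 1/(12n) as n grows.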

Asymptotic analysis, an important branch of applied mathematics, is a powerful tool for finding analytical approximate solutions to complicated practical problems. A rigorous foundation was established in 1886 by Poincaré and Stieltjes, who published separate papers on asymptotic series. Later, in 1905, Prandtl published a paper on the motion of a fluid or gas with small viscosity along a body. In the case of an airfoil moving through air, such a problem is described by the Navier-Stokes equations with large Reynolds number, and it was for this problem that the method of singular perturbation was proposed. Of course, the history of asymptotic analysis can be traced back much earlier than 1886, even to the time when our ancestors studied problems as small as the measurement of a rod, or as large as the perturbed orbit of a planet.

As we know, when we measure a rod, each measurement gives a different value, so n measurements result in n different values. Which one should we choose as the length of the rod? The best approximation to the real length of the rod is the mean of these n numbers, and each measurement can be regarded as a perturbation of the mean value.

The Sun's gravitational attraction is the main force acting on each planet, but there are much weaker gravitational forces between the planets, which produce perturbations of their elliptical orbits; these make small changes in a planet's orbital elements with time. The planets which perturb the Earth's orbit most are Venus, Jupiter, and Saturn. These planets and the Sun also perturb the Moon's orbit around the Earth-Moon system's center of mass. The use of mathematical series for the orbital elements as functions of time can accurately describe perturbations of the orbits of solar system bodies for limited time intervals; for longer intervals, the series must be recalculated. Today, astronomers use high-speed computers to compute orbits in multiple-body systems such as the solar system.
The computers can be programmed to make allowances for the important perturbations on all the orbits of the member bodies. Such calculations have now been made for the Sun and the major planets over time intervals of up to several tens of millions of years. However accurately these calculations can be made, the behavior of celestial bodies over long periods of time cannot always be determined. For example, the perturbation method has so far been unable to determine the stability of the orbits of individual bodies, or of the solar system as a whole, for the estimated age of the solar system. Studies of the evolution of the Earth-Moon system indicate that the Moon's orbit may become unstable, which would make it possible for the Moon to escape into an independent orbit around the Sun. More recently, astronomers have also used the theory of chaos to explain irregular orbits.

The orbits of artificial satellites of the Earth, or of other bodies with atmospheres, whose orbits come close to their surfaces, are very complicated. The

orbits of these satellites are influenced by atmospheric drag, which tends to bring the satellite down into the lower atmosphere, where it is either vaporized by atmospheric friction or falls to the planet's surface. In addition, the shape of the Earth and of many other bodies is not perfectly spherical. The bulge that forms at the equator, due to the planet's spinning motion, causes a stronger gravitational attraction; when the satellite passes over the equator, it may be slowed enough to pull it toward the Earth.

The discussion above gives us many problems with small perturbations, some of which can be omitted under suitable assumptions. The main contents of asymptotic analysis are as follows: the perturbation method, the method of multiple-scale expansions, the averaging method, the WKBJ (Wentzel, Kramers, Brillouin and Jeffreys) approximation, the method of matched asymptotic expansions, asymptotic expansion of integrals, and so on. This course is mainly concerned with the method of matched asymptotic expansions. Firstly we study some simple examples arising in algebraic equations and ordinary differential equations, from which we will get the key ideas of matched asymptotic expansions, though those examples are simple. Then we shall investigate matched asymptotic expansions for partial differential equations, and finally take an optimal control problem as an application.

Let us now introduce some notation. $D \subset \mathbb{R}^d$ with $d \in \mathbb{N}$ denotes an open subset of $\mathbb{R}^d$; $f, g, h : D \to \mathbb{R}$ are real continuous functions. We denote a small quantity by $\varepsilon$. The Landau symbols, the big Oh $O$ and the little oh $o$, will be used.

Definitions. A sequence of gauge functions $\{\phi_n(x)\}$ ($n = 0, 1, 2, \dots$) is said to form an asymptotic sequence as $x \to x_0$ if, for all $n$,
$$\phi_{n+1}(x) = o(\phi_n(x)), \quad \text{as } x \to x_0.$$
If $\{\phi_n(x)\}$ is an asymptotic sequence of gauge functions as $x \to x_0$, we say that
$$\sum_{n=1}^{\infty} a_n \phi_n(x), \quad \text{where the } a_n \text{ are constants},$$
is an asymptotic expansion (or asymptotic approximation) of a function $f(x)$ as $x \to x_0$ if, for each $N$,
$$f(x) = \sum_{n=1}^{N} a_n \phi_n(x) + o(\phi_N(x)), \quad \text{as } x \to x_0.$$


Chapter 2

Algebraic equations

In this chapter we shall investigate some algebraic equations, which are very helpful for establishing the picture of asymptotic analysis in our mind, though the examples are quite simple. Let us consider algebraic equations with a small positive parameter, which is denoted by $\varepsilon$ in what follows.

2.1 Regular perturbation

Consider the following quadratic equation:
$$x^2 - \varepsilon x - 1 = 0. \qquad (2.1.1)$$
Suppose that $\varepsilon = 0$; equation (2.1.1) becomes
$$x^2 - 1 = 0. \qquad (2.1.2)$$
It is easy to find the roots $x^\varepsilon$ of (2.1.1), for any fixed $\varepsilon$, which read
$$x_1^\varepsilon = \frac{\varepsilon + \sqrt{\varepsilon^2 + 4}}{2}, \quad \text{and} \quad x_2^\varepsilon = \frac{\varepsilon - \sqrt{\varepsilon^2 + 4}}{2}. \qquad (2.1.3)$$
Correspondingly, the roots $x^0$ of (2.1.2) are $x_{1,2}^0 = \pm 1$. A natural question arises: does $x^\varepsilon$ converge to $x^0$? We prove easily that
$$x_1^\varepsilon \to x_1^0, \quad \text{and} \quad x_2^\varepsilon \to x_2^0, \quad \text{as } \varepsilon \to 0. \qquad (2.1.4)$$
So (2.1.1) is called a regular perturbation of (2.1.2); the perturbation term is $\varepsilon x$. A perturbation is called singular if it is not regular.

For most practical problems, however, we do not have explicit formulas like (2.1.3). In this case, how can we get knowledge about the limits as the small parameter $\varepsilon$ goes to zero?

The method of asymptotic expansions is a powerful tool for such an investigation. Since $x^\varepsilon$ depends on $\varepsilon$, to construct an asymptotic expansion we define an ansatz as follows:
$$x^\varepsilon = \varepsilon^{\alpha_0}\left(x_0 + \varepsilon^{\alpha_1} x_1 + \varepsilon^{\alpha_2} x_2 + \cdots\right). \qquad (2.1.5)$$
Here the $\alpha_i$ ($i = 0, 1, 2, \dots$) are constants to be determined, and we assume, without loss of generality, that $x_0, x_1, \dots$ differ from zero and $0 < \alpha_1 < \alpha_2 < \cdots$.

We first determine $\alpha_0$. There are three cases: i) $\alpha_0 > 0$, ii) $\alpha_0 < 0$, and iii) $\alpha_0 = 0$. We will show that only case iii) makes it possible to get an asymptotic expansion. Inserting ansatz (2.1.5) into equation (2.1.1) and balancing the resulting equation, we obtain
$$\varepsilon^{2\alpha_0} x_0^2 + 2\varepsilon^{2\alpha_0+\alpha_1} x_0 x_1 + \varepsilon^{2(\alpha_0+\alpha_1)} x_1^2 + \cdots - \varepsilon\left(\varepsilon^{\alpha_0} x_0 + \varepsilon^{\alpha_0+\alpha_1} x_1 + \cdots\right) - 1 = 0. \qquad (2.1.6)$$
Suppose now that case i) happens, i.e. $\alpha_0 > 0$. Then every power of $\varepsilon$ appearing in (2.1.6) is positive except that of the constant term $-1$, so balancing at order $\varepsilon^0$ forces $-1 = 0$, a contradiction. For case ii), namely $\alpha_0 < 0$, we have $2\alpha_0 < 1 + \alpha_0$ and $2\alpha_0 < 0$, thus $2\alpha_0$ is the smallest power and the coefficient of $\varepsilon^{2\alpha_0}$ should be zero, so $x_0^2 = 0$, which violates our assumption too. Therefore we assert that only the case $\alpha_0 = 0$ is possible, and (2.1.5) becomes
$$x^\varepsilon = x_0 + \varepsilon^{\alpha_1} x_1 + \varepsilon^{\alpha_2} x_2 + \cdots; \qquad (2.1.7)$$
moreover, (2.1.6) now is
$$x_0^2 + 2\varepsilon^{\alpha_1} x_0 x_1 + \varepsilon^{2\alpha_1} x_1^2 + \cdots - \varepsilon\left(x_0 + \varepsilon^{\alpha_1} x_1 + \cdots\right) - 1 = 0. \qquad (2.1.8)$$
Similarly to the above procedure for deciding $\alpha_0$, we can determine $\alpha_1, \alpha_2$, etc., which are $\alpha_1 = 1$, $\alpha_2 = 2$, $\dots$. So ansatz (2.1.5) takes the following form,
$$x^\varepsilon = x_0 + \varepsilon x_1 + \varepsilon^2 x_2 + \cdots, \qquad (2.1.9)$$
and the following equations are obtained:
$$\varepsilon^0: \quad x_0^2 - 1 = 0, \qquad (2.1.10)$$
$$\varepsilon^1: \quad 2 x_0 x_1 - x_0 = 0, \qquad (2.1.11)$$
$$\varepsilon^2: \quad 2 x_0 x_2 + x_1^2 - x_1 = 0. \qquad (2.1.12)$$
Solving (2.1.10) we have $x_0 = 1$ or $x_0 = -1$. We take the first case as an example and construct the asymptotic expansion. From (2.1.11) and (2.1.12) we get, respectively,
$$x_1 = \frac{1}{2}, \quad x_2 = \frac{1}{8}.$$

Up to $i$ terms ($i = 1, 2, 3$), we expand $x^\varepsilon$ as follows:
$$X_1^\varepsilon = 1, \qquad (2.1.13)$$
$$X_2^\varepsilon = 1 + \frac{\varepsilon}{2}, \qquad (2.1.14)$$
$$X_3^\varepsilon = 1 + \frac{\varepsilon}{2} + \frac{\varepsilon^2}{8}. \qquad (2.1.15)$$
The next question then arises: how precisely does $X_i^\varepsilon$ ($i = 1, 2, 3$) satisfy the equation for $x^\varepsilon$? Straightforward computations yield
$$(X_1^\varepsilon)^2 - \varepsilon X_1^\varepsilon - 1 = O(\varepsilon), \qquad (2.1.16)$$
$$(X_2^\varepsilon)^2 - \varepsilon X_2^\varepsilon - 1 = O(\varepsilon^2), \qquad (2.1.17)$$
$$(X_3^\varepsilon)^2 - \varepsilon X_3^\varepsilon - 1 = O(\varepsilon^4). \qquad (2.1.18)$$
From this it is easy to see that $X_i^\varepsilon$ satisfies the equation very well when $\varepsilon$ is small, and that the error becomes smaller as $i$ gets larger, i.e. as we take more terms.

2.2 Iterative method

In this section we are going to make use of the so-called iterative method to construct asymptotic expansions, again for equation (2.1.1). We rewrite (2.1.1) as
$$x = \pm\sqrt{1 + \varepsilon x},$$
where $x = x^\varepsilon$. This formula suggests an iterative procedure,
$$x_{n+1} = \sqrt{1 + \varepsilon x_n}, \qquad (2.2.1)$$
for any $n \in \mathbb{N}$; here we take only the positive root as an example. Let $x_0$ be a fixed real number. One then obtains from (2.2.1) that
$$x_1 = 1 + \frac{\varepsilon}{2} x_0 + \cdots, \qquad (2.2.2)$$
so we find the first term of an asymptotic expansion; however, the second term in (2.2.2) still depends on $x_0$. To get the second term of an asymptotic expansion, we iterate once again and arrive at
$$x_2 = 1 + \frac{\varepsilon}{2} x_1 + \cdots = 1 + \frac{\varepsilon}{2} + \cdots, \qquad (2.2.3)$$

and this gives us the desired result. After iterating twice, we have thus constructed an asymptotic expansion:
$$x^\varepsilon = 1 + \frac{\varepsilon}{2} + \cdots. \qquad (2.2.4)$$
The shortcoming of this method for constructing an asymptotic expansion is that we do not have an explicit formula like (2.1.3) at hand, and so no guarantee that the iteration converges.

2.3 Singular perturbation

We now investigate the following equation, which will give us very different results:
$$\varepsilon x^2 - x - 1 = 0. \qquad (2.3.1)$$
Suppose that $\varepsilon = 0$; equation (2.3.1) becomes
$$-x - 1 = 0. \qquad (2.3.2)$$
Therefore we see that one root of (2.3.1) disappears as $\varepsilon$ becomes $0$. This is very different from (2.1.1). It is not difficult to get the roots of (2.3.1), which read
$$x_\pm^\varepsilon = \frac{1 \pm \sqrt{1 + 4\varepsilon}}{2\varepsilon}.$$
There holds, as $\varepsilon \to 0$,
$$x_-^\varepsilon = \frac{1 - \sqrt{1 + 4\varepsilon}}{2\varepsilon} \to -1,$$
and, by a Taylor expansion,
$$x_+^\varepsilon = \frac{1 + \sqrt{1 + 4\varepsilon}}{2\varepsilon} = \frac{1}{2\varepsilon}\left(2 + 2\varepsilon - 2\varepsilon^2 + \cdots\right) = \frac{1}{\varepsilon} + 1 - \varepsilon + \cdots. \qquad (2.3.3)$$
Therefore we see that one root, i.e. $x_-^\varepsilon$, converges to the root of the limit equation (2.3.2), while the other blows up at the rate $\frac{1}{\varepsilon}$. Thus we cannot expect that an asymptotic expansion like the one we constructed for a regular problem is valid in this case too. How do we find a suitable scale for such a problem? We shall make use of the rescaling technique.
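The behaviour of the two roots of (2.3.1) as $\varepsilon \to 0$ is easy to confirm numerically; the sketch below is an illustration added here (the names `roots`, `x_minus`, `x_plus` are mine):

```python
import math

def roots(eps):
    # Exact roots of eps*x**2 - x - 1 = 0, assuming eps > 0
    d = math.sqrt(1 + 4 * eps)
    return (1 - d) / (2 * eps), (1 + d) / (2 * eps)

for eps in (1e-1, 1e-2, 1e-3):
    x_minus, x_plus = roots(eps)
    # x_minus approaches -1, the root of the limit equation -x - 1 = 0;
    # x_plus blows up like 1/eps + 1 - eps, cf. (2.3.3)
    print(eps, x_minus, x_plus - (1 / eps + 1 - eps))
```

The residual in the last column shrinks like $\varepsilon^2$, consistent with the three-term expansion in (2.3.3).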

2.3.1 Rescaling

Suppose that we do not know a priori the correct scale for constructing an asymptotic expansion; the rescaling technique then helps us to find it. This subsection is concerned with this technique, and we take (2.3.1) as an example.

Let $\delta$ be a real function of $\varepsilon$, and let $x = \delta X$, where $\delta = \delta(\varepsilon)$ and $X = O(1)$. The rescaling technique determines the function $\delta$, and consequently a new variable $X$ is found. Rewriting (2.3.1) in $X$, we have
$$\varepsilon \delta^2 X^2 - \delta X - 1 = 0. \qquad (2.3.4)$$
By comparing the coefficients in (2.3.4), namely $\varepsilon\delta^2$, $\delta$, $1$, we divide the rescaling argument into five cases.

Case i) $\delta \ll 1$. Then (2.3.4) can be written as
$$1 = \underbrace{\varepsilon \delta^2 X^2}_{o(1)} - \underbrace{\delta X}_{o(1)} = o(1), \qquad (2.3.5)$$
which cannot be true, since the left-hand side of (2.3.5) is $1$ while the right-hand side is a very small quantity.

Case ii) $\delta = 1$, which means no change to (2.3.1) at all; (2.3.4) becomes
$$\underbrace{\varepsilon X^2}_{o(1)} - X - 1 = 0, \qquad (2.3.6)$$
so $X \to -1$: we can construct a regular asymptotic expansion, but we cannot recover the lost root.

Case iii) $1 \ll \delta \ll \frac{1}{\varepsilon}$, which implies $\delta\varepsilon \ll 1$. Dividing equation (2.3.4) by $\delta$ we obtain
$$\underbrace{\varepsilon\delta X^2}_{o(1)} - X - \underbrace{\frac{1}{\delta}}_{o(1)} = 0, \qquad (2.3.7)$$
and $X = o(1)$. This is impossible, since we assumed $X = O(1)$ with $X$ not small.

Case iv) $\delta = \frac{1}{\varepsilon}$, namely $\delta\varepsilon = 1$; also $\delta \gg 1$, since we assume $\varepsilon \ll 1$. Consequently, dividing (2.3.4) by $\delta$, we infer that
$$X^2 - X - \underbrace{\frac{1}{\delta}}_{o(1)} = 0. \qquad (2.3.8)$$
Thus $X \to 0$ or $1$. This gives us the correct scale.

Case v) $\delta \gg \frac{1}{\varepsilon}$, thus $\delta\varepsilon \gg 1$. Multiplying (2.3.4) by $\varepsilon^{-1}\delta^{-2}$ yields
$$X^2 - \underbrace{(\varepsilon\delta)^{-1} X}_{o(1)} - \underbrace{\frac{1}{\varepsilon\delta^2}}_{o(1)} = 0, \qquad (2.3.9)$$
and $X = o(1)$. So this is not a suitable scale either.

In conclusion, the suitable scale is $\delta = \frac{1}{\varepsilon}$, i.e. $x = \frac{X}{\varepsilon}$, and (2.3.4) turns out to be
$$X^2 - X - \varepsilon = 0. \qquad (2.3.10)$$
A singular problem has been reduced to a regular one.

We now turn back to equation (2.3.1). The rescaling suggests the following ansatz:
$$x^\varepsilon = \varepsilon^{-1} x_{-1} + x_0 + \varepsilon x_1 + \cdots. \qquad (2.3.11)$$
Inserting this into (2.3.1) and comparing the coefficients of $\varepsilon^i$ ($i = -1, 0, 1, \dots$) on both sides of (2.3.1), we obtain
$$\varepsilon^{-1}: \quad x_{-1}^2 - x_{-1} = 0, \qquad (2.3.12)$$
$$\varepsilon^0: \quad 2 x_{-1} x_0 - x_0 - 1 = 0, \qquad (2.3.13)$$
$$\varepsilon^1: \quad x_0^2 + 2 x_{-1} x_1 - x_1 = 0. \qquad (2.3.14)$$
The roots of (2.3.12) are $x_{-1} = 1$ and $x_{-1} = 0$. The second root does not yield a singular asymptotic expansion, so it can be excluded easily. We consider now $x_{-1} = 1$. From (2.3.13) and (2.3.14) one solves
$$x_0 = 1, \quad x_1 = -1.$$

Therefore, we construct approximations $X_i^\varepsilon$, $i = 0, 1, 2$, of the root $x_+^\varepsilon$ by
$$X_2^\varepsilon = \frac{1}{\varepsilon} + 1 - \varepsilon, \qquad (2.3.15)$$
$$X_1^\varepsilon = \frac{1}{\varepsilon} + 1, \qquad (2.3.16)$$
and
$$X_0^\varepsilon = \frac{1}{\varepsilon}. \qquad (2.3.17)$$
How precisely do they satisfy equation (2.3.1)? Computations yield
$$\varepsilon (X_0^\varepsilon)^2 - X_0^\varepsilon - 1 = -1,$$
and, correspondingly,
$$x_+^\varepsilon - X_0^\varepsilon = 1 - \varepsilon + O(\varepsilon^2) \not\to 0.$$
This means that an expansion with only one term is not a good approximation. Furthermore,
$$\varepsilon (X_1^\varepsilon)^2 - X_1^\varepsilon - 1 = \varepsilon, \quad \text{and} \quad \varepsilon (X_2^\varepsilon)^2 - X_2^\varepsilon - 1 = O(\varepsilon^2),$$
while
$$x_+^\varepsilon - X_1^\varepsilon = O(\varepsilon), \quad \text{and} \quad x_+^\varepsilon - X_2^\varepsilon = O(\varepsilon^2).$$
Thus $X_1^\varepsilon$ and $X_2^\varepsilon$ are good approximations to $x_+^\varepsilon$, and we can conclude that the more terms we take, the more precise the approximation becomes. We have also figured out, approximately, the profile of $x_+^\varepsilon$; in other words, we now know how the lost root disappears, by blowing up, as $\varepsilon$ goes to zero.

2.4 Non-integral powers

The asymptotic expansions in the previous sections are all series in integral powers of $\varepsilon$. However, this is not true in general. Here we give an example. Consider
$$(1 - \varepsilon) x^2 - 2x + 1 = 0. \qquad (2.4.1)$$
Define, as in the previous sections, an ansatz as follows:
$$x^\varepsilon = x_0 + \varepsilon x_1 + \varepsilon^2 x_2 + \cdots. \qquad (2.4.2)$$

Inserting (2.4.2) into (2.4.1) and balancing both sides yields
$$\varepsilon^0: \quad x_0^2 - 2 x_0 + 1 = 0, \qquad (2.4.3)$$
$$\varepsilon^1: \quad 2 x_0 x_1 - 2 x_1 - x_0^2 = 0, \qquad (2.4.4)$$
$$\varepsilon^2: \quad 2 x_0 x_2 - 2 x_2 + x_1^2 - 2 x_0 x_1 = 0. \qquad (2.4.5)$$
From (2.4.3) one gets $x_0 = 1$, whence (2.4.4) implies $2 x_1 - 2 x_1 - 1 = 0$, that is, $-1 = 0$, a contradiction. So (2.4.2) is not well defined. Now we define an ansatz as
$$x^\varepsilon = x_0 + \varepsilon^\alpha x_1 + \varepsilon^\beta x_2 + \cdots, \qquad (2.4.6)$$
where $0 < \alpha < \beta < \cdots$ are constants to be determined. Inserting this ansatz into (2.4.1) and balancing both sides, we find that there must hold
$$\alpha = \frac{1}{2}, \quad \beta = 1, \quad \dots,$$
and the correct ansatz is
$$x^\varepsilon = x_0 + \varepsilon^{\frac{1}{2}} x_1 + \varepsilon x_2 + \varepsilon^{\frac{3}{2}} x_3 + \cdots. \qquad (2.4.7)$$
The remaining part of the construction is similar to the previous ones. We obtain
$$x^\varepsilon = 1 \pm \varepsilon^{\frac{1}{2}} + \varepsilon \pm \varepsilon^{\frac{3}{2}} + \cdots.$$
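The half-power expansion can be verified against the exact roots $(1 \pm \sqrt{\varepsilon})/(1 - \varepsilon)$ of (2.4.1); the following check is an illustration added here (all names are mine):

```python
import math

def exact_roots(eps):
    # Exact roots of (1 - eps)*x**2 - 2*x + 1 = 0: (1 +/- sqrt(eps)) / (1 - eps)
    s = math.sqrt(eps)
    return (1 - s) / (1 - eps), (1 + s) / (1 - eps)

def four_terms(eps, sign):
    # Four-term half-power expansion 1 +/- eps^(1/2) + eps +/- eps^(3/2)
    s = math.sqrt(eps)
    return 1 + sign * s + eps + sign * eps * s

for eps in (1e-2, 1e-4):
    for sign, x in zip((-1, 1), exact_roots(eps)):
        # The remainder after four terms is of size O(eps**2)
        print(eps, sign, x - four_terms(eps, sign))
```

Halving $\sqrt{\varepsilon}$ (i.e. dividing $\varepsilon$ by four) shrinks the printed remainders by a factor of about sixteen, confirming the $O(\varepsilon^2)$ remainder.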

Chapter 3

Ordinary differential equations

We start this chapter with some definitions. Consider
$$L_0[u] + \varepsilon L_1[u] = f_0 + \varepsilon f_1, \quad \text{in } D, \qquad (3.0.1)$$
and the associated equation corresponding to the case $\varepsilon = 0$,
$$L_0[u] = f_0, \quad \text{in } D. \qquad (3.0.2)$$
Here $L_0, L_1$ are known operators, either ordinary or partial; $f_0, f_1$ are given functions. The terms $\varepsilon L_1[u]$ and $\varepsilon f_1$ are called perturbations. $E_\varepsilon$ ($E_0$, respectively) denotes the problem consisting of equation (3.0.1) ((3.0.2), respectively) together with suitable boundary/initial conditions. The solution to problem $E_\varepsilon$ (or $E_0$) is denoted by $u^\varepsilon$ (or $u^0$).

Definition. Problem $E_\varepsilon$ is regular if
$$\|u^\varepsilon - u^0\|_D \to 0, \quad \text{as } \varepsilon \to 0.$$
Otherwise, problem $E_\varepsilon$ is referred to as a singular one. Here $\|\cdot\|_D$ is a suitable norm over the domain $D$.

Note that whether a problem is regular or singular depends on the choice of the norm, which can be clarified by the following example.

Example. Let $D = (0, 1)$. A real function $\phi : D \to \mathbb{R}$ is a solution to
$$\varepsilon \frac{d^2\phi}{dx^2} + \frac{d\phi}{dx} = 0, \quad \text{in } D, \qquad (3.0.3)$$
$$\phi|_{x=0} = 0, \quad \phi|_{x=1} = 1. \qquad (3.0.4)$$
The solution $\phi$ is
$$\phi = \phi(x; \varepsilon) = \frac{1 - e^{-\frac{x}{\varepsilon}}}{1 - e^{-\frac{1}{\varepsilon}}},$$

which is monotone increasing. Now we consider two norms.

i) $\|\phi\|_D = \max_D |\phi|$. Then problem (3.0.3)--(3.0.4) is singular, since $\|\phi - \phi^0\|_D = 1$, where $\phi^0 = 0$ or $\phi^0 = 1$ solves the limit equation with one of the boundary conditions.

ii) Define
$$\|\phi\|_D = \left(\int_D \phi^2\right)^{\frac{1}{2}},$$
and choose $\phi^0 = 1$, which satisfies $\frac{d\phi}{dx} = 0$. Then we can prove easily that $\|\phi - \phi^0\|_D \to 0$ as $\varepsilon \to 0$, whence problem (3.0.3)--(3.0.4) is regular.

In what follows, we restrict ourselves to the maximum norm only, and in the remaining part of this chapter the domain $D$ is defined by $D = (0, 1)$.

3.1 First order ODEs

3.1.1 Regular

In this subsection we first consider a regular problem for an ordinary differential equation of first order. Consider
$$\frac{du}{dx} + u = \varepsilon x, \quad \text{in } D, \qquad (3.1.1)$$
$$u(0) = 1, \qquad (3.1.2)$$
and its associated problem
$$\frac{du}{dx} + u = 0, \quad \text{in } D, \qquad (3.1.3)$$
$$u(0) = 1. \qquad (3.1.4)$$
We can solve these problems easily; the solutions read
$$u^\varepsilon(x) = (1 + \varepsilon) e^{-x} + \varepsilon(x - 1), \quad u^0(x) = e^{-x}. \qquad (3.1.5)$$
Calculating the difference of these two solutions yields
$$\|u^\varepsilon - u^0\|_D = \varepsilon \max_D \left|e^{-x} + x - 1\right| \to 0, \quad \text{as } \varepsilon \to 0.$$
Therefore, problem (3.1.1)--(3.1.2) is regular, and the term $\varepsilon x$ is a regular perturbation. But, in general, one cannot expect explicit simple formulas for exact solutions like (3.1.5). Thus we next deal with this problem in

a general way, and employ the method of asymptotic expansions. To this end, we define an ansatz
$$u^\varepsilon(x) = u_0(x) + \varepsilon u_1(x) + \varepsilon^2 u_2(x) + \cdots, \qquad (3.1.6)$$
and insert it into equation (3.1.1) to get
$$\varepsilon^0: \quad u_0' + u_0 = 0, \quad u_0(0) = 1, \qquad (3.1.7)$$
$$\varepsilon^1: \quad u_1' + u_1 = x, \quad u_1(0) = 0, \qquad (3.1.8)$$
$$\varepsilon^2: \quad u_2' + u_2 = 0, \quad u_2(0) = 0. \qquad (3.1.9)$$
The condition $u_0(0) = 1$ follows from (3.1.2) and ansatz (3.1.6). In fact, there holds
$$1 = u^\varepsilon(0) = u_0(0) + \varepsilon u_1(0) + \varepsilon^2 u_2(0) + \cdots \to u_0(0), \qquad (3.1.10)$$
thus $u_0(0) = 1$. With this in hand, we use (3.1.10) again to derive the condition $u_1(0) = 0$: we obtain $0 = \varepsilon u_1(0) + \varepsilon^2 u_2(0) + \cdots$, whence
$$0 = u_1(0) + \varepsilon u_2(0) + \cdots. \qquad (3.1.11)$$
Letting $\varepsilon \to 0$ we get $u_1(0) = 0$. In a similar manner we derive the condition $u_2(0) = 0$. Solving problems (3.1.7), (3.1.8) and (3.1.9), we have
$$u_0(x) = e^{-x}, \quad u_1(x) = x - 1 + e^{-x}, \quad u_2(x) = 0. \qquad (3.1.12)$$
Thus the approximations can be constructed:
$$U_0^\varepsilon(x) = e^{-x}, \qquad (3.1.13)$$
$$U_1^\varepsilon(x) = U_2^\varepsilon(x) = (1 + \varepsilon) e^{-x} + \varepsilon(x - 1). \qquad (3.1.14)$$
A simple calculation shows that $U_0^\varepsilon(x)$ satisfies (3.1.1) with a small error,
$$\left(U_0^\varepsilon(x)\right)' + U_0^\varepsilon(x) - \varepsilon x = O(\varepsilon),$$
while condition (3.1.2) is satisfied exactly. Note that $U_1^\varepsilon(x) = U_2^\varepsilon(x)$ is equal to the exact solution (3.1.5), so it solves problem (3.1.1)--(3.1.2) and is a very good approximation.
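The error estimates just stated are easy to confirm numerically; the sketch below (my own illustration, with names chosen for this purpose) compares the exact solution (3.1.5) with the one- and two-term expansions on a grid:

```python
import math

eps = 1e-2

def u_exact(x):
    # Exact solution of u' + u = eps*x, u(0) = 1, cf. (3.1.5)
    return (1 + eps) * math.exp(-x) + eps * (x - 1)

def U0(x):
    # One-term (leading-order) approximation
    return math.exp(-x)

def U1(x):
    # Two-term approximation u0 + eps*u1 with u1 = x - 1 + e^(-x)
    return math.exp(-x) + eps * (x - 1 + math.exp(-x))

grid = [i / 100 for i in range(101)]
err0 = max(abs(u_exact(x) - U0(x)) for x in grid)
err1 = max(abs(u_exact(x) - U1(x)) for x in grid)
# err0 is of size O(eps); err1 vanishes (up to rounding), since the
# two-term expansion coincides with the exact solution for this problem
print(err0, err1)
```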

3.1.2 Singular

Now we are going to study singular perturbations and boundary layers. The perturbed problem is
$$\varepsilon \frac{du}{dx} + u = x, \quad \text{in } D, \qquad (3.1.15)$$
$$u(0) = 1, \qquad (3.1.16)$$
and the associated one is
$$u = x, \quad \text{in } D, \qquad (3.1.17)$$
$$u(0) = 1. \qquad (3.1.18)$$
The exact solution of problem (3.1.15)--(3.1.16) is
$$u^\varepsilon(x) = (1 + \varepsilon) e^{-\frac{x}{\varepsilon}} + x - \varepsilon. \qquad (3.1.19)$$
Let $u^0(x) = x$. Computation yields
$$\|u^\varepsilon - u^0\|_D = \max_D \left|(1 + \varepsilon) e^{-\frac{x}{\varepsilon}} - \varepsilon\right| = 1,$$
for any positive $\varepsilon$. Therefore, by definition, problem (3.1.15)--(3.1.16) is singular, and the term $\varepsilon \frac{du}{dx}$ is a singular perturbation.

We next want to employ the method of asymptotic expansions to study this singular problem. For such a problem at least one boundary layer arises, and a matched asymptotic expansion is suitable for it. We will construct outer and inner expansions, which are valid in the so-called outer and inner regions, respectively. Then we derive matching conditions, which enable us to establish an asymptotic expansion that is valid uniformly in the whole domain. Thus we start with the construction of outer expansions.

Outer expansions

The ansatz for deriving an outer expansion is just of the form of a regular expansion:
$$u^\varepsilon(x) = u_0(x) + \varepsilon u_1(x) + \varepsilon^2 u_2(x) + \cdots. \qquad (3.1.20)$$
Similarly to the approach for the asymptotic expansion of the regular problem (3.1.1)--(3.1.2), we obtain
$$\varepsilon^0: \quad u_0(x) = x, \qquad (3.1.21)$$
$$\varepsilon^1: \quad u_1(x) + u_0'(x) = 0, \qquad (3.1.22)$$
$$\varepsilon^2: \quad u_2(x) + u_1'(x) = 0. \qquad (3.1.23)$$

Solving the above problems yields
$$u_0(x) = x, \qquad (3.1.24)$$
$$u_1(x) = -1, \qquad (3.1.25)$$
$$u_2(x) = 0. \qquad (3.1.26)$$
Then we get the approximation
$$O_2^\varepsilon(x) = x - \varepsilon. \qquad (3.1.27)$$
Moreover, from (3.1.24) we obtain $u_0(0) = 0$, which differs from the given condition $u(0) = 1$; thus there is a boundary layer appearing at $x = 0$.

Inner expansions

To construct inner expansions, we introduce a new variable, the so-called fast variable:
$$\xi = \frac{x}{\varepsilon}.$$
On the one hand, for a beginner in the theory of asymptotic analysis, it may not be easy to understand why we define $\xi$ in this form: why is the power of $\varepsilon$ equal to $1$? To convince oneself, one may assume a more general form, $\xi = \frac{x}{\varepsilon^\alpha}$ with $\alpha \in \mathbb{R}$. Then, repeating the procedure that we will carry out later in the next subsection, one proves that $\alpha$ must be equal to $1$ in order to get an asymptotic expansion. On the other hand, we have already assumed that a boundary layer occurs at $x = 0$. If this assumption were incorrect, the procedure would break down when we try to match the inner and outer expansions in the intermediate region. In that case one may assume that there exists a boundary layer near a point $x = x_0$; the following analysis is the same, except that the scale transformation in the boundary layer is $\xi = \frac{x - x_0}{\delta}$. We shall carry out the analysis for determining $\delta$ in the next section.

An inner expansion is in terms of $\xi$; we assume that
$$u^\varepsilon(x) = U_0(\xi) + \varepsilon U_1(\xi) + \varepsilon^2 U_2(\xi) + \cdots. \qquad (3.1.28)$$
It is easy to compute that, for $i = 0, 1, 2, \dots$,
$$\frac{dU_i(\xi)}{dx} = \frac{1}{\varepsilon} \frac{dU_i(\xi)}{d\xi}.$$
Invoking equation (3.1.15) we arrive at
$$\varepsilon^0: \quad U_0' + U_0 = 0, \qquad (3.1.29)$$
$$\varepsilon^1: \quad U_1' + U_1 = \xi, \qquad (3.1.30)$$
$$\varepsilon^2: \quad U_2' + U_2 = 0. \qquad (3.1.31)$$

From these we have
$$U_0(\xi) = C_0 e^{-\xi}, \quad U_1(\xi) = C_1 e^{-\xi} + \xi - 1, \quad U_2(\xi) = C_2 e^{-\xi}. \qquad (3.1.32)$$
The next step is to determine the constants $C_i$, $i = 0, 1, 2$. To this end, we use the condition at $x = 0$ (which implies $\xi = 0$ too) to conclude that $U_0(0) = 1$, thus $C_0 = 1$. Similarly we have $C_1 = 1$ and $C_2 = 0$. Therefore, an inner expansion can be obtained:
$$I_2^\varepsilon(\xi) = (1 + \varepsilon) e^{-\xi} + \varepsilon(\xi - 1). \qquad (3.1.33)$$

Matched asymptotic expansions

There are two main approaches to combining the inner and outer expansions. The first one is to take the sum of the inner expansion (3.1.33) and the outer expansion (3.1.27), then subtract their common part, which is valid in the intermediate region. To get a matched asymptotic expansion, it remains to find the common part. We start with
$$U_0(\xi) + \varepsilon U_1(\xi) + \varepsilon^2 U_2(\xi) = u_0(x) + \varepsilon u_1(x) + \varepsilon^2 u_2(x) + O(\varepsilon^3).$$
Following Fife [14], we rewrite $x = \varepsilon\xi$ and expand the right-hand side in terms of $\xi$. There holds
$$U_0(\xi) + \varepsilon U_1(\xi) + \varepsilon^2 U_2(\xi) = u_0(\varepsilon\xi) + \varepsilon u_1(\varepsilon\xi) + \varepsilon^2 u_2(\varepsilon\xi) + O(\varepsilon^3)$$
$$= u_0(0) + u_0'(0)\varepsilon\xi + \tfrac{1}{2} u_0''(0)(\varepsilon\xi)^2 + \varepsilon\left(u_1(0) + u_1'(0)\varepsilon\xi\right) + \varepsilon^2 u_2(0) + O(\varepsilon^3)$$
$$= u_0(0) + \varepsilon\left(u_0'(0)\xi + u_1(0)\right) + \varepsilon^2\left(\tfrac{1}{2} u_0''(0)\xi^2 + u_1'(0)\xi + u_2(0)\right) + O(\varepsilon^3). \qquad (3.1.34)$$
Therefore we obtain the following matching conditions, as $\xi \to \infty$:
$$U_0(\xi) \to u_0(0) = 0, \qquad (3.1.35)$$
$$U_1(\xi) - \left(u_0'(0)\xi + u_1(0)\right) = U_1(\xi) - (\xi - 1) \to 0, \qquad (3.1.36)$$
$$U_2(\xi) - \left(\tfrac{1}{2} u_0''(0)\xi^2 + u_1'(0)\xi + u_2(0)\right) = U_2(\xi) \to 0. \qquad (3.1.37)$$
The common part is
$$u_0(0) + \varepsilon\left(u_0'(0)\xi + u_1(0)\right) + \varepsilon^2\left(\tfrac{1}{2} u_0''(0)\xi^2 + u_1'(0)\xi + u_2(0)\right) = \varepsilon(\xi - 1).$$
The matched asymptotic expansion then is
$$U_2^\varepsilon(x) = I_2^\varepsilon(\xi) + O_2^\varepsilon(x) - \text{common part} = (1 + \varepsilon) e^{-\xi} + \varepsilon(\xi - 1) + (x - \varepsilon) - \varepsilon(\xi - 1) = (1 + \varepsilon) e^{-\frac{x}{\varepsilon}} + x - \varepsilon. \qquad (3.1.38)$$
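Since everything in this example is explicit, one can confirm numerically that the outer expansion alone fails in the boundary layer while the composite expansion (3.1.38) does not. The following check is an illustration added here (all names are mine, and the value of eps is chosen arbitrarily):

```python
import math

eps = 1e-2

def u_exact(x):
    # Exact solution of eps*u' + u = x, u(0) = 1, cf. (3.1.19)
    return (1 + eps) * math.exp(-x / eps) + x - eps

def outer(x):
    # Outer approximation O_2 = x - eps, cf. (3.1.27)
    return x - eps

def composite(x):
    # Inner + outer - common part, cf. (3.1.38)
    return (1 + eps) * math.exp(-x / eps) + x - eps

grid = [i / 1000 for i in range(1001)]
outer_err = max(abs(u_exact(x) - outer(x)) for x in grid)
comp_err = max(abs(u_exact(x) - composite(x)) for x in grid)
# The outer expansion misses the boundary layer (error close to 1 at x = 0),
# while the composite expansion reproduces the exact solution
print(outer_err, comp_err)
```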

So $U_2^\varepsilon(x)$ is just the exact solution of problem (3.1.15)--(3.1.16).

The second method for constructing a matched asymptotic expansion from inner and outer expansions is to make use of a suitable cut-off function to form a combination of the inner and outer expansions. We define a smooth function $\chi = \chi(\xi) : \mathbb{R} \to \mathbb{R}_+$ such that
$$\chi(\xi) = \begin{cases} 1 & \text{if } \xi \le 1, \\ 0 & \text{if } \xi \ge 2, \end{cases} \qquad (3.1.39)$$
and $0 \le \chi(\xi) \le 1$ for $\xi \in [1, 2]$. Let
$$\chi_\varepsilon(x) = \chi(\varepsilon^{-\gamma} x), \qquad (3.1.40)$$
so that, as is easily seen,
$$\mathrm{supp}(\chi_\varepsilon) \subset [0, 2\varepsilon^\gamma], \quad \mathrm{supp}(\chi_\varepsilon'),\ \mathrm{supp}(\chi_\varepsilon'') \subset [\varepsilon^\gamma, 2\varepsilon^\gamma].$$
Here $\gamma \in (0, 1)$ is a fixed number. Now we are able to define an approximation by
$$U_2^\varepsilon(x) = (1 - \chi_\varepsilon(x))\, O_2^\varepsilon(x) + \chi_\varepsilon(x)\, I_2^\varepsilon(\xi). \qquad (3.1.41)$$
With this method we do not need to find the common part, and the argument is simpler; the price we pay, however, is that $U_2^\varepsilon(x)$ no longer satisfies equation (3.1.15) exactly. Instead, an error occurs:
$$\varepsilon \frac{dU_2^\varepsilon(x)}{dx} + U_2^\varepsilon(x) - x = \varepsilon\, \chi_\varepsilon'(x)\left(I_2^\varepsilon(\xi) - O_2^\varepsilon(x)\right) = O(\varepsilon^{1-\gamma}). \qquad (3.1.42)$$

3.2 Second order ODEs and boundary layers

The previous examples have given us some ideas about the method of asymptotic expansions; however, they cannot display all the features of this method, since they are really too simple. In this section we are going to study a more complex problem, which possesses all the aspects of asymptotic expansions. Consider the problem for a second order ordinary differential equation:
$$\varepsilon \frac{d^2 u}{dx^2} + (1 + \varepsilon)\frac{du}{dx} + u = 0, \quad \text{in } D, \qquad (3.2.1)$$
$$u(0) = 0, \quad u(1) = 1. \qquad (3.2.2)$$

First of all, we explain that problem (3.2.1)--(3.2.2) is not regular. We set
$$u^\varepsilon(x) = u_0(x) + \varepsilon u_1(x) + \varepsilon^2 u_2(x) + \cdots. \qquad (3.2.3)$$
Inserting this into equation (3.2.1) and comparing the coefficients of $\varepsilon^i$ on both sides, one has
$$\frac{du_0}{dx} + u_0 = 0, \qquad (3.2.4)$$
$$u_0(0) = 0, \quad u_0(1) = 1. \qquad (3.2.5)$$
Then we further get
$$\frac{du_1}{dx} + u_1 + \frac{d^2 u_0}{dx^2} + \frac{du_0}{dx} = 0, \qquad (3.2.6)$$
$$u_1(0) = 0, \qquad (3.2.7)$$
$$\frac{du_2}{dx} + u_2 + \frac{d^2 u_1}{dx^2} + \frac{du_1}{dx} = 0, \qquad (3.2.8)$$
$$u_2(0) = 0. \qquad (3.2.9)$$
Since equation (3.2.4) is of first order, only one of the conditions in (3.2.5) can be satisfied. The solution of (3.2.4) is $u_0(x) = C_0 e^{-x}$. We shall see that, even if we require $u_0(x)$ to meet only one condition in (3.2.5), there is still no asymptotic expansion of the form (3.2.3). There are two cases.

Case i) Suppose that the condition at $x = 0$ is satisfied (we do not care, at this moment, about the other condition); then $C_0 = 0$, hence $u_0(x) = 0$. Solving problems (3.2.6)--(3.2.7) and (3.2.8)--(3.2.9), one has $u_1(x) = u_2(x) = 0$. Thus no asymptotic expansion can be found.

Case ii) Assume that the condition at $x = 1$ is satisfied; then $C_0 = e$, and $u_0(x) = e^{1-x}$. Consequently, equations (3.2.6) and (3.2.8) become
$$\frac{du_1}{dx} + u_1 = 0, \quad \frac{du_2}{dx} + u_2 = 0. \qquad (3.2.10)$$
Hence,
$$u_1(x) = C_1 e^{-x}, \quad u_2(x) = C_2 e^{-x}.$$

However, from the conditions $u_1(1) = u_2(1) = 0$, which can be derived from ansatz (3.2.3), it follows that $C_1 = C_2 = 0$, whence $u_1(x) = u_2(x) = 0$, and the only possible asymptotic expansion is $U_i^\varepsilon(x) = e^{1-x}$ for every $i = 0, 1, 2, \dots$. Note that
$$U_i^\varepsilon(0) - u^\varepsilon(0) = e \neq 0.$$
Therefore, (3.2.1)--(3.2.2) is singular.

3.2.1 Outer expansions

This subsection is concerned with outer expansions. We begin with the definition of an ansatz:
$$u^\varepsilon(x) = u_0(x) + \varepsilon u_1(x) + \varepsilon^2 u_2(x) + \cdots. \qquad (3.2.11)$$
For simplicity of notation, we will denote the derivative of a one-variable function by $'$, namely $f'(x) = \frac{df}{dx}$, $f'(\xi) = \frac{df}{d\xi}$, etc. Inserting (3.2.11) into equation (3.2.1) and equating the coefficients of $\varepsilon^i$ on both sides yields
$$\varepsilon^0: \quad u_0' + u_0 = 0, \quad u_0(1) = 1, \qquad (3.2.12)$$
$$\varepsilon^1: \quad u_1' + u_1 + u_0'' + u_0' = 0, \quad u_1(1) = 0, \qquad (3.2.13)$$
$$\varepsilon^2: \quad u_2' + u_2 + u_1'' + u_1' = 0, \quad u_2(1) = 0. \qquad (3.2.14)$$
The solutions of (3.2.12), (3.2.13) and (3.2.14) are, respectively,
$$u_0(x) = e^{1-x}, \quad u_1(x) = u_2(x) = 0.$$
Thus outer approximations (up to $i+1$ terms) can be constructed as follows:
$$O_i^\varepsilon(x) = e^{1-x}, \qquad (3.2.15)$$
where $i = 0, 1, 2$.

3.2.2 Inner expansions

The construction of an inner expansion is more complicated than that of an outer expansion. Firstly, a correct scale should be decided, by using the rescaling technique.

Rescaling

Introduce a new variable
$$\xi = \frac{x}{\delta}, \qquad (3.2.16)$$
where $\delta = \delta(\varepsilon)$. In what follows, we shall prove that, in order to get an inner expansion which matches the outer expansion well, $\delta$ should be very small, so $\xi$ is called a fast variable. The first goal of this subsection is to find a correct formula for $\delta$. Rewriting equation (3.2.1) in terms of $\xi$ gives
$$\frac{\varepsilon}{\delta^2} \frac{d^2 U}{d\xi^2} + \frac{1 + \varepsilon}{\delta} \frac{dU}{d\xi} + U = 0. \qquad (3.2.17)$$
To investigate the relations between the coefficients of (3.2.17), i.e. $\frac{\varepsilon}{\delta^2}$, $\frac{1+\varepsilon}{\delta}$, $1$, there are five cases which should be taken into account. Note that, since $\varepsilon \ll 1$, we have $\frac{\varepsilon}{\delta^2} \ll \frac{1}{\delta^2}$ and $\frac{1+\varepsilon}{\delta} \sim \frac{1}{\delta}$.

Case i) $\delta \gg 1$. Recalling that $\varepsilon \ll 1$, one has $\frac{\varepsilon}{\delta^2} \ll 1$ and $\frac{1+\varepsilon}{\delta} \ll 1$. Thus equation (3.2.17) becomes
$$\underbrace{\frac{\varepsilon}{\delta^2} \frac{d^2 U}{d\xi^2}}_{o(1)} + \underbrace{\frac{1+\varepsilon}{\delta} \frac{dU}{d\xi}}_{o(1)} + \underbrace{U}_{O(1)} = 0, \qquad (3.2.18)$$
so $U = o(1)$. This large $\delta$ is not a correct scale.

Case ii) $\delta \sim 1$. This implies $\xi \sim x$, and (3.2.16) changes nothing. In the present case, only a regular expansion can be expected, so this is not what we want.

Case iii) $\delta \ll 1$ and $\frac{\varepsilon}{\delta^2} \gg \frac{1}{\delta}$, from which it follows that $\varepsilon \gg \delta$. Dividing equation (3.2.17) by $\frac{\varepsilon}{\delta^2}$ yields
$$\frac{d^2 U}{d\xi^2} + \underbrace{\frac{(1+\varepsilon)\,\delta}{\varepsilon} \frac{dU}{d\xi}}_{o(1)} + \underbrace{\frac{\delta^2}{\varepsilon}\, U}_{o(1)} = 0, \qquad (3.2.19)$$

from which it follows that d²U/dξ² = o(1). Thus this scale would not lead to an inner expansion either.

Case iv) δ ≪ 1 and ε/δ² ∼ 1/δ. We have ε ∼ δ. Multiplying equation (3.2.17) by δ, we get

(ε/δ) d²U/dξ² + dU/dξ + ε dU/dξ + δ U = 0,  (3.2.20)

in which ε/δ ∼ 1 and the last two terms are o(1); this leads to a correct scale. We just choose the simple relation δ = ε, and (3.2.16) turns out to be ξ = x/ε.

Case v) δ ≪ 1 and ε/δ² ≪ 1/δ, which implies ε ≪ δ. Multiplying equation (3.2.17) by δ we obtain

(ε/δ) d²U/dξ² + (1+ε) dU/dξ + δ U = 0,  (3.2.21)

in which ε/δ ≪ 1, 1+ε ∼ 1 and δ ≪ 1, which implies dU/dξ = o(1). This case is not what we want.

Now we turn back to the construction of inner expansions. From the rescaling we can define

ξ = x/ε,  (3.2.22)

and an ansatz as follows:

u^ε(x) = U_0(ξ) + ε U_1(ξ) + ε² U_2(ξ) + ⋯.  (3.2.23)

It is easy to compute that, for i = 0, 1, 2, …,

dU_i(ξ)/dx = (1/ε) dU_i(ξ)/dξ,  d²U_i(ξ)/dx² = (1/ε²) d²U_i(ξ)/dξ².

Then invoking equation (3.1.1) we arrive at

ε^{−1}: U_0″ + U_0′ = 0,  (3.2.24)
ε⁰: U_1″ + U_1′ + U_0′ + U_0 = 0,  (3.2.25)
ε¹: U_2″ + U_2′ + U_1′ + U_1 = 0.  (3.2.26)

From these we obtain the general solutions

U_0(ξ) = C_{01} e^{−ξ} + C_{02},  (3.2.27)
U_1(ξ) = C_{11} e^{−ξ} + C_{12} − C_{02} ξ,  (3.2.28)
U_2(ξ) = C_{21} e^{−ξ} + C_{22} + (C_{02}/2) ξ² − C_{12} ξ.  (3.2.29)

Here the C_{ij}, with i = 0, 1, 2 and j = 1, 2, are constants. The next step is to determine these constants. To this end, we use the condition at x = 0 (which implies ξ = 0 too) to conclude that U_0(0) = 0, U_1(0) = 0 and U_2(0) = 0, thus

C_{i1} = −C_{i2} =: A_i,  (3.2.30)

for i = 0, 1, 2. Hence (3.2.27) – (3.2.29) are reduced to

U_0(ξ) = A_0 (e^{−ξ} − 1),  (3.2.31)
U_1(ξ) = A_1 (e^{−ξ} − 1) + A_0 ξ,  (3.2.32)
U_2(ξ) = A_2 (e^{−ξ} − 1) − (A_0/2) ξ² + A_1 ξ.  (3.2.33)

Therefore we still need to find the constants A_i. For this purpose we need matching conditions. An inner region lies near the boundary layer and is usually very thin, of O(ε) in the present problem, while an outer region is far from the boundary layer. Thus there is an intermediate (or matching, or overlapping) region between them; the distance of this region from the boundary layer is of O(ε^α), where α ∈ (0, 1). Inner and outer expansions are valid (by this word we mean that an expansion satisfies the associated equation well), respectively, over the inner and outer regions. Roughly speaking, matching conditions are conditions imposed over the intermediate region so that the outer and inner expansions coincide there. The task of the next subsection is to find such conditions.

3.2.3 Matching conditions

We now expect, reasonably, that the inner expansion coincides with the outer one in the intermediate region, and write

U_0(ξ) + ε U_1(ξ) + ε² U_2(ξ) = u_0(x) + ε u_1(x) + ε² u_2(x) + O(ε³).

To derive the matching conditions, we shall employ two main methods.
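The general solutions (3.2.27) – (3.2.29) can be checked against the inner equations (3.2.24) – (3.2.26) by a quick finite-difference computation. A minimal sketch (the constants C_ij below are arbitrary values chosen only for the check):

```python
import math

# Arbitrary constants C_ij, chosen only for this check
C01, C02 = 1.3, -0.7
C11, C12 = 0.4, 2.1
C21, C22 = -1.1, 0.9

# General solutions (3.2.27)-(3.2.29) of the inner equations
U0 = lambda s: C01 * math.exp(-s) + C02
U1 = lambda s: C11 * math.exp(-s) + C12 - C02 * s
U2 = lambda s: C21 * math.exp(-s) + C22 + 0.5 * C02 * s**2 - C12 * s

def d1(f, s, h=1e-4):
    # central first difference
    return (f(s + h) - f(s - h)) / (2 * h)

def d2(f, s, h=1e-4):
    # central second difference
    return (f(s + h) - 2 * f(s) + f(s - h)) / h**2

s = 0.8
r0 = d2(U0, s) + d1(U0, s)                      # residual of (3.2.24)
r1 = d2(U1, s) + d1(U1, s) + d1(U0, s) + U0(s)  # residual of (3.2.25)
r2 = d2(U2, s) + d1(U2, s) + d1(U1, s) + U1(s)  # residual of (3.2.26)

print(r0, r1, r2)   # all residuals are numerically zero
```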

Matching by expansions (relation with the intermediate variable method?)

Following Fife [14], we rewrite x = εξ and expand the right-hand side in terms of ξ. We then obtain the matching conditions

U_0(ξ) → u_0(0) = e,  (3.2.34)
U_1(ξ) → u_0′(0) ξ + u_1(0) = −e ξ,  (3.2.35)
U_2(ξ) → (1/2) u_0″(0) ξ² + u_1′(0) ξ + u_2(0) = (e/2) ξ²,  (3.2.36)

as ξ → ∞. From (3.2.31) it follows that U_0(ξ) → −A_0 as ξ → ∞. Combination with (3.2.34) yields

A_0 = −e.  (3.2.37)

Hence, (3.2.31) – (3.2.33) are now

U_0(ξ) = e (1 − e^{−ξ}),  (3.2.38)
U_1(ξ) = A_1 (e^{−ξ} − 1) − e ξ,  (3.2.39)
U_2(ξ) = A_2 (e^{−ξ} − 1) + (e/2) ξ² + A_1 ξ.  (3.2.40)

So the leading term of the inner expansion is obtained. Comparing (3.2.35) with (3.2.39) for large ξ, we have

A_1 = 0.  (3.2.41)

In a similar manner, from (3.2.36) and (3.2.40) one gets

A_2 = 0.  (3.2.42)

Therefore the first three terms of the inner expansion are determined, and they read

U_0(ξ) = e (1 − e^{−ξ}),  (3.2.43)
U_1(ξ) = −e ξ,  (3.2.44)
U_2(ξ) = (e/2) ξ².  (3.2.45)

Using these functions, we define approximations up to i+1 terms (i = 0, 1, 2) as follows:

I_0^ε(ξ) = e (1 − e^{−ξ}),  (3.2.46)
I_1^ε(ξ) = e (1 − e^{−ξ}) − ε e ξ,  (3.2.47)
I_2^ε(ξ) = e (1 − e^{−ξ}) − ε e ξ + ε² (e/2) ξ².  (3.2.48)
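The quality of the inner approximations inside the layer can be checked against the exact solution u(x) = (e^{−x} − e^{−x/ε})/(e^{−1} − e^{−1/ε}) of the model problem εu″ + (1+ε)u′ + u = 0, u(0) = 0, u(1) = 1. A small sketch (an illustration added here, not part of the original notes), evaluating the two-term inner approximation (3.2.47) at a point x = 2ε inside the layer:

```python
import math

eps = 1e-3
e = math.e

def exact(x):
    # Exact solution of eps*u'' + (1+eps)*u' + u = 0, u(0)=0, u(1)=1
    return (math.exp(-x) - math.exp(-x / eps)) / (math.exp(-1) - math.exp(-1 / eps))

def I1(xi):
    # Two-term inner approximation (3.2.47): e(1 - e^{-xi}) - eps*e*xi
    return e * (1 - math.exp(-xi)) - eps * e * xi

xi = 2.0                         # point inside the boundary layer, x = 2*eps
err_inner = abs(exact(eps * xi) - I1(xi))
print(err_inner)                 # small: of the order eps^2 inside the layer
```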

Van Dyke's rule for matching

Matching with an intermediate variable can be tiresome. The following rule of Van Dyke [33] usually works and is more convenient. For a function f, the corresponding outer and inner expansions are denoted, respectively, by f = Σ_n ε^n f_n(x) and f = Σ_n ε^n g_n(ξ). We define:

Definition. Let P, Q be non-negative integers. Set

E_P f = outer limit (x fixed, ε → 0) retaining P + 1 terms of the outer expansion = Σ_{n=0}^{P} ε^n f_n(x),  (3.2.49)

and

H_Q f = inner limit (ξ fixed, ε → 0) retaining Q + 1 terms of the inner expansion = Σ_{n=0}^{Q} ε^n g_n(ξ).  (3.2.50)

Then the Van Dyke matching rule can be stated as

E_P H_Q f = H_Q E_P f.

Example. Let P = Q = 0. For our problem in this section, we take f = u^ε, and H_0 f := A_0 (e^{−ξ} − 1), E_0 f := e^{1−x}. Then

E_0 H_0 f = E_0 {A_0 (e^{−ξ} − 1)} = E_0 {A_0 (e^{−x/ε} − 1)} = −A_0,  (3.2.51)

and

H_0 E_0 f = H_0 {e^{1−x}} = H_0 {e^{1−εξ}} = e.  (3.2.52)

By the Van Dyke rule, (3.2.51) must coincide with (3.2.52), and we obtain A_0 = −e, which is (3.2.37). We can also derive the matching conditions of higher order in this way.
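The two iterated limits in Van Dyke's rule for P = Q = 0 can be mimicked numerically: hold x fixed (resp. ξ fixed) and let ε become very small. A sketch (illustrative only), using the sign convention U_0(ξ) = A_0(e^{−ξ} − 1), for which the rule forces A_0 = −e:

```python
import math

e = math.e
A0 = -e                          # value forced by Van Dyke's rule

def H0(xi):
    # leading inner term A_0 (e^{-xi} - 1)
    return A0 * (math.exp(-xi) - 1.0)

def E0(x):
    # leading outer term e^{1-x}
    return math.exp(1 - x)

eps = 1e-8
x_fixed, xi_fixed = 0.5, 0.5

# E_0 H_0 u: write xi = x/eps, hold x fixed, let eps -> 0
E0H0 = H0(x_fixed / eps)
# H_0 E_0 u: write x = eps*xi, hold xi fixed, let eps -> 0
H0E0 = E0(eps * xi_fixed)

print(E0H0, H0E0)   # both tend to e, so the rule is satisfied
```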

3.2.4 Matched asymptotic expansions

In this subsection we shall make use of the inner and outer expansions to construct approximations. Again we do it in two ways.

i) The first method: adding the inner and outer expansions, then subtracting the common part, we obtain

U_0^ε(x) = e^{1−x} + e(1 − e^{−ξ}) − e = e (e^{−x} − e^{−x/ε}),
U_1^ε(x) = e (e^{−x} − e^{−x/ε}) − ε e ξ + ε e ξ = e (e^{−x} − e^{−x/ε}),
U_2^ε(x) = e (e^{−x} − e^{−x/ε}) + ε² (e/2) ξ² − (e/2)(εξ)² = e (e^{−x} − e^{−x/ε}),  (3.2.53)

since εξ = x. From this one asserts that U_0^ε(x) = U_1^ε(x) = U_2^ε(x): taking more terms does not increase the accuracy! This is different from what we had for algebraic equations.

ii) The second method: employing the cut-off function χ_ε defined in the previous subsection, we get

U_i^ε(x) = (1 − χ_ε(x)) O_i^ε(x) + χ_ε(x) I_i^ε(ξ),  (3.2.54)

for i = 0, 1, 2.

3.3 Examples

The following examples will help us to understand the method of matched asymptotic expansions.

Example 1. In a given singular perturbation problem, more than one boundary layer can occur. This is exemplified by

ε d²u/dx² − u = A, in D,  (3.3.1)
u(0) = α, u(1) = β.  (3.3.2)

Here A ≠ 0, β ≠ 0.

Example 2. A problem can be singular although ε does not multiply the highest order derivative of the equation. A simple example is the following:

∂²u/∂x² − ε ∂u/∂y = 0, in D = {(x, y) | 0 < x < 1, 0 < y < y_0},  (3.3.3)
u(x, 0; ε) = f(x), for 0 ≤ x ≤ 1,  (3.3.4)
u(0, y; ε) = g_1(y), u(1, y; ε) = g_2(y), for 0 ≤ y ≤ y_0.  (3.3.5)

Here we take y_0 > 0, and choose u_0 satisfying ∂²u_0/∂x² = 0 as follows:

u_0(x, y; ε) = g_1(y) + (g_2(y) − g_1(y)) x.

However, in general u_0(x, 0; ε) ≠ f(x), so that u_0 is not an approximation of u in D. This can be easily understood by noting that (3.3.3) is parabolic, while it becomes elliptic (in x) if ε = 0.

Example 3. In certain perturbation problems there exists a uniquely defined function u_0 in D satisfying the limit equation L_0 u_0 = f_0 and all the boundary conditions imposed on u, and yet u_0 is not an approximation of u:

(x + ε)² du/dx + ε = 0, for 0 < x < A,  (3.3.6)
u(0; ε) = 1.  (3.3.7)

We have the exact solution to problem (3.3.6) – (3.3.7):

u = u(x; ε) = ε/(x + ε),

and L_0 = x² d/dx. The function u_0 ≡ 1 satisfies L_0 u_0 = 0 and also the boundary condition. But

lim_{ε→0} u(x; ε) = 0 if x ≠ 0, and = 1 if x = 0,  (3.3.8)

and

max_D |u − u_0| = A/(A + ε) ↛ 0

as ε → 0.

Example 4. Some operators L_ε cannot be decomposed into an unperturbed ε-independent part and a perturbation:

du/dx − ε exp(−(u − 1)/ε) = 0, in D = {0 < x < A}, A > 0,  (3.3.9)
u(0; ε) = 1 − α, α > 0.  (3.3.10)

Note that

lim_{ε→0} max_D |ε exp(−(g − 1)/ε)| = 0

if and only if g > 1 for x ∈ D. Thus we do not have a decomposition with L_0 = d/dx. Moreover, from the exact solution

u(x; ε) = 1 + ε log(x + exp(−α/ε)),

we assert easily that none of the usually successful methods produces an approximation of u in D.
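Example 3 is easy to explore numerically. The sketch below (an illustration, taking A = 1) evaluates the exact solution u = ε/(x + ε) and confirms the non-uniform convergence: the pointwise limit is 0 for every x > 0 while u(0; ε) = 1, and the maximum of |u − u_0| over [0, A] does not vanish as ε → 0.

```python
A = 1.0

def u(x, eps):
    # Exact solution of (x+eps)^2 u' + eps = 0, u(0)=1
    return eps / (x + eps)

# Pointwise limit: 0 for x > 0, but u(0; eps) = 1 for every eps
eps = 1e-10
ptwise = u(0.5, eps)
at_zero = u(0.0, eps)

# Yet max over [0, A] of |u - u_0| (attained at x = A) stays close to 1
max_err = (abs(u(A, 1e-1) - 1.0), abs(u(A, 1e-3) - 1.0), abs(u(A, 1e-6) - 1.0))
print(ptwise, at_zero, max_err)
```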

Chapter 4

Partial differential equations

4.1 Regular problem

Let Ω be an open bounded domain in R^n with smooth boundary ∂Ω, where n ∈ N. Consider

−Δu + ε u = f_0 + ε f_1,  (4.1.1)
u|_{∂Ω} = 0.  (4.1.2)

We shall prove that this is a regular problem.

4.2 Conservation laws and vanishing viscosity method

In this section we study the inviscid limit of scalar conservation laws with viscosity,

u_t + (F(u))_x = ν u_xx,  (4.2.1)
u|_{t=0} = u_0.  (4.2.2)

The associated inviscid problem is

u_t + (F(u))_x = 0,  (4.2.3)
u|_{t=0} = u_0.  (4.2.4)

A basic question is: does the solution of (4.2.1) – (4.2.2) converge to that of (4.2.3) – (4.2.4)? This is the main problem for the method of vanishing viscosity. In this section we are going to prove that the answer to this question is positive under some suitable assumptions. We shall make use of the method of matched asymptotic expansions and the L²-energy method.
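The vanishing viscosity question can be illustrated in the Burgers case F(u) = u²/2, for which the viscous problem has an explicit traveling-wave solution connecting u_l = 1 to u_r = 0. The sketch below (an illustration only, not the proof given in this chapter) measures the L¹ distance to the inviscid shock solution on [−1, 1] as ν decreases; it shrinks roughly linearly in ν.

```python
import math

def u_visc(x, t, nu):
    # Traveling-wave solution of u_t + (u^2/2)_x = nu*u_xx connecting
    # u_l = 1 to u_r = 0; it moves with the Rankine-Hugoniot speed s = 1/2.
    return 1.0 / (1.0 + math.exp((x - 0.5 * t) / (2.0 * nu)))

def u_entropy(x, t):
    # Entropy (shock) solution of the inviscid problem with the same data
    return 1.0 if x < 0.5 * t else 0.0

def l1_dist(t, nu, n=20000):
    # L^1 distance on [-1, 1] via a midpoint Riemann sum
    h = 2.0 / n
    return sum(abs(u_visc(-1 + (k + 0.5) * h, t, nu)
                   - u_entropy(-1 + (k + 0.5) * h, t)) * h
               for k in range(n))

d1, d2, d3 = l1_dist(1.0, 0.1), l1_dist(1.0, 0.01), l1_dist(1.0, 0.001)
print(d1, d2, d3)   # decreases roughly linearly in nu
```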

4.2.1 Construction of approximate solutions

Outer and inner expansions

Matching conditions and approximations

Convergence

Chapter 5

An application to optimal control theory

5.1 Introduction

Optimal control for hyperbolic conservation laws requires a considerable analytical effort and is computationally expensive in practice; it is thus a difficult topic. Some methods have been developed in the last years to reduce the computational cost and to render this type of problem affordable. In particular, the authors of [11] have recently developed an alternating descent method that takes into account the possible shock discontinuities, for the optimal control of the inviscid Burgers equation in one space dimension. Further, in [12] the vanishing viscosity method is employed to study this alternating descent method for the Burgers equation, with the aid of the Hopf-Cole formula, which can be found in [23, 36], for instance. Most results in [12] are formal. In the present chapter we will revisit this alternating descent method in the context of one-dimensional viscous scalar conservation laws with a general nonlinearity. The vanishing viscosity method and the method of matched asymptotic expansions will be applied to study this optimal control problem and to justify the results rigorously.

To be more precise, we state the optimal control problem as follows. For a given T > 0, we study the following inviscid problem:

u_t + (F(u))_x = 0, in R × (0, T);  (5.1.1)
u(x, 0) = u^I(x), x ∈ R.  (5.1.2)

Here F : R → R, u ↦ F(u), is a smooth function, and f denotes its derivative in the following context. The case F(u) = u²/2 is studied in e.g. [11, 12]. Given a target u^D ∈ L²(R), we consider the cost functional to be minimized

J : L¹(R) → R, defined by

J(u^I) = ∫_R |u(x, T) − u^D(x)|² dx,

where u(x, t) is the unique entropy solution to problem (5.1.1) – (5.1.2). We also introduce the set of admissible initial data U_ad ⊂ L¹(R), which we shall define later in order to guarantee the existence of a solution of the following optimization problem:

Find u^{I,min} ∈ U_ad such that J(u^{I,min}) = min_{u^I ∈ U_ad} J(u^I).

This is one of the model optimization problems often addressed in the context of optimal aerodynamic design, the so-called inverse design problem; see e.g. [18]. The existence of minimizers has been proved in [11]. From a practical point of view it is, however, more important to be able to develop efficient algorithms for computing accurate approximations of discrete minimizers. The most efficient methods to approximate minimizers are gradient methods. But for large complex systems, such as the Euler equations in higher dimensions, the existing most efficient numerical schemes (upwind, Godunov, etc.) are not differentiable. In this case the gradient of the functional is not well defined, and there is no natural and systematic way to compute its variations. Due to this difficulty, it would be natural to explore the possible use of non-smooth optimization techniques. The following two approaches have been developed. The first one is based on automatic differentiation. The second one is the so-called continuous method, consisting of two steps: one first linearizes the continuous system (5.1.1) to obtain a descent direction of the continuous functional J, and then takes a numerical approximation of this descent direction with the discrete values provided by the numerical scheme. However, this continuous method has to face another major drawback when solutions develop shock discontinuities, as is the case in the context of the hyperbolic conservation laws like (5.1.1) we are considering here.
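To get a feeling for how J is evaluated in practice, here is a minimal sketch; the Lax-Friedrichs discretization, the grid sizes and the sine initial datum are illustrative choices of ours, not the scheme discussed in this chapter. The target u^D is generated from a known initial datum, so the minimum value J = 0 is attained there.

```python
import math

def solve_burgers(u, T, dx, lam=0.4):
    # Lax-Friedrichs scheme for u_t + (u^2/2)_x = 0 on a periodic grid;
    # lam = dt/dx is kept below the CFL limit 1/max|u|.
    dt = lam * dx
    t = 0.0
    n = len(u)
    while t < T:
        dt_eff = min(dt, T - t)
        f = [0.5 * v * v for v in u]
        u = [0.5 * (u[(j + 1) % n] + u[j - 1])
             - 0.5 * (dt_eff / dx) * (f[(j + 1) % n] - f[j - 1])
             for j in range(n)]
        t += dt_eff
    return u

def J(uI, uD, T, dx):
    # Discrete version of J(u^I) = int |u(x,T) - u^D(x)|^2 dx
    uT = solve_burgers(uI, T, dx)
    return sum((a - b) ** 2 for a, b in zip(uT, uD)) * dx

n, dx, T = 200, 1.0 / 200, 0.2
xs = [j * dx for j in range(n)]
uI = [math.sin(2 * math.pi * x) for x in xs]
uD = solve_burgers(uI, T, dx)    # target reachable from uI ...
print(J(uI, uD, T, dx))          # ... so J attains its minimum value 0 at uI
```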
The formal differentiation of the continuous state equation (5.1.1) yields

∂_t(δu) + ∂_x(f(u) δu) = 0, in R × (0, T).  (5.1.3)

But this is only justified when the state u, on which the variations are being computed, is smooth enough. In particular, it is not justified when the solutions are discontinuous, since singular terms may appear in the linearization over the shock location. Accordingly, in optimal control applications we also

need to take into account the sensitivity with respect to the shock location (which has been studied by many authors; see, e.g., [9, 19, 32]). Roughly speaking, the main conclusion of that analysis is that the classical linearized system for the variation of the solutions must be complemented with some new equations for the sensitivity of the shock position. To overcome this difficulty, we naturally think of another way, namely the vanishing viscosity method (as in [12], in which an optimal control problem for the Burgers equation is studied), and add an artificial viscosity term to smooth the state equation. Equation (5.1.1) with smoothed initial datum then turns out to be

u_t + (F(u))_x = ν u_xx, in R × (0, T),  (5.1.4)
u|_{t=0} = g_ε.  (5.1.5)

Note that the Cauchy problem (5.1.4) – (5.1.5) is of parabolic type; thus from the standard theory of parabolic equations (see, for instance, Ladyzenskaya et al. [27]) we have that the solution u^{ν,ε} of this problem is smooth. So the linearization of eq. (5.1.4) can be derived easily, and it reads

(δu)_t + (f(u) δu)_x = ν (δu)_xx, in R × (0, T),  (5.1.6)
δu|_{t=0} = h_ε.  (5.1.7)

Here ν, ε are positive constants, and δu denotes the variation of u. The initial data g_ε, h_ε will be chosen suitably in Section 3, so that the perturbations of the initial data and of the shock position are taken into account; this enables us to select the alternating descent directions in the case of viscous conservation laws.

To solve the optimal control problem, we also need the following adjoint problem:

−p_t − f(u) p_x = 0, in R × (0, T);  (5.1.8)
p(x, T) = p^T(x), x ∈ R,  (5.1.9)

where p^T(x) = u(x, T) − u^D(x). We smooth equation (5.1.8) and its terminal data as follows:

−p_t − f(u) p_x = ν p_xx, in R × (0, T);  (5.1.10)
p(x, T) = p^T_ε(x), x ∈ R.  (5.1.11)

Since the solutions u = u(x, t; ν, ε) and δu = δu(x, t; ν, ε) are smooth, shocks vanish; instead, quasi-shock regions are formed. Natural questions arise as

follows:

1) How should ν, ε go to zero? More precisely, can ν, ε go to zero independently? Which one goes to zero faster, or do they go at the same rate? What happens as the two parameters ν, ε → 0?

2) What are the limits of equations (5.1.10), (5.1.6) and (5.1.4), respectively?

3) To solve the optimal control problem correctly, the state of system (5.1.3) should be understood as a pair (δu, δφ), where δφ is the variation of the shock position. As ν, ε → 0, is there an equation for δφ which determines the evolution of δφ and complements equation (5.1.3)?

To answer these questions, we shall make use of the method of matched asymptotic expansions. Our main results are: the parameters ν, ε must satisfy ε = σν, where σ is a given positive constant. This means that ν, ε must go to zero at the same order, but their speeds may be different. We write σ = ε/ν. Then we see that if σ > 1, ν goes to zero faster than ε, and vice versa. We now fix ε, which is assumed to be very small. As σ → ∞, namely ν → 0, the equation for the variation of the shock position differs from the one derived by Bressan and Marson [9], etc., by a term which converges to zero as σ tends to infinity, but which may be very large if σ is small enough. Thus we conclude that:

1) The equation derived by Bressan and Marson is suitable for developing the numerical scheme when σ is sufficiently large. In this case, the perturbation of the initial data plays a dominant role and the effect due to the artificial viscosity can be omitted.

2) However, if σ is small, then the effect of viscosity must be taken into account, while the perturbation of the initial data can be neglected, and a corrector should be added.
We shall prove that the solutions to problem (5.1.4) – (5.1.5) and problem (5.1.10) – (5.1.11) converge, respectively, to the entropy solution and the reversible solution of the corresponding inviscid problems, while the solution to problem (5.1.6) – (5.1.7) converges to the one that solves (5.1.3) in the sub-regions away from the shock, complemented by an equation which governs the evolution of the variation of the shock position. Furthermore, using the method of asymptotic expansions, we also clarify some formal expansions used frequently in the theory of optimal control: they are valid only away from the shock and when some parameter is not too small. For example, for the solution u^ν to problem (5.1.4) – (5.1.5) one normally expands

u^ν = u + ν δu^ν + O(ν²),  (5.1.12)

where u is usually believed to be the entropy solution to problem (5.1.1) – (5.1.2), and δu^ν is the variation of u^ν, the solution to (5.1.6) – (5.1.7). However, (5.1.12) is not correct near the shock provided that δu^ν(x, 0) is bounded,


More information

Physics 202 Laboratory 5. Linear Algebra 1. Laboratory 5. Physics 202 Laboratory

Physics 202 Laboratory 5. Linear Algebra 1. Laboratory 5. Physics 202 Laboratory Physics 202 Laboratory 5 Linear Algebra Laboratory 5 Physics 202 Laboratory We close our whirlwind tour of numerical methods by advertising some elements of (numerical) linear algebra. There are three

More information

Marching on the BL equations

Marching on the BL equations Marching on the BL equations Harvey S. H. Lam February 10, 2004 Abstract The assumption is that you are competent in Matlab or Mathematica. White s 4-7 starting on page 275 shows us that all generic boundary

More information

Introduction to Partial Differential Equations

Introduction to Partial Differential Equations Introduction to Partial Differential Equations Philippe B. Laval KSU Current Semester Philippe B. Laval (KSU) Key Concepts Current Semester 1 / 25 Introduction The purpose of this section is to define

More information

Applied Asymptotic Analysis

Applied Asymptotic Analysis Applied Asymptotic Analysis Peter D. Miller Graduate Studies in Mathematics Volume 75 American Mathematical Society Providence, Rhode Island Preface xiii Part 1. Fundamentals Chapter 0. Themes of Asymptotic

More information

An Exponentially Fitted Non Symmetric Finite Difference Method for Singular Perturbation Problems

An Exponentially Fitted Non Symmetric Finite Difference Method for Singular Perturbation Problems An Exponentially Fitted Non Symmetric Finite Difference Method for Singular Perturbation Problems GBSL SOUJANYA National Institute of Technology Department of Mathematics Warangal INDIA gbslsoujanya@gmail.com

More information

Relevant sections from AMATH 351 Course Notes (Wainwright): Relevant sections from AMATH 351 Course Notes (Poulin and Ingalls):

Relevant sections from AMATH 351 Course Notes (Wainwright): Relevant sections from AMATH 351 Course Notes (Poulin and Ingalls): Lecture 5 Series solutions to DEs Relevant sections from AMATH 35 Course Notes (Wainwright):.4. Relevant sections from AMATH 35 Course Notes (Poulin and Ingalls): 2.-2.3 As mentioned earlier in this course,

More information

2. Introduction to commutative rings (continued)

2. Introduction to commutative rings (continued) 2. Introduction to commutative rings (continued) 2.1. New examples of commutative rings. Recall that in the first lecture we defined the notions of commutative rings and field and gave some examples of

More information

Introduction to Aspects of Multiscale Modeling as Applied to Porous Media

Introduction to Aspects of Multiscale Modeling as Applied to Porous Media Introduction to Aspects of Multiscale Modeling as Applied to Porous Media Part III Todd Arbogast Department of Mathematics and Center for Subsurface Modeling, Institute for Computational Engineering and

More information

Applied Math Qualifying Exam 11 October Instructions: Work 2 out of 3 problems in each of the 3 parts for a total of 6 problems.

Applied Math Qualifying Exam 11 October Instructions: Work 2 out of 3 problems in each of the 3 parts for a total of 6 problems. Printed Name: Signature: Applied Math Qualifying Exam 11 October 2014 Instructions: Work 2 out of 3 problems in each of the 3 parts for a total of 6 problems. 2 Part 1 (1) Let Ω be an open subset of R

More information

Hilbert Spaces. Hilbert space is a vector space with some extra structure. We start with formal (axiomatic) definition of a vector space.

Hilbert Spaces. Hilbert space is a vector space with some extra structure. We start with formal (axiomatic) definition of a vector space. Hilbert Spaces Hilbert space is a vector space with some extra structure. We start with formal (axiomatic) definition of a vector space. Vector Space. Vector space, ν, over the field of complex numbers,

More information

Linear Regression and Its Applications

Linear Regression and Its Applications Linear Regression and Its Applications Predrag Radivojac October 13, 2014 Given a data set D = {(x i, y i )} n the objective is to learn the relationship between features and the target. We usually start

More information

RENORMALIZATION OF DYSON S VECTOR-VALUED HIERARCHICAL MODEL AT LOW TEMPERATURES

RENORMALIZATION OF DYSON S VECTOR-VALUED HIERARCHICAL MODEL AT LOW TEMPERATURES RENORMALIZATION OF DYSON S VECTOR-VALUED HIERARCHICAL MODEL AT LOW TEMPERATURES P. M. Bleher (1) and P. Major (2) (1) Keldysh Institute of Applied Mathematics of the Soviet Academy of Sciences Moscow (2)

More information

x n+1 = x n f(x n) f (x n ), n 0.

x n+1 = x n f(x n) f (x n ), n 0. 1. Nonlinear Equations Given scalar equation, f(x) = 0, (a) Describe I) Newtons Method, II) Secant Method for approximating the solution. (b) State sufficient conditions for Newton and Secant to converge.

More information

Euler Equations: local existence

Euler Equations: local existence Euler Equations: local existence Mat 529, Lesson 2. 1 Active scalars formulation We start with a lemma. Lemma 1. Assume that w is a magnetization variable, i.e. t w + u w + ( u) w = 0. If u = Pw then u

More information

7 Planar systems of linear ODE

7 Planar systems of linear ODE 7 Planar systems of linear ODE Here I restrict my attention to a very special class of autonomous ODE: linear ODE with constant coefficients This is arguably the only class of ODE for which explicit solution

More information

Math (P)Review Part II:

Math (P)Review Part II: Math (P)Review Part II: Vector Calculus Computer Graphics Assignment 0.5 (Out today!) Same story as last homework; second part on vector calculus. Slightly fewer questions Last Time: Linear Algebra Touched

More information

Turbulence Modeling I!

Turbulence Modeling I! Outline! Turbulence Modeling I! Grétar Tryggvason! Spring 2010! Why turbulence modeling! Reynolds Averaged Numerical Simulations! Zero and One equation models! Two equations models! Model predictions!

More information

Numerical Analysis: Interpolation Part 1

Numerical Analysis: Interpolation Part 1 Numerical Analysis: Interpolation Part 1 Computer Science, Ben-Gurion University (slides based mostly on Prof. Ben-Shahar s notes) 2018/2019, Fall Semester BGU CS Interpolation (ver. 1.00) AY 2018/2019,

More information

Quantitative Techniques (Finance) 203. Polynomial Functions

Quantitative Techniques (Finance) 203. Polynomial Functions Quantitative Techniques (Finance) 03 Polynomial Functions Felix Chan October 006 Introduction This topic discusses the properties and the applications of polynomial functions, specifically, linear and

More information

A LOWER BOUND ON BLOWUP RATES FOR THE 3D INCOMPRESSIBLE EULER EQUATION AND A SINGLE EXPONENTIAL BEALE-KATO-MAJDA ESTIMATE. 1.

A LOWER BOUND ON BLOWUP RATES FOR THE 3D INCOMPRESSIBLE EULER EQUATION AND A SINGLE EXPONENTIAL BEALE-KATO-MAJDA ESTIMATE. 1. A LOWER BOUND ON BLOWUP RATES FOR THE 3D INCOMPRESSIBLE EULER EQUATION AND A SINGLE EXPONENTIAL BEALE-KATO-MAJDA ESTIMATE THOMAS CHEN AND NATAŠA PAVLOVIĆ Abstract. We prove a Beale-Kato-Majda criterion

More information

LECTURE 1: SOURCES OF ERRORS MATHEMATICAL TOOLS A PRIORI ERROR ESTIMATES. Sergey Korotov,

LECTURE 1: SOURCES OF ERRORS MATHEMATICAL TOOLS A PRIORI ERROR ESTIMATES. Sergey Korotov, LECTURE 1: SOURCES OF ERRORS MATHEMATICAL TOOLS A PRIORI ERROR ESTIMATES Sergey Korotov, Institute of Mathematics Helsinki University of Technology, Finland Academy of Finland 1 Main Problem in Mathematical

More information

13 PDEs on spatially bounded domains: initial boundary value problems (IBVPs)

13 PDEs on spatially bounded domains: initial boundary value problems (IBVPs) 13 PDEs on spatially bounded domains: initial boundary value problems (IBVPs) A prototypical problem we will discuss in detail is the 1D diffusion equation u t = Du xx < x < l, t > finite-length rod u(x,

More information

p + µ 2 u =0= σ (1) and

p + µ 2 u =0= σ (1) and Master SdI mention M2FA, Fluid Mechanics program Hydrodynamics. P.-Y. Lagrée and S. Zaleski. Test December 4, 2009 All documentation is authorized except for reference [1]. Internet access is not allowed.

More information

Existence Theory: Green s Functions

Existence Theory: Green s Functions Chapter 5 Existence Theory: Green s Functions In this chapter we describe a method for constructing a Green s Function The method outlined is formal (not rigorous) When we find a solution to a PDE by constructing

More information

Tutorial on obtaining Taylor Series Approximations without differentiation

Tutorial on obtaining Taylor Series Approximations without differentiation Tutorial on obtaining Taylor Series Approximations without differentiation Professor Henry Greenside February 2, 2018 1 Overview An important mathematical technique that is used many times in physics,

More information

SOME VIEWS ON GLOBAL REGULARITY OF THE THIN FILM EQUATION

SOME VIEWS ON GLOBAL REGULARITY OF THE THIN FILM EQUATION SOME VIEWS ON GLOBAL REGULARITY OF THE THIN FILM EQUATION STAN PALASEK Abstract. We introduce the thin film equation and the problem of proving positivity and global regularity on a periodic domain with

More information

One Dimensional Dynamical Systems

One Dimensional Dynamical Systems 16 CHAPTER 2 One Dimensional Dynamical Systems We begin by analyzing some dynamical systems with one-dimensional phase spaces, and in particular their bifurcations. All equations in this Chapter are scalar

More information

Multiple integrals: Sufficient conditions for a local minimum, Jacobi and Weierstrass-type conditions

Multiple integrals: Sufficient conditions for a local minimum, Jacobi and Weierstrass-type conditions Multiple integrals: Sufficient conditions for a local minimum, Jacobi and Weierstrass-type conditions March 6, 2013 Contents 1 Wea second variation 2 1.1 Formulas for variation........................

More information

Partial Fraction Decomposition

Partial Fraction Decomposition Partial Fraction Decomposition As algebra students we have learned how to add and subtract fractions such as the one show below, but we probably have not been taught how to break the answer back apart

More information

Page 52. Lecture 3: Inner Product Spaces Dual Spaces, Dirac Notation, and Adjoints Date Revised: 2008/10/03 Date Given: 2008/10/03

Page 52. Lecture 3: Inner Product Spaces Dual Spaces, Dirac Notation, and Adjoints Date Revised: 2008/10/03 Date Given: 2008/10/03 Page 5 Lecture : Inner Product Spaces Dual Spaces, Dirac Notation, and Adjoints Date Revised: 008/10/0 Date Given: 008/10/0 Inner Product Spaces: Definitions Section. Mathematical Preliminaries: Inner

More information

Some Background Material

Some Background Material Chapter 1 Some Background Material In the first chapter, we present a quick review of elementary - but important - material as a way of dipping our toes in the water. This chapter also introduces important

More information

Partial differential equations

Partial differential equations Partial differential equations Many problems in science involve the evolution of quantities not only in time but also in space (this is the most common situation)! We will call partial differential equation

More information

Introduction LECTURE 1

Introduction LECTURE 1 LECTURE 1 Introduction The source of all great mathematics is the special case, the concrete example. It is frequent in mathematics that every instance of a concept of seemingly great generality is in

More information

1 Lyapunov theory of stability

1 Lyapunov theory of stability M.Kawski, APM 581 Diff Equns Intro to Lyapunov theory. November 15, 29 1 1 Lyapunov theory of stability Introduction. Lyapunov s second (or direct) method provides tools for studying (asymptotic) stability

More information

The Dirichlet s P rinciple. In this lecture we discuss an alternative formulation of the Dirichlet problem for the Laplace equation:

The Dirichlet s P rinciple. In this lecture we discuss an alternative formulation of the Dirichlet problem for the Laplace equation: Oct. 1 The Dirichlet s P rinciple In this lecture we discuss an alternative formulation of the Dirichlet problem for the Laplace equation: 1. Dirichlet s Principle. u = in, u = g on. ( 1 ) If we multiply

More information

Global well-posedness of the primitive equations of oceanic and atmospheric dynamics

Global well-posedness of the primitive equations of oceanic and atmospheric dynamics Global well-posedness of the primitive equations of oceanic and atmospheric dynamics Jinkai Li Department of Mathematics The Chinese University of Hong Kong Dynamics of Small Scales in Fluids ICERM, Feb

More information

Salmon: Lectures on partial differential equations

Salmon: Lectures on partial differential equations 4 Burger s equation In Lecture 2 we remarked that if the coefficients in u x, y,! "! "x + v x,y,! "! "y = 0 depend not only on x,y but also on!, then the characteristics may cross and the solutions become

More information

Homogenization and error estimates of free boundary velocities in periodic media

Homogenization and error estimates of free boundary velocities in periodic media Homogenization and error estimates of free boundary velocities in periodic media Inwon C. Kim October 7, 2011 Abstract In this note I describe a recent result ([14]-[15]) on homogenization and error estimates

More information

Burgers equation - a first look at fluid mechanics and non-linear partial differential equations

Burgers equation - a first look at fluid mechanics and non-linear partial differential equations Burgers equation - a first look at fluid mechanics and non-linear partial differential equations In this assignment you will solve Burgers equation, which is useo model for example gas dynamics anraffic

More information

2 A Model, Harmonic Map, Problem

2 A Model, Harmonic Map, Problem ELLIPTIC SYSTEMS JOHN E. HUTCHINSON Department of Mathematics School of Mathematical Sciences, A.N.U. 1 Introduction Elliptic equations model the behaviour of scalar quantities u, such as temperature or

More information

Topics in Fluid Dynamics: Classical physics and recent mathematics

Topics in Fluid Dynamics: Classical physics and recent mathematics Topics in Fluid Dynamics: Classical physics and recent mathematics Toan T. Nguyen 1,2 Penn State University Graduate Student Seminar @ PSU Jan 18th, 2018 1 Homepage: http://math.psu.edu/nguyen 2 Math blog:

More information

2.3 The Turbulent Flat Plate Boundary Layer

2.3 The Turbulent Flat Plate Boundary Layer Canonical Turbulent Flows 19 2.3 The Turbulent Flat Plate Boundary Layer The turbulent flat plate boundary layer (BL) is a particular case of the general class of flows known as boundary layer flows. The

More information

Convective Heat and Mass Transfer Prof. A. W. Date Department of Mechanical Engineering Indian Institute of Technology, Bombay

Convective Heat and Mass Transfer Prof. A. W. Date Department of Mechanical Engineering Indian Institute of Technology, Bombay Convective Heat and Mass Transfer Prof. A. W. Date Department of Mechanical Engineering Indian Institute of Technology, Bombay Module No.# 01 Lecture No. # 41 Natural Convection BLs So far we have considered

More information

Fundamentals of Fluid Dynamics: Ideal Flow Theory & Basic Aerodynamics

Fundamentals of Fluid Dynamics: Ideal Flow Theory & Basic Aerodynamics Fundamentals of Fluid Dynamics: Ideal Flow Theory & Basic Aerodynamics Introductory Course on Multiphysics Modelling TOMASZ G. ZIELIŃSKI (after: D.J. ACHESON s Elementary Fluid Dynamics ) bluebox.ippt.pan.pl/

More information

Bindel, Spring 2016 Numerical Analysis (CS 4220) Notes for

Bindel, Spring 2016 Numerical Analysis (CS 4220) Notes for Life beyond Newton Notes for 2016-04-08 Newton s method has many attractive properties, particularly when we combine it with a globalization strategy. Unfortunately, Newton steps are not cheap. At each

More information

5.4 Continuity: Preliminary Notions

5.4 Continuity: Preliminary Notions 5.4. CONTINUITY: PRELIMINARY NOTIONS 181 5.4 Continuity: Preliminary Notions 5.4.1 Definitions The American Heritage Dictionary of the English Language defines continuity as an uninterrupted succession,

More information

An introduction to Mathematical Theory of Control

An introduction to Mathematical Theory of Control An introduction to Mathematical Theory of Control Vasile Staicu University of Aveiro UNICA, May 2018 Vasile Staicu (University of Aveiro) An introduction to Mathematical Theory of Control UNICA, May 2018

More information

Chapter III. Unconstrained Univariate Optimization

Chapter III. Unconstrained Univariate Optimization 1 Chapter III Unconstrained Univariate Optimization Introduction Interval Elimination Methods Polynomial Approximation Methods Newton s Method Quasi-Newton Methods 1 INTRODUCTION 2 1 Introduction Univariate

More information

b i (µ, x, s) ei ϕ(x) µ s (dx) ds (2) i=1

b i (µ, x, s) ei ϕ(x) µ s (dx) ds (2) i=1 NONLINEAR EVOLTION EQATIONS FOR MEASRES ON INFINITE DIMENSIONAL SPACES V.I. Bogachev 1, G. Da Prato 2, M. Röckner 3, S.V. Shaposhnikov 1 The goal of this work is to prove the existence of a solution to

More information

Physics 342 Lecture 23. Radial Separation. Lecture 23. Physics 342 Quantum Mechanics I

Physics 342 Lecture 23. Radial Separation. Lecture 23. Physics 342 Quantum Mechanics I Physics 342 Lecture 23 Radial Separation Lecture 23 Physics 342 Quantum Mechanics I Friday, March 26th, 2010 We begin our spherical solutions with the simplest possible case zero potential. Aside from

More information

An analogy from Calculus: limits

An analogy from Calculus: limits COMP 250 Fall 2018 35 - big O Nov. 30, 2018 We have seen several algorithms in the course, and we have loosely characterized their runtimes in terms of the size n of the input. We say that the algorithm

More information