ANOMALOUS SHOCK DISPLACEMENT PROBABILITIES FOR A PERTURBED SCALAR CONSERVATION LAW

JOSSELIN GARNIER, GEORGE PAPANICOLAOU, AND TZU-WEI YANG

Abstract. We consider a one-dimensional conservation law with random space-time forcing and calculate, using large deviations, the exponentially small probabilities of anomalous shock profile displacements. Under suitable hypotheses on the spatial support and structure of the random forces, we analyze the scaling behavior of the rate function, which is the exponential decay rate of the displacement probabilities. For small displacements we show that the rate function is bounded above and below by the square of the displacement divided by time. For large displacements the corresponding bounds for the rate function are proportional to the displacement. We calculate numerically the rate function under different conditions and show that the theoretical analysis of the scaling behavior is confirmed. We also apply a large-deviation-based importance sampling Monte Carlo strategy to estimate the displacement probabilities. We use a biased distribution centered on the forcing that gives the most probable transition path for the anomalous shock profile, which is the minimizer of the rate function. The numerical simulations indicate that this strategy is much more effective and robust than basic Monte Carlo.

Key words. conservation laws, shock profiles, large deviations, Monte Carlo methods, importance sampling

AMS subject classifications. 60F10, 35L65, 35L67, 65C05

1. Introduction. It is well known that nonlinear waves are not very sensitive to perturbations in initial conditions or ambient medium inhomogeneities. This is in contrast to linear waves in random media, where even weak inhomogeneities can affect wave propagation significantly over long times and distances. It is natural, therefore, to consider perturbations of shock profiles of randomly forced conservation laws as rare events and to use large deviations theory. The purpose of this paper is to calculate probabilities of anomalous shock profile displacements for randomly perturbed one-dimensional conservation laws. We analyze the rate function that characterizes the exponential decay of the displacement probabilities and show that, under suitable hypotheses on the random forcing, they have scaling behavior relative to the size of the displacement and the time interval on which it occurs. The theory of large deviations for conservation laws with random forcing is an extension of the Freidlin-Wentzell theory of large deviations [, 7] to partial differential equations. This has been carried out extensively [,, 8] and we use this theory here. We are interested in a more detailed analysis of the exponentially small probabilities of anomalous shock profile displacements, which leads to an analysis of the rate function associated with large deviations for this particular class of rare events. We derive upper and lower bounds for the exponential decay rate of the small probabilities using suitable test functions for the variational problem involving the rate function. This is the main result of this paper. The applications we have in mind come from uncertainty quantification [3] in connection with simplified models of flow and combustion in a scramjet. Numerical calculations of the exponential decay rates from variational principles associated with the rate function have been carried out before [9, 6].
Laboratoire de Probabilités et Modèles Aléatoires & Laboratoire Jacques-Louis Lions, Université Paris VII (garnier@math.univ-paris-diderot.fr); Mathematics Department, Stanford University (papanicolaou@stanford.edu); Institute for Computational and Mathematical Engineering (ICME), Stanford University (twyang@stanford.edu)

We carry out such numerical calculations here and confirm the scaling behavior of the bounds obtained theoretically. We use a gradient descent method to do the optimization numerically and we note that its convergence is quite robust even though the functional under consideration is not known to be convex. This robustness suggests that Monte Carlo simulations with importance sampling, using a change of measure based on the minimizer of the discrete rate function, are likely to be effective. Our simulations show that indeed such importance sampling Monte Carlo performs much better than the basic Monte Carlo method.

The paper is organized as follows. In Section 2 we formulate the one-dimensional conservation law problem. In Section 3 we state the large deviation principle and identify the rate function which we will use. In Section 4 we give a simple, explicitly computable case of shock profile displacement probabilities that can be used to compare with the results of the large deviation theory. Section 5 contains the main results of the paper, which are the upper and lower bounds for the exponential decay rate of the displacement probabilities under different conditions on the random forcing. We identify the scaling behavior of these probabilities relative to the size of the displacement and the time interval of interest. In Section 6 we introduce a discrete form for the conservation law and the associated large deviations, and calculate numerically the displacement probabilities from the discrete variational principle. In Section 7 we implement importance sampling Monte Carlo based on the minimizer of the discrete rate function and we compare it with the basic Monte Carlo method. We end with a brief section summarizing the paper and our conclusions.

2. The perturbed conservation law. We consider the scalar viscous conservation law:

u_t + (F(u))_x = (D u_x)_x,  x in R, t in [0, infinity),   (2.1)
u(t = 0, x) = u_0(x),  x in R.   (2.2)

Here F is in C^1(R) and the initial condition satisfies u_0(x) -> u_± as x -> ±infinity, where u_- < u_+. We are interested in traveling wave solutions of the form U(x - γt), where the C^1 profile U satisfies U(x) -> u_± as x -> ±infinity and the wave speed γ is given by the Rankine-Hugoniot condition

γ = ( F(u_+) - F(u_-) ) / ( u_+ - u_- ).   (2.3)

A traveling wave U with wave speed γ exists provided the following conditions are fulfilled:

F(u) - F(u_-) > γ (u - u_-), for all u strictly between u_- and u_+,   (2.4)
F'(u_+) < γ < F'(u_-).   (2.5)

The first condition is the Oleinik entropy condition [4] and the second one is the Lax entropy condition [6]. Under these conditions the traveling wave profile exists, it is the solution of the ordinary differential equation

U'(x) = D^{-1} ( F(U) - F(u_-) - γ (U - u_-) ),  U(x) -> u_± as x -> ±infinity,

and it is orbitally stable, which means that perturbations of the profile decay in time, and thus initial conditions near the traveling wave profile converge to it.
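The profile equation above can be checked numerically. Below is a minimal sketch (not from the paper) that integrates the profile ODE for the Burgers flux F(u) = u^2/2, for which the admissible viscous shock has u_- > u_+ and the profile is known in closed form; the viscosity, states and grid are illustrative choices only.

```python
# Minimal sketch (not from the paper): integrate the viscous profile ODE
#   U'(x) = (F(U) - F(u_-) - gamma*(U - u_-)) / D
# for the Burgers flux F(u) = u^2/2.  All numbers are illustrative.
import numpy as np
from scipy.integrate import solve_ivp

D = 0.5                       # viscosity (illustrative)
u_minus, u_plus = 1.0, -1.0   # left/right states of a Burgers viscous shock
F = lambda u: 0.5 * u**2
gamma = (F(u_plus) - F(u_minus)) / (u_plus - u_minus)   # Rankine-Hugoniot speed (= 0 here)

def rhs(x, U):
    return (F(U) - F(u_minus) - gamma * (U - u_minus)) / D

# Integrate from the center of the transition, U(0) = (u_- + u_+)/2, in both directions.
x_right = np.linspace(0.0, 10.0, 401)
x_left = np.linspace(0.0, -10.0, 401)
U0 = [0.5 * (u_minus + u_plus)]
sol_r = solve_ivp(rhs, (0.0, 10.0), U0, t_eval=x_right, rtol=1e-8, atol=1e-10)
sol_l = solve_ivp(rhs, (0.0, -10.0), U0, t_eval=x_left, rtol=1e-8, atol=1e-10)

x = np.concatenate([sol_l.t[::-1], sol_r.t[1:]])
U = np.concatenate([sol_l.y[0][::-1], sol_r.y[0][1:]])

# For Burgers the profile is known in closed form: U(x) = -tanh(x / (2D)).
print("max deviation from tanh profile:", np.max(np.abs(U + np.tanh(x / (2 * D)))))
```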

Note that a physically admissible viscous profile must have the stability property; otherwise, it would not be observable. As noted in [5], the motivating idea behind the orbital stability result is that in the stabilizing process, information is transferred from the spatial decay of the profile U at infinity to the temporal decay of the perturbation. The purpose of this paper is to address another type of stability, namely the stability with respect to external noise. We consider the perturbed scalar viscous conservation law with additive noise:

u^ε_t + (F(u^ε))_x = (D u^ε_x)_x + ε Ẇ(t, x),  x in R, t in [0, infinity),   (2.6)
u^ε(t = 0, x) = U(x),  x in R.   (2.7)

Here ε is a small parameter, W(t, x) is a zero-mean random process (described below), and the dot stands for the time derivative. We would like to address the stability of the traveling wave U(x - γt) driven by the noise εẆ. Motivated by an application modeling combustion in a scramjet [3, 5], we have in mind a specific rare event, which is an exceptional or anomalous shift of the position of the traveling wave compared to the unperturbed motion with the constant velocity γ. We consider mild solutions which satisfy (denoting u^ε(t) = (u^ε(t, x))_{x in R})

u^ε(t) = S(t)U - ∫_0^t S(t - s) N(u^ε(s)) ds + ε ∫_0^t S(t - s) dW(s),   (2.8)

where S(t) is the heat semi-group with kernel

S(t, x, y) = (4πDt)^{-1/2} exp( -(x - y)^2 / (4Dt) ),

and N(u)(x) = (F(u(x)))_x. The main result about the heat kernel [5, Chap. XVI, Sec. 3] is as follows.

Lemma 2.1. If f = (f(t))_{t in [0,T]} is in L^2([0, T], H^{-1}(R)), then the function ∫_0^t S(t - s) f(s) ds is in L^2([0, T], H^1(R)) ∩ C([0, T], L^2(R)).

A white noise or cylindrical Wiener process B(t, x) in the Hilbert space L^2(R) is such that for any complete orthonormal system (f_n(x))_n of L^2(R), there exists a sequence of independent Brownian motions (β_n(t))_n such that

B(t, x) = Σ_{n=1}^∞ β_n(t) f_n(x).   (2.9)

B(t, x) can be seen as the (formal) spatial derivative of the Brownian sheet on [0, infinity) x R, which means that it is the Gaussian process with mean zero and covariance E[B(t, x)B(s, y)] = (s ∧ t) δ(x - y). Note that the sum (2.9) does not converge in L^2 but in any Hilbert space H such that the embedding from L^2(R) to H is Hilbert-Schmidt. Therefore, the image of the process B by a linear mapping on L^2(R) is a well-defined process in the Sobolev space H^k(R) (for k = 0, we take the convention H^0 = L^2) when the mapping is Hilbert-Schmidt from L^2(R) into H^k(R). Then, if the kernel Φ is Hilbert-Schmidt in the sense that

Σ_{n=1}^∞ ||Φ f_n||^2_{H^k(R)} = Σ_{j=0}^{k} ||∂_x^j Φ(·, ·)||^2_{L^2(R x R)} < ∞,

then the random process W = ΦB,

W(t, x) = ∫ Φ(x, x') B(t, x') dx',

is well-defined in H^k(R). It is a zero-mean Gaussian process with covariance function given by

E[W(t, x) W(t', x')] = (t ∧ t') C(x, x'),  C(x, x') = ΦΦ^T(x, x') = ∫ Φ(x, y) Φ(x', y) dy,

where Φ^T stands for the adjoint of Φ. As an example, we may think of Φ(x, x') = Φ_1(x) Φ_2(x - x') with Φ_1 in H^k(R) and Φ_2 in H^k(R). In this case Φ_1 characterizes the spatial support of the additive noise and Φ_2 characterizes its local correlation function. In the limit Φ_1(x) = 1 and Φ_2(x - x') = δ(x - x'), the process Ẇ(t, x) in (2.6) is a space-time white noise. By adapting the technique used in [5, Chap. XVI, Sec. 3] we obtain the following lemma.

Lemma 2.2. If Φ is Hilbert-Schmidt from L^2 into L^2, then Z(t) = ∫_0^t S(t - s) dW(s) belongs to L^2([0, T], H^1(R)) ∩ C([0, T], L^2(R)) almost surely. If Φ is Hilbert-Schmidt from L^2 into H^1, then Z(t) belongs to L^2([0, T], H^2(R)) ∩ C([0, T], H^1(R)) almost surely.

Proof. See Appendix A.

3. Large deviation principle. In this section we state a large deviation principle (LDP) for the solution (u^ε(t, x))_{t in [0,T], x in R} of the randomly perturbed scalar conservation law (2.6). It generalizes the classical Freidlin-Wentzell principle for finite-dimensional diffusions. Throughout this paper, we assume that the flux F is a C^2 function with bounded first and second derivatives.

Assumption 3.1. In this paper, the flux F is a C^2 function and there exists C_F < ∞ such that ||F'||_{L^∞(R)} and ||F''||_{L^∞(R)} are bounded by C_F.

Remark. Assumption 3.1 is merely a technical assumption to simplify the proofs of Propositions 3.2 and 3.3, and obviously it violates the convexity of fluxes in conservation laws. From the physical point of view, for a general flux F, we can choose a very large constant M, let F_M(u) = F(u) for |u| <= M, and saturate F_M(u) for |u| > M. If [-M, M] covers the range of interest of u, then F_M(u) = F(u). Without a proof, we point out that this physical argument can be made mathematically rigorous: if u^ε_t, u^ε_x and u^ε_{xx} in (2.6) are continuous on [0, T] x R and Ẇ is bounded on [0, T] x R, then the parabolic maximum principle implies that u^ε is bounded on [0, T] x R and thus we can find M < ∞. The technical difficulty is that W is not differentiable in time so we cannot apply the parabolic maximum principle directly. However, we can consider a modified ũ^ε obtained by replacing W by a smooth W̃ in (2.6) and show that ũ^ε(t) converges to u^ε(t) in L^2(R) as W̃ -> W.

We first describe the functional space to which the solution of the perturbed conservation law belongs.

Proposition 3.2. If Φ is Hilbert-Schmidt from L^2 to H^1, then for any ε > 0 there is a unique solution u^ε to (2.8) in the space E almost surely, where

E = { (u(t, x))_{t in [0,T], x in R} : (u(t, x) - U(x))_{t in [0,T], x in R} in C([0, T], H^1(R)) }.

Proof. See Appendix B.1.

The rate function of the LDP is defined in terms of the mild solution of the control problem

u_t + (F(u))_x = (D u_x)_x + Φ h(t, x),  x in R, t in [0, T],   (3.1)
u(t = 0, x) = U(x),  x in R,   (3.2)

where h is in L^2([0, T], L^2(R)). The solution u to this problem is denoted by H[h], and H is a mapping from L^2([0, T], L^2(R)) to E provided Φ is Hilbert-Schmidt from L^2 to H^1. The following proposition is an extension of the LDP proved in [].

Proposition 3.3. If Φ is Hilbert-Schmidt from L^2 to H^1, then the solutions u^ε satisfy a large deviation principle in E with the good rate function

I(u) = inf_{h, u = H[h]} (1/2) ∫_0^T ||h(t, ·)||^2_{L^2} dt,   (3.3)

with the convention inf ∅ = ∞.

Proof. See Appendix B.2.

In fact, I(u) < ∞ if and only if u is in the range of H. The LDP means that, for any A ⊂ E, we have

-J(Å) <= lim inf_{ε->0} ε^2 log P(u^ε in A) <= lim sup_{ε->0} ε^2 log P(u^ε in A) <= -J(Ā),   (3.4)

with

J(A) = inf_{u in A} I(u).   (3.5)

Note that the interior Å and closure Ā are taken in the topology associated with E. Although the convexity of the rate function I is unknown, it is possible to show that I satisfies the maximum principle: if the set of the rare event A does not contain the exact solution U(x - γt), then inf_{u in Ā} I(u) attains its minimum at the boundary of A.

Proposition 3.4. If U(x - γt) is not in Ā and u is in Å, then there exists a sequence {u_n} in E such that I(u_n) < I(u) and u_n -> u in E as n -> ∞. As a consequence, any u in Å cannot be a local minimizer of I.

Proof. See Appendix B.3.

4. Wave displacement from an elementary point of view. In this paper we are interested in estimating the probability of large deviations from the deterministic path U(x - γt). In this section, we first study in a very elementary way how the center of the solution u^ε at time T can deviate from its unperturbed value. The center of a function u in E is defined as

C[u](t) = ( ∫ [u(t, x) - U(x)] dx ) / (u_- - u_+),   (4.1)

provided that the integral is well-defined.

Proposition 4.1. If the covariance function C is in L^1(R x R), then the center of u^ε is well-defined for any time t in [0, T] almost surely and it is given by

C[u^ε](t) = γt + ε ( ∫ W(t, x) dx ) / (u_- - u_+).   (4.2)

It is a Gaussian process with mean γt and covariance

Cov( C[u^ε](t), C[u^ε](t') ) = ε^2 (t ∧ t') ( ∫∫ C(x, x') dx dx' ) / (u_- - u_+)^2.   (4.3)

Proof. See Appendix C.

In the absence of noise, the center of the solution increases linearly as γt. In the presence of noise we can characterize the probability of the rare event

B = { u in E, C[u](T) >= γT + x_0 },   (4.4)

where x_0 is in [0, ∞).

Proposition 4.2. If the covariance function C is in L^1(R x R), then

P(u^ε in B) ≃ exp( - (u_- - u_+)^2 x_0^2 / ( 2 ε^2 T ∫∫ C(x, x') dx dx' ) ).   (4.5)

The approximate equality means that

lim_{ε->0} ε^2 log P(u^ε in B) = - (u_- - u_+)^2 x_0^2 / ( 2 T ∫∫ C(x, x') dx dx' ).

This proposition is a direct corollary of Proposition 4.1. If x_0 = 0 then U(x - γT) is in B and P(u^ε in B) = 1/2. If x_0 > 0 then the set B is indeed exceptional in that it corresponds to the event in which the center of the profile is anomalously ahead of its expected position. Note that the scaling x_0^2/T in (4.5) corresponds to that of the exit problem of a Brownian particle. The LDP for u^ε stated in Proposition 3.3 is not used here for two reasons:

1. It does not give a good result because the interior of B in E is empty (since we can construct a sequence of functions bounded in H^1 that blows up in L^1).
2. The distribution of the center C[u^ε](T) is here explicitly known for any ε. This is fortunate and it is not always true. When the rare event is more complex, the LDP for u^ε is useful, as we will show in the next section.

5. Wave displacement from the large deviations point of view.

5.1. The framework and result. In this section, we use the large deviation principle to compute the probability of the rare event that the perturbed traveling wave at time T is the same profile but with a displacement x_0:

A = { u in E such that u(0, x) = U(x), u(T, x) = U(x - γT - x_0) }.   (5.1)

The value of the rate function heavily depends on the kernel Φ, and formally the simplest kernel we can have is the identity operator. Of course, the identity operator is not Hilbert-Schmidt, but with the given A, we can construct a Hilbert-Schmidt Φ such that Φ is approximately the identity in the region of interest.

Assumption 5.1.
1. We assume that the center of transition of U(x) in (5.1) is 0, and hence that of U(x - γT - x_0) is x_0 + γT.
2. The kernel Φ_{l_c, L_0} has the following form:

Φ_{l_c, L_0}(x, x') = σ φ_1(x / L_0) l_c^{-1} φ_2( (x - x') / l_c ),   (5.2)

where σ, L_0 and l_c are positive constants.

3. φ_1 and φ_2 are L^1 ∩ H^1 functions so that their Fourier transforms are well-defined in R and Φ is Hilbert-Schmidt from L^2 to H^1 (see Lemma 5.4). φ_1 and φ_2 are normalized so that φ_1(0) = φ̂_2(0) = 1.
4. φ_1 is continuous and positive valued and φ̂_2 is nonzero. In addition, 1/φ_1(x) and 1/φ̂_2(ξ) have at most polynomial growth at ±∞.
5. φ_1(x) is increasing on (-∞, -1), decreasing on (1, ∞) and identically equal to 1 on [-1, 1].

By this setting, the support of the noise kernel Φ is roughly [-L_0, L_0]. The following theorem is the main result of this section, and the proof will be given in the next two subsections.

Theorem 5.2. Let A be defined by (5.1). There exists a constant C_0 such that for all Φ_{l_c, L_0} satisfying Assumption 5.1 with L_0 = γT + x_0 + C_0, the quantity J(A) of the large deviation problem (3.1) generated by Φ_{l_c, L_0} has the following asymptotic (in the sense that l_c -> 0) scales:

J(A) = inf_{u in A} I(u) =
  Θ(x_0^2),     for x_0 small and any fixed T,
  Θ(x_0^2 / T), for x_0 small and T small,
  Θ(x_0),       for x_0 large and any fixed T,
  Θ(x_0 / T),   for x_0 large and T small,
  Θ(1 / T),     for T small and any fixed x_0.   (5.3)

Here J(A) = Θ(x_0^2) as l_c -> 0, and similar expressions, mean that there are constants C_1 and C_2, independent of x_0, T and l_c, such that C_1 x_0^2 <= J(A) <= C_2 x_0^2 as l_c -> 0.

Proof. See Sections 5.2 and 5.3.

Remark. C_0 = max{C_v, C_w}, where C_v and C_w will be determined in Lemmas 5.6 and 5.10, respectively. L_0 = γT + x_0 + C_0 means that the support of the noise covers the region of interest of A, so that the rare event A is possible, and at the same time the support is not too large, which would waste energy; small l_c means that the noise in this region is weakly correlated and can be treated as white noise. We note that there are two occurrences of T in J(A): in (3.3) we integrate ||h(t, ·)||^2_{L^2} from 0 to T, and the rare event A is T-dependent. These two occurrences of T make the T-dependence of J(A) nontrivial; if T is small, the T-dependence in (5.1) is negligible and we can obtain the scaling in T. Therefore, if the traveling wave is stationary (γ = 0), then A has no T-dependence, so sharper bounds can be obtained.

Corollary 5.3. If γ = 0, then (5.3) can be refined:

J(A) = inf_{u in A} I(u) =
  Θ(x_0^2 / T), for x_0 small,
  Θ(x_0),       for x_0 large and any fixed T,
  Θ(x_0 / T),   for x_0 large and T small,
  Θ(1 / T),     for any fixed x_0.   (5.4)

Proof. See Sections 5.2 and 5.3.

Remark. The case where both x_0 and T are large is not clear even if γ = 0. This is because J(A) = Θ(x_0^2/T) for x_0 small compared to T, and J(A) = Θ(x_0/T) for x_0 large compared to T; when both x_0 and T are large, J(A) is a mixture of Θ(x_0^2/T) and Θ(x_0/T), so it is difficult to estimate the bounds.

In the rest of this subsection we discuss two relevant issues about this framework. We first show that the given kernel Φ_{l_c, L_0} in Assumption 5.1 is indeed Hilbert-Schmidt from L^2 to H^1.

Lemma 5.4. If φ_1 and φ_2 are both in L^1 ∩ H^k for k >= 0, then the kernel Φ_{l_c, L_0} of the form (5.2) is Hilbert-Schmidt from L^2 to H^k.

Proof. See Appendix D.1.

The other issue is that A has no interior and therefore J(Å) = ∞, which gives a trivial lower bound in the large deviation framework; to avoid this triviality, a rigorous way is to consider instead the (closed) rare event A_δ with a small δ > 0:

A_δ = { u in E such that u(0, x) = U(x), ||u(T, x) - U(x - γT - x_0)||_{H^1} <= δ }.   (5.5)

By Proposition 3.3, we have

-J(Å_δ) <= lim inf_{ε->0} ε^2 log P(u^ε in A_δ) <= lim sup_{ε->0} ε^2 log P(u^ε in A_δ) <= -J(Ā_δ).   (5.6)

In addition, the following lemma shows that J(A_δ) converges to J(A) as δ -> 0.

Lemma 5.5. By definition J(A_δ) is a decreasing function of δ and bounded from above by J(A). In addition, lim_{δ->0} J(A_δ) = J(A).

Proof. See Appendix D.2.

By Lemma 5.5 and the fact that J(Å_δ) <= J(A),

J(A) >= J(Å_δ) >= -lim inf_{ε->0} ε^2 log P(u^ε in A_δ) >= -lim sup_{ε->0} ε^2 log P(u^ε in A_δ) >= J(A_δ) -> J(A) as δ -> 0.

Namely, for ε and δ small, we have P(u^ε in A_δ) ≈ exp( -J(A)/ε^2 ), and it thus suffices to consider the bounds for J(A).

Remark. We can compare the results stated in Theorem 5.2, valid under Assumption 5.1, and the ones stated in Proposition 4.2. First we can check that A ⊂ B. Second we can anticipate that in fact B is not much larger than A, because it seems reasonable that the most likely paths that achieve C[u](T) >= γT + x_0 should be of the stable form U(x - γT - x_0) at time t = T. This conjecture can indeed be verified, as we now show. Note that

∫∫ C(x, x') dx dx' = ∫ [ ∫ Φ(x, x') dx ]^2 dx'.

If the kernel Φ_{l_c, L_0} is of the form (5.2), then we find

∫∫ C(x, x') dx dx' = (σ^2 L_0 / (2π)) ∫ |φ̂_2(l_c ξ / L_0)|^2 |φ̂_1(ξ)|^2 dξ.

By assuming l_c << L_0 (and φ̂_2(0) = 1) as in Assumption 5.1, this simplifies to

∫∫ C(x, x') dx dx' ≈ σ^2 L_0 ∫ φ_1(x)^2 dx.

Therefore, from Theorem 5.2 (in which L_0 = γT + x_0 + C_0), we get

∫∫ C(x, x') dx dx' = Θ(x_0) for x_0 large and T small or fixed;  Θ(1) for x_0 and T small or fixed,

which gives with (4.5) P(u^ε in B) ≈ exp( -J(B)/ε^2 ) with

J(B) =
  Θ(x_0^2),     for x_0 small and any fixed T,
  Θ(x_0^2 / T), for x_0 small and T small,
  Θ(x_0),       for x_0 large and any fixed T,
  Θ(x_0 / T),   for x_0 large and T small,
  Θ(1 / T),     for T small and any fixed x_0,

as in (5.3). We therefore recover the same asymptotic scales as in Theorem 5.2.

5.2. Proof of Theorem 5.2 for small x_0. Here we prove the bounds for J(A) in (5.3) and (5.4) when x_0 is small, as well as the bound Θ(1/T) for fixed x_0. We first consider the upper bounds. By definition, J(A) <= I(u) for any u in A; therefore the idea to obtain the upper bound is to find good test functions u. For small x_0, we use the linearly shifted profile as the test function: v(t, x) = U(x - (t/T)(γT + x_0)). If the kernel Φ_{l_c, L_0} is approximately σ times the identity in the region of interest, then

J(A) = inf_{u in A} I(u) <= I(v) ≈ (1/(2σ^2)) ∫_0^T ||v_t + (F(v))_x - (D v_x)_x||^2_{L^2} dt.   (5.7)

We show (5.7) rigorously by the following lemmas.

Lemma 5.6. Given the traveling wave U(x) in (5.1), there exists C_v such that, uniformly in x_0 and T and for all t in [0, T],

||[φ_1(x/L_0)^{-1} - 1] U'(x - (t/T)(γT + x_0))||_{L^2} <= 1,   (5.8)

where L_0 = γT + x_0 + C_v.

Proof. See Appendix D.3.

Lemma 5.7. Let v(t, x) = U(x - (t/T)(γT + x_0)). Given L_0 = x_0 + γT + C_v in Lemma 5.6, Φ_{l_c, L_0} satisfying Assumption 5.1, and h_{l_c, L_0} defined by

v_t + (F(v))_x - (D v_x)_x = Φ_{l_c, L_0} h_{l_c, L_0},   (5.9)

then

∫_0^T ||h_{l_c, L_0}(t, ·)||^2_{L^2} dt ≈ (1/σ^2) ∫_0^T ||(v_t + (F(v))_x - (D v_x)_x)/φ_1(x/L_0)||^2_{L^2} dt, as l_c -> 0.   (5.10)

Proof. See Appendix D.4.

Now we are ready to prove (5.7). By Lemma 5.7, we have

I(v) <= (1/2) ∫_0^T ||h_{l_c, L_0}(t, ·)||^2_{L^2} dt ≈ (1/(2σ^2)) ∫_0^T ||(v_t + (F(v))_x - (D v_x)_x)/φ_1(x/L_0)||^2_{L^2} dt.

The rest is to compute ∫_0^T ||(v_t + (F(v))_x - (D v_x)_x)/φ_1(x/L_0)||^2_{L^2} dt. Note that

(v_t + (F(v))_x - (D v_x)_x)(t, x) = -(x_0/T) U'(x - (t/T)(γT + x_0)).

With L_0 = γT + x_0 + C_v given in Lemma 5.6, we have

||U'(x - (t/T)(γT + x_0))/φ_1(x/L_0)||_{L^2} <= ||U'(x - (t/T)(γT + x_0))||_{L^2} + 1,

and therefore

∫_0^T ||(v_t + (F(v))_x - (D v_x)_x)/φ_1(x/L_0)||^2_{L^2} dt
 = (x_0^2/T^2) ∫_0^T ||U'(x - (t/T)(γT + x_0))/φ_1(x/L_0)||^2_{L^2} dt
 <= (x_0^2/T^2) ∫_0^T ( ||U'(x - (t/T)(γT + x_0))||_{L^2} + 1 )^2 dt
 = (x_0^2/T) ( ||U'||_{L^2} + 1 )^2.

Therefore, we have the asymptotic upper bound for J(A):

J(A) <= I(v) ≲ (x_0^2 / (2σ^2 T)) ( ||U'||_{L^2} + 1 )^2, as l_c -> 0.   (5.11)

Now we find the lower bounds for J(A). Let us first denote by 1 the function identically equal to 1, assume that (Φ_{l_c, L_0})^T 1 is in L^2, and find the general form of the lower bound.

Proposition 5.8. Given (Φ_{l_c, L_0})^T 1 in L^2, for any u in A,

I(u) >= (1 / (2T ||(Φ_{l_c, L_0})^T 1||^2_{L^2})) ( ∫ [U(x - x_0) - U(x)] dx )^2.   (5.12)

Proof. See Appendix D.5.

Then we show that (Φ_{l_c, L_0})^T 1 is indeed in L^2.

Lemma 5.9. Let Φ_{l_c, L_0} satisfy Assumption 5.1. Then (Φ_{l_c, L_0})^T 1 (x) is in L^2(R), and ||(Φ_{l_c, L_0})^T 1||^2_{L^2} -> σ^2 L_0 ||φ_1||^2_{L^2} as l_c -> 0.

Proof. See Appendix D.6.

From Lemma 5.6, L_0 = γT + x_0 + C_v, where C_v is uniform in x_0 and T. Then for l_c small the general lower bound (5.12) becomes:

J(A) >= (1 / (2T ||(Φ_{l_c, L_0})^T 1||^2_{L^2})) ( ∫ [U(x - x_0) - U(x)] dx )^2   (5.13)
 ≈ (1 / (2σ^2 ||φ_1||^2_{L^2} T (x_0 + γT + C_v))) ( ∫ [U(x - x_0) - U(x)] dx )^2, as l_c -> 0.

For small x_0, U(x - x_0) = U(x) - x_0 U'(x) + O(x_0^2). Then

J(A) ≳ (1 / (2σ^2 ||φ_1||^2_{L^2} T (x_0 + γT + C_v))) ( ∫ [ -x_0 U'(x) + O(x_0^2) ] dx )^2   (5.14)
 = (1 / (2σ^2 ||φ_1||^2_{L^2} T (x_0 + γT + C_v))) [ x_0 (u_- - u_+) + O(x_0^3) ]^2, as l_c -> 0.

To summarize this subsection, given A in (5.1), we choose the kernel Φ_{l_c, L_0} with L_0 = γT + x_0 + C_v, which is the support of the noise covering the region of interest of A, and then by (5.11) and (5.14), J(A) has the following asymptotic (in the sense that l_c -> 0) bounds:

J(A) =
  Θ(x_0^2),     for x_0 small and any fixed T,
  Θ(x_0^2 / T), for x_0 small and T small,
  Θ(1 / T),     for T small and any fixed x_0,

for nonzero γ, and, for γ = 0,

J(A) = { Θ(x_0^2 / T), for x_0 small;  Θ(1 / T), for any fixed x_0 }.

5.3. Proof of Theorem 5.2 for large x_0. Now we consider the bounds for J(A) in (5.3) and (5.4) when x_0 is large. We use the test function

w(t, x) = (1 - t/T) U(x) + (t/T) U(x - γT - x_0),   (5.15)

the linear interpolation of the two profiles, and show that

J(A) = inf_{u in A} I(u) <= I(w) ≈ (1/(2σ^2)) ∫_0^T ||w_t + (F(w))_x - (D w_x)_x||^2_{L^2} dt.   (5.16)

We also show (5.16) by two similar technical lemmas.

Lemma 5.10. Given the traveling wave U(x) in (5.1) and w(t, x) in (5.15), there exists C_w such that, uniformly in x_0 and T and for all t in [0, T],

||[φ_1(x/L_0)^{-1} - 1] [U(x - γT - x_0) - U(x)]||_{L^2} <= 1,   (5.17)
||[φ_1(x/L_0)^{-1} - 1] [(F(w))_x - (D w_x)_x]||_{L^2} <= 1,

where L_0 = γT + x_0 + C_w.

Proof. See Appendix D.7.

Because w has the same property that w_t + (F(w))_x - (D w_x)_x is in C([0, T], S(R)), we have the following lemma by replacing v by w in Lemma 5.7.

Lemma 5.11. Let w be as in (5.15). Given L_0 = x_0 + γT + C_w in Lemma 5.10, Φ_{l_c, L_0} satisfying Assumption 5.1, and h_{l_c, L_0} defined by w_t + (F(w))_x - (D w_x)_x = Φ_{l_c, L_0} h_{l_c, L_0}, then

I(w) <= (1/2) ∫_0^T ||h_{l_c, L_0}(t, ·)||^2_{L^2} dt ≈ (1/(2σ^2)) ∫_0^T ||(w_t + (F(w))_x - (D w_x)_x)/φ_1(x/L_0)||^2_{L^2} dt, as l_c -> 0.

The rest is to compute ∫_0^T ||(w_t + (F(w))_x - (D w_x)_x)/φ_1(x/L_0)||^2_{L^2} dt. Note that

w_t + (F(w))_x - (D w_x)_x = (1/T) [U(x - γT - x_0) - U(x)] + (F(w))_x - (D w_x)_x.

With L_0 = γT + x_0 + C_w given in Lemma 5.10,

∫_0^T ||(w_t + (F(w))_x - (D w_x)_x)/φ_1(x/L_0)||^2_{L^2} dt
 <= (2/T^2) ∫_0^T ||[U(x - γT - x_0) - U(x)]/φ_1(x/L_0)||^2_{L^2} dt + 2 ∫_0^T ||[(F(w))_x - (D w_x)_x]/φ_1(x/L_0)||^2_{L^2} dt
 <= (2/T^2) ∫_0^T ( ||U(x - γT - x_0) - U(x)||_{L^2} + 1 )^2 dt + 2 ∫_0^T ( ||(F(w))_x - (D w_x)_x||_{L^2} + 1 )^2 dt.

We note that for large x_0, ||U(x - γT - x_0) - U(x)||^2_{L^2} = Θ(x_0), and ||(F(w))_x - (D w_x)_x||^2_{L^2} = O(1). Then for L_0 = γT + x_0 + C_w given in Lemma 5.10, we have the upper bound for J(A) for large x_0:

J(A) <= I(w) ≲ Θ(x_0/T) + Θ(T), as l_c -> 0.   (5.18)

Now we consider the lower bound. We know that (5.13) still holds for large x_0 and with C_w given in Lemma 5.10. For large x_0, ∫ [U(x - x_0) - U(x)] dx = Θ(x_0), so

J(A) ≳ (1/(2σ^2 ||φ_1||^2_{L^2} T (x_0 + γT + C_w))) Θ(x_0^2), as l_c -> 0.   (5.19)

Consequently, by combining (5.18) and (5.19), we have the bounds in (5.3) and (5.4) for large x_0:

J(A) = { Θ(x_0), for x_0 large and any fixed T;  Θ(x_0/T), for x_0 large and T small }.

6. Large deviations for discrete conservation laws. Conservation laws can only be solved numerically, in general, so we need to consider space-time discretizations. For the calculation of small probabilities of large deviations, we may pass directly to the calculation of the infimum of the rate function, which we know analytically. First we discretize it in space and time and then use a suitable optimization method to find the approximate minimizer and the approximate value of the rate function. This way of calculating probabilities of large deviations has been carried out previously [9] for different stochastically driven partial differential equations. More involved methods that use adaptive meshes are discussed in [6]. In the next subsection we show briefly that the rate function for large deviations of discrete conservation laws using Euler schemes is, as expected, the corresponding discretization of the continuum rate function.

6.1. Large deviations with Euler schemes. To formulate the discrete problem, we discretize the space and time domains with uniform grids, with x_L = x_0 < x_1 < ... < x_M = x_R, M Δx = x_R - x_L, and 0 = t_0 < t_1 < ... < t_N = T, N Δt = T. Let Q^n_m denote the average of u over the m-th cell [x_{m-1}, x_m] (whose center is x_{m-1/2} = (x_{m-1} + x_m)/2)

at time nΔt, and Q^n denote the vector (Q^n_1, ..., Q^n_M)^T. The Euler scheme for the conservation law is

Q^{n+1}_m = Q^n_m - (Δt/Δx)(F^n_{m+1/2} - F^n_{m-1/2}) + D (Δt/(Δx)^2)(Q^n_{m+1} - 2Q^n_m + Q^n_{m-1}) + ε ΔW^n_m,   (6.1)

for m = 1, ..., M, n = 0, ..., N-1, where F^n_{m±1/2}(Q^n) are numerical fluxes constructed by standard finite volume methods such as Godunov or local Lax-Friedrichs (LLF), possibly with higher order ENO (Essentially Non-Oscillatory) reconstructions. The initial conditions (Q^0_m)_{m=1,...,M} are given by (6.4), the boundary conditions (Q^n_1)_{n=0,...,N} and (Q^n_M)_{n=0,...,N} are given by (6.7), and the fluxes are given explicitly in (6.6), in the next subsection, for Burgers equation. We let

b(Q^n) = (b_1(Q^n), ..., b_M(Q^n))^T,  b_m(Q^n) = -(1/Δx)(F^n_{m+1/2} - F^n_{m-1/2}) + (D/(Δx)^2)(Q^n_{m+1} - 2Q^n_m + Q^n_{m-1}).

Let (ΔW^n_m)_{m=1,...,M, n=0,...,N-1} be Gaussian random variables with mean 0 and covariance

E(ΔW^n_m ΔW^{n'}_{m'}) = (Δt/Δx) C_{m-m'} if n = n', and 0 otherwise.

The matrix C = (C_{ij})_{i,j=1,...,M} is symmetric and non-negative definite. For simplicity, we assume that C is positive definite, and then C = ΦΦ^T for an invertible matrix Φ. By the Markov property, the joint density function f_{(Q^1,...,Q^N)} is

f_{(Q^1,...,Q^N)}(q^1, ..., q^N) = Π_{n=0}^{N-1} f_{Q^{n+1}|Q^n}(q^{n+1}; q^n),

where

f_{Q^{n+1}|Q^n}(q^{n+1}; q^n)
 = Z^{-1} exp( -(Δt Δx/(2ε^2)) ( (q^{n+1} - q^n)/Δt - b(q^n) )^T C^{-1} ( (q^{n+1} - q^n)/Δt - b(q^n) ) )
 = Z^{-1} exp( -(Δt Δx/(2ε^2)) ||Φ^{-1}( (q^{n+1} - q^n)/Δt - b(q^n) )||^2 ),

where Z = (2π ε^2 Δt/Δx)^{M/2} (det C)^{1/2} and the l^2-norm is here ||q||^2 = Σ_{j=1}^M q_j^2. The exact probability that Q = (Q^1, ..., Q^N) is in A with the given Q^0 is therefore

P(Q in A) = ∫_A Z^{-N} exp( -(Δt Δx/(2ε^2)) Σ_{n=0}^{N-1} ||Φ^{-1}( (q^{n+1} - q^n)/Δt - b(q^n) )||^2 ) dq^1 ... dq^N.

Because the problem is finite-dimensional, the LDP is a form of Laplace's method for the asymptotic evaluation of integrals [4] and the rate function for q = (q^1, ..., q^N) is

I(q) = (Δt Δx / 2) Σ_{n=0}^{N-1} ||Φ^{-1}( (q^{n+1} - q^n)/Δt - b(q^n) )||^2.   (6.2)

From this expression it is clear that the rate function of the continuous conservation law is the limit of (6.2) as Δt, Δx -> 0. However, the LDP of the discrete conservation law does not immediately imply that of the continuous case. The limit ε -> 0 and the limit Δt, Δx -> 0 need to be interchangeable. In the language of large deviations, the law of the discrete conservation law has to be exponentially equivalent to that of the continuous one [7]. Without going into the proof of this, we can only say that (6.2) is a discretization of the rate function of the continuous conservation law.

6.2. Numerical calculation of rate functions for changes in traveling waves. In the numerical simulations we consider the rare event that a traveling wave at time 0 becomes a different traveling wave at time T due to the small random perturbations. We carry out numerical calculations with Burgers equation as a simple but representative conservation law. For convenience, given a traveling wave U_0 with speed γ, we consider Burgers equation in moving coordinates:

u_t + ( (u - γ)^2 / 2 )_x = (D u_x)_x + ε Ẇ.   (6.3)

Thus U_0 is a traveling wave of (6.3) with speed zero. We are interested in the rare event that u(0, x) = U_0(x) and u(T, x) = U_T(x) because of εẆ, where U_0 and U_T are two different traveling waves. We therefore minimize the rate function (6.2) to compute the asymptotic probability. The initial and terminal conditions

q^0_m = U_0(x_{m-1/2}) for all m = 1, ..., M,   (6.4)
q^N_m = U_T(x_{m-1/2}) for all m = 1, ..., M,   (6.5)

are simply the values of the functions at the centers of the cells. For m = 1, ..., M we let F^n_{m±1/2} be the numerical fluxes for (u - γ)^2/2 at x_{m±1/2}. We use Godunov's method [7] to construct F^n_{m±1/2}:

F^n_{m-1/2} = min_{q^n_{m-1} <= q <= q^n_m} (q - γ)^2/2, if q^n_{m-1} <= q^n_m;  max_{q^n_m <= q <= q^n_{m-1}} (q - γ)^2/2, if q^n_{m-1} >= q^n_m.   (6.6)

When the final traveling wave solution U_T is the shifted initial profile U_0(x - x_0) for some x_0, then we impose the boundary conditions

q^n_1 = u_- and q^n_M = u_+ for all n = 0, ..., N,   (6.7)

where u_± = lim_{x -> ±∞} U_0(x). We will discuss the boundary conditions imposed in the other cases later on.
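As an illustration of (6.1), (6.2) and (6.6), the following is a minimal sketch (illustrative names and parameters, not the authors' code) of the discrete rate function for Burgers equation in moving coordinates, with Godunov fluxes and Φ equal to the identity; the path q is stored as an array of shape (N+1, M) and the boundary cells are handled with ghost values u_±.

```python
# Sketch of the discrete rate function (6.2) for Burgers in moving coordinates,
# with Godunov fluxes (6.6) and Phi = identity.  Names and values are illustrative.
import numpy as np

def godunov_flux(ql, qr, gamma):
    """Godunov flux for f(q) = (q - gamma)^2 / 2 between left state ql and right state qr."""
    f = lambda q: 0.5 * (q - gamma) ** 2
    if ql <= qr:     # minimize f over [ql, qr]
        return f(np.clip(gamma, ql, qr))
    else:            # maximize f over [qr, ql]
        return max(f(ql), f(qr))

def drift(q, dx, D, gamma, u_minus, u_plus):
    """b(q): Godunov transport term plus centered diffusion, with fixed boundary states (6.7)."""
    qe = np.concatenate([[u_minus], q, [u_plus]])       # ghost cells carrying u_- and u_+
    F = np.array([godunov_flux(qe[m], qe[m + 1], gamma) for m in range(len(qe) - 1)])
    conv = -(F[1:] - F[:-1]) / dx
    diff = D * (qe[2:] - 2.0 * qe[1:-1] + qe[:-2]) / dx**2
    return conv + diff

def rate_function(q_path, dt, dx, D, gamma, u_minus, u_plus):
    """I(q) = (dt*dx/2) * sum_n || (q^{n+1}-q^n)/dt - b(q^n) ||^2, as in (6.2) with Phi = I."""
    I = 0.0
    for n in range(q_path.shape[0] - 1):
        resid = (q_path[n + 1] - q_path[n]) / dt - drift(q_path[n], dx, D, gamma, u_minus, u_plus)
        I += 0.5 * dt * dx * np.sum(resid**2)
    return I
```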

The covariance matrix of the random coefficients (ΔW^n_i)_{i=1,...,M} is set to be equal to C_{ij} = δ_{ij} and thus Φ = I_{M x M}. In other words, (ΔW^n_i)_{i=1,...,M} are i.i.d. Gaussian random variables. Since the discrete problem is finite dimensional, Φ is clearly a Hilbert-Schmidt matrix. The objective is to minimize the rate function (6.2). Because the initial, terminal and boundary conditions are easily integrated into the definition of the discrete rate function, we can minimize I(q) by unconstrained optimization methods. The BFGS quasi-Newton method [9] is our optimization algorithm. An important issue in numerical optimization is that any gradient-based method, for example the BFGS method that we use, only gives a local optimum unless the objective function is convex. In our case, it is not clear whether the discrete rate function (6.2) is convex or not. However, based on our numerical simulations we note the following.

1. Our numerical results for the optimal shifted profiles coincide with the analytical predictions (5.4).
2. Instead of using a good initial guess for the minimizer in the numerical optimization, we have also numerically verified that completely random initial guesses give essentially the same result.
3. We have checked numerically whether the rate function (6.2) is convex. We randomly pick two close-by test paths to see if the midpoint convexity of (6.2) is satisfied. We find that in 99.98% out of 10^6 pairs the discrete rate function passes the convexity test. It is not known what causes the 0.02% failures.

We conclude that, based on numerical calculations, (6.2) is essentially convex, if it is not fully convex. This explains the observed robustness of the numerical optimization.

6.2.1. The numerical setup. We use different U_0 and U_T to calculate probabilities of several rare events. Our main interest is the anomalous wave displacement, which is theoretically analyzed in the previous sections. The other cases are the wave speed change, the transition from a weak shock to a strong shock, and the transition from a strong shock to a weak shock. We have not carried out an analysis of the last three cases. However, the probabilities of these rare events can be calculated numerically and show how unlikely such events are compared to anomalous wave displacement. In each configuration we consider the high viscosity case (D = 1) and the low viscosity case (D = 0.1). The final time T, the space step Δx and the time step Δt are the same in all simulations, and the linear interpolation of U_0 and U_T is the initial guess in the numerical optimization. As we noted before, a random initial guess gives essentially the same result, but we use the linear interpolation to speed up the optimization.

6.2.2. Anomalous wave displacements. In this case, we let U_0 be the traveling wave solution for Burgers equation (6.3), and U_T = U_0(x - x_0) represent a shifted traveling wave. This is the discrete version of (5.1), and we find that the numerical results are consistent with the analytical result (5.4). We first consider Fig. 6.1 and Fig. 6.2 with D = 1. The optimal path is close to the linear shift when x_0 is small and it looks like the linear interpolation when x_0 is large. This also motivates the choice of the test functions v and w for the upper bounds of J(A) in Sections 5.2 and 5.3. Further, Fig. 6.2 shows that the optimal I is quadratic near x_0 = 0, is linear for large x_0, and is of order 1/T in T. These observations confirm the analysis (5.4).
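As a sketch of the optimization step described above (assuming the rate_function helper from the previous sketch, and using scipy's BFGS routine as a stand-in for the authors' implementation), the initial, terminal and boundary data are held fixed and the interior time slices are optimized:

```python
# Sketch: minimize the discrete rate function over the interior slices q^1,...,q^{N-1}
# with a BFGS routine, keeping q^0 = U_0 and q^N = U_T fixed.  Uses rate_function()
# from the previous sketch; all parameters are illustrative.
import numpy as np
from scipy.optimize import minimize

def optimal_path(U0_cells, UT_cells, N, dt, dx, D, gamma, u_minus, u_plus):
    M = U0_cells.size
    # initial guess: linear interpolation in time between the two profiles
    guess = np.array([(1 - n / N) * U0_cells + (n / N) * UT_cells for n in range(N + 1)])

    def objective(z):
        q = np.vstack([U0_cells, z.reshape(N - 1, M), UT_cells])
        return rate_function(q, dt, dx, D, gamma, u_minus, u_plus)

    res = minimize(objective, guess[1:N].ravel(), method="BFGS", options={"maxiter": 500})
    q_opt = np.vstack([U0_cells, res.x.reshape(N - 1, M), UT_cells])
    return q_opt, res.fun       # optimal path and the minimized value of I
```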

[Figure 6.1 appears here: four panels showing U_0, U_T and the optimal path for increasing values of x_0 (D = 1); the corresponding values of I are reported in the panel titles.]

Fig. 6.1. The optimal paths and their values of I in (6.2) in the case where U_T(x) = U_0(x - x_0), for different x_0. In each figure, we plot the curves indicating the optimal path at times 0, Δt, 2Δt, ..., T. We can see that for small x_0 the optimal path is close to the linearly shifted traveling wave, and for large x_0 the optimal path looks like the linear interpolation of U_0 and U_T. These observations are consistent with the test functions v and w we use for the upper bounds in Sections 5.2 and 5.3.

We consider next Fig. 6.3 and Fig. 6.4 with D = 0.1. As the transition regions are very narrow and separate very quickly in the low viscosity case, the optimal path is nearly the linear interpolation, and therefore the I versus x_0 plot is almost linear except around x_0 = 0. Moreover, the optimal rate function is still of order 1/T in T, which is also seen in the analysis (5.4).

6.2.3. Change of wave speeds. In this subsection we consider the rare event in which u(0, x) = U_0(x) and u(T, x) = U_T(x) := U_1(x) because of εẆ, where U_1(x) is a traveling wave with u^1_± := lim_{x -> ±∞} U_1(x) different from u_± = lim_{x -> ±∞} U_0(x), and such that γ_1 = (F(u^1_+) - F(u^1_-))/(u^1_+ - u^1_-) is different from γ = (F(u_+) - F(u_-))/(u_+ - u_-). This case does not belong to the class of problems addressed in the previous sections of this paper, in which the boundary conditions are fixed. But it is still possible to look for the optimal paths going from U_0 to U_T that minimize the rate function I. The boundary conditions are more delicate to implement in this case. We know that U_0 and U_T are very close to constants when x_L and x_R are far from their transition regions. In this case, the LDP implies that the optimal path around the boundaries should be very close to the linear interpolations of U_0 and U_T in time. Therefore we let:

q^n_1 = (1 - n/N) q^0_1 + (n/N) q^N_1,  q^n_M = (1 - n/N) q^0_M + (n/N) q^N_M.

[Figure 6.2 appears here: left panel, I versus x_0; right panel, 1/I versus T (wave shift, D = 1).]

Fig. 6.2. Left: the optimal values of I in (6.2) versus x_0 in the case where U_T(x) = U_0(x - x_0), for a range of x_0, with the same setting as in Fig. 6.1. We see that the curve is quadratic for small x_0 and linear for large x_0 (see (5.4)). Right: the reciprocal of the optimal values of I in (6.2) versus T in the case where U_T(x) = U_0(x - x_0), for fixed x_0 and a range of T, with the same setting as in Fig. 6.1. We see that the curve is linear in T, which means the optimal I is Θ(1/T) (see also (5.4)).

It is possible not to set the boundary conditions and to optimize the boundary cells as well. Our numerical simulations show, however, that the solution in both cases is basically the same except for some oscillations near the boundaries. The oscillations come from the inappropriate discretization at the boundaries, and this is a limitation of the numerical discretization. We can add a few extra boundary conditions to reduce the unwanted oscillations at the boundaries. For example, we can additionally set

q^n_2 = (1 - n/N) q^0_2 + (n/N) q^N_2,  q^n_{M-1} = (1 - n/N) q^0_{M-1} + (n/N) q^N_{M-1}.

The results are shown in Fig. 6.5. We let γ = 0 and γ_1 = 3.5 while we keep u_- - u_+ = u^1_- - u^1_+, so that U_0 and U_T have roughly the same transition magnitude. We see that the values of the rate function are much larger than those of the anomalous wave displacements. This means that it is very unlikely to have changes of wave speeds compared to wave displacements.

6.2.4. Weak shocks to strong shocks and strong shocks to weak shocks. We also consider the case where U_0 is a weak (strong) shock while U_T is a strong (weak) shock. By a strong shock we mean that the difference between u_- and u_+ is large. This case is also not in the range of our analytical framework, but we can still compute the rate function after we impose suitable boundary conditions. We use the same boundary conditions as the ones in the previous subsection:

q^n_1 = (1 - n/N) q^0_1 + (n/N) q^N_1,  q^n_M = (1 - n/N) q^0_M + (n/N) q^N_M,
q^n_2 = (1 - n/N) q^0_2 + (n/N) q^N_2,  q^n_{M-1} = (1 - n/N) q^0_{M-1} + (n/N) q^N_{M-1}.

From Fig. 6.6 and Fig. 6.7 we see that the optimal path from weak to strong and the one from strong to weak are significantly different, even if the reference strong and weak shocks are fixed. We note the very large value of the rate function compared to anomalous displacement, and how it depends on D. This confirms quantitatively the expectation that shock profiles are very stable and are not easily perturbed except for displacements.

[Figure 6.3 appears here: four panels showing U_0, U_T and the optimal path for increasing values of x_0 in the low viscosity case (D = 0.1); the corresponding values of I are reported in the panel titles.]

Fig. 6.3. The optimal paths and their values of I in (6.2) in the case where U_T(x) = U_0(x - x_0), for different x_0. In each figure, we plot the curves indicating the optimal path at times 0, Δt, 2Δt, ..., T. Differently from Fig. 6.1, as the transition region is very small, we see nearly no shifted part but only the linear interpolation.

7. Direct numerical simulations with importance sampling. The large deviations probabilities calculated in Sections 5 and 6 are only the exponential decay rates of the probabilities but not the actual probabilities. In this section we use Monte Carlo methods to compute the actual probabilities numerically.

7.1. Burgers equation with spatially correlated random perturbations. We reformulate the discretized problem for Burgers equation when we have spatially correlated random perturbations. Given a traveling wave solution U_0(x - γt) of Burgers equation with lim_{x -> ±∞} U_0(x) = u_±, we transform to moving coordinates

u_t + ( (u - γ)^2 / 2 )_x = (D u_x)_x + ε Ẇ.   (7.1)

Then U_0(x) is a stationary traveling wave of (7.1). The rare event we consider is

A_δ = { u in E such that u(0, x) = U_0(x), ||u(T, x) - U_0(x - x_0)||_{L^2} <= δ }.

Although for a discrete conservation law it is also possible to consider the other cases in Section 6, their probabilities are too small to compute by the basic Monte Carlo method, so we omit them. To compute P(u in A_δ) numerically, we discretize the space and time domains uniformly as in Section 6.1: x_L = x_0 < x_1 < ... < x_M = x_R, M Δx = x_R - x_L, and 0 = t_0 < t_1 < ... < t_N = T, N Δt = T. Here Q^n_m denotes the average of u over the m-th cell at time nΔt.

[Figure 6.4 appears here: left panel, I versus x_0; right panel, 1/I versus T (wave shift, D = 0.1).]

Fig. 6.4. Left: the optimal values of I in (6.2) versus x_0 in the case where U_T(x) = U_0(x - x_0), with the same setting as in Fig. 6.3. As seen in Fig. 6.3, the optimal path is the linear interpolation, and thus the curve is almost linear except for a small perturbation around x_0 = 0. Right: the reciprocal of the optimal values of I in (6.2) versus T in the case where U_T(x) = U_0(x - x_0), for a range of T, with the same setting as in Fig. 6.3. We see that the curve is linear in T, which means the optimal I is O(1/T).

[Figure 6.5 appears here: two panels (D = 1 and D = 0.1) showing the optimal path between two traveling waves with different speeds; the values of I are reported in the panel titles.]

Fig. 6.5. The optimal paths and their values of I in (6.2) in the case where U_0 and U_T have different speeds. Here, in Burgers equation (6.3), the speed of U_0 is zero and the speed of U_T is 3.5. The differences between the left and right boundary values are kept the same so that U_0 and U_T have the same transition magnitude. In each figure, we plot the curves indicating the optimal path at times 0, Δt, 2Δt, ..., T.

The cell averages evolve by the Euler method:

Q^{n+1}_m = Q^n_m - (Δt/Δx)(F^n_{m+1/2} - F^n_{m-1/2}) + D (Δt/(Δx)^2)(Q^n_{m+1} - 2Q^n_m + Q^n_{m-1}) + ε ΔW^n_m,   (7.2)

for m = 1, ..., M, n = 0, ..., N-1, where F^n_{m±1/2}(Q^n) are numerical fluxes of (u - γ)^2/2 constructed by Godunov's method and (ΔW^n_m)_{m=1,...,M, n=0,...,N-1} are Gaussian random variables with mean 0 and covariance

E(ΔW^n_m ΔW^{n'}_{m'}) = (Δt/Δx) C_{m-m'} if n = n', and 0 otherwise.

In order to make the problem more realistic, we assume that the variables ΔW^n_m are spatially correlated:

C_{mm'} = σ^2 exp( -|x_m - x_{m'}| / l_c )  and  C = (C_{ij})^M_{i,j=1} = ΦΦ^T.
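A minimal sketch (illustrative grid and parameters) of how one time slice of the spatially correlated increments ΔW^n can be generated: form the covariance matrix C, take Φ as a Cholesky factor, and scale i.i.d. standard normals by the factor sqrt(Δt/Δx).

```python
# Sketch (illustrative parameters): generate one time slice of the correlated increments
# Delta W^n with covariance (dt/dx)*C, C_{mm'} = sigma^2 * exp(-|x_m - x_{m'}| / l_c),
# via a Cholesky factor Phi with C = Phi Phi^T.
import numpy as np

dx, dt = 0.25, 0.025                 # hypothetical grid steps
sigma, l_c = 1.0, 5.0                # hypothetical noise strength and correlation length
x = np.arange(-5.0, 10.0, dx) + 0.5 * dx    # cell centers on a hypothetical domain

C = sigma**2 * np.exp(-np.abs(x[:, None] - x[None, :]) / l_c)
Phi = np.linalg.cholesky(C + 1e-12 * np.eye(x.size))   # small jitter for numerical safety

rng = np.random.default_rng(0)
dW = np.sqrt(dt / dx) * Phi @ rng.standard_normal(x.size)   # one slice Delta W^n
```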

[Figure 6.6 appears here: two panels (D = 1 and D = 0.1) showing the optimal path from a weak shock U_0 to a strong shock U_T; the values of I are reported in the panel titles.]

Fig. 6.6. The optimal paths and their values of I in (6.2) in the case where U_0 is a weak shock while U_T is a strong shock. Here U_0 has a small difference between its boundary values while U_T has a large one. In each figure, we plot the curves indicating the optimal path at times 0, Δt, 2Δt, ..., T.

[Figure 6.7 appears here: two panels (D = 1 and D = 0.1) showing the optimal path from a strong shock U_0 to a weak shock U_T; the values of I are reported in the panel titles.]

Fig. 6.7. The optimal paths and their values of I in (6.2) in the case where U_0 is a strong shock while U_T is a weak shock. Here U_0 has a large difference between its boundary values while U_T has a small one. In each figure, we plot the curves indicating the optimal path at times 0, Δt, 2Δt, ..., T.

Finally we impose the initial and boundary conditions:

Q^0_m = U_0(x_{m-1/2}),  Q^n_1 = u_-  and  Q^n_M = u_+.

7.2. Introduction to importance sampling. To estimate P(u in A_δ), we may use the basic Monte Carlo method. The Monte Carlo strategy is as follows. We generate K independent samples ΔW^{(k)} = (ΔW^{n,(k)}_m)_{m=1,...,M, n=0,...,N-1}, k = 1, ..., K, of the Gaussian vector (ΔW^n_m)_{m=1,...,M, n=0,...,N-1}, which give K independent samples Q^{(k)} = (Q^{n,(k)}_m)_{m=1,...,M, n=1,...,N}, k = 1, ..., K, of the random vector Q = (Q^n_m)_{m=1,...,M, n=1,...,N}. The basic Monte Carlo estimator is

P̂_MC = (1/K) Σ_{k=1}^K 1_{A_δ}(Q^{(k)}),   (7.3)

where

Q is in A_δ if and only if Δx Σ_{m=1}^M [Q^N_m - U_0(x_{m-1/2} - x_0)]^2 <= δ^2.   (7.4)
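The estimator (7.3)-(7.4) can be sketched as follows, reusing the drift and Cholesky-factor helpers from the earlier sketches; all names and parameters are illustrative rather than the authors' code.

```python
# Sketch of the basic Monte Carlo estimator (7.3)-(7.4): evolve the noisy Euler
# scheme (7.2) K times and count how often the terminal state lands in A_delta.
# Uses drift() and the Cholesky factor Phi from the previous sketches.
import numpy as np

def simulate_path(Q0, N, dt, dx, D, gamma, u_minus, u_plus, eps, Phi, rng):
    Q = Q0.copy()
    for _ in range(N):
        dW = np.sqrt(dt / dx) * Phi @ rng.standard_normal(Q.size)
        Q = Q + dt * drift(Q, dx, D, gamma, u_minus, u_plus) + eps * dW
    return Q

def basic_mc(U0_cells, UT_cells, delta, K, N, dt, dx, D, gamma, u_minus, u_plus, eps, Phi, seed=0):
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(K):
        QN = simulate_path(U0_cells, N, dt, dx, D, gamma, u_minus, u_plus, eps, Phi, rng)
        # membership test (7.4): discrete L^2 distance to the shifted profile
        if dx * np.sum((QN - UT_cells) ** 2) <= delta**2:
            hits += 1
    return hits / K
```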

In other words, P̂_MC is the empirical frequency that Q^{(k)} is in A_δ. It is an unbiased estimator, E[P̂_MC] = P(Q in A_δ). By the law of large numbers, it is strongly convergent: P̂_MC -> P(Q in A_δ) almost surely as K -> ∞. Its variance is given by

Var(P̂_MC) = (1/K) Var(1_{A_δ}(Q)) = (1/K) ( P(Q in A_δ) - P(Q in A_δ)^2 ).

In order to have a meaningful estimation, the standard deviation of the estimator and P(Q in A_δ) should be of the same order. Namely, the relative error

Var(P̂_MC)^{1/2} / P(Q in A_δ) = ( (1 - P(Q in A_δ)) / (K P(Q in A_δ)) )^{1/2}

should be of order one (or smaller). This means that the number K of Monte Carlo samples should be at least of the order of the reciprocal of the probability P(Q in A_δ). We note that for ε small, P(Q in A_δ) decreases exponentially and so K should be increased exponentially; the exponential growth of K makes the basic Monte Carlo method computationally infeasible.

The well established way to overcome the difficulty of calculating rare event probabilities is to use importance sampling. The problem with basic Monte Carlo is that for small ε there are very few samples in A_δ under the original measure P, so the estimator is inaccurate. In importance sampling we change the original measure so that there is a significant fraction of the Q^{(k)} in A_δ under the new measure Q, even for small ε. Since we use the biased measure Q to generate the Q^{(k)}, it is necessary to weight the simulation outputs in order to get an unbiased estimator of P(Q in A_δ). The correct weight is the likelihood ratio, since we have:

P(Q in A_δ) = E_P[ 1_{A_δ}(Q) ] = E_Q[ 1_{A_δ}(Q) (dP/dQ)(Q) ].

Then the importance sampling estimator is

P̂_IS = (1/K) Σ_{k=1}^K 1_{A_δ}(Q^{(k)}) (dP/dQ)(Q^{(k)}),   (7.5)

where the Q^{(k)} are generated under Q. The estimator P̂_IS is unbiased, E_Q[P̂_IS] = P(Q in A_δ), and its variance is

Var_Q(P̂_IS) = (1/K) Var_Q( 1_{A_δ}(Q) (dP/dQ)(Q) ).

The main issue in importance sampling is how to choose a good Q so as to have a low Var_Q[1_{A_δ}(Q) (dP/dQ)(Q)]. In many cases (see for example [,, 3, ]), it can be shown that the change of measure suggested by the most probable path of the LDP is asymptotically optimal as ε -> 0. However, it is also well known that in some cases (see []), the estimator obtained by this strategy is worse than basic Monte Carlo, and may even have infinite variance. However, because the rare event A_δ is convex and the discrete rate function I is (numerically tested) essentially convex, the importance sampling estimator given by this strategy is expected to be asymptotically optimal, and we will see that it indeed works very well.

7.3. Importance sampling based on the most probable path. In this subsection we implement the importance sampling by using a biased distribution centered on the most probable path obtained in Section 6. From Section 6.2, Q̄ := arg inf_{Q in A_δ} I(Q) is the most probable path as ε -> 0 by the large deviation principle. We choose h̄ = (h̄^n)_{n=0,...,N-1} = (h̄^n_m)_{m=1,...,M, n=0,...,N-1} such that

Q̄^{n+1}_m = Q̄^n_m - (Δt/Δx)(F_{m+1/2}(Q̄^n) - F_{m-1/2}(Q̄^n)) + D (Δt/(Δx)^2)(Q̄^n_{m+1} - 2Q̄^n_m + Q̄^n_{m-1}) + (Φ h̄^n)_m,   (7.6)

for m = 1, ..., M, n = 0, ..., N-1. Assume that under the probability Q, the vector ΔW^n := (ΔW^n_m)_{m=1,...,M} in (7.2) is multivariate Gaussian N(ε^{-1} Φ h̄^n, (Δt/Δx) C) and Cov(ΔW^n_m, ΔW^{n'}_{m'}) = 0 if n is not equal to n'. Denoting ΔW = (ΔW^n)_{n=0,...,N-1}, the likelihood ratio dP/dQ can be computed explicitly:

(dP/dQ)(ΔW) = exp( -(Δx/(2Δt)) Σ_{n=0}^{N-1} [ ||Φ^{-1} ΔW^n||^2 - ||Φ^{-1} ΔW^n - ε^{-1} h̄^n||^2 ] ).   (7.7)

Note that under Q, (7.2) can be written as

Q^{n+1}_m = Q^n_m - (Δt/Δx)(F^n_{m+1/2} - F^n_{m-1/2}) + D (Δt/(Δx)^2)(Q^n_{m+1} - 2Q^n_m + Q^n_{m-1}) + (Φ h̄^n)_m + ε ΔW̃^n_m,   (7.8)

where ΔW̃ = (ΔW̃^n_m)_{m=1,...,M, n=0,...,N-1} is zero-mean, Gaussian with the spatial covariance (Δt/Δx) C, and white in time. Then (7.7) can be written as

(dP/dQ)(ΔW̃) = exp( -(Δx/(2Δt)) Σ_{n=0}^{N-1} [ ||Φ^{-1} ΔW̃^n + ε^{-1} h̄^n||^2 - ||Φ^{-1} ΔW̃^n||^2 ] ).   (7.9)

In summary, the importance sampling Monte Carlo strategy is implemented as follows:
1. Compute the optimal path Q̄ = arg inf_{Q in A_δ} I(Q) and its residual h̄ from (7.6).
2. Sample K independent ΔW̃^{(k)} with the zero-mean Gaussian distribution with covariance (Δt/Δx) C in space and white in time. Compute the corresponding Q^{(k)} by (7.8).
3. The importance sampling estimator is

P̂_IS = (1/K) Σ_{k=1}^K 1_{A_δ}(Q^{(k)}) (dP/dQ)(ΔW̃^{(k)}),

where (dP/dQ)(ΔW̃) is defined in (7.9).

7.4. Simulations with importance sampling. We consider three estimators: the basic Monte Carlo estimator P̂_MC and two importance sampling estimators, P̂^IS and P̂^IS_δ, where P̂^IS uses h̄ in (7.6) with Q̄ = arg inf_{Q in A} I(Q) while P̂^IS_δ uses h̄ in (7.6) with Q̄ = arg inf_{Q in A_δ} I(Q). The parameters for the simulations are listed in Table 7.1.

Table 7.1. The parameters for the Monte Carlo simulations: the grid steps Δx and Δt, the domain [x_L, x_R], the final time T, the boundary states u_±, the wave speed γ = 0.5, the viscosity D, the displacement x_0 = 5, the tolerance δ, the noise strength σ, the correlation length l_c = 5, and the number of samples K = 10^4.

[Figure 7.1 appears here: two panels showing U_0, U_T and the optimal paths for inf_{u in A} I(u) and inf_{u in A_δ} I(u); the panel titles report the two minimal values of I.]

Fig. 7.1. The optimal paths for inf_{Q in A} I(Q) and inf_{Q in A_δ} I(Q). With the spatially correlated noise, inf_{Q in A} I(Q) is much lower than the one with the spatially white noise (see Fig. 6.2).

As mentioned before, we only test the wave displacement with x_0 = 5 and D = 1, because the probabilities of the other cases in Section 6 are too small for P̂_MC to have meaningful samples. First we find the optimal paths for inf_{Q in A} I(Q) and inf_{Q in A_δ} I(Q). As before, inf_{Q in A} I(Q) can be modeled as an unconstrained optimization problem and we solve it by the BFGS method, while we solve inf_{Q in A_δ} I(Q) by sequential quadratic programming (SQP) [9]. From Fig. 7.1 we note that inf_{Q in A} I(Q) is much smaller than the corresponding one with spatially white noise. This is because with the correlated noise it is easier to have simultaneous increments. In addition, inf_{Q in A_δ} I(Q) is significantly different from inf_{Q in A} I(Q) because δ is not very small. We will see that this difference significantly affects the performances of P̂^IS and P̂^IS_δ.

Once arg inf_{Q in A} I(Q) and arg inf_{Q in A_δ} I(Q) are obtained, we can construct P̂^IS and P̂^IS_δ. We estimate P(Q in A_δ) by these three estimators for a range of small values of ε. For each ε and estimator, we use K = 10^4 samples. The (numerical) 99% confidence intervals are defined by

[ Mean_K - 2.6 Std_K / sqrt(K),  Mean_K + 2.6 Std_K / sqrt(K) ],

where the (numerical) mean and standard deviation are

Mean_K = (1/K) Σ_{k=1}^K p^{(k)},  Std_K^2 = (1/K) Σ_{k=1}^K (p^{(k)})^2 - Mean_K^2,

with p^{(k)} = 1_{A_δ}(Q^{(k)}) for P̂_MC and p^{(k)} = (dP/dQ)(ΔW̃^{(k)}) 1_{A_δ}(Q^{(k)}) for P̂_IS. We find that P̂^IS_δ has the best performance, and P̂^IS also performs well for the larger values of ε in the range considered. For smaller ε, because arg inf_{Q in A} I(Q) is not the optimal path, P̂^IS is even worse than P̂_MC due to the inappropriate change of measure. For smaller ε, because arg inf_{Q in A_δ} I(Q) is the optimal path, P̂^IS_δ dramatically outperforms P̂_MC. This shows that large-deviations-driven importance sampling strategies can be efficient to estimate rare event probabilities in the context of perturbed scalar conservation laws.
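For completeness, here is a sketch of the importance sampling estimator of Section 7.3, assuming the helpers defined in the earlier sketches: the proposal shifts the noise mean along the residual (7.6) of the optimal path, and each sample is reweighted by the likelihood ratio (7.9). All names are illustrative, not the authors' code.

```python
# Sketch of the importance sampling estimator (7.5) with the mean-shifted Gaussian
# proposal of Section 7.3.  q_opt is the minimizer of the discrete rate function
# (e.g. from the BFGS sketch), and Phi_h[n] = (Phi hbar^n) is its residual (7.6).
import numpy as np

def importance_sampling(q_opt, UT_cells, delta, K, dt, dx, D, gamma,
                        u_minus, u_plus, eps, Phi, seed=0):
    rng = np.random.default_rng(seed)
    N = q_opt.shape[0] - 1
    Phi_inv = np.linalg.inv(Phi)
    # residual of the optimal path: (Phi hbar^n) = (qbar^{n+1}-qbar^n) - dt*b(qbar^n), cf. (7.6)
    Phi_h = [(q_opt[n + 1] - q_opt[n]) - dt * drift(q_opt[n], dx, D, gamma, u_minus, u_plus)
             for n in range(N)]
    # standardized mean shifts entering the likelihood ratio (7.9)
    shifts = [np.sqrt(dx / dt) * (Phi_inv @ ph) / eps for ph in Phi_h]

    est = 0.0
    for _ in range(K):
        Q = q_opt[0].copy()
        log_weight = 0.0
        for n in range(N):
            xi = rng.standard_normal(Q.size)                 # standardized innovation
            dW0 = np.sqrt(dt / dx) * Phi @ xi                # zero-mean correlated noise slice
            # proposal dynamics (7.8): deterministic shift along the optimal residual + noise
            Q = Q + dt * drift(Q, dx, D, gamma, u_minus, u_plus) + Phi_h[n] + eps * dW0
            # accumulate log dP/dQ, one Gaussian increment at a time (cf. (7.9))
            log_weight += -np.dot(shifts[n], xi) - 0.5 * np.dot(shifts[n], shifts[n])
        if dx * np.sum((Q - UT_cells) ** 2) <= delta**2:     # membership test (7.4)
            est += np.exp(log_weight)
    return est / K
```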

[Figure 7.2 appears here: three panels (basic Monte Carlo; IS by arg inf_{u in A} I(u); IS by arg inf_{u in A_δ} I(u)) showing the estimated probability P(A_δ) versus ε with 99% confidence intervals.]

Fig. 7.2. The estimated probabilities and their 99% confidence intervals by P̂_MC, P̂^IS and P̂^IS_δ. P̂^IS_δ has the best performance, and P̂^IS also performs well for the larger values of ε. However, P̂^IS works poorly for small ε.

[Figure 7.3 appears here: three panels showing sample paths of the scheme under the three probability measures, together with U_0 and U_T.]

Fig. 7.3. Sample paths under these three probability measures for a fixed small value of ε.

We plot the estimated probabilities and the relative error, the ratio of the (numerical) standard deviation to the (numerical) mean, on a log scale. Note that in the extreme case, the estimated probability is dominated by the value p^{(k)} of one realization out of K (p^{(k)} = 1 for P̂_MC and p^{(k)} = (dP/dQ)(ΔW̃^{(k)}) for P̂_IS), and the numerical variance is approximately (p^{(k)})^2/K. Therefore the relative error is about sqrt(K) = 100. This is why the curves of the relative errors in Fig. 7.4 saturate at 100, and it also tells us that when the relative error reaches sqrt(K), the estimator is in the extreme case and so is not reliable.

8. Conclusion. We have analyzed here the small probabilities of anomalous shock profile displacements due to random perturbations using the theory of large deviations. We have obtained analytically upper and lower bounds for the exponential rate of decay of these probabilities and we have verified the accuracy of these bounds with numerical simulations. We have also used Monte Carlo simulations with importance sampling based on the analytically known rate function, which is efficient and gives very good results for rare event probabilities.

Acknowledgment. This work is partly supported by the Department of Energy [National Nuclear Security Administration] under Award Number NA864, and partly by an AFOSR grant.

Appendix A. Proof of Lemma 2.2. Taking a Fourier transform in x we have

Ẑ(t, ξ) = ∫_0^t e^{-Dξ^2(t-s)} dŴ(s, ξ),

where Ŵ(t, ξ) = ∫ W(t, x) e^{-iξx} dx is a complex Gaussian process with mean zero and covariance

E[ Ŵ(t, ξ) conj(Ŵ(t', ξ')) ] = (t ∧ t') Ĉ(ξ, ξ'),


More information

18.303: Introduction to Green s functions and operator inverses

18.303: Introduction to Green s functions and operator inverses 8.33: Introduction to Green s functions and operator inverses S. G. Johnson October 9, 2 Abstract In analogy with the inverse A of a matri A, we try to construct an analogous inverse  of differential

More information

The Dirichlet s P rinciple. In this lecture we discuss an alternative formulation of the Dirichlet problem for the Laplace equation:

The Dirichlet s P rinciple. In this lecture we discuss an alternative formulation of the Dirichlet problem for the Laplace equation: Oct. 1 The Dirichlet s P rinciple In this lecture we discuss an alternative formulation of the Dirichlet problem for the Laplace equation: 1. Dirichlet s Principle. u = in, u = g on. ( 1 ) If we multiply

More information

Homework # , Spring Due 14 May Convergence of the empirical CDF, uniform samples

Homework # , Spring Due 14 May Convergence of the empirical CDF, uniform samples Homework #3 36-754, Spring 27 Due 14 May 27 1 Convergence of the empirical CDF, uniform samples In this problem and the next, X i are IID samples on the real line, with cumulative distribution function

More information

Hyperbolic Systems of Conservation Laws. in One Space Dimension. II - Solutions to the Cauchy problem. Alberto Bressan

Hyperbolic Systems of Conservation Laws. in One Space Dimension. II - Solutions to the Cauchy problem. Alberto Bressan Hyperbolic Systems of Conservation Laws in One Space Dimension II - Solutions to the Cauchy problem Alberto Bressan Department of Mathematics, Penn State University http://www.math.psu.edu/bressan/ 1 Global

More information

Where now? Machine Learning and Bayesian Inference

Where now? Machine Learning and Bayesian Inference Machine Learning and Bayesian Inference Dr Sean Holden Computer Laboratory, Room FC6 Telephone etension 67 Email: sbh@clcamacuk wwwclcamacuk/ sbh/ Where now? There are some simple take-home messages from

More information

Mobile Robot Localization

Mobile Robot Localization Mobile Robot Localization 1 The Problem of Robot Localization Given a map of the environment, how can a robot determine its pose (planar coordinates + orientation)? Two sources of uncertainty: - observations

More information

Regularity of the density for the stochastic heat equation

Regularity of the density for the stochastic heat equation Regularity of the density for the stochastic heat equation Carl Mueller 1 Department of Mathematics University of Rochester Rochester, NY 15627 USA email: cmlr@math.rochester.edu David Nualart 2 Department

More information

Stochastic Volatility and Correction to the Heat Equation

Stochastic Volatility and Correction to the Heat Equation Stochastic Volatility and Correction to the Heat Equation Jean-Pierre Fouque, George Papanicolaou and Ronnie Sircar Abstract. From a probabilist s point of view the Twentieth Century has been a century

More information

Brownian Motion. 1 Definition Brownian Motion Wiener measure... 3

Brownian Motion. 1 Definition Brownian Motion Wiener measure... 3 Brownian Motion Contents 1 Definition 2 1.1 Brownian Motion................................. 2 1.2 Wiener measure.................................. 3 2 Construction 4 2.1 Gaussian process.................................

More information

Vibrating Strings and Heat Flow

Vibrating Strings and Heat Flow Vibrating Strings and Heat Flow Consider an infinite vibrating string Assume that the -ais is the equilibrium position of the string and that the tension in the string at rest in equilibrium is τ Let u(,

More information

Chapter 3 Burgers Equation

Chapter 3 Burgers Equation Chapter 3 Burgers Equation One of the major challenges in the field of comple systems is a thorough understanding of the phenomenon of turbulence. Direct numerical simulations (DNS) have substantially

More information

Lecture 9. d N(0, 1). Now we fix n and think of a SRW on [0,1]. We take the k th step at time k n. and our increments are ± 1

Lecture 9. d N(0, 1). Now we fix n and think of a SRW on [0,1]. We take the k th step at time k n. and our increments are ± 1 Random Walks and Brownian Motion Tel Aviv University Spring 011 Lecture date: May 0, 011 Lecture 9 Instructor: Ron Peled Scribe: Jonathan Hermon In today s lecture we present the Brownian motion (BM).

More information

Asymptotic Properties of an Approximate Maximum Likelihood Estimator for Stochastic PDEs

Asymptotic Properties of an Approximate Maximum Likelihood Estimator for Stochastic PDEs Asymptotic Properties of an Approximate Maximum Likelihood Estimator for Stochastic PDEs M. Huebner S. Lototsky B.L. Rozovskii In: Yu. M. Kabanov, B. L. Rozovskii, and A. N. Shiryaev editors, Statistics

More information

Unconstrained Multivariate Optimization

Unconstrained Multivariate Optimization Unconstrained Multivariate Optimization Multivariate optimization means optimization of a scalar function of a several variables: and has the general form: y = () min ( ) where () is a nonlinear scalar-valued

More information

3.3.1 Linear functions yet again and dot product In 2D, a homogenous linear scalar function takes the general form:

3.3.1 Linear functions yet again and dot product In 2D, a homogenous linear scalar function takes the general form: 3.3 Gradient Vector and Jacobian Matri 3 3.3 Gradient Vector and Jacobian Matri Overview: Differentiable functions have a local linear approimation. Near a given point, local changes are determined by

More information

Systemic risk and uncertainty quantification

Systemic risk and uncertainty quantification Systemic risk and uncertainty quantification Josselin Garnier (Université Paris 7) with George Papanicolaou and Tzu-Wei Yang (Stanford University) October 6, 22 Modeling Systemic Risk We consider evolving

More information

1 Lyapunov theory of stability

1 Lyapunov theory of stability M.Kawski, APM 581 Diff Equns Intro to Lyapunov theory. November 15, 29 1 1 Lyapunov theory of stability Introduction. Lyapunov s second (or direct) method provides tools for studying (asymptotic) stability

More information

Regularization by noise in infinite dimensions

Regularization by noise in infinite dimensions Regularization by noise in infinite dimensions Franco Flandoli, University of Pisa King s College 2017 Franco Flandoli, University of Pisa () Regularization by noise King s College 2017 1 / 33 Plan of

More information

Covariance function estimation in Gaussian process regression

Covariance function estimation in Gaussian process regression Covariance function estimation in Gaussian process regression François Bachoc Department of Statistics and Operations Research, University of Vienna WU Research Seminar - May 2015 François Bachoc Gaussian

More information

Time Series 2. Robert Almgren. Sept. 21, 2009

Time Series 2. Robert Almgren. Sept. 21, 2009 Time Series 2 Robert Almgren Sept. 21, 2009 This week we will talk about linear time series models: AR, MA, ARMA, ARIMA, etc. First we will talk about theory and after we will talk about fitting the models

More information

Brownian motion. Samy Tindel. Purdue University. Probability Theory 2 - MA 539

Brownian motion. Samy Tindel. Purdue University. Probability Theory 2 - MA 539 Brownian motion Samy Tindel Purdue University Probability Theory 2 - MA 539 Mostly taken from Brownian Motion and Stochastic Calculus by I. Karatzas and S. Shreve Samy T. Brownian motion Probability Theory

More information

Model Selection and Geometry

Model Selection and Geometry Model Selection and Geometry Pascal Massart Université Paris-Sud, Orsay Leipzig, February Purpose of the talk! Concentration of measure plays a fundamental role in the theory of model selection! Model

More information

Kriging models with Gaussian processes - covariance function estimation and impact of spatial sampling

Kriging models with Gaussian processes - covariance function estimation and impact of spatial sampling Kriging models with Gaussian processes - covariance function estimation and impact of spatial sampling François Bachoc former PhD advisor: Josselin Garnier former CEA advisor: Jean-Marc Martinez Department

More information

Applications of controlled paths

Applications of controlled paths Applications of controlled paths Massimiliano Gubinelli CEREMADE Université Paris Dauphine OxPDE conference. Oxford. September 1th 212 ( 1 / 16 ) Outline I will exhibith various applications of the idea

More information

2 A Model, Harmonic Map, Problem

2 A Model, Harmonic Map, Problem ELLIPTIC SYSTEMS JOHN E. HUTCHINSON Department of Mathematics School of Mathematical Sciences, A.N.U. 1 Introduction Elliptic equations model the behaviour of scalar quantities u, such as temperature or

More information

Measure-Transformed Quasi Maximum Likelihood Estimation

Measure-Transformed Quasi Maximum Likelihood Estimation Measure-Transformed Quasi Maximum Likelihood Estimation 1 Koby Todros and Alfred O. Hero Abstract In this paper, we consider the problem of estimating a deterministic vector parameter when the likelihood

More information

Linear Regression and Its Applications

Linear Regression and Its Applications Linear Regression and Its Applications Predrag Radivojac October 13, 2014 Given a data set D = {(x i, y i )} n the objective is to learn the relationship between features and the target. We usually start

More information

Approximate Dynamic Programming

Approximate Dynamic Programming Master MVA: Reinforcement Learning Lecture: 5 Approximate Dynamic Programming Lecturer: Alessandro Lazaric http://researchers.lille.inria.fr/ lazaric/webpage/teaching.html Objectives of the lecture 1.

More information

On compact operators

On compact operators On compact operators Alen Aleanderian * Abstract In this basic note, we consider some basic properties of compact operators. We also consider the spectrum of compact operators on Hilbert spaces. A basic

More information

Stochastic Differential Equations.

Stochastic Differential Equations. Chapter 3 Stochastic Differential Equations. 3.1 Existence and Uniqueness. One of the ways of constructing a Diffusion process is to solve the stochastic differential equation dx(t) = σ(t, x(t)) dβ(t)

More information

Approximate inference, Sampling & Variational inference Fall Cours 9 November 25

Approximate inference, Sampling & Variational inference Fall Cours 9 November 25 Approimate inference, Sampling & Variational inference Fall 2015 Cours 9 November 25 Enseignant: Guillaume Obozinski Scribe: Basile Clément, Nathan de Lara 9.1 Approimate inference with MCMC 9.1.1 Gibbs

More information

Sharp Sobolev Strichartz estimates for the free Schrödinger propagator

Sharp Sobolev Strichartz estimates for the free Schrödinger propagator Sharp Sobolev Strichartz estimates for the free Schrödinger propagator Neal Bez, Chris Jeavons and Nikolaos Pattakos Abstract. We consider gaussian extremisability of sharp linear Sobolev Strichartz estimates

More information

On the stochastic nonlinear Schrödinger equation

On the stochastic nonlinear Schrödinger equation On the stochastic nonlinear Schrödinger equation Annie Millet collaboration with Z. Brzezniak SAMM, Paris 1 and PMA Workshop Women in Applied Mathematics, Heraklion - May 3 211 Outline 1 The NL Shrödinger

More information

Learning Theory. Ingo Steinwart University of Stuttgart. September 4, 2013

Learning Theory. Ingo Steinwart University of Stuttgart. September 4, 2013 Learning Theory Ingo Steinwart University of Stuttgart September 4, 2013 Ingo Steinwart University of Stuttgart () Learning Theory September 4, 2013 1 / 62 Basics Informal Introduction Informal Description

More information

ON MODEL SELECTION FOR STATE ESTIMATION FOR NONLINEAR SYSTEMS. Robert Bos,1 Xavier Bombois Paul M. J. Van den Hof

ON MODEL SELECTION FOR STATE ESTIMATION FOR NONLINEAR SYSTEMS. Robert Bos,1 Xavier Bombois Paul M. J. Van den Hof ON MODEL SELECTION FOR STATE ESTIMATION FOR NONLINEAR SYSTEMS Robert Bos,1 Xavier Bombois Paul M. J. Van den Hof Delft Center for Systems and Control, Delft University of Technology, Mekelweg 2, 2628 CD

More information

Received 6 August 2005; Accepted (in revised version) 22 September 2005

Received 6 August 2005; Accepted (in revised version) 22 September 2005 COMMUNICATIONS IN COMPUTATIONAL PHYSICS Vol., No., pp. -34 Commun. Comput. Phys. February 6 A New Approach of High OrderWell-Balanced Finite Volume WENO Schemes and Discontinuous Galerkin Methods for a

More information

The incomplete gamma functions. Notes by G.J.O. Jameson. These notes incorporate the Math. Gazette article [Jam1], with some extra material.

The incomplete gamma functions. Notes by G.J.O. Jameson. These notes incorporate the Math. Gazette article [Jam1], with some extra material. The incomplete gamma functions Notes by G.J.O. Jameson These notes incorporate the Math. Gazette article [Jam], with some etra material. Definitions and elementary properties functions: Recall the integral

More information

Parameter Estimation for Stochastic Evolution Equations with Non-commuting Operators

Parameter Estimation for Stochastic Evolution Equations with Non-commuting Operators Parameter Estimation for Stochastic Evolution Equations with Non-commuting Operators Sergey V. Lototsky and Boris L. Rosovskii In: V. Korolyuk, N. Portenko, and H. Syta (editors), Skorokhod s Ideas in

More information

From Fractional Brownian Motion to Multifractional Brownian Motion

From Fractional Brownian Motion to Multifractional Brownian Motion From Fractional Brownian Motion to Multifractional Brownian Motion Antoine Ayache USTL (Lille) Antoine.Ayache@math.univ-lille1.fr Cassino December 2010 A.Ayache (USTL) From FBM to MBM Cassino December

More information

arxiv:gr-qc/ v1 6 Sep 2006

arxiv:gr-qc/ v1 6 Sep 2006 Introduction to spectral methods Philippe Grandclément Laboratoire Univers et ses Théories, Observatoire de Paris, 5 place J. Janssen, 995 Meudon Cede, France This proceeding is intended to be a first

More information

Computation fundamentals of discrete GMRF representations of continuous domain spatial models

Computation fundamentals of discrete GMRF representations of continuous domain spatial models Computation fundamentals of discrete GMRF representations of continuous domain spatial models Finn Lindgren September 23 2015 v0.2.2 Abstract The fundamental formulas and algorithms for Bayesian spatial

More information

Carnegie Mellon University Forbes Ave. Pittsburgh, PA 15213, USA. fmunos, leemon, V (x)ln + max. cost functional [3].

Carnegie Mellon University Forbes Ave. Pittsburgh, PA 15213, USA. fmunos, leemon, V (x)ln + max. cost functional [3]. Gradient Descent Approaches to Neural-Net-Based Solutions of the Hamilton-Jacobi-Bellman Equation Remi Munos, Leemon C. Baird and Andrew W. Moore Robotics Institute and Computer Science Department, Carnegie

More information

Shooting methods for numerical solutions of control problems constrained. by linear and nonlinear hyperbolic partial differential equations

Shooting methods for numerical solutions of control problems constrained. by linear and nonlinear hyperbolic partial differential equations Shooting methods for numerical solutions of control problems constrained by linear and nonlinear hyperbolic partial differential equations by Sung-Dae Yang A dissertation submitted to the graduate faculty

More information

A COUNTEREXAMPLE TO AN ENDPOINT BILINEAR STRICHARTZ INEQUALITY TERENCE TAO. t L x (R R2 ) f L 2 x (R2 )

A COUNTEREXAMPLE TO AN ENDPOINT BILINEAR STRICHARTZ INEQUALITY TERENCE TAO. t L x (R R2 ) f L 2 x (R2 ) Electronic Journal of Differential Equations, Vol. 2006(2006), No. 5, pp. 6. ISSN: 072-669. URL: http://ejde.math.txstate.edu or http://ejde.math.unt.edu ftp ejde.math.txstate.edu (login: ftp) A COUNTEREXAMPLE

More information

Rough Burgers-like equations with multiplicative noise

Rough Burgers-like equations with multiplicative noise Rough Burgers-like equations with multiplicative noise Martin Hairer Hendrik Weber Mathematics Institute University of Warwick Bielefeld, 3.11.21 Burgers-like equation Aim: Existence/Uniqueness for du

More information

OPTIMAL SOLUTIONS TO STOCHASTIC DIFFERENTIAL INCLUSIONS

OPTIMAL SOLUTIONS TO STOCHASTIC DIFFERENTIAL INCLUSIONS APPLICATIONES MATHEMATICAE 29,4 (22), pp. 387 398 Mariusz Michta (Zielona Góra) OPTIMAL SOLUTIONS TO STOCHASTIC DIFFERENTIAL INCLUSIONS Abstract. A martingale problem approach is used first to analyze

More information

Fourier transform of tempered distributions

Fourier transform of tempered distributions Fourier transform of tempered distributions 1 Test functions and distributions As we have seen before, many functions are not classical in the sense that they cannot be evaluated at any point. For eample,

More information

Nonlinear Equations. Chapter The Bisection Method

Nonlinear Equations. Chapter The Bisection Method Chapter 6 Nonlinear Equations Given a nonlinear function f(), a value r such that f(r) = 0, is called a root or a zero of f() For eample, for f() = e 016064, Fig?? gives the set of points satisfying y

More information

Regression with Input-Dependent Noise: A Bayesian Treatment

Regression with Input-Dependent Noise: A Bayesian Treatment Regression with Input-Dependent oise: A Bayesian Treatment Christopher M. Bishop C.M.BishopGaston.ac.uk Cazhaow S. Qazaz qazazcsgaston.ac.uk eural Computing Research Group Aston University, Birmingham,

More information

Bayesian Inference of Noise Levels in Regression

Bayesian Inference of Noise Levels in Regression Bayesian Inference of Noise Levels in Regression Christopher M. Bishop Microsoft Research, 7 J. J. Thomson Avenue, Cambridge, CB FB, U.K. cmbishop@microsoft.com http://research.microsoft.com/ cmbishop

More information

On the Goodness-of-Fit Tests for Some Continuous Time Processes

On the Goodness-of-Fit Tests for Some Continuous Time Processes On the Goodness-of-Fit Tests for Some Continuous Time Processes Sergueï Dachian and Yury A. Kutoyants Laboratoire de Mathématiques, Université Blaise Pascal Laboratoire de Statistique et Processus, Université

More information

Stochastic Spectral Approaches to Bayesian Inference

Stochastic Spectral Approaches to Bayesian Inference Stochastic Spectral Approaches to Bayesian Inference Prof. Nathan L. Gibson Department of Mathematics Applied Mathematics and Computation Seminar March 4, 2011 Prof. Gibson (OSU) Spectral Approaches to

More information

Stability of Feedback Solutions for Infinite Horizon Noncooperative Differential Games

Stability of Feedback Solutions for Infinite Horizon Noncooperative Differential Games Stability of Feedback Solutions for Infinite Horizon Noncooperative Differential Games Alberto Bressan ) and Khai T. Nguyen ) *) Department of Mathematics, Penn State University **) Department of Mathematics,

More information

DIFFERENTIAL EQUATIONS WITH A DIFFERENCE QUOTIENT

DIFFERENTIAL EQUATIONS WITH A DIFFERENCE QUOTIENT Electronic Journal of Differential Equations, Vol. 217 (217, No. 227, pp. 1 42. ISSN: 172-6691. URL: http://ejde.math.tstate.edu or http://ejde.math.unt.edu DIFFERENTIAL EQUATIONS WITH A DIFFERENCE QUOTIENT

More information

Properties of the Scattering Transform on the Real Line

Properties of the Scattering Transform on the Real Line Journal of Mathematical Analysis and Applications 58, 3 43 (001 doi:10.1006/jmaa.000.7375, available online at http://www.idealibrary.com on Properties of the Scattering Transform on the Real Line Michael

More information

Asymptotic Analysis 88 (2014) DOI /ASY IOS Press

Asymptotic Analysis 88 (2014) DOI /ASY IOS Press Asymptotic Analysis 88 4) 5 DOI.333/ASY-4 IOS Press Smoluchowski Kramers approximation and large deviations for infinite dimensional gradient systems Sandra Cerrai and Michael Salins Department of Mathematics,

More information

Generalized Post-Widder inversion formula with application to statistics

Generalized Post-Widder inversion formula with application to statistics arxiv:5.9298v [math.st] 3 ov 25 Generalized Post-Widder inversion formula with application to statistics Denis Belomestny, Hilmar Mai 2, John Schoenmakers 3 August 24, 28 Abstract In this work we derive

More information

Riemann Solver for a Kinematic Wave Traffic Model with Discontinuous Flux

Riemann Solver for a Kinematic Wave Traffic Model with Discontinuous Flux Riemann Solver for a Kinematic Wave Traffic Model with Discontinuous Flu Jeffrey K. Wiens a,, John M. Stockie a, JF Williams a a Department of Mathematics, Simon Fraser University, 8888 University Drive,

More information

A Concise Course on Stochastic Partial Differential Equations

A Concise Course on Stochastic Partial Differential Equations A Concise Course on Stochastic Partial Differential Equations Michael Röckner Reference: C. Prevot, M. Röckner: Springer LN in Math. 1905, Berlin (2007) And see the references therein for the original

More information

Color Scheme. swright/pcmi/ M. Figueiredo and S. Wright () Inference and Optimization PCMI, July / 14

Color Scheme.   swright/pcmi/ M. Figueiredo and S. Wright () Inference and Optimization PCMI, July / 14 Color Scheme www.cs.wisc.edu/ swright/pcmi/ M. Figueiredo and S. Wright () Inference and Optimization PCMI, July 2016 1 / 14 Statistical Inference via Optimization Many problems in statistical inference

More information

Flow and Transport. c(s, t)s ds,

Flow and Transport. c(s, t)s ds, Flow and Transport 1. The Transport Equation We shall describe the transport of a dissolved chemical by water that is traveling with uniform velocity ν through a long thin tube G with uniform cross section

More information

An Introduction to Malliavin calculus and its applications

An Introduction to Malliavin calculus and its applications An Introduction to Malliavin calculus and its applications Lecture 3: Clark-Ocone formula David Nualart Department of Mathematics Kansas University University of Wyoming Summer School 214 David Nualart

More information

Probing the covariance matrix

Probing the covariance matrix Probing the covariance matrix Kenneth M. Hanson Los Alamos National Laboratory (ret.) BIE Users Group Meeting, September 24, 2013 This presentation available at http://kmh-lanl.hansonhub.com/ LA-UR-06-5241

More information

Review of Optimization Basics

Review of Optimization Basics Review of Optimization Basics. Introduction Electricity markets throughout the US are said to have a two-settlement structure. The reason for this is that the structure includes two different markets:

More information

min f(x). (2.1) Objectives consisting of a smooth convex term plus a nonconvex regularization term;

min f(x). (2.1) Objectives consisting of a smooth convex term plus a nonconvex regularization term; Chapter 2 Gradient Methods The gradient method forms the foundation of all of the schemes studied in this book. We will provide several complementary perspectives on this algorithm that highlight the many

More information

Sequential Monte Carlo methods for filtering of unobservable components of multidimensional diffusion Markov processes

Sequential Monte Carlo methods for filtering of unobservable components of multidimensional diffusion Markov processes Sequential Monte Carlo methods for filtering of unobservable components of multidimensional diffusion Markov processes Ellida M. Khazen * 13395 Coppermine Rd. Apartment 410 Herndon VA 20171 USA Abstract

More information

Nonparametric Drift Estimation for Stochastic Differential Equations

Nonparametric Drift Estimation for Stochastic Differential Equations Nonparametric Drift Estimation for Stochastic Differential Equations Gareth Roberts 1 Department of Statistics University of Warwick Brazilian Bayesian meeting, March 2010 Joint work with O. Papaspiliopoulos,

More information

Statistical Geometry Processing Winter Semester 2011/2012

Statistical Geometry Processing Winter Semester 2011/2012 Statistical Geometry Processing Winter Semester 2011/2012 Linear Algebra, Function Spaces & Inverse Problems Vector and Function Spaces 3 Vectors vectors are arrows in space classically: 2 or 3 dim. Euclidian

More information

The Hilbert transform

The Hilbert transform The Hilbert transform Definition and properties ecall the distribution pv(, defined by pv(/(ϕ := lim ɛ ɛ ϕ( d. The Hilbert transform is defined via the convolution with pv(/, namely (Hf( := π lim f( t

More information

FOURIER INVERSION. an additive character in each of its arguments. The Fourier transform of f is

FOURIER INVERSION. an additive character in each of its arguments. The Fourier transform of f is FOURIER INVERSION 1. The Fourier Transform and the Inverse Fourier Transform Consider functions f, g : R n C, and consider the bilinear, symmetric function ψ : R n R n C, ψ(, ) = ep(2πi ), an additive

More information

Stochastic Homogenization for Reaction-Diffusion Equations

Stochastic Homogenization for Reaction-Diffusion Equations Stochastic Homogenization for Reaction-Diffusion Equations Jessica Lin McGill University Joint Work with Andrej Zlatoš June 18, 2018 Motivation: Forest Fires ç ç ç ç ç ç ç ç ç ç Motivation: Forest Fires

More information

NONLOCAL DIFFUSION EQUATIONS

NONLOCAL DIFFUSION EQUATIONS NONLOCAL DIFFUSION EQUATIONS JULIO D. ROSSI (ALICANTE, SPAIN AND BUENOS AIRES, ARGENTINA) jrossi@dm.uba.ar http://mate.dm.uba.ar/ jrossi 2011 Non-local diffusion. The function J. Let J : R N R, nonnegative,

More information

Statistics: Learning models from data

Statistics: Learning models from data DS-GA 1002 Lecture notes 5 October 19, 2015 Statistics: Learning models from data Learning models from data that are assumed to be generated probabilistically from a certain unknown distribution is a crucial

More information

NEW ASYMPTOTIC EXPANSION AND ERROR BOUND FOR STIRLING FORMULA OF RECIPROCAL GAMMA FUNCTION

NEW ASYMPTOTIC EXPANSION AND ERROR BOUND FOR STIRLING FORMULA OF RECIPROCAL GAMMA FUNCTION M athematical Inequalities & Applications Volume, Number 4 8), 957 965 doi:.753/mia-8--65 NEW ASYMPTOTIC EXPANSION AND ERROR BOUND FOR STIRLING FORMULA OF RECIPROCAL GAMMA FUNCTION PEDRO J. PAGOLA Communicated

More information

A NUMERICAL STUDY FOR THE PERFORMANCE OF THE RUNGE-KUTTA FINITE DIFFERENCE METHOD BASED ON DIFFERENT NUMERICAL HAMILTONIANS

A NUMERICAL STUDY FOR THE PERFORMANCE OF THE RUNGE-KUTTA FINITE DIFFERENCE METHOD BASED ON DIFFERENT NUMERICAL HAMILTONIANS A NUMERICAL STUDY FOR THE PERFORMANCE OF THE RUNGE-KUTTA FINITE DIFFERENCE METHOD BASED ON DIFFERENT NUMERICAL HAMILTONIANS HASEENA AHMED AND HAILIANG LIU Abstract. High resolution finite difference methods

More information