Moments for the parabolic Anderson model: on a result by Hu and Nualart.

Size: px
Start display at page:

Download "Moments for the parabolic Anderson model: on a result by Hu and Nualart."

Transcription

1 Moments for the parabolic Anderson model: on a result by Hu and Nualart. Daniel Conus November 2 Abstract We consider the Parabolic Anderson Model tu Lu + uẇ, where L is the generator of a Lévy Process and Ẇ is a white noise in time, possibly correlated in space. We present an alternate proof and an extension to a result by Hu and Nualart ([4 giving explicit expressions for moments of the solution. We do not consider a Feynman-Kac representation, but rather make a recursive use of Itô s formula. Keywords and phrases. Stochastic partial differential equations, moment formulae, Parabolic Anderson Model. Introduction We consider the Parabolic Anderson Model, namely the Stochastic Partial Differential Equation given by u t (t, x Lu(t, x + u(t, xẇ (t, x, t >, x d, (. where L is the generator of a real-valued Lévy Process (X t t. The noise Ẇ (t, x is white in time and possibly correlated in space, with covariance function informally given by E[Ẇ (t, xẇ (s, y δ (t sf(x y, where f is a (possibly generalized non-negative symmetric function on d \ {}. We consider a nonrandom, bounded and measurable initial condition u : d. It is well-known that this equation admits a random-field solution {u(t, x : t >, x d } such that sup tt sup x d E[u(t, x 2 <, provided that µ(dξ + 2 e Ψ(ξ <, where Ψ(ξ E[e iξx is the Lévy exponent of X and µ(dξ is the Fourier transform of f, usually called the spectral measure of the noise. Equation (. arises in different contexts. It is the continuous form of the Parabolic Anderson Model studied by Carmona and Molchanov ([3. We also would like to mention the major role played by Department of Mathematics, Lehigh University, 4 East Packer Avenue, Bethlehem, PA 85 (USA, daniel.conus@lehigh.edu.

2 equation (. in the study of the so-called KPZ equation of physics ([5. In particular, intermittency and chaos properties of the solution to (. are of importance. We refer to [2, [6, [7 for more details about these properties. The purpose of this paper is to obtain explicit expressions for the moments of the solution u to (.. Such results have already been obtained in the particular case where L is the Laplacian; the generator of Brownian Motion and f δ the Dirac measure in [. A similar formula has been developped in [4 in the case where L and the noise is fractional in time, white in space. Both [ and [4 use a Feynman-Kac type representation of the solution to deduce the behavior of the moments. In this paper, we will follow a different approach and obtain results through a direct computation based on an iterative use of Itô s formula. We will only consider a white noise in time, but we will not restrict our attention to the case where L is the Laplacian only. We would like to point out that the results of this paper are used in the proofs of results of [7 related to the chaotic behavior of the Parabolic Anderson Model with spatially correlated noise. It is also believed that this method of proof could lead to moment formulae for SPDEs of parabolic type with a general non-linearity σ(u, if σ is a sufficiently nice function. This is subject to ongoing research. Section 2 below is a short reminder about the Parabolic Anderson Model, existence, uniqueness and series representation of the solution. We also recall a few results about Lévy processes. The moment formulae of Theorem 3. and its Corollary 3.2 constitute the main results of this paper. They are stated and proved in Section 3 in the case where the spatial covariance of the noise is a measurable function. Section 4 is devoted to the case of space-time white noise (Theorem Parabolic Anderson model In this section, we are going to remind a few known results about the Parabolic Anderson model and Lévy processes in general. Let s start by setting the framework in which we are going to work. We remind that L denotes the generator of a symmetric Lévy process (X t t on d. For instance, one can consider L, the Laplacian operator : it is the generator of Brownian Motion (B t t. Let s assume first that the spatial covariance is a measurable function f : d \ {}. We ask f to be locally integrable around. Let D( d+ be the space of C functions with compact support on d+. Let W {W (φ, φ D( d+ } be a centered Gaussian noise with covariance functional given by E[W (φw (ψ dt dx dy φ(t, xf(x yψ(t, y. d d This noise can be extended to a worthy martingale measure in the sense of Walsh (see [9 and [6 for details. Hence, by the theory of Dalang [8, we can define stochastic integrals with respect to the noise W. We notice that the covariance functional can be written as E[W (φw (ψ dt dx f(x(φ(t, ψ(t, (x, d (2. where denotes spatial convolution and ψ(t, x ψ(t, x for all x d. Using the representation (2., we can then define the noise W in the case where f is a finite measure, using E[W (φw (ψ dt f(dx (φ(t, ψ(t, (x. d We will consider a random-field solution to (., i.e. a jointly measurable stochastic process (u(t, x t,x d such that sup tt,x d E[u(t, x 2 <, for every fixed T > and satsifying the mild-form equation t u(t, x ( p t u (x + p t s (y xu(s, yw (ds, dy, d 2

3 where p t is the fundamental solution of the homogeneous equation u t (t, x Lu(t, x, p t(x p t ( x for all x d and the stochastic integral is taken in the sense of Walsh [6. Notice that since L is the generator of a Lévy process (X t t, the fundamental solution p t corresponds to the law of X t. We assume that the Lévy process X admits densities, so that p t is a well-defined measurable function. An extension to the case where p t is a measure would require to work with integrals in the spectral domain. Such an extension is not discussed here, but an insight can be found in [5, Section 6. We have the following existence and uniqueness result. Proposition 2.. Let µ be the spectral measure of the noise and Ψ the Lévy exponent of the process generated by L. If µ(dξ <, ( e Ψ(ξ then there exists a unique random-field solution to (.. d For a proof of this result, we refer to [8, [ and [6. In the case where f is a measurable function, condition (2.2 implies that [ t t E X ds f(x s ( X s (2 ds p s (xf(x yp s (y <, d where X ( and X (2 are two independent copies of the Lévy process generated by L and E X is the expectation with respect to these processes. This implies that the additive functional A f t : t f( X s ds (associated to the function f of the Lévy process X : X ( X (2 is well-defined. Since the Lévy process X is symmetric, its Lévy exponent is real given by 2 e Ψ(ξ. A direct computation allows to prove that A f t L p (Ω for all p. In the case where f δ, we have µ(dξ dξ and (2.2 becomes a standard condition for existence of local times for the Lévy process X. See [2, Theorem, p.26. We denote by (L x t, t >, x d the local times of the symmetrized process X. We informally have L x t t δ x (X s ( X s (2 ds. We also refer to the paper by Foondun, Khoshnevisan and Nualart [3 on the connection between existence of local times for Lévy processes and existence of solutions to SPDE s driven by space-time white noise. We would like to point out the fact that the proof of Proposition 2. shows that the solution u is given as the limit of a Picard iteration scheme. Namely, we set u (t, x ( p t u (x for all t, x d. Then, for all n, we define recursively a sequence of stochastic process (u n (t, x t,x d by u n (t, x u (t, x + t d p t s (y xu n (s, yw (ds, dy. Then, u(t, x lim n u n (t, x in L 2 (Ω, uniformly over t [, T and x d. One can show that the convergence also occurs in L p (Ω for all p > 2 (see [8 for details. We are now going to show that the solution u to (. can be written as a series of iterated integrals. This expansion, which corresponds to the Wiener-chaos expansion of u, will be the main tool used in order to obtain explicit expressions for the moments of u. This result is a direct consequence of the existence result and already appears in different contexts, among which [ and [5 and the wide litterature of Malliavin Calculus. Proposition 2.2. Under the assumptions above, we can show that the solution u to (. is given by u(t, x v n (t, x, n a.s. 3

4 where v u is deterministic and the processes (v n (t, x : t, x d are defined recursively by t v n (t, x p t s (y xv n (s, yw (ds, dy. (2.3 d The series is convergent in L 2 (Ω. Proof. Fix t and x d. Using the Picard iteration scheme, we have t u n (t, x u (t, x + p t s (y xu n (s, yw (ds, dy d As a consequence, u n+ (t, x u n (t, x t d p t s (y x(u n (s, y u n (s, yw (ds, dy. Hence, we set v n (t, x u n (t, x u n (t, x for all n. Then, (2.3 is satisfied for n 2 and we have u n (t, x u (t, x + n (u j (t, x u j (t, x u (t, x + j n v j (t, x j n v j (t, x, provided we set v u, which implies that (2.3 is satisfied for n as well. Finally, taking the limit as n goes to establishes the result. emark 2.3. The process v n is a n-times iterated stochastic integral. As a consequence, 2.2 shows that u is given as a series of iterated stochastic integrals. This also helps finding moments if one considers a hyperbolic equation, see [5 and [4. The expansion obtained in Proposition 2.2 corresponds to the Wiener-chaos expansion of the solution u(t, x to (.. Indeed, since v n is an n-th iterated stochastic integral, it belongs to the n-th Wiener chaos. 3 Moment formula for a measurable covariance. In this section, we will consider the covariance f : d \ {} to be a measurable function, locally integrable on d \ {}. For instance, one can consider bounded functions, such as f(x e x or functions which are unbounded at x, such as the iesz kernels, given by f(x x α for < α < d 2. We are going to establish explicit formulas for moments of the processes v n. From this, we will use the series expansion stated in Proposition 2.2 to deduce a formula for the moments of the solution u. First of all, we would like to point out that it is possible to write explicit expressions for moments of v n in terms of the Fourier transform of p t, since Fp t (ξ e tψ(ξ. This is not precisely the formula that we would like to obtain here. For details, we refer to [4, Lemma 5.2, which mainly concerns the hyperbolic equation but that applies without restrictions to the parabolic case. We are going to use similar techniques as in the proof of [4, Lemma 5.2 but we will not write the integrals in spectral form and rather express those as functionals of the Lévy process associated to the operator L. We are now ready to state our results. Theorem 3. below states the formula for the expectation of the product of u taken at different points in space. Corollary 3.2 gives the formula for a moment of order p. Theorem 3.. Let u denote the solution of (. with operator L as given in Proposition 2.2. Let t and x,..., x p d. Then, E u(t, x j Ex X,...,x p u (X (i p t t exp drf(x r (j X r (k. (3. j i j,k j k j 4

5 where the processes (X (j t t (j,..., p are p independent copies of the Lévy process generated by the operator L and Ex X,...,x p is the expectation with respect to the law of these processes conditionned such that X ( x,..., X (p x p. Corollary 3.2. Let u denote the solution of (. with operator L as given in Proposition 2.2. Let t and x d. Then, E [u(t, x p Ex X u (X (i p t t exp dr f(x r (j X r (k, (3.2 i where the processes (X (j t t (j,..., p are p independent copies of the Levy process generated by the operator L and Ex X is the expectation with respect to the law of these processes conditionned such that X ( X (p x. In order to prove Theorem 3., we need to go through a series of partial results, the most important of which is Proposition 3.7 below. First of all, let us define the following processes. For x d, t and s t, set w (s; t, x : v (t, x u (t, x ( p t u (x. Then, for n, x d, t and s t, we set s w n (s; t, x : p t r (y xv n (r, yw (dr, dy. (3.3 d Obviously, we have v n (t, x w n (t; t, x. The point of this construction is that, by the defnition of Walsh stochastic integrals, the processes s w n (s; t, x are martingales for each fixed t, x d and n N. We start with a lemma. j,k j k Lemma 3.3. Let (X t t be the Lévy process with generator L, then v (t, x + y E X y [u (x + X t, where Ey X denotes the expectation with respect to the law of X under the condition X y. Proof. ecall that we assume that the process X admits densities. We have v (t, x + y dz p t (z x yu (z d dz p t (z yu (x + z Ey X [u (x + X t, d as p t is the density of the process X t starting at. We remind the Markov property for Lévy processes in the form that we are going to use in this paper. Proposition 3.4 (Markov Property. Let (X t t be a Lévy process with values in d and g : d a bounded continuous function. Let (F t t be the filtration generated by X. Then, the following Markov property holds E X X t s [g(x s E X [g(x t F t s, where s t and E X y denotes the expectation with respect to the law of X under the condition X y. We refer to [2 for more details about the Markov property for Lévy processes. In order to prove Proposition 3.7 below, we will need a Markov property stated in a slightly different way. Namely, we will need to consider a functional of two independent processes conditionned at two different times. Lemma 3.5 (Markov Property. Let (X ( t t and (X (2 t t be two independent Lévy processes with values in d. Let (F (i t t be the filtration generated by X (i (i, 2. Let g : ( d n ( d m be a bounded continuous function. Then, for all r, r 2, t,..., t n, s,..., s m +, we have [ E X g(x ( X r (,X r (2 t,..., X ( t n, X s (2,..., X s (2 m 2 [ E, X g(x ( t +r,..., X ( t n+r, X (2 s +r 2,..., X (2 s m+r 2 F r ( F r (2 2, 5

6 where Ex X,x 2 denotes the expectation with respect to the joint law of X (, X (2 under the condition X ( x, X (2 x 2. Proof. Without loss of generality, we can show the result when n m. The general case works the same way. As the processes X ( and X (2 are independent, E X [Y F ( r F (2 r 2 E X( [E X(2 [Y F (2 r 2 F ( r. Hence, by the Markov property for X (2 first and then for X (, we have [ E, X g(x ( t+r, X (2 s+r 2 F r ( F r (2 2 E [E X( X(2 [g(x ( E X( [ E X( X r ( t+r, X (2 s+r 2 F r (2 2 F ( [E X(2 [g(x ( X r (2 t+r, X s (2 F r ( 2 E X(2 [g(x ( X r (2 t, X s (2 2 E X [g(x ( X r (,X r (2 t, X s (2. 2 r emark 3.6. We notice that the use of the Markov property above is completely formal when the function g is bounded. In the proofs below, we may want to consider the Markov property for a function that is unbounded at x. We can then consider the function g n (x : g(x n, apply the Markov property and then pass to the limit as n. This procedure will be valid in the proofs below since E[( t f( X s ds p < for all p by the assumption of the existence result (Proposition 2.. We will use this procedure throughout the proofs below without developping the details. We can now turn to the first intermediate result of this section, a moment formula for the processes (w n n N. Proposition 3.7. Fix an integer m and n,..., n m positive integers. t,..., t m +. Let s such that s t t m. Set N m i n i. [ m If N is odd, then E j w n j (s; t j, x j. Fix x,..., x m d and If N is even, set n N 2. Then, E w nj (s; t j, x j j P(n,...,n m s E X dr j rn u (x j + X (j t j (3.4 n i f(x pi + X (pi t pi r i x qi X (qi t qi r i, where P(n,..., n m is the set of all orderings in pairs of different integers of the set N (n,..., n m {,...,, 2,..., 2,..., m,..., m} with n i occurences of the integer i. More precisely, P(n,..., n m {((p, q,..., (p n, q n : {p, q,..., p n, q n } N (n,..., n m and p j q j, j,..., n}. The processes (X (j t t (j,..., m are m independent copies of the Lévy process generated by L, starting at and E X denotes expectation with respect to the joint law of these processes. If the set P(n,..., n m, then the expectation vanishes. 6

7 emark 3.8. Proposition 3.7 is the main ingredient in order to obtain explicit expression for moments. The proof looks technical, but the idea is rather simple. It will be done by induction, first on the number m of terms in the product and, second, on the total order N of the terms. A recursive use of Itô s formula allows to reduce the order of the terms in the product. Then, properties of Walsh integrals and the Markov property for Lévy processes (Proposition 3.4 allow to compute explicitly the moments considered. The recursive use of Itô s formula is similar to [5 (see also [4, where the spectral representation of p is used. Here, we use the Lévy process X to express the integrals as expectations of additive functionals of the symmetrized Lévy process X. Proof. Let us start the proof with the case m which is handled as a particular case. In that case, by the martingale property of the stochastic integral, we have E[w n (s; t, x for all x d, s t and this proves the result if n is odd. If n is even, the left-hand side of (3.4 vanishes. Moreover, for n N, the set N (n is formed with n occurences of the integer. It is not possible to find any pairing of different integers from this set and, hence, P(n and the right-hand side of (3.4 vanishes as well. We are now going to prove the result by induction on m. We first consider the case m 2, for which we are going to obtain the result by induction on N 2 i n i. The smallest possible value of N is N 2, with n, n 2. In that case, using the definition of w and the properties of Walsh stochastic integrals, we have s E[w (s; t, x w (s; t 2, x 2 dr dy dz p t r(y x f(y zp t2 r(z x 2 v (r, yv (r, z. d d Using the fact that p t is the density of the Lévy process (X t t starting at and Lemma 3.3, we have E[w (s; t, x w (s; t 2, x 2 s s dre X [f(x + X ( t r x 2 X (2 t 2 rv (r, x + X ( t rv (r, x 2 + X (2 t 2 r [ dre X f(x + X ( t r x 2 X (2 t 2 re X [u X ( (x + X r ( E X [u t r X (2 (x 2 + X r (2 t 2 r Now, let (F (i t t denote the filtration generated by X (i. By the Markov properties (Theorem 3.4 for the processes X ( and X (2 and their independence, we have E[w (s; t, x w (s; t 2, x 2 s s s s dre X [ f(x + X ( t r x 2 X (2 t 2 re X [u (x + X ( t F ( t re X [u (x 2 + X (2 t 2 F (2 t 2 r [ dre X f(x + X ( t r x 2 X (2 t 2 re X [u (x + X ( t u (x 2 + X (2 t 2 F ( t r F (2 t 2 r [ dre X E X [f(x + X ( t r x 2 X (2 t 2 ru (x + X ( t u (x 2 + X (2 t 2 F ( t r F (2 t 2 r [ dre X f(x + X ( t r x 2 X (2 t 2 ru (x + X ( t u (x 2 + X (2 t 2, because f(x + X ( t r x 2 X (2 t 2 r is F ( t r F (2 t 2 r measurable. Finally, by Fubini s theorem, E[w (s; t, x w (s; t 2, x 2 [ E X u (x + X ( t u (x 2 + X (2 t 2 s drf(x + X ( t r x 2 X (2 t 2 r. As N (, {, 2}, P(, {(, 2} and the result is proved for m 2, N 2. Let us consider the case N 3. We have n 2, n 2 (or the other way around. In that case, using the definitions of w and w 2 and the properties of Walsh stochastic integrals, we have E[w 2 (s; t, x w (s; t 2, x 2 s dr dy dz p t r(y x f(y zp t2 r(z x 2 E[v (r, yv (r, z, d d 7.

8 as E[v (r, y for all r, y d. The result is proved for m 2, N 3. Now, let us assume that the result is proved for m 2 and all N M. Let n, n 2 N such that n + n 2 M. Then, again by the definition of w n and the properties of Walsh stochastic integrals, we have E[w n (s; t, x w n2 (s; t 2, x 2 s dr dy dz p t r(y x f(y zp t2 r(z x 2 E[v n (r, yv n2 (r, z. d d If M is odd, then (n + (n 2 M 2 is odd as well and E[v n (r, yv n2 (r, z for all r, y, z d by the induction assumption applied with t t 2 r, x y, x 2 z. If M is even, then (n + (n 2 M 2 is even, and we can write E[v n (r, yv n2 (r, z using the induction assumption with t t 2 r, x y, x 2 z. First, using the fact that p t is the density of X, we have E[w n (s; t, x w n2 (s; t 2, x 2 s dr dy dz p t r(y x f(y zp t2 r(z x 2 E[v n (r, yv n2 (r, z d d s [ dre X f(x + X ( t r x 2 X (2 ( E[w n (r; r, x + yw n2 (r; r, x 2 + z t 2 r (3.5 yx ( t r,zx(2 t 2 r But, as N (n, n 2 is composed of n occurences of and n 2 occurences of 2, the only element of P(n, n 2 is ((, 2,..., (, 2. As a consequence, by the induction assumption, E[w n (r; r, x + yw n2 (r; r, x 2 + z E X u (x + y + X ( r u (x 2 + z + X (2 r r E X y,z r r rn 2 dr n j u (x + X ( r u (x 2 + X (2 r rn 2 dr rn 2 dr n j n E X y,z j f(x + X ( r r j x 2 X (2 f(x + y + X ( r r j x 2 z X (2 f(x + X ( r r j x 2 X (2 r r j u (x + X ( r u (x 2 + X (2 r r r j,. r r j, 8

9 where n M/2. Further, by Lemma 3.5, E[w n (r; r, x + yw n2 (r; r, x 2 + z r r rn 2 dr E X X ( t r,x(2 t 2 r n j f(x + X ( r r j x 2 X (2 rn 2 dr E X n j eplacing (3.6 in (3.5, we obtain E[w n (s; t, x w n2 (s; t 2, x 2 s dre X n j r r j f(x + X ( t r j x 2 X (2 f(x + X ( t r x 2 X (2 t 2 r f(x + X ( t r j x 2 X (2 t 2 r j yx ( t r,zx(2 t 2 r u (x + X ( r u (x 2 + X (2 r u (x + X ( t u (x 2 + X (2 t 2 r t 2 r j F ( t r F (2 t 2 r (3.6 rn 2 dr E X u (x + X ( t u (x 2 + X (2 t 2 F ( t r F (2 t 2 r. Using Fubini s theorem, the fact that f(x + X ( t r x 2 X (2 t 2 r is F ( t r F (2 t 2 r-measurable and renumbering the integration variables, we obtain E[w n (s; t, x w n2 (s; t 2, x 2 s E X dr rn E X u (x + X ( t u (x 2 + X (2 t 2 u (x + X ( t u (x 2 + X (2 t 2 s rn dr n j n j f(x + X ( t r j x 2 X (2 t 2 r j f(x + X ( t r j x 2 X (2 t 2 r j As the only element in the set P(n, n 2 is ((, 2,..., (, 2, this proves the result in the case where m 2. We will now suppose that the result is proved for any number of terms in the product, up to m, and consider the case where we have m terms. Again, we are going to prove the result by induction on N. The smallest possible value is N m with n n m. In that case, the process s w (s; t, x is a martingale for all t, x d. We apply Itô s formula with the function h(x,, x m m j x j, then take an expectation, which cancels the martingale term. We are only left with the expectation of. 9

10 the quadratic variation term. (For details, one can refer to [5, Proof of Lemma 6.2. We finally obtain E w (s; t j, x j j m i i j m i i j s s dr dy dz p ti r(y x i f(y zp tj r(z x j v (r, yv (r, z d d E w (r; t k, x k k dre X [f(x i + X (i t i r x j X (j t j rv (r, x + X (i t i rv (r, x j + X (j t j r E w (r; t k, x k. k We now handle the first term in the time integral as in the case m 2 using Lemma 3.3. The second term is the expectation of a product of m 2 terms. Hence, we can use the induction assumption on m to express it. If m is odd, then the second expectation in the integral vanishes and the whole expression as well. If m is even, we can use (3.4. Let P i,j denote the set of ordered pairs of different integers of the set N (n,..., n m, from which we delete the occurence of each i and j. We finally obtain E w (s; t j, x j j m i i j Pi,j s E X dre X [u (x i + X (i t i k u (x k + X (k t k u (x j + X (j t j f(x i + X (i t i r x j X (j t j r r rn 2 dr n l f(x pl + X (p l t pl r l x ql X (q l t ql r l, where n m/2. By Fubini s theorem, a renumbering of the integration variables and the fact that m i i j {((i, j, p} P(,...,, }{{} p P i,j m times the result is proved for N m. Now consider the case where N m+. Then, without loss of generality, we can suppose that n 2

11 and n 2 n m. In that case, Itô s formula shows that E w 2 (s; t, x w (s; t j, x j m i i2 j2 + m j2 s j2 s dr dy dz p ti r(y x i f(y zp tj r(z x j v (r, yv (r, z d d E w 2 (r; t, x w (r; t k, x k dr dy d E w 2 (s; t, x w (s; t j, x j m i i2 j2 + m j2 s j2 s k2 dz p t r(y x f(y zp tj r(z x j v (r, z d E w (r; r, y w (r; t k, x k, k2 k j dre X [f(x i + X (i t i r x j X (j t j rv (r, x + X (i t i rv (r, x j + X (j t j r E w 2 (r; t, x dre X w (r; t k, x k k2 f(x + X ( t r x j X (j t j rv (r, x j + X (j t j r E w (r; r, y w (r; t k, x k k2 k j yx ( t r. (3.7 First, we can see that if m is even, then N m + is odd. In that case, the last expectation in the first term in (3.7 corresponds to the case m 2, N m and hence vanishes by induction. The last expectation in brackets in the second term of (3.7 corresponds to the case m, N m and vanishes as well. Hence, the result is true if m is even. Now, if m is odd, we can handle the first term above with Lemma 3.3 and the induction assumption, since the second expectation does not depend on X (i and X (j. For the second term, we first use the induction assumption, then Lemma 3.5 (Markov property and Fubini s theorem. The arguments are analogous to those in the case m 2 and we skip the details. This proves the result for N m +. Now, suppose that the result is true for all N M and pick n,..., n m N such that m i n i M.

12 By Itô s formula, we have E w nj (s; t j, x j j m i i j s dr dy dz p ti r(y x i f(y zp tj r(z x j (3.8 d d E w ni (r; r, yw nj (r; r, z w nk (r; t k, x k. Now, if N m j n j is odd, then n i + n j + m k n k N 2 is odd as well and the expectation above vanishes by induction. The result is proved for N odd. If N is even, then N 2 is even as well and we can use the induction assumption to compute the expectation in (3.8. We obtain E w nj (s; t j, x j j m i i j s dre X k f(x i + X (i t i r x j X (j t i r (3.9 E w ni (r; r, x i + yw nj (r; r, x j + z w nk (r; t k, x k k yx (i t i r,zx(j t j r Let x k x k x i + y x j + z if k i, j if k i if k j and t k { tk if k i, j r if k i, j. 2

13 Then, by the induction assumption with ( t,..., t m and ( x,..., x m, E w n (r; r, x + yw n2 (r; r, x 2 + z w nk (r; t k, x k P(n,...,n i,...,n j,...,n m r rn 2 dr P(n,...,n i,...,n j,...,n m r rn 2 dr P(n,...,n i,...,n j,...,n m k E X n l E X y,z n l r k u (x i + y + X r (i u (x j + z + X r (j f( x pl + X (p l t pl r l x ql X (q l t ql r l u (x i + X r (i u (x j + X r (j k f(x pl + X (p l t pl r l x ql X (q l t ql r l rn 2 dr, k u (x k + X (k t k Ey,z X u (x i + X r (i n u (x k + X (k t k f(x pl + X (p l x t pl r ql X (q l l l t ql r l.. u (x k + X (k t k u (x j + X (j r Further, by Lemma 3.5, E w n (r; r, x + yw n2 (r; r, x 2 + z w nk (r; t k, x k P(n,...,n i,...,n j,...,n m k k r rn 2 dr E X X (i t i r,x(j t j r n u (x k + X (k t k f(x pl + X (p l x t pl r ql X (q l l P(n,...,n i,...,n j,...,n m n l l r f(x pl + X (p l t pl r l x ql X (q l t ql r l yx (i t i r,zx(j t j r u (x i + X (i [ rn 2 m dr E X u (x k + X (k t k k t ql r l F (i t i r F (j t j r r u (x j + X (j r, (3. 3

14 because t pl r l + t pl r t pl r k, whenever p l i or j. eplacing (3. in (3.9, we obtain E w nj (s; t j, x j j m i i j s E X [ m k dre X f(x i + X (i t i r x j X (j t j r P(n,...,n i,...,n j,...,n m n u (x k + X (k t k f(x pl + X (p l t pl r l x ql X (q l l t ql r l r rn 2 dr F (i t i r F (j t j r. Using Fubini s theorem, the fact that f(x i +X (i t i r x j X (j t j r is F (i t i r F (j t j r-measurable, renumbering the integration variables and the fact that we obtain E w nj (s; t j, x j j P(n,...,n m P(n,...,n m s m i i j p P(n,...,n i,...,n j,...,n m [ rn m dr E X u (x k + X (k k E X [ m k u (x k + X (k t k s rn dr {((i, j, p} P(n,..., n m, t k n l n l f(x pl + X (p l t pl r l x ql X (q l t ql r l f(x pl + X (p l t pl r l x ql X (q l t ql r l One should notice that in the case where one of the n j s is equal to, then the argument is similar but slightly different. Indeed, w nj w and, hence, w nj comes out of the expectation. We can still apply the induction assumption for the expectation, but we also have to apply Lemma 3.3 for the additional w outside the expectation. The result is proved. In Proposition 3.7, we assumed that the n j s were all positive. But, in the general case, there might be some of the n j s being equal to. In Proposition 3.9, we show that the same expression is valid in that case. Proposition 3.9. Fix an integer M and n,..., n M non-negative integers. Fix x,..., x M d and t,..., t M +. Let s such that s t t M. Set N M i n i. [ M If N is odd, then E j w n j (s; t j, x j.. If N is even, set n N 2. Then, M E w nj (s; t j, x j j P(n,...,n M s dr E X M j rn u (x j + X (j t j (3. n i f(x pi + X (pi t pi r i x qi X (qi t qi r i, where the notations are those of Proposition 3.7. If the set P(n,..., n M, then the expectation vanishes. 4

15 Proof. The case where all n,..., n m are positive is proved in Proposition 3.7. Without loss of generality, suppose that there exists m such that n j for all m + j M and n j > for j m. Then, as w is deterministic, ( M M E w nj (s; t j, x j w (s; t k, x k E w nj (s; t j, x j j km+ Now, we can use Proposition 3.7 to compute the expectation and Lemma 3.3 to compute the product outside the expectation. We obtain M E w nj (s; t j, x j j ( M km+ s E X [u (x k + X (k t k P(n,...,n m rn dr E X M s j P(n,...,n m n i u (x j + X (j t j dr rn E X j j u (x j + X (j t j f(x pi + X (pi t pi r i x qi X (qi t qi r i n i f(x pi + X (pi t pi r i x qi X (qi t qi r i. The result is proved since P(n,..., n M P(n,..., n m when n m+ n M. Now, we can simplify our expression by coming back to the processes v n rather than w n. Corollary 3.. Fix an integer m and n,..., n m non-negative integers. Fix x,..., x m d and t +. Set N m i n i. [ m If N is odd, then E j v n j (t, x j. If N is even, set n N 2. Then, E v nj (t, x j j P(n,...,n m t E X j rn dr u (x i + X (i t (3.2 n j f(x pi + X (pi x qi X (qi where the notations are those of Proposition 3.7. If the set P(n,..., n M, then the expectation vanishes., Proof. Set s t t m t in Proposition 3.9. emark 3.. We would like to point out that another way to write down (3.2 is: t rn n E v nj (t, x j Ex X,...,x m u (X (i t dr f(x (pi X (qi j P(n,...,n m j j, 5

16 where Ex X,...,x m denotes the expectation with respect to the joint law of the processes X (j (j,..., n under the conditions X ( x,..., X (m x m. We also recall the following result from eal Analysis. The proof is not very difficult, hence we leave it to the reader, since it is beyond the scope of this paper. Lemma 3.2. Let g,..., g n be integrable functions on +. Suppose that the n functions are divided in l groups of respectively k,..., k l identical functions (k + +k l n. Then, for all t, the following result holds t rn n t dr g π( (r g π(n (r n dr g i (r, k! k l! π S n(k,...,k l where S n (k,..., k l is the set of all permutations of n objects divided in l groups of respectively k,..., k l identical objects. Namely, S n (k,..., k l n! k! k l!. We are now ready to prove Theorem 3.. Proof of Theorem 3.. By Proposition 2.2, we know that u(t, x n v n(t, x and the series converges in L p (Ω, for all p 2. Hence, E u(t, x j E v nj (t, x j j n n p j i Further, by Corollary 3. and distinguishing the vectors (n,..., n p depending on the sum of their components, E u(t, x j j n t n p P(n,...,n p rn dr { p j nj is even.} E X n i E X n n,...,np P(n p,...,n p j n j 2n t rn n E X dr j u (x j + X (j t i n n,...,np P(n p,...,n p j n j 2n j u (x j + X (j t f(x pi + X (pi x qi X (qi j u (x j + X (j t f(x pi + X (pi x qi X (qi t rn dr n i f(x pi + X (pi x qi X (qi Now, let dp(n,..., n p denote the set of pairings in P(n,..., n p but without taking the order into account. (For instance,((,2,(,3 and ((,3,(,2 are different pairings in P(2,,, but are the same. 6

17 in dp(2,,. We have E u(t, x j j E X j u (x j + X (j t dp(n,...,n p all possible orders of the pairing n n,...,np p j n j 2n t rn dr n i f(x pi + X (pi x qi X (qi. Then, by Lemma 3.2, we have E u(t, x j j E X j dp(n,...,n p E X j dp(n,...,n p u (x j + X (j t k! k l! u (x j + X (j t k! k l! n n,...,np p j n j 2n t n i dr f(x pi + X (pi t r x qi X (qi n n,...,np p j n j 2n t n i dr f(x pi + X (pi r t r, x qi X (qi, where k,..., k l are the numbers of identical pairs coming in the pairing considered. Now, using the fact that n,...,np p j n j 2n dp(n,..., n p {((p, q,..., (p n, q n : p j q j ; p,..., p n, q,..., q n p, without taking the order into account} r 7

18 n! and that there are k! k l! ways to order a particular pairing, we have E u(t, x j j E X j E X j E X j u (x j + X (j t u (x j + X (j t u (x j + X (j t n n n! n! exp p p,q p q p p,q p q p p,q p q t p pn,qn pn qn t n i t drf(x p + X (p r drf(x p + X (p r drf(x pi + X (pi r x q X (q x qi X (qi, r x q X (q. r n After conditionning on X ( x,..., X (p x p, the result is proved., r Proof of Corollary 3.2. Take x x p x in Theorem Space-time white noise and Lévy local times. In this section, we will consider the case where the covariance f δ is the Dirac measure at. Hence, the noise W is a space-time white noise and is defined as a centered Gaussian noise {W (φ : φ D( + } with covariance functional given by E[W (φw (ψ dt δ (dx (φ(t, ψ(t, (x dt dξ Fφ(t, ξfψ(t, ξ. We notice that no solution exists in dimension d 2 for space-time white noise, hence we restrict our attention to the case d. The moment formula is given by Theorem 4. below. Theorem 4.. Let u denote the solution of (. with operator L, driven by a space-time white-noise. Let t and x. Let (X (j t t (j,..., p be p independent copies of the Lévy process generated by L. Let L t (j, k be the local time at of the process X (j X (k. Informally, Then, L t (j, k i t dr δ (X (j r X (k r. E [u(t, x p E X u (x + X (i t exp p L t (j, k. (4. j,k j k In order to prove Theorem 4., we would like to apply the moment formulae of Section 3. Therefore, we need to smooth the covariance of the noise through a convolution procedure. Namely, let ϕ : be the standard gaussian kernel. Namely, ϕ(x (2π /2 exp( x 2 /2. 8

19 Let ϕ ε (x : ε /2 ϕ(x/ ε. Notice that the noise W can be extended to the space of rapidly decreasing C -functions. Hence, we can define a noise W ε {W ε (φ : φ D( + } by W ε (φ : W (φ ϕ ε, (4.2 where stands for spatial convolution. The noise W ε is well-defined on the same probability space as the noise W. Now, notice that E[W ε (φw ε (ψ E[W (φ ϕ ε W (ψ ϕ ε dt δ (dx((φ(t, ϕ ε ( ψ(t, ϕ ε (x dt δ (dx(φ(t, ψ(t, ϕ 2ε (x dt dx ϕ 2ε (x(φ(t, ψ(t, (x. Hence, the noise W ε is a Gaussian noise with covariance informally given by E[Ẇε(t, xẇε(s, y δ (t sϕ 2ε (x y. Since ϕ ε is a bounded covariance function, we can apply the results of Section 3 to the solution u ε of (. with the noise W ε. Informally, ϕ 2ε converges to δ as ε. Hence, W ε is actually an approximation of the noise W, in which we have turned the covariance measure into a bounded function. We state two results which make this convergence formal. Proposition 4.2. Let Z {Z(t, x, t >, x d } be a Walsh-integrable stochastic process with respect to space-time white noise W. Then, for all p, t Z(s, yw ε(ds, dy converges in L p (Ω to Z(s, yw (ds, dy uniformly in t [, T as ε. t Proof. For a noise with covariance f, let Q W ([, t A B : t f(dx( A B (x be the covariation measure of the extension of W as a martingale measure in the sense of Walsh [6. Let φ, ψ D(. We have φ(xψ(yq Wε ([, t dx dy φ(xψ(yq W ([, t dx dy t dx ϕ 2ε (x(φ ψ(x δ (dx(φ ψ(x t dξ Fϕ 2ε (ξfφ(ξfψ(ξ dξfφ(ξfψ(ξ t dξ Fφ(ξ Fψ(ξ e εξ2 2, as ε. This shows that the covariation measure of the martingale measure W ε converges to the covariation measure of W as ε. Using the results of Walsh [6 and Dalang [8, this is enough to ensure the convergence of the stochastic integral in L p (Ω for all p 2. Proposition 4.3. The solution to (. with noise W ε converges to the solution to (. with noise W as ε in L p (Ω, uniformly on t [, T and x. Proof. This is a consequence of Proposition 4.2 and of the existence result (Proposition 2. using the Picard iteration scheme. In order to be able to state and prove the moment result in the case where f δ, we need one more result about additive functionals of Lévy processes. We remind that the existence and uniqueness result 9

20 ensures that d dξ + 2 e Ψ(ξ <. This implies that the symmetrized Lévy process ( X t t admits local times, where the process X is defined by X t : X ( t X (2 t for all t, for X ( and X (2 two independent copies of the process generated by L (see [2, Theorem, p.26. Let L x t denote the local time at x of X. Then, for any non-negative bounded function g, we have t ([2, p.26. We have the following approximation result. g( X s ds g(xl x t dx, d a.s. (4.3 Proposition 4.4. Let ( X t t be a Lévy process which admits local times (L x t : t >, x. Then, for all p, [ ( ( t p E exp ϕ 2ε ( X s ds exp(l t, as ε. Proof. We know by [2, Prop.4, p.3, that the local time L t has finite exponential moments. Hence, it suffices to prove that [ ( ( t p E exp ϕ 2ε ( X s ds L t, as ε. By (4.3 applied to g, we know that x L x t is in L ( a.s. and it admits a Fourier transform ˆL t. Using (4.3 again, t ϕ 2ε (X s ds L t ϕ 2ε (xl x t dx L x t δ (dx Fϕ 2ε (ξ ˆL t (ξ dξ 2 ˆL t (ξ dξ 2L t, (4.4 Now, since Fϕ 2ε (ξ converges to pointwise as ε, the result follows from the bound (4.4 and the Dominated Convergence Theorem, since L t has finite exponential moments. We are now ready to prove the moment formula in the case where f δ. Proof of Theorem 4.. Let u ε denote the solution to (. with noise W ε defined in (4.2. By Corollary 3.2, we know that E [u ε (t, x p E X u (x + X (i p t t exp dr ϕ 2ε (X r (j X r (k. (4.5 i Moreover, we know by Proposition 4.3 that E[u(t, x p lim ε E[u ε (t, x p. Also, by Proposition 4.4, we know that for any choice of j, k {,..., p}, exp( t ϕ 2ε(X s (j X s (k ds converges to exp(l t (j, k in L p (Ω for any p. This implies that the right-hand side of (4.5 converges to the right-hand side of (4. as ε. The result is proved by taking the limit as ε in (4.5. j,k j k 2

21 emark 4.5. In the case where L and X is a Brownian Motion, Theorem 4. corresponds to the first part of Theorem 5.3 of [4. Theorem 3. doesn t require the use of a Feynman-Kac type representation for the solution, but uses the Wiener-Chaos expansion instead. Theorem 4. is also slightly more general in the sense that it considers generators of Lévy processes and not only, although it only handles noises which are white in time. We would like to notice that the approach used in [4 can also lead to Theorem 3. in the case of Lévy processes. eferences [ Bertini L. & Cancrini N. The stochastic heat equation: Feynman-Kac formula and intermittence. Journal of Statistical Physics, Vol. 78, n 5-6 ( [2 Bertoin, J. Lévy Processes. Cambridge University Press, 996. [3 Carmona,. A. & Molchanov S.A. Parabolic Anderson Problem and Intermittency. Memoirs of the American Mathematical Society, Providence, I, 994. [4 Conus D. The non-linear stochastic wave equation in high dimensions: existence, Hölder-continuity and Itô-Taylor expansion. Ph.D Thesis n 4265, EPFL Lausanne (28. [5 Conus D. & Dalang.C. The non-linear stochastic wave equation in high dimensions. Electronic Journal of Probability, Vol. 3, 28. [6 Conus D. & Joseph M. & Khoshnevisan D. On the chaotic character of the stochastic heat equation, before the onset of intermittency. Annals of Probability, to appear (2. [7 Conus D. & Joseph M. & Khoshnevisan D. & Shiu S.-Y. On the chaotic character of the stochastic heat equation, II. Preprint (2. [8 Dalang.C. Extending martingale measure stochastic integral with applications to spatially homogeneous spde s. Electronic Journal of Probability, Vol. 4, 999. [9 Dalang.C. & Frangos N.E. The stochastic wave equation in two spatial dimensions. Annals of Probability, Vol. 26, n ( [ Dalang.C. & Mueller C. Some non-linear s.p.d.e s that are second order in time. Electronic Journal of Probability, Vol. 8, 23. [ Dalang.C. & Mueller C. & Tribe. A Feynman-Kac-type formula for the deterministic and stochastic wave equations and other p.d.e. s. Transactions of the AMS, Vol. 36, n 9 ( [2 Foondun, M. & Khoshnevisan, D. Intermittence and nonlinear parabolic stochastic partial differential equations. Electronic Journal of Probability, Vol. 4, Paper no. 2 ( [3 Foondun, M. & Khoshnevisan, D. & Nualart, E. A local-time correspondence for stochastic partial differential equations. Transactions of the American Mathematical Society 363 (2, n 5, [4 Hu Y. & Nualart D. Stochastic heat equation driven by fractional noise and local time. Probability Theory and elated Fields, Vol. 43, n -2 ( [5 Kardar, M. & Parisi, G. & Zhang, Y.-C. Dynamic scaling of growing interfaces. Physical eview Letters. 56 ( [6 Walsh J.B. An Introduction to Stochastic Partial Differential Equations. In : Ecole d Eté de Probabilités de St-Flour, XIV, 984. Lecture Notes in Mathematics 8. Springer-Verlag, Berlin, Heidelberg, New-York (986,

Weak nonmild solutions to some SPDEs

Weak nonmild solutions to some SPDEs Weak nonmild solutions to some SPDEs Daniel Conus University of Utah Davar Khoshnevisan University of Utah April 15, 21 Abstract We study the nonlinear stochastic heat equation driven by spacetime white

More information

Regularity of the density for the stochastic heat equation

Regularity of the density for the stochastic heat equation Regularity of the density for the stochastic heat equation Carl Mueller 1 Department of Mathematics University of Rochester Rochester, NY 15627 USA email: cmlr@math.rochester.edu David Nualart 2 Department

More information

A connection between the stochastic heat equation and fractional Brownian motion, and a simple proof of a result of Talagrand

A connection between the stochastic heat equation and fractional Brownian motion, and a simple proof of a result of Talagrand A connection between the stochastic heat equation and fractional Brownian motion, and a simple proof of a result of Talagrand Carl Mueller 1 and Zhixin Wu Abstract We give a new representation of fractional

More information

The Chaotic Character of the Stochastic Heat Equation

The Chaotic Character of the Stochastic Heat Equation The Chaotic Character of the Stochastic Heat Equation March 11, 2011 Intermittency The Stochastic Heat Equation Blowup of the solution Intermittency-Example ξ j, j = 1, 2,, 10 i.i.d. random variables Taking

More information

Linear SPDEs driven by stationary random distributions

Linear SPDEs driven by stationary random distributions Linear SPDEs driven by stationary random distributions aluca Balan University of Ottawa Workshop on Stochastic Analysis and Applications June 4-8, 2012 aluca Balan (University of Ottawa) Linear SPDEs with

More information

Brownian motion. Samy Tindel. Purdue University. Probability Theory 2 - MA 539

Brownian motion. Samy Tindel. Purdue University. Probability Theory 2 - MA 539 Brownian motion Samy Tindel Purdue University Probability Theory 2 - MA 539 Mostly taken from Brownian Motion and Stochastic Calculus by I. Karatzas and S. Shreve Samy T. Brownian motion Probability Theory

More information

Hölder continuity of the solution to the 3-dimensional stochastic wave equation

Hölder continuity of the solution to the 3-dimensional stochastic wave equation Hölder continuity of the solution to the 3-dimensional stochastic wave equation (joint work with Yaozhong Hu and Jingyu Huang) Department of Mathematics Kansas University CBMS Conference: Analysis of Stochastic

More information

We denote the space of distributions on Ω by D ( Ω) 2.

We denote the space of distributions on Ω by D ( Ω) 2. Sep. 1 0, 008 Distributions Distributions are generalized functions. Some familiarity with the theory of distributions helps understanding of various function spaces which play important roles in the study

More information

Gaussian estimates for the density of the non-linear stochastic heat equation in any space dimension

Gaussian estimates for the density of the non-linear stochastic heat equation in any space dimension Available online at www.sciencedirect.com Stochastic Processes and their Applications 22 (202) 48 447 www.elsevier.com/locate/spa Gaussian estimates for the density of the non-linear stochastic heat equation

More information

A Feynman-Kac-type formula for the deterministic and stochastic wave equations and other p.d.e. s

A Feynman-Kac-type formula for the deterministic and stochastic wave equations and other p.d.e. s A Feynman-Kac-type formula for the deterministic and stochastic wave equations and other p.d.e. s Robert C. Dalang 1 2, Carl Mueller 3 4 and Roger Tribe 5 Abstract We establish a probabilistic representation

More information

Brownian Motion. 1 Definition Brownian Motion Wiener measure... 3

Brownian Motion. 1 Definition Brownian Motion Wiener measure... 3 Brownian Motion Contents 1 Definition 2 1.1 Brownian Motion................................. 2 1.2 Wiener measure.................................. 3 2 Construction 4 2.1 Gaussian process.................................

More information

PROBABILITY: LIMIT THEOREMS II, SPRING HOMEWORK PROBLEMS

PROBABILITY: LIMIT THEOREMS II, SPRING HOMEWORK PROBLEMS PROBABILITY: LIMIT THEOREMS II, SPRING 218. HOMEWORK PROBLEMS PROF. YURI BAKHTIN Instructions. You are allowed to work on solutions in groups, but you are required to write up solutions on your own. Please

More information

Multiple points of the Brownian sheet in critical dimensions

Multiple points of the Brownian sheet in critical dimensions Multiple points of the Brownian sheet in critical dimensions Robert C. Dalang Ecole Polytechnique Fédérale de Lausanne Based on joint work with: Carl Mueller Multiple points of the Brownian sheet in critical

More information

Sobolev spaces. May 18

Sobolev spaces. May 18 Sobolev spaces May 18 2015 1 Weak derivatives The purpose of these notes is to give a very basic introduction to Sobolev spaces. More extensive treatments can e.g. be found in the classical references

More information

SOLUTIONS OF SEMILINEAR WAVE EQUATION VIA STOCHASTIC CASCADES

SOLUTIONS OF SEMILINEAR WAVE EQUATION VIA STOCHASTIC CASCADES Communications on Stochastic Analysis Vol. 4, No. 3 010) 45-431 Serials Publications www.serialspublications.com SOLUTIONS OF SEMILINEAR WAVE EQUATION VIA STOCHASTIC CASCADES YURI BAKHTIN* AND CARL MUELLER

More information

Hardy-Stein identity and Square functions

Hardy-Stein identity and Square functions Hardy-Stein identity and Square functions Daesung Kim (joint work with Rodrigo Bañuelos) Department of Mathematics Purdue University March 28, 217 Daesung Kim (Purdue) Hardy-Stein identity UIUC 217 1 /

More information

Topics in fractional Brownian motion

Topics in fractional Brownian motion Topics in fractional Brownian motion Esko Valkeila Spring School, Jena 25.3. 2011 We plan to discuss the following items during these lectures: Fractional Brownian motion and its properties. Topics in

More information

Finite element approximation of the stochastic heat equation with additive noise

Finite element approximation of the stochastic heat equation with additive noise p. 1/32 Finite element approximation of the stochastic heat equation with additive noise Stig Larsson p. 2/32 Outline Stochastic heat equation with additive noise du u dt = dw, x D, t > u =, x D, t > u()

More information

The stochastic heat equation with a fractional-colored noise: existence of solution

The stochastic heat equation with a fractional-colored noise: existence of solution The stochastic heat equation with a fractional-colored noise: existence of solution Raluca Balan (Ottawa) Ciprian Tudor (Paris 1) June 11-12, 27 aluca Balan (Ottawa), Ciprian Tudor (Paris 1) Stochastic

More information

Harmonic Functions and Brownian motion

Harmonic Functions and Brownian motion Harmonic Functions and Brownian motion Steven P. Lalley April 25, 211 1 Dynkin s Formula Denote by W t = (W 1 t, W 2 t,..., W d t ) a standard d dimensional Wiener process on (Ω, F, P ), and let F = (F

More information

SPDEs, criticality, and renormalisation

SPDEs, criticality, and renormalisation SPDEs, criticality, and renormalisation Hendrik Weber Mathematics Institute University of Warwick Potsdam, 06.11.2013 An interesting model from Physics I Ising model Spin configurations: Energy: Inverse

More information

Wiener Measure and Brownian Motion

Wiener Measure and Brownian Motion Chapter 16 Wiener Measure and Brownian Motion Diffusion of particles is a product of their apparently random motion. The density u(t, x) of diffusing particles satisfies the diffusion equation (16.1) u

More information

Sobolev Spaces. Chapter 10

Sobolev Spaces. Chapter 10 Chapter 1 Sobolev Spaces We now define spaces H 1,p (R n ), known as Sobolev spaces. For u to belong to H 1,p (R n ), we require that u L p (R n ) and that u have weak derivatives of first order in L p

More information

COVARIANCE IDENTITIES AND MIXING OF RANDOM TRANSFORMATIONS ON THE WIENER SPACE

COVARIANCE IDENTITIES AND MIXING OF RANDOM TRANSFORMATIONS ON THE WIENER SPACE Communications on Stochastic Analysis Vol. 4, No. 3 (21) 299-39 Serials Publications www.serialspublications.com COVARIANCE IDENTITIES AND MIXING OF RANDOM TRANSFORMATIONS ON THE WIENER SPACE NICOLAS PRIVAULT

More information

Estimates for the density of functionals of SDE s with irregular drift

Estimates for the density of functionals of SDE s with irregular drift Estimates for the density of functionals of SDE s with irregular drift Arturo KOHATSU-HIGA a, Azmi MAKHLOUF a, a Ritsumeikan University and Japan Science and Technology Agency, Japan Abstract We obtain

More information

Convergence at first and second order of some approximations of stochastic integrals

Convergence at first and second order of some approximations of stochastic integrals Convergence at first and second order of some approximations of stochastic integrals Bérard Bergery Blandine, Vallois Pierre IECN, Nancy-Université, CNRS, INRIA, Boulevard des Aiguillettes B.P. 239 F-5456

More information

The Cameron-Martin-Girsanov (CMG) Theorem

The Cameron-Martin-Girsanov (CMG) Theorem The Cameron-Martin-Girsanov (CMG) Theorem There are many versions of the CMG Theorem. In some sense, there are many CMG Theorems. The first version appeared in ] in 944. Here we present a standard version,

More information

PROBABILITY: LIMIT THEOREMS II, SPRING HOMEWORK PROBLEMS

PROBABILITY: LIMIT THEOREMS II, SPRING HOMEWORK PROBLEMS PROBABILITY: LIMIT THEOREMS II, SPRING 15. HOMEWORK PROBLEMS PROF. YURI BAKHTIN Instructions. You are allowed to work on solutions in groups, but you are required to write up solutions on your own. Please

More information

The Wiener Itô Chaos Expansion

The Wiener Itô Chaos Expansion 1 The Wiener Itô Chaos Expansion The celebrated Wiener Itô chaos expansion is fundamental in stochastic analysis. In particular, it plays a crucial role in the Malliavin calculus as it is presented in

More information

arxiv:math/ v1 [math.pr] 24 Nov 2004

arxiv:math/ v1 [math.pr] 24 Nov 2004 Exact variations for stochastic heat equations driven by space time white noise arxiv:math/411552v1 math.pr] 24 Nov 24 Abstract Jan Pospisil University of West Bohemia, New Technologies Research Centre,

More information

Some SDEs with distributional drift Part I : General calculus. Flandoli, Franco; Russo, Francesco; Wolf, Jochen

Some SDEs with distributional drift Part I : General calculus. Flandoli, Franco; Russo, Francesco; Wolf, Jochen Title Author(s) Some SDEs with distributional drift Part I : General calculus Flandoli, Franco; Russo, Francesco; Wolf, Jochen Citation Osaka Journal of Mathematics. 4() P.493-P.54 Issue Date 3-6 Text

More information

Notes on uniform convergence

Notes on uniform convergence Notes on uniform convergence Erik Wahlén erik.wahlen@math.lu.se January 17, 2012 1 Numerical sequences We begin by recalling some properties of numerical sequences. By a numerical sequence we simply mean

More information

Rough paths methods 4: Application to fbm

Rough paths methods 4: Application to fbm Rough paths methods 4: Application to fbm Samy Tindel Purdue University University of Aarhus 2016 Samy T. (Purdue) Rough Paths 4 Aarhus 2016 1 / 67 Outline 1 Main result 2 Construction of the Levy area:

More information

Fractals at infinity and SPDEs (Large scale random fractals)

Fractals at infinity and SPDEs (Large scale random fractals) Fractals at infinity and SPDEs (Large scale random fractals) Kunwoo Kim (joint with Davar Khoshnevisan and Yimin Xiao) Department of Mathematics University of Utah May 18, 2014 Frontier Probability Days

More information

LAN property for sde s with additive fractional noise and continuous time observation

LAN property for sde s with additive fractional noise and continuous time observation LAN property for sde s with additive fractional noise and continuous time observation Eulalia Nualart (Universitat Pompeu Fabra, Barcelona) joint work with Samy Tindel (Purdue University) Vlad s 6th birthday,

More information

The concentration of a drug in blood. Exponential decay. Different realizations. Exponential decay with noise. dc(t) dt.

The concentration of a drug in blood. Exponential decay. Different realizations. Exponential decay with noise. dc(t) dt. The concentration of a drug in blood Exponential decay C12 concentration 2 4 6 8 1 C12 concentration 2 4 6 8 1 dc(t) dt = µc(t) C(t) = C()e µt 2 4 6 8 1 12 time in minutes 2 4 6 8 1 12 time in minutes

More information

Lecture 21 Representations of Martingales

Lecture 21 Representations of Martingales Lecture 21: Representations of Martingales 1 of 11 Course: Theory of Probability II Term: Spring 215 Instructor: Gordan Zitkovic Lecture 21 Representations of Martingales Right-continuous inverses Let

More information

LECTURE 5: THE METHOD OF STATIONARY PHASE

LECTURE 5: THE METHOD OF STATIONARY PHASE LECTURE 5: THE METHOD OF STATIONARY PHASE Some notions.. A crash course on Fourier transform For j =,, n, j = x j. D j = i j. For any multi-index α = (α,, α n ) N n. α = α + + α n. α! = α! α n!. x α =

More information

the convolution of f and g) given by

the convolution of f and g) given by 09:53 /5/2000 TOPIC Characteristic functions, cont d This lecture develops an inversion formula for recovering the density of a smooth random variable X from its characteristic function, and uses that

More information

Exercises. T 2T. e ita φ(t)dt.

Exercises. T 2T. e ita φ(t)dt. Exercises. Set #. Construct an example of a sequence of probability measures P n on R which converge weakly to a probability measure P but so that the first moments m,n = xdp n do not converge to m = xdp.

More information

White noise generalization of the Clark-Ocone formula under change of measure

White noise generalization of the Clark-Ocone formula under change of measure White noise generalization of the Clark-Ocone formula under change of measure Yeliz Yolcu Okur Supervisor: Prof. Bernt Øksendal Co-Advisor: Ass. Prof. Giulia Di Nunno Centre of Mathematics for Applications

More information

1 Introduction. 2 Measure theoretic definitions

1 Introduction. 2 Measure theoretic definitions 1 Introduction These notes aim to recall some basic definitions needed for dealing with random variables. Sections to 5 follow mostly the presentation given in chapter two of [1]. Measure theoretic definitions

More information

Mathematical Methods for Neurosciences. ENS - Master MVA Paris 6 - Master Maths-Bio ( )

Mathematical Methods for Neurosciences. ENS - Master MVA Paris 6 - Master Maths-Bio ( ) Mathematical Methods for Neurosciences. ENS - Master MVA Paris 6 - Master Maths-Bio (2014-2015) Etienne Tanré - Olivier Faugeras INRIA - Team Tosca November 26th, 2014 E. Tanré (INRIA - Team Tosca) Mathematical

More information

We denote the derivative at x by DF (x) = L. With respect to the standard bases of R n and R m, DF (x) is simply the matrix of partial derivatives,

We denote the derivative at x by DF (x) = L. With respect to the standard bases of R n and R m, DF (x) is simply the matrix of partial derivatives, The derivative Let O be an open subset of R n, and F : O R m a continuous function We say F is differentiable at a point x O, with derivative L, if L : R n R m is a linear transformation such that, for

More information

Paracontrolled KPZ equation

Paracontrolled KPZ equation Paracontrolled KPZ equation Nicolas Perkowski Humboldt Universität zu Berlin November 6th, 2015 Eighth Workshop on RDS Bielefeld Joint work with Massimiliano Gubinelli Nicolas Perkowski Paracontrolled

More information

A NOTE ON STOCHASTIC INTEGRALS AS L 2 -CURVES

A NOTE ON STOCHASTIC INTEGRALS AS L 2 -CURVES A NOTE ON STOCHASTIC INTEGRALS AS L 2 -CURVES STEFAN TAPPE Abstract. In a work of van Gaans (25a) stochastic integrals are regarded as L 2 -curves. In Filipović and Tappe (28) we have shown the connection

More information

Fourier Transform & Sobolev Spaces

Fourier Transform & Sobolev Spaces Fourier Transform & Sobolev Spaces Michael Reiter, Arthur Schuster Summer Term 2008 Abstract We introduce the concept of weak derivative that allows us to define new interesting Hilbert spaces the Sobolev

More information

Reflected Brownian Motion

Reflected Brownian Motion Chapter 6 Reflected Brownian Motion Often we encounter Diffusions in regions with boundary. If the process can reach the boundary from the interior in finite time with positive probability we need to decide

More information

9 Brownian Motion: Construction

9 Brownian Motion: Construction 9 Brownian Motion: Construction 9.1 Definition and Heuristics The central limit theorem states that the standard Gaussian distribution arises as the weak limit of the rescaled partial sums S n / p n of

More information

Tools of stochastic calculus

Tools of stochastic calculus slides for the course Interest rate theory, University of Ljubljana, 212-13/I, part III József Gáll University of Debrecen Nov. 212 Jan. 213, Ljubljana Itô integral, summary of main facts Notations, basic

More information

An Introduction to Malliavin calculus and its applications

An Introduction to Malliavin calculus and its applications An Introduction to Malliavin calculus and its applications Lecture 3: Clark-Ocone formula David Nualart Department of Mathematics Kansas University University of Wyoming Summer School 214 David Nualart

More information

Stochastic Differential Equations.

Stochastic Differential Equations. Chapter 3 Stochastic Differential Equations. 3.1 Existence and Uniqueness. One of the ways of constructing a Diffusion process is to solve the stochastic differential equation dx(t) = σ(t, x(t)) dβ(t)

More information

Hairer /Gubinelli-Imkeller-Perkowski

Hairer /Gubinelli-Imkeller-Perkowski Hairer /Gubinelli-Imkeller-Perkowski Φ 4 3 I The 3D dynamic Φ 4 -model driven by space-time white noise Let us study the following real-valued stochastic PDE on (0, ) T 3, where ξ is the space-time white

More information

Density of parabolic Anderson random variable

Density of parabolic Anderson random variable Density of parabolic Anderson random variable Yaozhong Hu The University of Kansas jointly with Khoa Le 12th Workshop on Markov Processes and Related Topics (JSNU and BNU, July 13-17, 2016) Outline 1.

More information

Interest Rate Models:

Interest Rate Models: 1/17 Interest Rate Models: from Parametric Statistics to Infinite Dimensional Stochastic Analysis René Carmona Bendheim Center for Finance ORFE & PACM, Princeton University email: rcarmna@princeton.edu

More information

On the large-scale structure of the tall peaks for stochastic heat equations with fractional Laplacian

On the large-scale structure of the tall peaks for stochastic heat equations with fractional Laplacian On the large-scale structure of the tall peaks for stochastic heat equations with fractional Laplacian Kunwoo Kim October 19, 215 Abstract Consider stochastic heat equations with fractional Laplacian on

More information

WHITE NOISE APPROACH TO THE ITÔ FORMULA FOR THE STOCHASTIC HEAT EQUATION

WHITE NOISE APPROACH TO THE ITÔ FORMULA FOR THE STOCHASTIC HEAT EQUATION Communications on Stochastic Analysis Vol. 1, No. 2 (27) 311-32 WHITE NOISE APPROACH TO THE ITÔ FORMULA FOR THE STOCHASTIC HEAT EQUATION ALBERTO LANCONELLI Abstract. We derive an Itô s-type formula for

More information

ON NON-EXISTENCE OF GLOBAL SOLUTIONS TO A CLASS OF STOCHASTIC HEAT EQUATIONS

ON NON-EXISTENCE OF GLOBAL SOLUTIONS TO A CLASS OF STOCHASTIC HEAT EQUATIONS POCEEDINGS OF THE AMEICAN MATHEMATICAL SOCIETY http://dx.doi.org/1.19/proc/1236 Article electronically published on April 6, 215 ON NON-EXISTENCE OF GLOBAL SOLUTIONS TO A CLASS OF STOCHASTIC HEAT EQUATIONS

More information

A determinantal formula for the GOE Tracy-Widom distribution

A determinantal formula for the GOE Tracy-Widom distribution A determinantal formula for the GOE Tracy-Widom distribution Patrik L. Ferrari and Herbert Spohn Technische Universität München Zentrum Mathematik and Physik Department e-mails: ferrari@ma.tum.de, spohn@ma.tum.de

More information

Lecture 4 Lebesgue spaces and inequalities

Lecture 4 Lebesgue spaces and inequalities Lecture 4: Lebesgue spaces and inequalities 1 of 10 Course: Theory of Probability I Term: Fall 2013 Instructor: Gordan Zitkovic Lecture 4 Lebesgue spaces and inequalities Lebesgue spaces We have seen how

More information

Harmonic Functions and Brownian Motion in Several Dimensions

Harmonic Functions and Brownian Motion in Several Dimensions Harmonic Functions and Brownian Motion in Several Dimensions Steven P. Lalley October 11, 2016 1 d -Dimensional Brownian Motion Definition 1. A standard d dimensional Brownian motion is an R d valued continuous-time

More information

Optimal series representations of continuous Gaussian random fields

Optimal series representations of continuous Gaussian random fields Optimal series representations of continuous Gaussian random fields Antoine AYACHE Université Lille 1 - Laboratoire Paul Painlevé A. Ayache (Lille 1) Optimality of continuous Gaussian series 04/25/2012

More information

Nonlinear stabilization via a linear observability

Nonlinear stabilization via a linear observability via a linear observability Kaïs Ammari Department of Mathematics University of Monastir Joint work with Fathia Alabau-Boussouira Collocated feedback stabilization Outline 1 Introduction and main result

More information

On comparison principle and strict positivity of solutions to the nonlinear stochastic fractional heat equations

On comparison principle and strict positivity of solutions to the nonlinear stochastic fractional heat equations On comparison principle and strict positivity of solutions to the nonlinear stochastic fractional heat equations Le Chen and Kunwoo Kim University of Utah Abstract: In this paper, we prove a sample-path

More information

Large time behavior of reaction-diffusion equations with Bessel generators

Large time behavior of reaction-diffusion equations with Bessel generators Large time behavior of reaction-diffusion equations with Bessel generators José Alfredo López-Mimbela Nicolas Privault Abstract We investigate explosion in finite time of one-dimensional semilinear equations

More information

Malliavin differentiability of solutions of SPDEs with Lévy white noise

Malliavin differentiability of solutions of SPDEs with Lévy white noise Malliavin differentiability of solutions of SPDEs with Lévy white noise aluca M. Balan Cheikh B. Ndongo February 2, 217 Abstract In this article, we consider a stochastic partial differential equation

More information

arxiv: v1 [math.ap] 17 May 2017

arxiv: v1 [math.ap] 17 May 2017 ON A HOMOGENIZATION PROBLEM arxiv:1705.06174v1 [math.ap] 17 May 2017 J. BOURGAIN Abstract. We study a homogenization question for a stochastic divergence type operator. 1. Introduction and Statement Let

More information

Linear SPDEs driven by stationary random distributions

Linear SPDEs driven by stationary random distributions Linear SPDEs driven by stationary random distributions aluca M. Balan June 5, 22 Abstract Using tools from the theory of stationary random distributions developed in [9] and [35], we introduce a new class

More information

1. Stochastic Processes and filtrations

1. Stochastic Processes and filtrations 1. Stochastic Processes and 1. Stoch. pr., A stochastic process (X t ) t T is a collection of random variables on (Ω, F) with values in a measurable space (S, S), i.e., for all t, In our case X t : Ω S

More information

The Codimension of the Zeros of a Stable Process in Random Scenery

The Codimension of the Zeros of a Stable Process in Random Scenery The Codimension of the Zeros of a Stable Process in Random Scenery Davar Khoshnevisan The University of Utah, Department of Mathematics Salt Lake City, UT 84105 0090, U.S.A. davar@math.utah.edu http://www.math.utah.edu/~davar

More information

Asymptotic Properties of an Approximate Maximum Likelihood Estimator for Stochastic PDEs

Asymptotic Properties of an Approximate Maximum Likelihood Estimator for Stochastic PDEs Asymptotic Properties of an Approximate Maximum Likelihood Estimator for Stochastic PDEs M. Huebner S. Lototsky B.L. Rozovskii In: Yu. M. Kabanov, B. L. Rozovskii, and A. N. Shiryaev editors, Statistics

More information

Laplace s Equation. Chapter Mean Value Formulas

Laplace s Equation. Chapter Mean Value Formulas Chapter 1 Laplace s Equation Let be an open set in R n. A function u C 2 () is called harmonic in if it satisfies Laplace s equation n (1.1) u := D ii u = 0 in. i=1 A function u C 2 () is called subharmonic

More information

Some Topics in Stochastic Partial Differential Equations

Some Topics in Stochastic Partial Differential Equations Some Topics in Stochastic Partial Differential Equations November 26, 2015 L Héritage de Kiyosi Itô en perspective Franco-Japonaise, Ambassade de France au Japon Plan of talk 1 Itô s SPDE 2 TDGL equation

More information

Malliavin Calculus in Finance

Malliavin Calculus in Finance Malliavin Calculus in Finance Peter K. Friz 1 Greeks and the logarithmic derivative trick Model an underlying assent by a Markov process with values in R m with dynamics described by the SDE dx t = b(x

More information

1 Brownian Local Time

1 Brownian Local Time 1 Brownian Local Time We first begin by defining the space and variables for Brownian local time. Let W t be a standard 1-D Wiener process. We know that for the set, {t : W t = } P (µ{t : W t = } = ) =

More information

A Note on the Central Limit Theorem for a Class of Linear Systems 1

A Note on the Central Limit Theorem for a Class of Linear Systems 1 A Note on the Central Limit Theorem for a Class of Linear Systems 1 Contents Yukio Nagahata Department of Mathematics, Graduate School of Engineering Science Osaka University, Toyonaka 560-8531, Japan.

More information

2. Function spaces and approximation

2. Function spaces and approximation 2.1 2. Function spaces and approximation 2.1. The space of test functions. Notation and prerequisites are collected in Appendix A. Let Ω be an open subset of R n. The space C0 (Ω), consisting of the C

More information

Homogenization for chaotic dynamical systems

Homogenization for chaotic dynamical systems Homogenization for chaotic dynamical systems David Kelly Ian Melbourne Department of Mathematics / Renci UNC Chapel Hill Mathematics Institute University of Warwick November 3, 2013 Duke/UNC Probability

More information

Krzysztof Burdzy University of Washington. = X(Y (t)), t 0}

Krzysztof Burdzy University of Washington. = X(Y (t)), t 0} VARIATION OF ITERATED BROWNIAN MOTION Krzysztof Burdzy University of Washington 1. Introduction and main results. Suppose that X 1, X 2 and Y are independent standard Brownian motions starting from 0 and

More information

Uniformly Uniformly-ergodic Markov chains and BSDEs

Uniformly Uniformly-ergodic Markov chains and BSDEs Uniformly Uniformly-ergodic Markov chains and BSDEs Samuel N. Cohen Mathematical Institute, University of Oxford (Based on joint work with Ying Hu, Robert Elliott, Lukas Szpruch) Centre Henri Lebesgue,

More information

A LOCAL-TIME CORRESPONDENCE FOR STOCHASTIC PARTIAL DIFFERENTIAL EQUATIONS

A LOCAL-TIME CORRESPONDENCE FOR STOCHASTIC PARTIAL DIFFERENTIAL EQUATIONS A LOCAL-TIME CORRESPONDENCE FOR STOCHASTIC PARTIAL DIFFERENTIAL EQUATIONS MOHAMMUD FOONDUN, DAVAR KHOSHNEVISAN, AND EULALIA NUALART Abstract. It is frequently the case that a white-noise-driven parabolic

More information

Random Fields: Skorohod integral and Malliavin derivative

Random Fields: Skorohod integral and Malliavin derivative Dept. of Math. University of Oslo Pure Mathematics No. 36 ISSN 0806 2439 November 2004 Random Fields: Skorohod integral and Malliavin derivative Giulia Di Nunno 1 Oslo, 15th November 2004. Abstract We

More information

Harnack Inequalities and Applications for Stochastic Equations

Harnack Inequalities and Applications for Stochastic Equations p. 1/32 Harnack Inequalities and Applications for Stochastic Equations PhD Thesis Defense Shun-Xiang Ouyang Under the Supervision of Prof. Michael Röckner & Prof. Feng-Yu Wang March 6, 29 p. 2/32 Outline

More information

Controlled Diffusions and Hamilton-Jacobi Bellman Equations

Controlled Diffusions and Hamilton-Jacobi Bellman Equations Controlled Diffusions and Hamilton-Jacobi Bellman Equations Emo Todorov Applied Mathematics and Computer Science & Engineering University of Washington Winter 2014 Emo Todorov (UW) AMATH/CSE 579, Winter

More information

Weak convergence and large deviation theory

Weak convergence and large deviation theory First Prev Next Go To Go Back Full Screen Close Quit 1 Weak convergence and large deviation theory Large deviation principle Convergence in distribution The Bryc-Varadhan theorem Tightness and Prohorov

More information

STOCHASTIC DIFFERENTIAL EQUATIONS DRIVEN BY GENERALIZED POSITIVE NOISE. Michael Oberguggenberger and Danijela Rajter-Ćirić

STOCHASTIC DIFFERENTIAL EQUATIONS DRIVEN BY GENERALIZED POSITIVE NOISE. Michael Oberguggenberger and Danijela Rajter-Ćirić PUBLICATIONS DE L INSTITUT MATHÉMATIQUE Nouvelle série, tome 7791 25, 7 19 STOCHASTIC DIFFERENTIAL EQUATIONS DRIVEN BY GENERALIZED POSITIVE NOISE Michael Oberguggenberger and Danijela Rajter-Ćirić Communicated

More information

Kolmogorov Equations and Markov Processes

Kolmogorov Equations and Markov Processes Kolmogorov Equations and Markov Processes May 3, 013 1 Transition measures and functions Consider a stochastic process {X(t)} t 0 whose state space is a product of intervals contained in R n. We define

More information

ON THE FRACTIONAL CAUCHY PROBLEM ASSOCIATED WITH A FELLER SEMIGROUP

ON THE FRACTIONAL CAUCHY PROBLEM ASSOCIATED WITH A FELLER SEMIGROUP Dedicated to Professor Gheorghe Bucur on the occasion of his 7th birthday ON THE FRACTIONAL CAUCHY PROBLEM ASSOCIATED WITH A FELLER SEMIGROUP EMIL POPESCU Starting from the usual Cauchy problem, we give

More information

Partial Differential Equations

Partial Differential Equations Part II Partial Differential Equations Year 2015 2014 2013 2012 2011 2010 2009 2008 2007 2006 2005 2015 Paper 4, Section II 29E Partial Differential Equations 72 (a) Show that the Cauchy problem for u(x,

More information

Gaussian vectors and central limit theorem

Gaussian vectors and central limit theorem Gaussian vectors and central limit theorem Samy Tindel Purdue University Probability Theory 2 - MA 539 Samy T. Gaussian vectors & CLT Probability Theory 1 / 86 Outline 1 Real Gaussian random variables

More information

MATH 819 FALL We considered solutions of this equation on the domain Ū, where

MATH 819 FALL We considered solutions of this equation on the domain Ū, where MATH 89 FALL. The D linear wave equation weak solutions We have considered the initial value problem for the wave equation in one space dimension: (a) (b) (c) u tt u xx = f(x, t) u(x, ) = g(x), u t (x,

More information

Stein s method and weak convergence on Wiener space

Stein s method and weak convergence on Wiener space Stein s method and weak convergence on Wiener space Giovanni PECCATI (LSTA Paris VI) January 14, 2008 Main subject: two joint papers with I. Nourdin (Paris VI) Stein s method on Wiener chaos (ArXiv, December

More information

Modelling energy forward prices Representation of ambit fields

Modelling energy forward prices Representation of ambit fields Modelling energy forward prices Representation of ambit fields Fred Espen Benth Department of Mathematics University of Oslo, Norway In collaboration with Heidar Eyjolfsson, Barbara Rüdiger and Andre Süss

More information

UNCERTAINTY FUNCTIONAL DIFFERENTIAL EQUATIONS FOR FINANCE

UNCERTAINTY FUNCTIONAL DIFFERENTIAL EQUATIONS FOR FINANCE Surveys in Mathematics and its Applications ISSN 1842-6298 (electronic), 1843-7265 (print) Volume 5 (2010), 275 284 UNCERTAINTY FUNCTIONAL DIFFERENTIAL EQUATIONS FOR FINANCE Iuliana Carmen Bărbăcioru Abstract.

More information

STATISTICS 385: STOCHASTIC CALCULUS HOMEWORK ASSIGNMENT 4 DUE NOVEMBER 23, = (2n 1)(2n 3) 3 1.

STATISTICS 385: STOCHASTIC CALCULUS HOMEWORK ASSIGNMENT 4 DUE NOVEMBER 23, = (2n 1)(2n 3) 3 1. STATISTICS 385: STOCHASTIC CALCULUS HOMEWORK ASSIGNMENT 4 DUE NOVEMBER 23, 26 Problem Normal Moments (A) Use the Itô formula and Brownian scaling to check that the even moments of the normal distribution

More information

Econ 508B: Lecture 5

Econ 508B: Lecture 5 Econ 508B: Lecture 5 Expectation, MGF and CGF Hongyi Liu Washington University in St. Louis July 31, 2017 Hongyi Liu (Washington University in St. Louis) Math Camp 2017 Stats July 31, 2017 1 / 23 Outline

More information

GAUSSIAN PROCESSES AND THE LOCAL TIMES OF SYMMETRIC LÉVY PROCESSES

GAUSSIAN PROCESSES AND THE LOCAL TIMES OF SYMMETRIC LÉVY PROCESSES GAUSSIAN PROCESSES AND THE LOCAL TIMES OF SYMMETRIC LÉVY PROCESSES MICHAEL B. MARCUS AND JAY ROSEN CITY COLLEGE AND COLLEGE OF STATEN ISLAND Abstract. We give a relatively simple proof of the necessary

More information

SCHWARTZ SPACES ASSOCIATED WITH SOME NON-DIFFERENTIAL CONVOLUTION OPERATORS ON HOMOGENEOUS GROUPS

SCHWARTZ SPACES ASSOCIATED WITH SOME NON-DIFFERENTIAL CONVOLUTION OPERATORS ON HOMOGENEOUS GROUPS C O L L O Q U I U M M A T H E M A T I C U M VOL. LXIII 1992 FASC. 2 SCHWARTZ SPACES ASSOCIATED WITH SOME NON-DIFFERENTIAL CONVOLUTION OPERATORS ON HOMOGENEOUS GROUPS BY JACEK D Z I U B A Ń S K I (WROC

More information

Folland: Real Analysis, Chapter 8 Sébastien Picard

Folland: Real Analysis, Chapter 8 Sébastien Picard Folland: Real Analysis, Chapter 8 Sébastien Picard Problem 8.3 Let η(t) = e /t for t >, η(t) = for t. a. For k N and t >, η (k) (t) = P k (/t)e /t where P k is a polynomial of degree 2k. b. η (k) () exists

More information

1.5 Approximate Identities

1.5 Approximate Identities 38 1 The Fourier Transform on L 1 (R) which are dense subspaces of L p (R). On these domains, P : D P L p (R) and M : D M L p (R). Show, however, that P and M are unbounded even when restricted to these

More information