Linear SPDEs driven by stationary random distributions

Size: px
Start display at page:

Download "Linear SPDEs driven by stationary random distributions"

Transcription

1 Linear SPDEs driven by stationary random distributions aluca M. Balan June 5, 22 Abstract Using tools from the theory of stationary random distributions developed in [9] and [35], we introduce a new class of processes which can be used as a model for the noise perturbing an SPDE. This type of noise is not necessarily Gaussian, but it includes the spatially homogeneous Gaussian noise introduced in [9], and the fractional noise considered in [3]. We derive some general conditions for the existence of a random field solution of a linear SPDE with this type of noise, under some mild conditions imposed on the Green function of the differential operator which appears in this equation. This methodology is applied to the study of the heat and wave equations (possibly replacing the Laplacian by one of its fractional powers), extending in this manner the results of [3] to the case H < /2. MSC 2 subject classification: Primary 6H5; secondary 6H5, 6G6 Keywords and phrases: stationary random distributions, stochastic partial differential equations, stochastic integral, processes with stationary increments, fractional Brownian motion Introduction The theory of random processes, or random fields, with stationary increments was developed in the 95 s, the landmark references being [4] and [35]. These processes have many interesting properties, most of which can be derived from their spectral representation as an integral with respect to a complex random measure. In particular, various classes of self-similar processes with stationary increments, also called H-sssi processes (H being the index of self-similarity), received special attention. These include the Hermite processes, which arise as limits in the non-central limit theorems (see [3]) and the generalized grey Department of Mathematics and Statistics, University of Ottawa, 585 King Edward Avenue, Ottawa, ON, KN 6N5, Canada. address: rbalan@uottawa.ca esearch supported by a grant from the Natural Sciences and Engineering esearch Council of Canada.

2 Brownian motions (see [3]). The concept of operator scaling random field (OSF) has been recently introduced as a replacement for self-similarity in the case of anisotropic fields. An explicit form of the spectral density of a Gaussian OSF with stationary increments was obtained in [8]. An extension to the multivariate case was studied in [26]. The fractional Brownian motion (fbm) is the most commonly known process with stationary increments. There are two generalizations of the fbm in higher dimensions: the fractional Brownian field (fbf), and the fractional Brownian sheet (fbs). The fbf has stationary increments, being the only isotropic Gaussian H-sssi process (see [27], [7]). The fbs was introduced in [22], and was studied by many authors (see [25], [4]). It does not have stationary increments, but can be studied using spectral analysis due to its representation as an integral with respect to a complex Gaussian measure. In the recent years, the fbs has been used increasingly as a model for the noise in stochastic analysis (see [7], [8]). The goal of this article is to introduce the tools necessary for the study of stochastic partial differential equations (SPDEs) driven by a noise which has a harmonic structure similar to that of a random field with stationary increments, in both space and time. For this, an important step is to define a stochastic integral with respect to the noise. In dimension d =, the definition of a stochastic (or Wiener) integral with respect to a process (X t ) t with stationary increments is usually taken for granted in the literature. In the case of the fbm, this problem was studied thoroughly in [3] and [32], but these references did not explore the fact that the space of Wiener integrands may contain distributions. Implicitly used by several authors, this fact has been recently proved in [2] (in the case of the fbm) and [2] (in the case of an arbitrary process with stationary increments). The fact that the space of Wiener integrands may contain distributions becomes important when one tries to extend these results to higher dimensions, and use the space-time process (X t,x ) t,x d as a model for the noise perturbing an SPDE, whose corresponding Green function could be a distribution in x. It is known that any process (X t ) t d which has stationary increments and satisfies the condition X = admits the spectral representation X t = (e iτ t ) M(dτ), () d where τ t = d i= τ it i and M is a complex random measure with orthogonal increments. When d =, the Fourier transform of the indicator function [,t] is F [,t] (τ) = (e iτt )/( iτ) and hence X t = F d [,t] (τ)m(dτ), with M(dτ) = iτ M(dτ). Therefore, a natural space of Wiener integrands with respect to (X t ) t is the Hilbert space H defined as the completion of the set S of linear combinations of indicator functions with respect to the inner product [,t], [,s] H = F [,t] (τ)f [,s] (τ)µ(dτ), where E M(A) 2 = µ(a). The map [,t] X t is an isometry from H to L 2 (Ω). When d = 2, F [,t] (τ)( iτ )( iτ 2 ) = (e iτt )(e iτ2t2 ) = 2

3 (e iτ t ) (e iτt ) (e iτ2t2 ), and hence, using (), we see that X [,t] := X t,t 2 X t, X,t2 = F [,t] (τ)m(dτ), d with M(dτ) = ( iτ )( iτ 2 ) M(dτ). This shows that in dimensions d 2, one has the same definition of the space H of Wiener integrands, except that the isometry is the map [,t] X [,t]. If the space H is large enough, then the process (X t ) t d could be used for modeling the noise perturbing an SPDE in d. This is the case for instance, if H contains the class U of Schwartz distributions on d whose Fourier transforms belong to L 2 C (d, µ). When d =, there are approximation results which guarantee that H contains U (see e.g. Theorem 3.2 of [2]). Such approximation results may be generalized to higher dimensions d, but we do not explore this problem here. In the present article we assume that the space-time noise is given by a collection {X(ϕ); ϕ D C ( d+ )} of random variables defined by: X(ϕ) = FϕdM, ϕ D C ( d+ ) (2) d+ where Fϕ denotes the Fourier transform of ϕ, M is a complex random measure on d+ with orthogonal increments, and D C ( d+ ) is the class of complexvalued infinitely differentiable functions on d+ with compact support. We say that {X(ϕ); ϕ D C ( d+ )} is a stationary random distribution, due to Itô s work [9], who was the first to consider random variables of this form, indexed by functions ϕ D C (). Itô s motivation for this terminology was due to the fact that such an object can be viewed as a generalized stationary process (see Section 3 below). We note in passing that the noise (2) is not related to the concept of harmonizable process introduced by Loève in [28], and developed further in [6] and [33], as another extension of the notion of stationary process. The noise (2) gives rise to a random field {X(t, x)} (t,x) d+ which has a similar representation to the spectral representation of a process with stationary increments in d+ (see relation (25) below). The covariance of the noise is given by E[X(ϕ)X(ψ)] = Fϕ(τ, ξ)fψ(τ, ξ)π(dτ, dξ), d+ where Π is the control measure of M and the integration variables are τ and ξ d. This definition produces a rich enough space H, and induces a structure which is amenable to spectral analysis, being similar to the structure of a random field with stationary increments on d+. This framework includes the Gaussian noise considered by other references (e.g. [], [2], [3], [9], [], [2]), without being restricted to the Gaussian case. In particular, it provides a unifying framework for the study of equations with fractional noise in time, which covers simultaneously the cases H > /2 and H < /2. An interesting case arises when Π = ν µ, where µ is a tempered measure on d, and ν(dτ) = τ 2 ν(dτ) is the spectral measure of a process (Z t ) t with 3

4 stationary increments and covariance function. In this case, the noise can be identified with a collection {X t (ϕ); t >, ϕ D C ( d )} of random variables which admit the representation mentioned above and have covariance: E[X t (ϕ)x s (ψ)] = (t, s)j(ϕ, ψ), where J(ϕ, ψ) = Fϕ(ξ)Fψ(ξ)µ(dξ). d In the Gaussian case, the process {X t (ϕ); t, ϕ D( d )} was considered as a model for the noise driving an SPDE by many references (see [9], [], [], [2], [5] for a sample of references in the case when Z is a Brownian motion, and [2], [3], [] for the case when Z is a fbm of index H > /2). These references considered only the case when the Fourier transform of µ in S ( d ) is a locally integrable function f (or a measure Γ), and hence J can be represented as: J(ϕ, ψ) = ϕ(x)ψ(y)f(x y)dxdy, ϕ, ψ S( d ). (3) d d In general, the Fourier transform of µ in S ( d ) may not be a locally integrable function (or a measure). This is the case for instance, when d = and µ(dξ) = ξ α dξ with α (, ). In the present article, we do not need representation (3). Moreover, we do not assume that the noise X is Gaussian. This article is organized as follows. In Section 2 we review some background material related to random fields with stationary increments. In Section 3, we introduce the precise definition of the noise {X(ϕ); ϕ D C ( d+ )}, and we identify several classes of deterministic integrands with respect to this noise. In Section 4, we define the random-field solution of a linear SPDE Lu = Ẋ driven by the noise X, and we show that, under some mild regularity conditions imposed on the fundamental solution G of the equation Lu =, the necessary and sufficient condition for the existence of this solution is: t 2 e iτs FG(s, )(ξ)ds Π(dτ, dξ) <, (4) d+ where FG(s, ) denotes the Fourier transform of the distribution G(s, ) S ( d ). Section 5 and Section 6 are dedicated, respectively, to the parabolic case L = 2 t L, and the hyperbolic case L = t L, where L = ( ) β/2 for 2 some β >, assuming that Π = ν µ (with measures ν and µ as above). In the parabolic case, we show that condition (4) is equivalent to: τ 2 + Ψ(ξ) 2 ν(dτ)µ(dξ) <, (5) + d whereas in the hyperbolic case, condition (4) is equivalent to: Ψ(ξ) + τ 2 ν(dτ)µ(dξ) <, (6) + Ψ(ξ) + d 4

5 with Ψ(ξ) = ξ β in both cases. When (Z t ) t is the fbm of index H (, ), conditions (5) and (6) become d ( ) 2H µ(dξ) <, respectively + Ψ(ξ) d ( ) H+/2 µ(dξ) <, + Ψ(ξ) extending the results of [3] to the case H < /2. The appendix collects some auxiliary results, which are invoked in the article. In Appendix A, we review the construction of the stochastic integral with respect to a complex random measure and list some of its properties. Appendix B contains some elementary estimates. Appendix C shows that some technical conditions (which are imposed for treating the hyperbolic equation) are verified in the case of two particular examples of measures ν. We conclude the introduction with a few words about the notation. We let D C ( d ) be the set of complex-valued infinitely differentiable functions on d with compact support. For any p >, we denote by L p C (d, µ) the space of complex-valued functions ϕ on d such that ϕ p is integrable with respect to the measure µ. When µ is the Lebesgue measure, we simply write L p C (d ). We denote by L p C (Ω) the set of complex-valued random variables X, defined on a probability space (Ω, F, P ), such that E X p <. We let S C ( d ) be the set of complex-valued rapidly decreasing infinitely differentiable functions on d. We denote by D C (d ) and S C (d ) the space of all complex-valued linear functionals defined on D C ( d ), respectively S C ( d ). Similar notations are used for spaces of real-valued elements, with the subscript C omitted. We denote by Fϕ(ξ) = e iξ x ϕ(x)dx the Fourier transform of a function d ϕ L ( d ). (We use the same notation F for the Fourier transform of functions on, d or d+. Whenever there is a risk of confusion, the notation will be clearly specified.) We denote by x y = d i= x iy i the inner product in d and by x = (x x) /2 the Euclidean norm in d. A bounded rectangle in d is a set of the form (x, y] = {z d ; x i < z i y i for all i =,..., d}. We denote by d the class of bounded rectangles in d. We let B( d ) be the class of all Borel sets in d and B b ( d ) be the class of all bounded Borel sets in d. 2 Background In this section, we review the basic elements of the theory of random processes and random fields with stationary increments. We refer the reader to Sections 23 and 25.2 of [36] for more details. Definition 2. A real-valued random field X = (X t ) t d, defined on a probability space (Ω, F, P ), has (wide-sense) stationary increments, if E(X 2 t ) < for any t d, and the increments of the process have homogenous first and second order moments, i.e. E(X t+s X s ) = m(t) for any t, s d, and E X t+s X s 2 = V (t) for any t, s d. (7) 5

6 In particular, a stationary random field has stationary increments. In the present article, we work only with zero-mean processes, i.e. we assume that m(t) = for all t d. From (7), it follows that for any s, t, u d, E[(X u+t X u )(X u+s X u )] = [V (t) + V (s) V (t s)] =: (t, s). 2 (To see this, write (X u+t X u )(X u+s X u ) = 2 ( X u+t X u 2 + X u+s X u 2 X u+t X u+s 2 ).) On the other hand, the covariance between arbitrary increments X u+t X u and X v+s X v depends on t, s and u v: E[(X u+t X u )(X v+s X v )] = (t, s + v u) (t, v u). This follows by writing X v+s X v = (X v+s X u ) (X v X u ). The function is called the covariance function of X and plays an important role in the analysis of the process X, via the spectral theory. More precisely, if is continuous, then the process X and its covariance function admit the following spectral representations: (see (4.35) and (4.345) of [36]) X t = X + t Y + (e iτ t ) M(dτ) (8) d \{} (t, s) = Σt s + (e iτ t )(e iτ s ) µ(dτ). d \{} The elements which appear in these representations are the following: M denotes a collection { M(A); A d } of zero-mean random variables in L 2 C (Ω), indexed by the class d of rectangles in d which do not include, which satisfy the following conditions: (i) (additivity) M(A B) = M(A)+ M(B) a.s. for disjoint sets A, B d ; (ii) (orthogonality) E[ M(A) M(B)] = for any disjoint sets A, B d ; (iii) (control measure) E M(A) 2 = µ(a) for any A d ; (iv) (symmetry) M( A) = M(A) a.s. for any A d. µ is a symmetric measure on d \{} which satisfies the condition: d \{} τ 2 µ(dτ) <, (9) + τ 2 or equivalently, µ is a Lévy measure, i.e. d \{} ( τ 2 ) µ(dτ) <. µ is called the spectral measure of X and the control measure of M. Y is a real-valued centered d-dimensional random vector with covariance matrix Σ, which is orthogonal to M(A) for all A d, i.e. E[Y M(A)] = for all A d. We may define M({}) = Y and µ({}) = E Y 2. 6

7 Condition (9) ensures that the function ϕ(τ) = e iτ t is in L 2 C (d, µ), and hence the integral of (8) is well-defined. In general, one can consider an object M as above, except that its indexing class consists of all rectangles (which may include ). More precisely, we have the following definition. Definition 2.2 Let µ be a measure on d. A collection M = { M(A); A d } of zero-mean square-integrable C-valued random variables which satisfy conditions (i)-(iii) above for any sets A, B d is called a complex random measure indexed by rectangles, with control measure µ. The stochastic integral with respect to M is defined as the mean-square limit of a sequence of iemann-stieltjes integrals. More precisely, for any continuous function ϕ L 2 C (d, µ), we define the following C-valued random variable: d ϕ(τ) M(dτ) = lim { lim k n n max j A j j= ϕ(τ j ) M(A j )} in L 2 C(Ω), where (A j ) j kn d is a partition of [ N n, N n ] with N n = (n,..., n) d, A j is the length of the longest edge of A j, and τ j A j is arbitrary. For the sake of completeness, the details of this construction are given in Appendix A. If M is symmetric, then µ is symmetric and d ϕd M is real-valued for any continuous function ϕ L 2 C (d, µ) such that ϕ(τ) = ϕ( τ) for all τ d. We now examine the case d =. It is useful to consider this particular case separately since the noise perturbing the equation will have the temporal structure of a process (Z t ) t as described below. Let Z = (Z t ) t be a real-valued centered random process with stationary increments and spectral representation (8) with Y = and spectral measure ν. We assume that Z =. Let ν(dτ) = τ 2 ν(dτ). Note that ν is a symmetric measure on which satisfies the condition: K := ν(dτ) <. () + τ 2 (The measure ν is sometimes also called the spectral measure of Z.) Since F [,t] (τ) = t e iτs ds = (e iτt )/( iτ), it follows that: (t, s) = E(Z t Z s ) = F [,t] (τ)f [,s] (τ)ν(dτ). () Note that relation () does not hold in dimensions d 2. Let S be the set of elementary functions on, i.e. real linear combinations of indicator functions [,t], t. We endow S with the inner product: [,t], [,s] Λ = (t, s). Let Λ be the completion of S with respect to, Λ. The map [,t] Z t is an isometry between S and L 2 (Ω), which is extended to Λ. This extension is 7

8 denoted by ϕ Z(ϕ) = ϕ(t)dz t. The space Λ contains distributions. More precisely, if ϕ S () is such that Fϕ is a function and Fϕ(τ) 2 ν(dτ) <, then ϕ Λ (see Theorem 3.2 of [2]). emark 2.3 By Theorem 2.3 of [2] and Proposition 2.5 of [2], Λ coincides with the completion of D() with respect to the inner product: ϕ, ψ Λ = Fϕ(τ)Fψ(τ)ν(dτ). In general, the Fourier transform in S () of the tempered measure ν is a distribution (denoted by Fν), which coincides with (/2)V, V being the second order distributional derivative of the continuous variance function V (see Section 2.2 of [2]). In some cases, Fν is a locally integrable function ρ. By the Fourier inversion theorem on S(), this is equivalent to saying that ν is the Fourier transform of (2π) ρ in S (). In this case, (see Lemma 5.6 of [24]) (t, s) = t s ρ(u v)dudv. (2) elation (2) is useful in applications, but is not needed in the present article. When ν is finite, ρ(t) = e iτt ν(dτ), and a process (Z t ) t with stationary increments and covariance given by (2) can be represented as Z t = t Y sds, where (Y t ) t is a zero-mean stationary process with E(Y Y t ) = ρ(t). Example 2.4 Let ν(dτ) = τ γ dτ for some γ (, ). Then ν is the Fourier transform of a locally integrable function if and only if γ (, ) (see Lemma 3. of [2]). This function is the iesz kernel ρ(t) = c γ t ( γ) (see Lemma, Chapter V of [34]). In the Gaussian case, the parametrization γ = 2H with H (, ) corresponds to the case when Z is a fbm of index H. Example 2.5 Let ν(dτ) = ( + τ 2 ) γ/2 dτ for some γ >. When γ >, ν is the Fourier transform of the Bessel kernel (see Proposition 2, Chapter V of [34]) ρ(t) = c γ w (γ )/2 e w e t2 /(4w) dw. The measure ν is finite if and only if γ >. If γ = 2, then ρ(t) = c exp( c 2 t ) for some constants c, c 2 > (see Exercise 2.3.4, Chapter 8 of [23]), and a process (Z t ) t with stationary increments and spectral measure ν(dτ) = τ 2 ( + τ 2 ) dτ can be represented as Z t = t Y sds, where (Y t ) t is a zeromean stationary process with E(Y Y t ) = ρ(t). In the Gaussian case, (Y t ) t is a zero-mean Ornstein-Uhlenbeck process. 3 Stationary andom Distributions In this section, we describe the noise perturbing the equation using the concept of stationary random distribution. We begin with several definitions. 8

9 Following [9] (see also Chapter III of [6]), we introduce the concept of random distribution (also called generalized random field ). Definition 3. A collection {X(ϕ); ϕ D C ( d )} of zero-mean square-integrable C-valued random variables is called a random distribution if the map ϕ X(ϕ) is linear and continuous from D C ( d ) to L 2 C (Ω), i.e. (i) for any a, b C and ϕ, ψ D C ( d ), E X(aϕ + bψ) ax(ϕ) bx(ψ) 2 = ; (ii) if ϕ n ϕ in D C ( d ), then E X(ϕ n ) X(ϕ) 2. A random distribution {X(ϕ); ϕ D C ( d )} is called real if X(ϕ) = X(ϕ) for all ϕ D C ( d ). For a real random distribution, X(ϕ) is real-valued for any ϕ D( d ). The following definition extends to higher dimensions a concept introduced by Itô in [9] in the case d =. In higher dimensions, this object was studied by Yaglom in [35], who called it a homogeneous generalized random field. Definition 3.2 A random distribution {X(ϕ); ϕ D C ( d )} is called (weakly) stationary (or simply stationary) if for any ϕ, ψ D C ( d ) and for any h d, E[X(τ h ϕ)x(τ h ψ)] = E[X(ϕ)X(ψ)], where (τ h ϕ)(t) = ϕ(t + h) for all t d. emark 3.3 Any zero-mean square-integrable C-valued process {X(t)} t d which is L 2 C (Ω)-continuous can be identified with a random distribution defined by: X(ϕ) = X(t)ϕ(t)dt, ϕ D C ( d ). d If the process {X(t)} t d is (weakly) stationary (i.e. E[X(t + h)x(s + h)] = E[X(t)X(s)] for any t, s, h d ), then {X(ϕ); ϕ D C ( d )} is stationary. If ρ(t) = E[X(t)X()], t d is the covariance function of {X(t)} t d, then E[X(ϕ)X(ψ)] = ρ(t)(ϕ ψ)(t)dt = ρ(ϕ ψ), d where ψ(x) = ψ( x), and ρ denotes also the distribution induced by ρ. By Theorem of [35] (or Theorem 2. of [9] when d = ), for any stationary random distribution {X(ϕ); ϕ D C ( d )}, there exists a unique distribution ρ D C (d ) such that: E[X(ϕ)X(ψ)] = ρ(ϕ ψ), for all ϕ, ψ D C ( d ). We say that ρ is the covariance distribution of {X(ϕ); ϕ D C ( d )}. 9

10 Note that ρ is non-negative definite, since ρ(ϕ ϕ) = E X(ϕ) 2 for all ϕ D C ( d ). By the Bochner-Schwartz theorem, there exists a tempered measure µ on d such that ρ = Fµ in S C (d ), i.e. ρ(ϕ) = Fϕdµ, for all ϕ S C ( d ). d We say that µ is the spectral measure of {X(ϕ); ϕ D C ( d )}. ecall that the measure µ is tempered if d ( ) k + τ 2 µ(dτ) < for some k >. (3) If the random distribution {X(ϕ); ϕ D C ( d+ )} is real, then µ is symmetric, i.e. µ( A) = µ(a) for any A B( d ). emark 3.4 If {X(t)} t d is a stationary zero-mean C-valued process with covariance function ρ, then there exists a finite measure µ (called the spectral measure of {X(t)} t d), such that ρ(t) = e iτ t µ(dτ) for all t d. The d process {X(t)} t d admits the spectral representation: (see Section 2 of [36]) X(t) = e iτ t M(dτ), d where M = {M(A); A d } is a complex random measure indexed by rectangles, with control measure µ, as given by Definition 2.2. Therefore, the stationary random distribution {X(ϕ); ϕ D C ( d )} can be represented as: X(ϕ) = X(t)ϕ(t)dt = d Fϕ(τ)M(dτ). d In what follows, we will see that this representation continues to hold for any stationary random distribution. For this, we need to introduce a more general concept of complex random measure (indexed by Borel sets). Definition 3.5 Let µ be a measure on d and B µ = {A B( d ); µ(a) < }. A collection M = {M(A); A B µ } of zero-mean square-integrable C-valued random variables which satisfy the condition: E[M(A)M(B)] = µ(a B), for all A, B B µ is called a complex random measure with control measure µ. We say that M is symmetric if for any A B µ, M( A) = M(A) a.s. Note that M is countably additive, in the sense that for any (A n ) n B µ with n A n B µ, we have M( n A n ) = n M(A n) a.s. The stochastic integral of a simple function is defined using d A dm = M(A) for any A B µ. For a function ϕ L 2 C (d, µ), the stochastic integral

11 M(ϕ) = ϕdm is defined in L 2 d C (Ω) using an approximation by a sequence of simple functions. For any ϕ, ψ L 2 C (d, µ), we have: E[M(ϕ)M(ψ)] = ϕ(τ)ψ(τ)µ(dτ). d If M is symmetric, then M(ϕ) is real-valued for any ϕ L 2 C (d, µ) which satisfies the condition: ϕ(τ) = ϕ( τ) for all τ d. By Theorem 3 of [35] (or Theorem 4. of [9] in the case d = ), for any stationary random distribution {X(ϕ); ϕ D C ( d )} with spectral measure µ, there exists a unique complex random measure M with control measure µ, such that X(ϕ) = FϕM, for any ϕ D C ( d ). (4) d M is called the spectral random measure of {X(ϕ); ϕ D C ( d )}. If the random distribution {X(ϕ); ϕ D C ( d )} is real, then M is symmetric. emark 3.6 The right-hand side of (4) is well-defined for any ϕ S C ( d ). To see this, we use the fact that Fϕ S C ( d ). Hence C ϕ,n := sup τ d( + τ 2 ) n Fϕ(τ) 2 < for any n >. Taking n = k and using (3), it follows that d Fϕ(τ) 2 µ(dτ) C ϕ,k d ( ) k µ(dτ) <. + τ 2 Hence Fϕ belongs to L 2 C (d, µ) and d FϕdM is well-defined in L 2 C (Ω). For the remaining part of this article, we let {X(ϕ); ϕ D C ( d+ )} be a real stationary random distribution on d+ with spectral random measure M and spectral measure Π. Hence, X(ϕ) = Fϕ(τ, ξ)m(dτ, dξ), for all ϕ D C ( d+ ), (5) d+ where the integration variables are τ and ξ d. Due to the symmetry of M, X(ϕ) is real-valued for any ϕ D( d+ ). For any ϕ, ψ D C ( d+ ), we have E[X(ϕ)X(ψ)] = Fϕ(τ, ξ)fψ(τ, ξ)π(dτ, dξ) =: I(ϕ, ψ). d+ emark 3.7 If the Fourier transform of Π in S ( d+ ) is a locally integrable function F : d+ [, ), then I(ϕ, ψ) = ϕ(t, x)ψ(s, y)f (t s, x y)dxdydtds, 2 2d for any ϕ, ψ D( d+ ). This representation is not needed in the present work.

12 We endow D C ( d+ ) with the inner product, H defined by: ϕ, ψ H = I(ϕ, ψ). and we let H be the completion of D C ( d+ ) with respect to this inner product. The map ϕ X(ϕ) is an isometry from D C ( d+ ) to L 2 C (Ω), which is extended to H. The Hilbert space H contain distributions. By abuse of notation, we write X(ϕ) = ϕ(t, x)x(dt, dx) d even if ϕ is a distribution. This stochastic integral is well-defined only if ϕ H. We will need an alternative representation for the inner product in H. For any ϕ, ϕ 2 D C ( d+ ), if we denote by φ (i) ξ (t) = Fϕ i(t, )(ξ) the Fourier transform of the function ϕ i (t, ), and by Fφ (i) ξ the Fourier transform of the function t φ (i) ξ (t) for i =, 2, then ϕ, ϕ 2 H = Fφ () ξ (τ)fφ(2) ξ (τ)π(dτ, dξ) =: ϕ, ϕ 2. (6) d+ The following result is a modified version of Lemma 3.7 of [2]. Lemma 3.8 Let µ be a tempered measure on d. For every A B b ( d ), there exists a sequence (ψ n ) n D C ( d ) such that d A (ξ) Fψ n (ξ) 2 µ(dξ). Proof: Since A is a bounded set, there exists a sequence (φ n ) n D( d ) such that φ n A, supp φ n K for all n and sup ξ d φ n (ξ) C for all n, where C > and K is a compact set in d. By the dominated convergence theorem, d φ n (ξ) A (ξ) 2 µ(dξ). For the second part of the proof, we argue as on p. 63 of [2]. Since the Fourier transform is a homeomorphism from S C ( d ) onto itself, and D C ( d ) is dense in S C ( d ), F(D C ( d )) is dense in S C ( d ). Since µ is a tempered measure, the convergence in S C ( d ) implies the convergence in L 2 C (d, µ). Hence, there exists a sequence (ψ n ) n D C ( d ) such that: d φ n (ξ) Fψ n (ξ) 2 µ(dξ). The conclusion follows. Next, we give a criterion for checking that ϕ H, which is a generalization of Theorem 2. of [3]. 2

13 Theorem 3.9 Let t ϕ(t, ) S ( d ) be a deterministic function such that Fϕ(t, ) is a function for all t. Suppose that: (i) the function (t, ξ) Fϕ(t, )(ξ) =: φ ξ (t) is measurable on d ; (ii) for all ξ d, φ ξ(t) dt <. Then the function (τ, ξ) Fφ ξ (τ) is measurable on d+, where Fφ ξ denotes the Fourier transform of φ ξ, i.e. Fφ ξ (τ) = e iτt φ ξ (t)dt. If ϕ 2 := Fφ ξ (τ) 2 Π(dτ, dξ) <, (7) d+ then ϕ H and ϕ 2 H = ϕ 2. In particular, the stochastic integral of ϕ with respect to X is well-defined and is given by: X(ϕ) = Fφ ξ (τ)m(dτ, dξ). (8) d+ Proof: The argument is a simplified version of the proof of Theorem 2. of [3]. We denote a(τ, ξ) = Fφ ξ (τ). We first prove that: the function (τ, ξ) a(τ, ξ) is measurable. (9) Since (t, ξ) φ ξ (t) is measurable, by applying Theorem 3.5 of [5] to the real and imaginary part of φ ξ (t) we conclude that there exists a sequence (l n ) n of complex-valued simple functions such that l n (t, ξ) φ ξ (t) and l n (t, ξ) φ ξ (t) for any n. We let a ln (τ, ξ) = Fl n (, ξ)(τ). By the dominated convergence theorem, a ln (τ, ξ) a(τ, ξ) = e iτt (l n (t, ξ)dt φ ξ (t))dt. If l(t, ξ) = (c,d] (t) A (ξ) is an elementary function, then a l (τ, ξ) = Fl(, ξ)(τ) = F (c,d] (τ) A (ξ) is clearly measurable. Since a ln (τ, ξ) is measurable for any n, it follows that a(τ, ξ) is measurable. Due to (6) and the definition of H, we have to show that for any ε > there exists l = l ε D C ( d+ ) such that ϕ l 2 = Fφ ξ (τ) Fψ ξ (τ) 2 Π(dτ, dξ) < ε 2, (2) d+ where Fψ ξ denotes the Fourier transform of the function t ψ ξ (t) = Fl(t, )(ξ). Due to (7) and (9), it follows that a L 2 C (d+, Π). By applying Theorem 9.2 of [5] to the real and imaginary parts of the function a, we infer that there exists a simple function h on d+ such that d+ a(τ, ξ) h(τ, ξ) 2 Π(dτ, dξ) < ε2 4. (2) Here, a simple function is a complex linear combination of indicator functions of the form B, where B is a Borel subset of d+ which is not necessarily 3

14 bounded. From the proof of Theorem 9.2 of [5], we see that the Borel sets B which appear in the definition of the function h satisfy Π(B) <. The measure Π is locally finite, and hence σ-finite. By applying Theorem.4.(ii) of [5], each of these Borel sets can be approximated by a finite union of bounded rectangles in d+. Therefore, we may assume that each set B is bounded. By Lemma 3.8, there exists a function l D C ( d+ ) such that d+ B (τ, ξ) Fl(τ, ξ) 2 Π(dτ, dξ) < ε2 4. (22) elation (2) follows from (2) and (22), since Fl(τ, ξ) = Fψ ξ (τ). To prove (8), we denote by Y the random variable on the right hand side of (8), which is well-defined due to (7). By definition, X(ϕ) = lim n X(ϕ n ) in L 2 C (Ω), where (ϕ n) D C ( d+ ) is such that ϕ n ϕ. Using (5), we have: E X(ϕ n ) Y 2 = 2 E [Fϕ n (τ, ξ) Fφ ξ (τ)]m(dτ, dξ) d+ = Fϕ n (τ, ξ) Fφ ξ (τ) 2 Π(dτ, dξ) = ϕ n ϕ 2, d+ i.e. Y = lim n X(ϕ n ) in L 2 C (Ω). Hence, Y = X(ϕ) a.s. The next result gives a criterion for checking if the indicator function of a bounded Borel set A (in particular, a bounded rectangle) is in H, and gives a representation for the stochastic integral of A. Theorem 3. (i) If A B b ( d+ ) is such that d+ F A (τ, ξ) 2 Π(dτ, dξ) <, (23) then A H and the stochastic integral of A with respect to X is given by: X(A) := X( A ) = F A (τ, ξ)m(dτ, dξ) d+ (ii) Suppose that the measure Π satisfies the following condition: d+ (τ 2 + ξ 2 ) τ 2 ξ 2... ξ2 d Π(dτ, dξ) <. (24) Then (23) holds for any finite union A of bounded rectangles in d+. In particular, the stochastic integral of (,t] (,x] with respect to X is given by: e iτt X(t, x) := X( (,t] (,x] ) = iτ d+ d j= e iξjxj iξ j M(dτ, dξ). (25) 4

15 Proof: (i) Let ϕ n = A φ n be the mollification of A, where φ n (t, x) = n d+ φ(nt, nx), and φ D( d+ ) is such that φ L ( d+ ) =. Then ϕ n D( d+ ), Fϕ n F A = F A (Fφ n ) and Fφ n 2. By the dominated convergence theorem, d+ Fϕ n (τ, ξ) F A (τ, ξ) 2 Π(dτ, dξ). This proves that A H. The second statement follows by an approximation argument, as in the last part of the proof of Theorem 3.9. (ii) By linearity, it is enough to consider the case when A = (, t] (, x] with t, x d. On the region = {(τ, ξ) d+ ; τ 2 + ξ 2 }, we use the fact that F A A, where A is the Lebesgue measure of A, and hence F A (τ, ξ) 2 Π(dτ, dξ) A 2 Π(dτ, dξ) <. On the region c, we use the fact that F A (τ, ξ) = e iτt d e iξjxj iτ iξ j 2d+ τξ... ξ d, and hence, j= F A (τ, ξ) 2 Π(dτ, dξ) 2 d+ c τ 2 ξ 2 c... ξ2 d Π(dτ, dξ) <. There is a correspondence between the class of random fields defined by (25) and the class of random fields on d+ with stationary increments. To see this, note that (24) is equivalent to saying that Π(dτ, dξ) = τ 2 ξ 2... Π(dτ, dξ) ξ2 d is a Lévy measure. In this case, one can consider a random field {X (t, x)} (t,x) d+ with stationary increments and spectral representation: X (t, x) = (e iτt iξ x ) M(dτ, dξ), (26) d+ where the complex random measure M has control measure Π. This correspondence is illustrated by the next two examples. Example 3. The fractional Brownian sheet (fbs) with indices H, H,..., H d (, ) is a zero-mean Gaussian random field {X(t, x)} (t,x) d+ with covariance E[X(t, x)x(s, y)] = H (t, s) d Hj (x j, y j ), j= 5

16 where H (t, s) = ( t 2H + s 2H t s 2H )/2 is the covariance of the fbm of index H (, ). Since H (t, s) = c H (e iτt )(e iτs ) τ (2H+) dτ, with c H = Γ(2H + ) sin(πh)/(2π), the fbs has the representation (25), where M is Gaussian and Π(dτ, dξ) = c H τ (2H ) d j= (c H j ξ j (2Hj ) )dτdξ. The measure Π satisfies (24) if and only if H + d j= H j <. In this case, there is an associated random field with stationary increments defined by (26), which is also called a fractional Brownian sheet by some authors (e.g. [8]). Example 3.2 A fractional Brownian field (fbf) on d+ with index H (, ) is a zero-mean Gaussian random field {X (t, x)} (t,x) d+ with covariance E[X (t, x)x (s, y)] = 2 [(t2 + x 2 ) H + (s 2 + y 2 ) H ( t s 2 + x y 2 ) H ]. This random field has stationary increments and spectral representation (26), where M is Gaussian and Π(dτ, dξ) = (τ 2 + ξ 2 ) (2H+d+)/2 dτdξ. The measure Π(dτ, dξ) = τ 2 d j= ξ2 Π(dτ, j dξ) satisfies condition (24) for any H (, ). To this process, one may associate a random field {X(t, x)} (t,x) d+ defined by (25), where M is a Gaussian random measure with control measure Π. An interesting case arises when Π = ν µ, (27) where ν is a symmetric measure on satisfying (), and µ is a symmetric tempered measure on d. Using the same mollification argument as in the proof of Theorem 3..(i), it follows that [,t] ϕ H for any t, ϕ D C ( d ). To see this, note that F( [,t] ϕ)(τ, ξ) 2 e iτt 2 Π(dτ, dξ) = d+ τ 2 ν(dτ) Fϕ(ξ) 2 µ(dξ) <. d For any t, s and ϕ, ψ D C ( d ), we have: [,t] ϕ, [,s] ψ H = (t, s)j(ϕ, ψ), where is given by (), being the covariance of a process Z = (Z t ) t with stationary increments and spectral measure ν(dτ) = τ 2 ν(dτ), and J(ϕ, ψ) = Fϕ(ξ)Fψ(ξ)µ(dξ). d In this case, H is the completion of E with respect to the inner product, H, where E is the set of complex linear combinations of functions of the form [,t] ϕ, with t and ϕ D C ( d ). To prove this, we use the fact that a function in 6

17 D C ( d+ ) can be approximated by a function of the form ϕ(t, x) = ϕ (t)ϕ 2 (x) with ϕ D C (), ϕ 2 D C ( d ), and a function in D C () can approximated in the norm Λ by a complex elementary function on (see emark 2.3). The stochastic integral of [,t] ϕ with respect to X is given by: e iτt X t (ϕ) := X( [,t] ϕ) = Fϕ(ξ)M(dτ, dξ). iτ d+ 4 Linear SPDEs Let {X(ϕ); ϕ D C ( d+ )} be a real stationary random distribution with spectral random measure M and spectral measure Π, as described in Section 3. We consider the equation: Lu(t, x) = Ẋ(t, x) t >, x d (28) with some deterministic initial conditions, where L is a second-order pseudodifferential operator with constant coefficients. Let w be the solution of the equation Lu = with the same initial conditions as (28). We assume that the fundamental solution G of the equation Lu = exists. The precise meaning of the concept of solution to equation (28) is given below. Consider first the equation Lu = F with zero initial conditions, where F is a smooth function and F = d+ t x... x d denotes its derivative. The solution of this equation is u(t, x) = w(t, x) + t G(t s, x y) F (s, y)dyds. d If F is replaced by Ẋ, one may define the concept of solution by replacing formally the integral Ẋ(s, y)dyds by the stochastic integral X(ds, dy). More precisely, we have the following definition. Definition 4. The process {u(t, x); t, x d } defined by u(t, x) = w(t, x) + t d G(t s, x y)x(ds, dy) (29) is called a random field solution of (28), provided that the stochastic integral on the right-hand side of (29) is well-defined, i.e. for any t > and x d g tx := [,t] G(t, x ) H. The following result gives some general conditions which ensure the existence of a random field solution, as a direct consequence of Theorem 3.9. Theorem 4.2 Let {X(ϕ); ϕ D C ( d+ )} be a real stationary random distribution with spectral random measure M and spectral measure Π. Suppose that 7

18 the fundamental solution G of Lu = exists and for any t >, G(t, ) S ( d ) and FG(t, ) is a function. Moreover, assume that (t, ξ) FG(t, )(ξ) =: H ξ (t) is measurable on + d, and t H ξ (s) ds <, for any t >, ξ d. (i) Equation (28) has a random field solution {u(t, x)} (t,x) d+ if and only if I t := F,t H ξ (τ) 2 Π(dτ, dξ) < for all t >, (3) d+ where F,t H ξ (τ) = t e iτs H ξ (s)ds, τ. In this case, E(u(t, x)) = w(t, x), Var(u(t, x)) = I t and the solution is given by: u(t, x) = w(t, x) + e iτt iξ x F,t H ξ (τ)m(dτ, dξ). d+ (ii) Assume (3) holds and let {u(t, x)} (t,x) d+ be the random field solution of (28) with zero initial conditions. Then, for any t, s > and x, y d, E[u(t, x)u(s, y)] = e iτ(t s) e iξ (x y) F,t H ξ (τ)f,s H ξ (τ)π(dτ, dξ). d+ In particular, for any t > fixed, E[u(t, x)u(t, y)] = e iξ (x y) µ t (dξ). d where µ t ( ) = Π t ( ) and Π t (dτ, dξ) = F,t H ξ (τ) 2 Π(dτ, dξ), i.e. {u(t, x)} x d is a zero-mean stationary random field on d with spectral measure µ t. Proof: (i) We apply Theorem 3.9 to ϕ = g tx. Note that the function φ ξ (s) := Fg tx (s, )(ξ) = [,t] (s)e iξ x FG(t s, )(ξ) = [,t] (s)e iξ x H ξ (t s) satisfies conditions (i) and (ii) of Theorem 3.9. Let Fφ ξ be the Fourier transform of the function s φ ξ (s). Then, t Fφ ξ (τ) = e iτs φ ξ (s)ds = e iξ x e iτs H ξ (t s)ds t = e iξ x e iτt e iτs H ξ (s)ds = e iξ x e iτt F,t H ξ (τ). (3) By definition u(t, x) = w(t, x) + X(g tx ). Hence, E(u(t, x)) = w(t, x) and Var(u(t, x)) = E X(g tx ) 2 = g tx 2 H = I t. (ii) Using the isometry property of the stochastic integral ϕ X(ϕ) and the fact that ϕ H = ϕ, where is defined by relation (7), we obtain: E[u(t, x)u(s, y)] = E[X(g tx )X(g sy )] = g tx, g sy H = g tx, g sy = Fφ ξ (τ)fψ ξ (τ)π(dτ, dξ), d+ where φ ξ (u) = Fg tx (u, )(ξ) and ψ ξ (u) = Fg sy (u, )(ξ). The conclusion follows using (3). 8

19 5 A parabolic equation In this section, we continue with the situation described in Section 4, assuming in addition that the measure Π satisfies (27), i.e. Π = ν µ, where ν is a symmetric measure on satisfying () and µ is a symmetric tempered measure on d. We consider the parabolic equation: u t (t, x) + ( )β/2 u(t, x) = Ẋ(t, x), t >, x d, (32) for some β >. In this case, H ξ (t) = FG(t, ) = exp{ tψ(ξ)}, where Ψ(ξ) = ξ β (see e.g. relation (3.7) of [29]). By Theorem 4.2, it suffices to check that (3) holds. For this, we write I t = N d t (ξ)µ(dξ), where N t (ξ) = F,t H ξ (τ) 2 ν(dτ). (33) We proceed to the calculation of F,t H ξ (τ). For this, note that if ϕ(x) = e ax with a, then F,t ϕ(τ) = t e (a+iτ)x dx = e (a+iτ)t a + iτ = e at cos(τt) + ie at sin(τt) a + iτ and F,t ϕ(τ) 2 = τ 2 + a 2 {sin2 (τt) + [e at cos(τt)] 2 }. (34) Applying (34) with a = Ψ(ξ), we obtain: where F,t H ξ (τ) 2 = τ 2 + Ψ(ξ) 2 {sin2 (τt) + [e tψ(ξ) cos(τt)] 2 }. (35) Using (33) and (35), we obtain: N t (ξ) = τ 2 + Ψ(ξ) 2 {sin2 (τt) + [e tψ(ξ) cos(τt)] 2 }ν(dτ). (36) Let ν t = ν h t A(t, a) := be the dilation of ν by t, where h t (τ) = τt. Then N t (ξ) = t 2 A(t, tψ(ξ)), (37) τ 2 + a 2 [sin2 τ + (e a cos τ) 2 ]ν t (dτ), a >. To identify the necessary and sufficient condition for the existence of a random-field solution, we need to obtain some suitable estimates for A(t, a). This goal is achieved by the following technical result. 9

20 Lemma 5. For any t > and a >, we have: C (2) t τ 2 + a 2 + t 2 ν t(dτ) A(t, a) C () t τ 2 + a 2 + t 2 ν t(dτ), where C () t and C (2) t are some positive constants depending on t. Proof: Note that ν t satisfies a condition similar to (). To simplify the writing, we assume that t =. (The argument for arbitrary t is similar.) Denote A(a) = A(, a). Note that ν = ν. Observe that for any τ and a >, sin 2 τ + (e a cos τ) 2 = ( e a ) 2 + 4e a sin 2 (τ/2). (38) Assume that a >. In this case, the right-hand side of (38) is bounded above by 5, and below by e. Since τ 2 + a 2 (τ 2 + a 2 + )/2, ( e ) τ 2 + a 2 ν(dτ) A(a) + τ 2 + a 2 + ν(dτ). Assume that a. Since τ 2 + < τ 2 + a 2 + 2(τ 2 + ), we have K 2 τ 2 + a 2 ν(dτ) K, + with the constant K given by (). Therefore, it suffices to show that K 2 A(a) K for some positive constants K and K 2. For the upper bound, we consider separately the integrals over the regions τ and τ >. We denote the two integrals by A (a), respectively A 2 (a). Using (34), we see that the integrand of A(a) is bounded above by. Therefore, A (a) On the other hand, A 2 (a) 5 τ > τ ν(dτ) 2 τ τ 2 + ν(dτ). τ 2 ν(dτ) + a2 τ > τ 2 + ν(dτ). Therefore, K = K. For the lower bound, we use the fact that τ 2 +a 2 τ 2 + and the right-hand side of (38) is bounded below by 4e sin 2 (τ/2). Choose b (, π 2 ) such that sin b = /2. Since the integrand is non-negative and sin is increasing on (, π 2 ), A(a) 4e π 2b π τ 2 + sin2 (τ/2)ν(dτ) e τ 2 + ν(dτ) =: K 2. 2b 2

21 emark 5.2 The constants C () t and C (2) t in Lemma 5. are given by: π/t C () t = 4 max{t 2, 5}, C (2) t = min{ e t, e t 2b/t τ 2 + ν(dτ)}. elation (37) and Lemma 5. lead to the following result. Theorem 5.3 Let {X(ϕ); ϕ D C ( d+ )} be a real stationary random distribution with spectral measure Π. Assume that Π = ν µ where ν is a symmetric measure on satisfying () and µ is a symmetric tempered measure on d. Then for any t > and ξ d, C (2) t τ 2 + Ψ(ξ) 2 + ν(dτ) N t(ξ) C () t τ 2 + Ψ(ξ) 2 + ν(dτ), where N t (ξ) is given by (36). Consequently, equation (32) has a random field solution if and only if condition (5) holds, i.e. τ 2 + Ψ(ξ) 2 ν(dτ)µ(dξ) <. + d emark 5.4 In the case of Example 2.4 and Example 2.5 with γ (, ), condition (5) becomes: (use Lemma B., Appendix B with a = + Ψ(ξ)) d ( ) +γ µ(dξ) <. (39) + Ψ(ξ) When applied to Example 2.4 with the parametrization γ = 2H, H (, ), Theorem 5.3 becomes an extension of Theorem 3.4 of [] to the case H < /2. emark 5.5 The results of this section are also valid for the equation u t (t, x) Lu(t, x) = Ẋ(t, x), t >, x d, where L is the L 2 ( d )-generator of a d-dimensional Lévy process (X t ) t whose characteristic exponent Ψ(ξ) is real-valued (see also [5]). Assuming that X t has density p t, we see that G(t, x) = p t ( x) and H ξ (t) = FG(t, )(ξ) = e iξ x p t (x)dx = E(e iξ Xt ) = exp{ tψ(ξ)}. d 6 A hyperbolic equation In this section, we consider again the situation described in Section 4, but for a different operator L. As in Section 5, we assume that the spectral measure Π of the real stationary random distribution {X(ϕ); ϕ D C ( d+ )} satisfies (27), i.e. Π = ν µ, where ν is a symmetric measure on satisfying () and µ is a symmetric tempered measure on d. 2

22 We consider the hyperbolic equation: 2 u t 2 (t, x) + ( )β/2 u(t, x) = Ẋ(t, x), t >, x d, (4) for some β >. In this case, H ξ (t) = FG(t, )(ξ) = sin(t Ψ(ξ)) Ψ(ξ) where Ψ(ξ) = ξ β (see (6) of [2]). Following the same steps as in the parabolic case, it suffices to find appropriate bounds for I t = d N t (ξ)µ(dξ), where N t (ξ) is given by (33). For the calculation of F,t H ξ (τ), we use the change of variable r = s Ψ(ξ): F,t H ξ (τ) = = = t e iτs sin(s Ψ(ξ))ds (4) Ψ(ξ) Ψ(ξ) t Ψ(ξ) e iτr/ Ψ(ξ) sin rdr ( Ψ(ξ) F,T sin ) τ, where T = t Ψ(ξ). Ψ(ξ) An elementary calculation shows that: (see the proof of Lemma B. of [3]) F,T sin(τ) 2 = (τ 2 ) 2 [f 2 T (τ) + g 2 T (τ)]. (42) where f T (τ) = sin(τt ) τ sin T and g T (τ) = cos(τt ) cos T. Hence, [sin(τt) τ sin(t Ψ(ξ))] 2 + [cos(τt) cos(t Ψ(ξ))] 2 F,t H ξ (τ) 2 Ψ(ξ) = (τ 2 Ψ(ξ)) 2. Using (33) and (43), we obtain: N t (ξ) = [sin(τt) We assume that τ sin(t Ψ(ξ))] 2 + [cos(τt) cos(t Ψ(ξ))] 2 Ψ(ξ) (43) (τ 2 Ψ(ξ)) 2 ν(dτ). ν(dτ) = η( τ )dτ, where the function η satisfies the following condition: (C) for any λ > there exists C λ > such that η(λτ) C λ η(τ) τ >. In addition, we assume that η satisfies either (C) or (C2) below: (44) (C) η is non-increasing on (, ), and for any K > there exists D K > such that a η(τ)dτ D Kaη(a) for any a K (C2) η is non-decreasing on (, ), and for any K > there exists D K > such that a τ 2 η(τ)dτ D K a η(a) for any a K. 22

23 emark 6. In Example 2.4, η(η) = τ γ and condition (C) holds for any γ (, ). In Example 2.5, η(η) = ( + τ 2 ) γ and condition (C) holds for any γ >. In these two examples, condition (C) holds if γ [, ), whereas condition (C2) holds if γ (, ). See Appendix C. As in Section 5, we let ν t = ν h t where A(t, a) = where h t (τ) = τt. Therefore, N t (ξ) = t 4 A(t, t Ψ(ξ)), (45) (τ 2 a 2 ) 2 [(sin τ τ a sin a)2 + (cos τ cos a) 2 ]ν t (dτ), a >. The following technical result gives some estimates for the integral A(t, a). Lemma 6.2 Assume that η satisfies (C). (a) If η is either non-increasing on (, ) or non-decreasing on (, ), then for any t > and a > A(t, a) C () t a2 + t 2 τ 2 + a 2 + t 2 ν t(dτ), where C () t is a positive constant depending on t. (b) If η satisfies either (C) or (C2), then for any t > and a >, A(t, a) C (2) t a2 + t 2 τ 2 + a 2 + t 2 ν t(dτ), where C (2) t is a positive constant depending on t. Proof: Note that ν t (dτ) = η t ( τ )dτ where the function η t (τ) = t η(τ/t) satisfies the same conditions as the function η. Therefore, we may assume that t =, the argument for arbitrary t being similar. Due to the symmetry of the integrand, we consider only the integral on (, ). We denote A(a) = (τ 2 a 2 ) 2 [ ( sin τ τ a sin a ) 2 + (cos τ cos a) 2 (a) We first prove that there exists a constant C > such that: A(a) C a ] η(τ)dτ. τ 2 η(τ)dτ for all a >. (46) + a2 To prove (46), we write A(a) as the sum of three integrals over the regions = (, a/2), 2 = (3a/2, ) and 3 = [a/2, 3a/2]. We denote the corresponding integrals by A (a), A 2 (a), A 3 (a), respectively. We use the inequality: ( sin τ τ a sin a ) 2 + (cos τ cos a) 2 2 a (τ + a)2. (47) 23

24 (Note that sin τ τ a sin a is bounded by + τ a and by 2τ. Hence (sin τ τ a sin a)2 is bounded by 2τ( + τ a ). Similarly, cos τ cos a is bounded by 2 and by τ + a, using the fact that cos x x. We get (cos τ cos a) 2 2(τ + a).) Due to (47), the integrand of A(a) is bounded above by 2a (τ a) 2. When τ, both (τ a) 2 and (τ + a) 2 behave as a 2 since a 2 (τ a) 2 4 a 2 and 4 9a 2 (τ + a) 2 a 2. When τ 2, both (τ a) 2 and (τ + a) 2 behave as τ 2, since τ 2 (τ a) 2 9 τ 2 and 9 25τ 2 (τ + a) 2 τ 2. Letting C () = 8 and C (2) = 5, we obtain: for i =, 2 A i (a) C (i) a i (τ + a) 2 η(τ)dτ C(i) a i τ 2 + a 2 η(τ)dτ. We now treat A 3 (a). Assume that η is non-increasing. (The argument for η non-decreasing is similar.) Using (42) and Plancherel s theorem, we have: A 3 (a) = 3a/2 a 2 F, sin(a )(τ) 2 η(τ)dτ η(a/2) a/2 = 2π η(a/2) a 2 sin 2 (as)ds 2π η(a/2) a 2. On the other hand, since τ 2 + a 2 3a 2 /4 for any τ [a/2, 3a/2], a 2 3a/2 a a/2 τ 2 + a 2 η(τ)dτ 4 η(3a/2) 3 a 2. F, sin(a )(τ) 2 dτ Using (C), we infer that η(a/2) Cη(3a/2) for some C > and hence, A 3 (a) C (3) a 3a/2 a/2 τ 2 + a 2 η(τ)dτ. for a constant C (3) >. This concludes the proof of (46), We now prove that there exists a constant C 2 > such that: A(a) C 2 a2 + τ 2 + a 2 η(τ)dτ, for all a >. (48) + Denote b = a 2 +. For any x >, let I(x) = τ 2 +x η(τ)dτ. 2 Assume first that a >. Then b 2a, and due to (46), A(a) C 2 b I(a). 24

25 Therefore, it suffices to prove that I(a) C 3 I(b) for some constant C 3 >. For x = a, b, we write I(x) = I (x)+i 2 (x) where I (x), I 2 (x) denote the integrals over the regions [, x], respectively (x, ). Clearly, I (a) a a 2 η(τ)dτ 2 b b 2 η(τ)dτ 4I (b). Using the fact that a b/ 2 and condition (C), we see that I 2 (a) b/ 2 τ 2 η(τ)dτ C b τ 2 η(τ)dτ CI 2(b). Hence, I(a) C 3 I(b) with C 3 = max{4, C}. This proves (48) in the case a >. Assume next that a. Then b 2, and since τ 2 + b 2 2(τ 2 + ), b I(b) 2 2 τ 2 + η(τ)dτ = 4 2 K, where K = τ 2 +η(τ)dτ. Therefore, it suffices to prove that there exists a constant K > such that A(a) K. To prove this, we write A(a) as a sum of two integrals over the regions [, 2] and ( 2, ). We denote the two integrals by A (a) and A (a), respectively. Note that, if we exclude η(τ), the integrand of A(a) is a 2 F, sin(a )(τ) 2, which is bounded by /4. Hence, A (a) 3K/8. For A (a), we use the fact that: ( sin τ τ a sin a ) 2 + (cos τ cos a) 2 4(τ 2 + ), and τ 2 a 2 τ 2 /2. Hence A (a) 6 2 τ 2 + τ 4 η(τ)dτ 8K. (b) We first prove that there exist a constant C 4 > such that η(a) A(a) C 4, for any a >. (49) a2 Let ε > be arbitrary (to be chosen later). Assume that η is non-increasing. (The argument for η non-decreasing is similar.) Since the integrand is nonnegative, A(a) is bounded below by the integral over the region [, ( + ε)a]. In this region, using the fact that η is non-increasing and condition (C), we see that η(τ) η(( + ε)a) c ε η(a) for some constant c ε >. Hence, A(a) c ε η(a)(j (a) J 2 (a)), where J (a) = J 2 (a) = [ ( (τ 2 a 2 ) 2 sin τ τ ) ] 2 a sin a + (cos τ cos a) 2 dτ [ ( (τ 2 a 2 ) 2 sin τ τ ) ] 2 a sin a + (cos τ cos a) 2 dτ. (+ε)a 25

26 Using Plancherel s theorem and the fact that (sin x)/x /2 for any x > 2, J (a) = a 2 F, sin(a )(τ) 2 dτ = a 2 2π sin 2 (as)ds = π [ a 2 sin(2a) ] π 2a 2a 2. On the other hand, using (47), we obtain that: J 2 (a) 2 a (+ε)a (τ a) 2 dτ = 2 a εa τ 2 dτ = 2 εa 2. Choose ε > such that C ε := π 2 2 ε >. Hence, A(a) c εc ε a 2 η(a). This concludes the proof of (49). We will now prove that there exists a constant C 5 > such that: A(a) C 5 a2 + τ 2 + a 2 η(τ)dτ, for all a >. (5) + If a >, relation (5) follows from (49) and Lemma B.2 (Appendix B), since Assume that a. Since a2 + A(a) C 4 η(a) a 2 C 4 M a τ 2 + a 2 + η(τ)dτ τ 2 + a 2 η(τ)dτ. τ 2 + η(τ)dτ = K 2, it suffices to prove that there exists a constant K 2 > such that A(a) K 2. We use the fact that cos is decreasing on [, π 2 ]. Let < c < d < π 2 be such that cos c < 2 cos. Since the integrand of A(a) is non-negative, the integral is bounded below by the integral over the region [c, d]. For any τ in this region, (τ 2 a 2 ) 2 2(τ 4 + a 4 ) 2(d 4 + ) and Hence, A(a) (cos τ cos a) 2 2 cos2 a cos 2 τ 2 cos2 cos 2 c =: M >. d c (τ 2 a 2 ) 2 (cos τ cos M a)2 η(τ)dτ 2(d 4 + ) d This concludes the proof of (5) and the proof of the lemma. elation (45) and Lemma 6.2 give the following result. c η(τ)dτ =: K 2. (5) 26

27 Theorem 6.3 Let {X(ϕ); ϕ D C ( d+ )} be a real stationary random distribution with spectral measure Π. Assume that Π = ν µ where ν is a symmetric measure on satisfying () and µ is a symmetric tempered measure on d. Suppose that ν(dτ) = η( τ )dτ and the function η satisfies (C). In addition, suppose that η satisfies either (C) or (C2). Then for any t > and ξ d, where C () t and C (2) t (44), and N(ξ) = tc (2) t N(ξ) N t (ξ) tc () t N(ξ), (52) are positive constants depending on t, N t (ξ) is given by Ψ(ξ) + τ 2 + Ψ(ξ) + ν(dτ). Consequently, equation (4) has a random field solution if and only if (6) holds, i.e. Ψ(ξ) + τ 2 ν(dτ)µ(dξ) <. + Ψ(ξ) + d The next result gives an alternative condition for the existence of the random field solution to (4). Corollary 6.4 Under the conditions of Theorem 6.3, for any t > and ξ d, K (2) η( + Ψ(ξ)) t + Ψ(ξ) N t (ξ) K () t η( + Ψ(ξ)), + Ψ(ξ) where K () t and K (2) t are positive constants depending on t. Consequently, equation (4) has a random field solution if and only if η( + Ψ(ξ)) µ(dξ) <. (53) + Ψ(ξ) d Proof: Using the upper bound in (52) and Lemma B.2 (Appendix B), we get: N t (ξ) tc () t η( + Ψ(ξ)) M. + Ψ(ξ) For the lower bound, let a = Ψ(ξ) and b = + Ψ(ξ). Assume first that at >. Similarly to (49), one can show that in this case N t (ξ) C 4 t η(a) a 2. Note that a b t 2 + a. Using the fact that η is non-increasing (or η is non-decreasing and satisfies (C)), we obtain that η(a)/a 2 K,t η(b)/b 2, where K,t is a constant depending on t. Assume next that at. Similarly to (5), one can show that in this case, t4 N t (ξ) M d 4 + d c 27 η(τ)dτ =: K 2,t.

Linear SPDEs driven by stationary random distributions

Linear SPDEs driven by stationary random distributions Linear SPDEs driven by stationary random distributions aluca Balan University of Ottawa Workshop on Stochastic Analysis and Applications June 4-8, 2012 aluca Balan (University of Ottawa) Linear SPDEs with

More information

The stochastic heat equation with a fractional-colored noise: existence of solution

The stochastic heat equation with a fractional-colored noise: existence of solution The stochastic heat equation with a fractional-colored noise: existence of solution Raluca Balan (Ottawa) Ciprian Tudor (Paris 1) June 11-12, 27 aluca Balan (Ottawa), Ciprian Tudor (Paris 1) Stochastic

More information

A Concise Course on Stochastic Partial Differential Equations

A Concise Course on Stochastic Partial Differential Equations A Concise Course on Stochastic Partial Differential Equations Michael Röckner Reference: C. Prevot, M. Röckner: Springer LN in Math. 1905, Berlin (2007) And see the references therein for the original

More information

We denote the space of distributions on Ω by D ( Ω) 2.

We denote the space of distributions on Ω by D ( Ω) 2. Sep. 1 0, 008 Distributions Distributions are generalized functions. Some familiarity with the theory of distributions helps understanding of various function spaces which play important roles in the study

More information

GAUSSIAN PROCESSES; KOLMOGOROV-CHENTSOV THEOREM

GAUSSIAN PROCESSES; KOLMOGOROV-CHENTSOV THEOREM GAUSSIAN PROCESSES; KOLMOGOROV-CHENTSOV THEOREM STEVEN P. LALLEY 1. GAUSSIAN PROCESSES: DEFINITIONS AND EXAMPLES Definition 1.1. A standard (one-dimensional) Wiener process (also called Brownian motion)

More information

PROBABILITY: LIMIT THEOREMS II, SPRING HOMEWORK PROBLEMS

PROBABILITY: LIMIT THEOREMS II, SPRING HOMEWORK PROBLEMS PROBABILITY: LIMIT THEOREMS II, SPRING 218. HOMEWORK PROBLEMS PROF. YURI BAKHTIN Instructions. You are allowed to work on solutions in groups, but you are required to write up solutions on your own. Please

More information

Brownian Motion. 1 Definition Brownian Motion Wiener measure... 3
