A Class of Lévy SDEs: Kinematics and Doob Transformation


UNIVERSITÀ DEGLI STUDI DI BARI
PhD Programme in Mathematics, XXII Cycle, A.Y. 2009/2010
Disciplinary Sector: MAT/06 Probability

PhD Thesis

A Class of Lévy SDEs: Kinematics and Doob Transformation

Candidate: Andrea ANDRISANI
Thesis supervisor: Prof. N. CUFARO PETRONI
PhD Programme Coordinator: Prof. L. LOPEZ


Contents

Preface

1 Stochastic mechanics
  1.1 Introduction
  1.2 Kinematics in stochastic mechanics
    1.2.1 Velocities
    1.2.2 Diffusions
    1.2.3 Acceleration
  1.3 Dynamics in stochastic mechanics

2 Lévy processes
  2.1 Introduction
  2.2 Definitions
  2.3 Lévy measures
  2.4 Lévy-Itô decomposition
  2.5 Infinitely divisible distributions
  2.6 Stable and self-decomposable distributions
  2.7 Lévy processes as Markov processes
  2.8 Lévy processes as semimartingales
  2.9 SDEs for Lévy processes and semimartingales

3 Kinematics of solutions of Lévy SDEs
  3.1 Introduction
  3.2 Forward and Backward Stochastic Velocities
    3.2.1 Introductory remarks
    3.2.2 Forward and Backward Stochastic Velocities
    3.2.3 Example: The Free Case
  3.3 Continuity Equation and Acceleration
  3.4 The Ornstein-Uhlenbeck Equation
  3.5 What about dynamics?

4 Schrödinger equation and Markov processes
  4.1 Introduction
  4.2 Analytic tools
    4.2.1 The forward and backward semigroups
    4.2.2 Dirichlet forms and semigroups
  4.3 Doob-type transformations
  4.4 The Hamilton operator associated to X_t
  4.5 Examples

Conclusions

A Proof of formula (3.2.8)

Preface

In the present thesis I study a possible extension of stochastic mechanics from the usual Wiener process to generic Lévy processes as the source of fluctuations. Stochastic mechanics started with the pioneering work of Nelson in the sixties [48, 49], with subsequent contributions from Carlen, Guerra, Morato, Yasue, Zambrini and others. For a large class of Markov processes it defines the typical kinematic quantities, like the forward and backward stochastic velocities, together with the stochastic acceleration. All these definitions reduce to those of classical kinematics when the processes are actually deterministic. Then, defining a dynamics either by means of Newton's second law or by a suitable variational principle, it is possible to show that for processes obeying an SDE of the type
$$dX_t = b(X_t, t)\,dt + dW_t$$
where $W_t$ is a Wiener process and $b(x, t)$ a suitable function, the density function $\rho(x, t)$ of $X_t$ can be given through a wave function $\psi(x, t)$, solution of the Schrödinger equation of quantum mechanics $i\partial_t\psi = H\psi$, by means of the relation $\rho(x, t) = |\psi(x, t)|^2$, often called in physics the Born principle. The potential function appearing in the Hamilton operator $H$ is then the same one that determines the stochastic acceleration in Newton's second law. Since Nelson's publications there has been a debate on whether to every solution of a Schrödinger equation it is possible to associate a diffusion process in the way described above. The positive answer was given by Carlen [7] in 1985, and we will try to tread along the same path.

Lévy processes can be considered as a generalization of the well known Wiener process. They are defined by a few conditions: they are processes with independent and stationary increments, and they are also stochastically continuous. Among them we find many classes of well known processes, such as the Poisson, the Cauchy, the Delta and the Student-t processes, and - of course - the

Wiener process, which is the unique Lévy process with Gaussian distributed increments. Lévy processes have several other nice properties: all of them are Markov processes, and in particular Feller processes; it is always possible to find versions of them with càdlàg paths (right continuous with left limits); besides, if they are integrable with null expectation, then they are martingales. In general, however, all Lévy processes are semimartingales, i.e. processes that can be used as integrators in Itô integrals. It is important to remark that between Lévy processes and infinitely divisible distributions there is a one-to-one correspondence. Finally, we recall that the only Lévy process with a.s.-continuous paths is the Wiener process. Lévy processes were first studied by the French mathematician Paul Lévy (1886-1971), and are now widely applied in a variety of mathematical fields, from beam halos [12, 13] to mathematical finance [53]. Some references about Lévy processes can be found in Sato [59], Applebaum [5] and Protter [54], among others.

The Doob transformation [20, 60, 56] - which will be instrumental in associating processes to wave functions solving a Lévy-Schrödinger equation - acts on a Markov process $X_t$ with infinitesimal generator $L$ in the following way: given a positive function $h$ in the domain of $L$ with $Lh = 0$, we define a new Markov process $X^h_t$ having as infinitesimal generator $L^h$ such that $L^h g = h^{-1} L(hg)$. We then introduce the Doob-type transformations, namely transformations of the same kind, but with $h$ a generic non-vanishing function. The aim of this research is essentially to check a conjecture of Cufaro and Pusterla [14, 15].
They suggested that from a stochastic mechanics with fluctuations generated by a Lévy (in general not a Wiener) process, it would be possible to deduce an evolution equation similar to the Schrödinger one, with the free part of the Hamilton operator $H$ obtained by replacing the infinitesimal generator of a Wiener process with the infinitesimal generator of the Lévy process. In formulas, given a square integrable complex function $\psi$,
$$[H\psi](x) := -\frac{1}{2}\Delta\psi(x) - \int_{\mathbb{R}^n\setminus\{0\}} \left[\psi(x + y) - \psi(x)\right] l(dy) + V(x)\psi(x)$$
where, for a Borel set $B \subseteq \mathbb{R}^n$ with $0 \notin B$, $l(B)$ measures the mean number of jumps with size in $B$ that the Lévy process performs in unit time; $l$ is called a Lévy measure.
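The Doob h-transform described above can be made concrete in the simplest possible setting. The sketch below is my own toy illustration (not taken from the thesis): for a finite-state Markov chain the generator $L$ is a rate matrix, a positive vector $h$ with $Lh = 0$ is harmonic, and $L^h g = h^{-1}L(hg)$ becomes the similarity transform $\mathrm{diag}(h)^{-1}\, L\, \mathrm{diag}(h)$, which is again a generator. The chain and the choice of $h$ are arbitrary examples.

```python
import numpy as np

# Toy sketch of Doob's h-transform on a finite-state Markov chain.
# L is the rate matrix of a gambler's-ruin chain on {0,1,2,3} with
# absorbing endpoints (illustrative values, not from the thesis).
L = np.array([[0., 0., 0., 0.],
              [1., -2., 1., 0.],
              [0., 1., -2., 1.],
              [0., 0., 0., 0.]])

h = np.array([1., 2., 3., 4.])         # positive and harmonic: Lh = 0
assert np.allclose(L @ h, 0)

# (L^h g)(i) = h(i)^{-1} (L(h g))(i),  i.e.  L^h = diag(1/h) L diag(h)
Lh = np.diag(1/h) @ L @ np.diag(h)

print(Lh.sum(axis=1))                  # rows sum to 0: L^h is a generator
```

Since $L^h \mathbf{1} = h^{-1}Lh = 0$ and the off-diagonal entries $h_i^{-1}L_{ij}h_j$ stay non-negative, the transformed matrix is itself the generator of a Markov chain, which is the point of the construction.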

Initially I focused my attention on Lévy-type processes, i.e. the solutions of the SDE $dX_t = b(X_t, t)\,dt + dZ_t$, where $Z_t$ is a generic Lévy process. However, after defining the kinematics for these processes, and after assuming that the forces in the dynamical equations were always given by the gradient of a potential $V$, I found some difficulty in linking a process solving the dynamical equation with the wave functions solving a Lévy-Schrödinger equation by means of the Born relation. In order to bypass this standstill, Prof. Morato pointed out that in the Wiener case it is possible to obtain the Hamilton operator from the forward and backward generators of a diffusion process also by means of a Doob-type transformation, without apparently involving a dynamics [1, 2, 3, 47]. She then suggested using these mathematical tools also in the Lévy-Schrödinger case, and I eventually found a way to link a suitable class of Markov processes with the solutions of the Lévy-Schrödinger equations under scrutiny. In this way it was possible to construct Markov processes connected with the solutions of the Lévy-Schrödinger equation hypothesized by Cufaro and Pusterla, and in particular I found that - apart from the trivial free case - these processes are not Lévy-type processes, even if they are closely connected to them. These results show that there is a deep connection between Lévy processes and the newly proposed Lévy-Schrödinger equations, and we are now interested in implementing this association by means of a dynamical principle, as done in the past by Nelson, Guerra, Morato and Yasue in the Wiener case. Future research will be devoted to this aim, together with a more detailed study of these processes based on the solutions of the Cufaro and Pusterla version of the Schrödinger equation.

This thesis is structured as follows. In the first chapter there is a brief review of stochastic mechanics.
In the second chapter I introduce Lévy processes along with their main properties. In the third chapter I study the kinematics of Lévy-type processes ruled by $dX_t = b(X_t, t)\,dt + dZ_t$, deriving the expressions for the forward and backward stochastic velocities associated to them, together with their acceleration. As a particular example I analyze the Ornstein-Uhlenbeck equation $dX_t = -cX_t\,dt + dZ_t$, where $(Z_t)_{t\geq 0}$ is an $\alpha$-stable Lévy process with $0 < \alpha \leq 2$, and I find explicit expressions for its kinematic quantities. In the fourth chapter I start with a short review of Markov processes, focusing my attention on the associated semigroups and their infinitesimal generators. I also give a quick review of Dirichlet forms, which are useful in the discussion. Then, by means of a Doob-type transformation, I associate a Markov process $(X_t)_{t\in\mathbb{R}}$ - satisfying some mild conditions - to

a self-adjoint, bounded from below operator $H$ in the Hilbert space of complex, square integrable functions. The function $\psi$ used in the Doob-type transformation then shows a double nature: it is a solution of the Schrödinger-type equation $i\partial_t\psi = H\psi$, where $H$ plays the role of the Hamilton operator, and it also verifies the Born relation $\rho(x, t) = |\psi(x, t)|^2$, where $\rho$ is the density function of $(X_t)_{t\in\mathbb{R}}$. Finally, I consider some examples showing that $H$ is the usual Hamilton operator of quantum mechanics if $(X_t)_{t\in\mathbb{R}}$ is a Wiener process, and that there are processes - derived from general Lévy processes - that admit as $H$ the operator hypothesized by Cufaro and Pusterla in [14, 15]. The conclusions of my inquiry are summarized in Chapter 5, while an Appendix collects a few calculation details.

I wish to express my gratitude to Prof. Nicola Cufaro Petroni, who introduced me to the world of stochastic mechanics and of Lévy processes, and who patiently supervised my thesis; and to Prof. Laura Morato of the University of Verona, who guided my research with suggestions and constructive criticism, kept me updated on the state of the art of the various topics treated in this thesis, and helped make my stay in Verona comfortable and enjoyable. I also wish to mention Prof. De Martino and Prof. De Siena of the University of Salerno, and Prof. Pusterla of the University of Padova, with whom I spent pleasant time in stimulating conversations on mathematics, on physics and on facts of life. Finally, I express my thanks to the Department of Informatica - Università degli Studi di Verona - which accommodated me during my out-of-office PhD period.

Chapter 1

Stochastic mechanics

1.1 Introduction

Stochastic mechanics was born as an attempt to understand the microscopic world and the predictions of quantum mechanics from an objective perspective, i.e. by means of trajectories, thus attributing elements of physical reality to particles. Actually, stochastic mechanics says much more, since it shows that the dynamics of quantum phenomena is still governed by classical Newtonian laws, provided one takes care to change the definitions of kinematic quantities such as velocity and acceleration, in order to take into account the fluctuations to which microscopic objects are supposed to be subjected. Stochastic mechanics was born from Nelson's works [48, 49, 50], even if Fényes [27] was the first to interpret quantum mechanics in terms of stochastic processes. Also worth mentioning is an article of Bohm and Vigier [6], who hypothesized the presence of a noise affecting the motion of microscopic particles. Other important contributions to the theory came from Albeverio, Carlen, Föllmer, Guerra, Morato, Yasue, Zambrini and others [1, 2, 7, 28, 32, 62, 63, 65, 66]. The theory can be considered a success, since it essentially gives the same physical predictions as the Schrödinger equation [7]; besides, it can take into account spin [19, 26] and the symmetry properties of Bose particles [50], and it finds applications in field theory contexts [32, 68], if one chooses appropriate manifolds for the stochastic processes to live in (here, for simplicity, I will suppose that all processes take values in $\mathbb{R}^n$). However, like any other realistic theory compatible with quantum mechanics, it presents some non-local effects, which displeases part of the physics community. Nelson himself posed some

objections to his theory as a physical theory [51], even if some of them were later refuted [9].

From a mathematical point of view, the noise in stochastic mechanics is essentially given by Wiener processes. However, there are works in which more general Lévy processes are involved: among them we cite Garbaczewski and Olkiewicz [30, 31], Cufaro and Pusterla [14, 15], and Ichinose et al. [36, 37].

The chapter is organized as follows: in Section 1.2, following Nelson's definitions [49], I first define the forward and backward mean velocities for a wide class of stochastic processes, and I investigate their mutual relation, in particular for diffusion processes; then I define the mean acceleration. In Section 1.3 I show how the Schrödinger equation can be deduced once a dynamics, given by Newton's second law or by variational methods, is imposed on a wide class of Markov diffusion processes.

1.2 Kinematics in stochastic mechanics

Velocities

In this section we define the probabilistic counterparts of the typical kinematic quantities of deterministic mechanics, i.e. position, velocity and acceleration. We will essentially follow the approach given by Nelson in [49, 50]. As the position, we will consider an adapted stochastic process $(X_t)_{t\in I}$, defined on a filtered probability space $(\Omega, \mathcal{F}, P, (\mathcal{F}_t)_{t\in I})$, where $I \subseteq \mathbb{R}$ is an interval, taking values in $\mathbb{R}^n$ and such that

1. $E\{|X_t|\} < \infty$, i.e. $X_t \in L^1(\Omega)$, for all $t \in I$;
2. the function $t \mapsto X_t \in L^1(\Omega)$ is continuous.

Then we define the mean forward derivative (or mean forward velocity) as the process
$$DX_t := \lim_{\Delta t \to 0^+} \frac{1}{\Delta t}\, E\{X_{t+\Delta t} - X_t \mid \mathcal{F}_t\} \qquad (1.2.1)$$
when this limit exists in $L^1(\Omega)$ for every $t \in I$.

Some explanation is in order for this definition. Since some basic processes in probability, like the Wiener process, have highly irregular paths (even though these are a.s. continuous), in defining a notion of time derivative for stochastic processes it would be too restrictive to limit ourselves to processes with

smooth paths, so as to perform a classical differentiation of each trajectory $t \mapsto X_t(\omega)$, as $\omega$ varies in $\Omega$. On the other hand, a definition of the type $DX_t := \lim E\{X_{t+\Delta t} - X_t\}/\Delta t$ would be too coarse, and we would eventually lose a lot of useful information: in that case the derivative would be a simple number instead of a process, and we would lose its fluctuation component. So we need something between these two options, and equation (1.2.1) is exactly that. In fact, if for simplicity we suppose that $(X_t)_{t\in I}$ is a Markov process, then for $t \in I$
$$DX_t = \lim_{\Delta t \to 0^+} E\left\{\frac{X_{t+\Delta t} - X_t}{\Delta t} \,\Big|\, X_t\right\}$$
and, given $x \in \mathbb{R}^n$, with $\lim_{\Delta t \to 0^+} E\{(X_{t+\Delta t} - X_t)/\Delta t \mid X_t = x\}$ we perform a limit in mean of $(X_{t+\Delta t} - X_t)/\Delta t$ as $\Delta t \to 0^+$, but only over the trajectories passing through $x$ at time $t$.

Remarks

i) Definition (1.2.1) is not uncommon in probability theory and stochastic processes: if $(X_t)_{t\geq 0}$ is a time-homogeneous Markov process, and if $(A, D(A))$ is the infinitesimal generator¹ associated to $(X_t)_{t\geq 0}$, then for every function $f \in D(A)$ the process $[f(X_t)]_{t\geq 0}$ admits a mean forward derivative and
$$Df(X_t) = [Af](X_t)$$
It must not, however, be confused with the Malliavin derivative [46], which is another type of derivative in probability.

ii) Every martingale $(M_t)_{t\geq 0}$ w.r.t. $(\mathcal{F}_t)_{t\geq 0}$ admits a mean forward derivative, and $DM_t = 0$.

iii) If $(X_t)_{t\in I}$ admits an $L^1$ derivative, i.e. if the limit
$$\lim_{\Delta t \to 0^+} \frac{X_{t+\Delta t} - X_t}{\Delta t} \qquad (1.2.2)$$
exists in $L^1(\Omega)$, then $(X_t)_{t\in I}$ admits a mean forward derivative, which is equal to (1.2.2).

¹ See Chap. 4 for a brief review of infinitesimal generators associated to Markov processes.
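The conditional-mean reading of (1.2.1) can also be checked numerically. The sketch below is my own illustration (not from the thesis): for an Euler-discretized diffusion with the arbitrary drift $b(x) = -x$, averaging the forward increment only over the trajectories that pass near a point $x_0$ recovers $b(x_0)$, in agreement with $DX_t = b(X_t, t)$.

```python
import numpy as np

# Monte Carlo illustration of the mean forward derivative: for
# dX_t = b(X_t) dt + dW_t with b(x) = -x, conditioning the increment
# on X_t ~ x0 should give DX_t ~ b(x0).  All parameters are arbitrary.
rng = np.random.default_rng(0)
n_paths, dt, n_steps = 500_000, 0.01, 50
b = lambda x: -x

X = rng.normal(0.0, 1.0, n_paths)      # X_0 ~ N(0, 1)
for _ in range(n_steps):               # Euler-Maruyama up to t = 0.5
    X += b(X)*dt + np.sqrt(dt)*rng.normal(size=n_paths)

# one further step gives the forward increment X_{t+dt} - X_t
dX = b(X)*dt + np.sqrt(dt)*rng.normal(size=n_paths)

x0 = 0.8
mask = np.abs(X - x0) < 0.05           # trajectories passing near x0
est = (dX[mask]/dt).mean()
print(est)                             # close to b(x0) = -0.8
```

Without the conditioning (i.e. averaging `dX/dt` over all paths) one would instead get the "coarse" derivative $E\{X_{t+\Delta t}-X_t\}/\Delta t$, a single number in which the fluctuation structure is lost.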

Notice that from the mean forward derivative $(DX_t)_{t\in I}$ we can reconstruct the original process $(X_t)_{t\in I}$, at least up to a martingale. Indeed, given $a, b \in I$, one can prove [49] that
$$E\{X_b - X_a \mid \mathcal{F}_a\} = E\left\{\int_a^b DX_s\, ds \,\Big|\, \mathcal{F}_a\right\}$$
So, if $t_0$ is the lower extreme of $I$, defining
$$M_t = X_t - \int_{t_0}^t DX_r\, dr \qquad (1.2.3)$$
we have
$$E\{M_t \mid \mathcal{F}_s\} = E\left\{X_t - \int_{t_0}^t DX_r\, dr \,\Big|\, \mathcal{F}_s\right\} = X_s - \int_{t_0}^s DX_r\, dr = M_s$$

We have said that the conditional expectation smooths the irregularities of the paths, so that we can perform a differentiation. However, this is not enough to neglect the distinction between right and left derivatives; so, together with the mean forward derivative, which essentially is a right derivative, we have to consider the mean backward derivative, i.e. its left-side counterpart. In order to do this, we need another family of $\sigma$-algebras $\bar{\mathcal{F}}_t \subseteq \mathcal{F}$, $t \in I$, such that
$$\bar{\mathcal{F}}_s \supseteq \bar{\mathcal{F}}_t, \quad \text{if } s \leq t$$
The family $(\bar{\mathcal{F}}_t)_{t\in I}$ is thus a backward filtration: as the elements of $\mathcal{F}_t$ can be interpreted as the events occurred in the past, up to time $t$, the elements of $\bar{\mathcal{F}}_t$ may be considered the events that will happen in the future, beyond time $t$. So, given a process $(X_t)_{t\in I}$ which satisfies 1 and 2, but adapted to $(\bar{\mathcal{F}}_t)_{t\in I}$, the mean backward derivative (or velocity) $\bar{D}X_t$ is defined as
$$\bar{D}X_t := \lim_{\Delta t \to 0^+} \frac{1}{\Delta t}\, E\{X_t - X_{t-\Delta t} \mid \bar{\mathcal{F}}_t\} \qquad (1.2.4)$$
with the limit taken in $L^1(\Omega)$ as before. Obviously an analogue of (1.2.3) holds for the mean backward derivative as well.

Other velocities can be defined. If $(X_t)_{t\in I}$ is a process admitting both forward and backward mean velocities, we can take the arithmetic mean of $DX_t$ and $\bar{D}X_t$ and define a new velocity, called the current velocity $v_t$,

$$v_t := \frac{DX_t + \bar{D}X_t}{2}$$
while if we take the difference, divided by 2, we get the so-called osmotic velocity $u_t$,
$$u_t := \frac{DX_t - \bar{D}X_t}{2}$$
In comparison with deterministic mechanics, the current velocity takes the role of the ordinary velocity; for example, it changes sign under time reversal. The osmotic velocity, on the other hand, is specific to stochastic mechanics; it is meaningless in ordinary mechanics, where it would usually be zero. Contrary to the current velocity, the osmotic velocity does not change sign under time reversal.

Obviously, there is a strict relation between the forward and backward stochastic mean velocities. First, given a Markov process $(X_t)_{t\in I}$, consider the time-reversed process $(\hat{X}_t)_{t\in \hat{I}}$, where $t \in \hat{I}$ if $-t \in I$ and $\hat{X}_t = X_{-t}$, which is still Markov; then
$$D\hat{X}_t = \lim_{\Delta t \to 0^+} \frac{1}{\Delta t}\, E\{\hat{X}_{t+\Delta t} - \hat{X}_t \mid \hat{X}_t\} = \lim_{\Delta t \to 0^+} \frac{1}{\Delta t}\, E\{X_{-t-\Delta t} - X_{-t} \mid X_{-t}\} = -[\bar{D}X]_{-t}$$
So, apart from some minus signs, the forward mean velocity of a Markov process is the backward mean velocity of its time-reversed process, and vice versa. Besides this obvious relation, there is the following theorem, which is very important:

Theorem. Let $(X_t)_{t\in I}$ and $(Y_t)_{t\in I}$ be two processes defined on the same filtered probability space $(\Omega, \mathcal{F}, P, (\mathcal{F}_t)_{t\in I})$ and continuous in $L^2(\Omega)$. Suppose moreover that $DX_t$ and $\bar{D}Y_t$ both exist and are continuous in $L^2(\Omega)$. Then for every $a, b \in I$, $a \leq b$, we have
$$E\{X_b Y_b - X_a Y_a\} = \int_a^b E\{Y_t\, DX_t + X_t\, \bar{D}Y_t\}\, dt. \qquad (1.2.5)$$

Proof. To prove (1.2.5), divide $[a, b]$ into $m$ subintervals of length $\delta = (b-a)/m$; that is, define $t_0, t_1, \ldots, t_m$ with $t_0 = a$, $t_i = \delta + t_{i-1}$ and $t_m = b$. Then we have

$$E\{X_b Y_b\} - E\{X_a Y_a\} = \lim_{m\to\infty} \sum_{i=1}^{m-1} E\{X_{t_{i+1}} Y_{t_i} - X_{t_i} Y_{t_{i-1}}\}$$
$$= \lim_{m\to\infty} \sum_{i=1}^{m-1} E\left\{(X_{t_{i+1}} - X_{t_i})\,\frac{Y_{t_i} + Y_{t_{i-1}}}{2} + \frac{X_{t_{i+1}} + X_{t_i}}{2}\,(Y_{t_i} - Y_{t_{i-1}})\right\}$$
$$= \lim_{m\to\infty} \sum_{i=1}^{m-1} E\left\{\frac{X_{t_{i+1}} - X_{t_i}}{t_{i+1} - t_i}\,\frac{Y_{t_i} + Y_{t_{i-1}}}{2} + \frac{X_{t_{i+1}} + X_{t_i}}{2}\,\frac{Y_{t_i} - Y_{t_{i-1}}}{t_i - t_{i-1}}\right\}\delta = \int_a^b E\{Y_t\, DX_t + X_t\, \bar{D}Y_t\}\, dt$$

There is another integration-by-parts formula; in order to state it, however, we have to introduce the quadratic variation and quadratic covariation processes. Let $(X_t)_{t\in I}$ be a stochastic process on a filtered probability space $(\Omega, \mathcal{F}, P, (\mathcal{F}_t)_{t\in I})$. Then, if $t_0$ is the lower extreme of $I$, we define the quadratic variation process $([X, X]_t)_{t\in I}$ as the matrix-valued process with components
$$[X^i, X^j]_t = X^i_{t_0} X^j_{t_0} + \lim_{\delta \to 0^+} \sum_{l=0}^{k-1} (X^i_{t_{l+1}} - X^i_{t_l})(X^j_{t_{l+1}} - X^j_{t_l}) \qquad (1.2.6)$$
when the limit exists in $L^1(\Omega)$. Here $t_0 \leq t_1 \leq \ldots \leq t_k = t$ is a partition of $[t_0, t]$, while $\delta = \max_i\{t_{i+1} - t_i\}$. From (1.2.6) one can verify that, in one dimension, $[X, X]$ is a P-a.s. increasing process, so it can be used as the integrator of a Lebesgue-Stieltjes integral. We note that in the literature the differential $d[X, X]_t$ is often denoted by $(dX_t)^2$, which makes the choice of the name clear. As a simple example, in almost every book on stochastic processes it is proved that for a one-dimensional Wiener process $(W_t)_{t\geq 0}$
$$[W, W]_t = t$$
while if $(N_t)_{t\geq 0}$ is a Poisson process, then
$$[N, N]_t = N_t$$
If $(Y_t)_{t\in I}$ is another stochastic process on $(\Omega, \mathcal{F}, P, (\mathcal{F}_t)_{t\in I})$, one can define the quadratic covariation process between $(X_t)_{t\in I}$ and $(Y_t)_{t\in I}$ as

$$[X^i, Y^j]_t = X^i_{t_0} Y^j_{t_0} + \lim_{\delta \to 0^+} \sum_{l=0}^{k-1} (X^i_{t_{l+1}} - X^i_{t_l})(Y^j_{t_{l+1}} - Y^j_{t_l}) \qquad (1.2.7)$$
From (1.2.6) and (1.2.7) one easily verifies that $[X, Y]_t = [Y, X]_t$ and
$$[X, Y]_t = \frac{1}{2}\left([X + Y, X + Y]_t - [X, X]_t - [Y, Y]_t\right)$$
So, being the difference of increasing processes, the quadratic covariation is a finite variation process, and it too can be used as an integrator for Lebesgue-Stieltjes integrals; $d[X, Y]_t$ is often denoted by $dX_t\, dY_t$. For a more detailed discussion of quadratic variations and their properties see, for example, Protter [54].

At this point, under the hypotheses of the theorem above, and supposing that $X$ and $Y$ admit an integrable quadratic covariation on $[a, b]$, we have
$$E\{X_b Y_b - X_a Y_a\} = \int_a^b E\{Y_t\, DX_t + X_t\, DY_t\}\, dt + E\{[X, Y]_b - [X, Y]_a\} \qquad (1.2.8)$$
The proof of (1.2.8) follows easily from the proof of that theorem, by noting that
$$X_{t_{i+1}} Y_{t_{i+1}} - X_{t_i} Y_{t_i} = X_{t_i}(Y_{t_{i+1}} - Y_{t_i}) + Y_{t_i}(X_{t_{i+1}} - X_{t_i}) + (Y_{t_{i+1}} - Y_{t_i})(X_{t_{i+1}} - X_{t_i})$$
Then from (1.2.5) and (1.2.8) we obtain
$$\int_a^b E\{X_t\, \bar{D}Y_t\}\, dt = \int_a^b E\{X_t\, DY_t\}\, dt + E\{[X, Y]_b - [X, Y]_a\} \qquad (1.2.9)$$
Before turning to the definition of the acceleration, we want to make these relations between forward and backward velocities explicit for an important class of stochastic processes, the smooth Markovian diffusions, which we will analyze in the following paragraph. There we will also see an explicit expression for $u_t$ for some particular diffusion processes, which will make its name clear.
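The basic identity $[W, W]_t = t$ recalled above can be observed numerically. The following quick sketch is my own (not from the thesis): summing the squared increments of a simulated Wiener path over a fine partition of $[0, 1]$ gives a value close to $1$.

```python
import numpy as np

# Quadratic variation of a Wiener path: over a fine partition of [0, t]
# the sum of squared increments approximates [W, W]_t = t (here t = 1).
rng = np.random.default_rng(1)
t, n = 1.0, 200_000
dW = rng.normal(0.0, np.sqrt(t/n), n)  # i.i.d. Wiener increments
qv = np.sum(dW**2)
print(qv)                              # close to t = 1
```

The fluctuation of the sum around $t$ has standard deviation of order $\sqrt{2t^2/n}$, so refining the partition makes the convergence in (1.2.6) visible directly.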

Diffusions

We define a smooth Markov diffusion as follows: given a probability space $(\Omega, \mathcal{F}, P)$, an a.s. continuous Markov process $(X_t)_{t\in I}$ with values in $\mathbb{R}^n$ is a smooth diffusion with coefficients $\sigma^{ij}(x, t)$, $b^i(x, t)$ and $\bar{b}^i(x, t)$, $1 \leq i, j \leq n$, if the mean forward and backward velocities of $(X_t)_{t\in I}$ exist and
$$DX_t = b(X_t, t), \qquad \bar{D}X_t = \bar{b}(X_t, t) \qquad (1.2.10)$$
$$d[X, X]_t = \sigma(X_t, t)\, dt \quad P\text{-a.s.} \qquad (1.2.11)$$
Here $\sigma^{ij}$, $b^i$ and $\bar{b}^i$ are all smooth functions, and $\sigma^{ij}$ is bounded and forms the entries of a strictly positive definite square matrix. Essentially by Lévy's theorem² it can be shown that such processes are solutions of SDEs of the type
$$dX_t = b(X_t, t)\, dt + \sigma^{\frac{1}{2}}(X_t, t)\, dW_t \qquad (1.2.12)$$
where $(W_t)_{t\in I}$ is a Wiener process in $\mathbb{R}^n$ w.r.t. the filtration $(\mathcal{F}_t)_{t\in I}$, with $\mathcal{F}_t := \sigma\{X_s : s \in I, s \leq t\}$, and $\sigma^{\frac{1}{2}}$ is a square matrix function such that for every $x \in \mathbb{R}^n$ and $t \in I$
$$\sigma^{\frac{1}{2}}(x, t)\, \sigma^{\frac{1}{2}}(x, t) = \sigma(x, t)$$
Besides, $(X_t)_{t\in I}$ satisfies
$$dX_t = \bar{b}(X_t, t)\, dt + \sigma^{\frac{1}{2}}(X_t, t)\, d\bar{W}_t$$
where now $(\bar{W}_t)_{t\in I}$ is a reversed Wiener process, i.e. a Wiener process w.r.t. the backward filtration $(\bar{\mathcal{F}}_t)_{t\in I}$, where $\bar{\mathcal{F}}_t := \sigma\{X_u : u \in I, u \geq t\}$.

Notice the following important result due to Föllmer [28], which states when a solution of the stochastic differential equation
$$dX_t = b(X_t, t)\, dt + dW_t \qquad (1.2.13)$$
actually is a diffusion in the sense given above (namely, when a process $X_t$ solving (1.2.13) admits an integrable mean backward derivative, for a given

² Lévy's theorem states that a stochastic process $(X_t)_{t\geq 0}$ is a standard Wiener process if and only if it is a continuous local martingale with $[X, X]_t = t$.

function $b$). It turns out that if the process $(b(X_t, t))_{t\in I}$ satisfies
$$E\left\{\int_I |b(X_t, t)|^2\, dt\right\} < \infty$$
then $(X_t)_{t\in I}$ admits a mean backward derivative, and it is a diffusion in the sense given above³.

Now, given $f \in C_0^\infty(\mathbb{R}^n \times I)$, i.e. an infinitely differentiable function with compact support, by Itô's formula for continuous processes
$$f(X_t) - f(X_{t_0}) = \int_{t_0}^t \partial_i f(X_s)\, dX^i_s + \frac{1}{2}\int_{t_0}^t \partial^2_{i,j} f(X_s)\, d[X^i, X^j]_s \qquad (1.2.14)$$
(repeated indices are summed) and by the well known properties of the quadratic covariation [54], we have
$$Df(X_t, t) = \left[\partial_t + b(X_t, t)\cdot\nabla + \frac{1}{2}\sigma^{ij}(X_t, t)\,\partial^2_{i,j}\right] f(X_t, t) \qquad (1.2.15)$$
and
$$\bar{D}f(X_t, t) = \left[\partial_t + \bar{b}(X_t, t)\cdot\nabla - \frac{1}{2}\sigma^{ij}(X_t, t)\,\partial^2_{i,j}\right] f(X_t, t) \qquad (1.2.16)$$
The operators
$$b^i\partial_i + \frac{1}{2}\sigma^{ij}\partial^2_{i,j}, \qquad \bar{b}^i\partial_i - \frac{1}{2}\sigma^{ij}\partial^2_{i,j}$$
are called respectively the forward and backward diffusion operators. They give insight into the time evolution of $\rho(dx, t)$, the distribution of $X_t$. Let us for simplicity suppose that $\sigma^{ij} = \nu\delta^{ij}$, with $\nu > 0$. For every function $f \in C_0^\infty(\mathbb{R}^n \times I)$ - i.e. a function with compact support also in the time variable $t$ - by applying (1.2.5) with one of the two processes identically equal to 1, we have
$$0 = \int_I \frac{d}{dt}\, E\{f(X_t, t)\}\, dt = \int_I E\{Df(X_t, t)\}\, dt$$
and by (1.2.15)

³ A more detailed discussion of Föllmer's results will be given in a later Remark.

$$0 = \int_I E\left\{\left[\partial_t + b\cdot\nabla + \frac{\nu}{2}\Delta\right] f(X_t, t)\right\} dt = \int_{\mathbb{R}^n\times I}\left[\partial_t + b\cdot\nabla + \frac{\nu}{2}\Delta\right] f(x, t)\, \rho(dx, t)\, dt$$
Integrating by parts, we obtain that the measure $\rho$ is a weak solution of the equation
$$\partial_t\rho = \frac{\nu}{2}\Delta\rho - \nabla\cdot(b\rho) \qquad (1.2.17)$$
called the forward Fokker-Planck equation. Of course $\rho$ also satisfies a similar equation for the backward process, the backward Fokker-Planck equation:
$$\partial_t\rho = -\frac{\nu}{2}\Delta\rho - \nabla\cdot(\bar{b}\rho) \qquad (1.2.18)$$
(1.2.17) is a parabolic equation with smooth coefficient $b$. From the theory of parabolic equations (see [25] for example) we know that (1.2.17) admits as solutions probability measures which are absolutely continuous w.r.t. the Lebesgue measure for every $t \in I$, and whose density functions, which we call $\rho(x, t)$, are smooth both in $x$ and $t$. It can be shown that equations analogous to (1.2.17) and (1.2.18) are satisfied also by the forward and backward transition probability functions $p(x, s; dy, t)$ and $\bar{p}(y, t; dx, s)$ associated to $(X_t)_{t\in I}$. We recall that, given $s, t \in I$, $s \leq t$, and $x, y \in \mathbb{R}^n$, for a Borel set $B$
$$p(x, s; B, t) := P(X_t \in B \mid X_s = x), \qquad \bar{p}(y, t; B, s) := P(X_s \in B \mid X_t = y)$$
Now, using the results of the previous section, we will make explicit the relation between the coefficients $b^i$ and $\bar{b}^i$ for the diffusion $(X_t)_{t\in I}$. Let us take two functions $f, g \in C_0^\infty(\mathbb{R}^n)$. It is easy to prove that the processes $(f(X_t))_{t\geq 0}$ and $(g(X_t))_{t\geq 0}$ satisfy the hypotheses of the integration by parts theorem (1.2.5). We can apply (1.2.9) and obtain
$$\int_I E\{f(X_t)\,\bar{D}g(X_t)\}\, dt = \int_I E\{f(X_t)\, Dg(X_t)\}\, dt + E\left\{\int_I d[f(X), g(X)]_t\right\} \qquad (1.2.19)$$
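As a quick sanity check (my own sketch, not from the thesis), the forward Fokker-Planck equation (1.2.17) can be verified symbolically in one dimension for the simplest special case, a constant drift $b$, for which the density of $X_t = bt + \sqrt{\nu}\,W_t$ is an explicit Gaussian:

```python
import sympy as sp

# Symbolic check in one dimension: for constant drift b, the Gaussian
# density of X_t = b t + sqrt(nu) W_t solves the forward Fokker-Planck
# equation  d_t rho = (nu/2) d_xx rho - d_x (b rho)   (1.2.17).
x, t, b, nu = sp.symbols('x t b nu', positive=True)
rho = sp.exp(-(x - b*t)**2/(2*nu*t)) / sp.sqrt(2*sp.pi*nu*t)

fp = sp.diff(rho, t) - (nu/2*sp.diff(rho, x, 2) - sp.diff(b*rho, x))
print(sp.simplify(fp))                 # 0
```

The same computation with a nonconstant smooth drift would require solving (1.2.17) numerically; the constant-drift case is just the minimal instance where the weak solution is available in closed form.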

Then, from Itô's formula (1.2.14), from (1.2.11) and the properties of the quadratic covariation, (1.2.19) becomes
$$\int_I E\{f(X_t)\,\bar{D}g(X_t)\}\, dt = \int_I E\{f(X_t)\, Dg(X_t)\}\, dt + \nu\int_I E\{\nabla f(X_t)\cdot\nabla g(X_t)\}\, dt$$
which we can rewrite as
$$E\{f(X_t)\,\bar{D}g(X_t)\} = E\{f(X_t)\, Dg(X_t)\} + \nu\, E\{\nabla f(X_t)\cdot\nabla g(X_t)\}$$
We now use (1.2.15) and (1.2.16) for $Dg$ and $\bar{D}g$:
$$\int_{\mathbb{R}^n} f\left[\bar{b}\cdot\nabla g - \frac{\nu}{2}\Delta g\right]\rho\, dx = \int_{\mathbb{R}^n} f\left[b\cdot\nabla g + \frac{\nu}{2}\Delta g\right]\rho\, dx + \nu\int_{\mathbb{R}^n}\nabla f\cdot\nabla g\,\rho\, dx \qquad (1.2.20)$$
But the last integral on the right of (1.2.20), upon integration by parts, equals
$$-\nu\int_{\mathbb{R}^n} f\left[\Delta g + \frac{\nabla\rho}{\rho}\cdot\nabla g\right]\rho\, dx$$
where we set $\nabla\rho/\rho = 0$ if $\rho = 0$. Then we can rewrite (1.2.20) as
$$\int_{\mathbb{R}^n} f\left[\bar{b}\cdot\nabla g - \frac{\nu}{2}\Delta g\right]\rho\, dx = \int_{\mathbb{R}^n} f\left[\left(b - \nu\frac{\nabla\rho}{\rho}\right)\cdot\nabla g - \frac{\nu}{2}\Delta g\right]\rho\, dx$$
and finally, since $f$ and $g$ are arbitrary, we obtain
$$\bar{b} = b - \nu\frac{\nabla\rho}{\rho} \qquad (1.2.21)$$
This is the relation between forward and backward velocities we were looking for. By (1.2.21), for the osmotic velocity we have
$$u = \frac{b - \bar{b}}{2} = \frac{\nu}{2}\frac{\nabla\rho}{\rho} = \nu\nabla\log\rho^{\frac{1}{2}} \qquad (1.2.22)$$
So the presence of the typical osmotic pressure term $\nabla\rho/\rho$ explains the name of $u$. For the current velocity $v$ we have

$$v = \frac{b + \bar{b}}{2} = b - \frac{\nu}{2}\frac{\nabla\rho}{\rho} \qquad (1.2.23)$$
Note that, by summing the forward and backward Fokker-Planck equations (1.2.17) and (1.2.18), and taking into account (1.2.23), we get
$$\partial_t\rho = -\nabla\cdot(v\rho) \qquad (1.2.24)$$
So $\rho$ also satisfies a continuity equation, in which the role of the current density is played by $v\rho$; this explains the terminology for $v$.

Acceleration

The last quantity needed to complete the kinematics of stochastic processes is the mean acceleration. It is defined by Nelson [50] as
$$a := \frac{\bar{D}DX_t + D\bar{D}X_t}{2} \qquad (1.2.25)$$
Actually, other consistent choices for $a$ are possible⁴; this definition, however, is justified by variational methods (see [32, 50, 62, 63, 65, 66, 67]). For a diffusion process, supposing $b$ and $\rho$ smooth enough, by (1.2.15) and (1.2.16) $a$ reads
$$a = \frac{\bar{D}b + D\bar{b}}{2} = \partial_t\left(\frac{b + \bar{b}}{2}\right) + \frac{1}{2}\left(b\cdot\nabla\bar{b} + \bar{b}\cdot\nabla b\right) + \frac{\nu}{4}\Delta(\bar{b} - b)$$
and making use of the current and osmotic velocities $v$ and $u$ we get
$$a = \partial_t v + (v\cdot\nabla)v - (u\cdot\nabla)u - \frac{\nu}{2}\Delta u \qquad (1.2.26)$$

1.3 Dynamics in stochastic mechanics

Having obtained the kinematics, we can now turn to the dynamics. As in deterministic mechanics, the dynamics can be defined in stochastic mechanics in two ways: by Newton's second law or by a variational method. I will show only the first one; about variational methods in stochastic mechanics, see [32, 48, 62, 63, 66, 67].

⁴ See Zambrini [67] about the choice $(DDX_t + \bar{D}\bar{D}X_t)/2$ for the definition of the mean acceleration.

Let us suppose that a mechanical system is represented by a process $(X_t)_{t\geq 0}$ defined on a probability space $(\Omega, \mathcal{F}, P)$, taking values in $\mathbb{R}^n$ and adapted to a forward and a backward filtration; $X_t$ thus gives the coordinates of the system at time $t$. If the system is subject to a field of forces given by a function $F$, the dynamical principle of stochastic mechanics imposes that
$$F(X_t, t) = ma = m\,\frac{\bar{D}DX_t + D\bar{D}X_t}{2} \qquad (1.3.1)$$
where $m$ is the mass of the system. Let us see the consequences of relation (1.3.1). For simplicity suppose that $F$ is a conservative force, i.e. that there exists a smooth function $V$ such that $F = -\nabla V$. Applying (1.2.26) in (1.3.1) we get
$$\partial_t v = -\frac{\nabla V}{m} + (u\cdot\nabla)u - (v\cdot\nabla)v + \frac{\nu}{2}\Delta u \qquad (1.3.2)$$
So the dynamical equation (1.3.1) reduces to an equation for the current and osmotic velocities, which can be solved only if coupled with the further equation
$$\partial_t u = -\frac{\nu}{2}\nabla(\nabla\cdot v) - \nabla(v\cdot u) \qquad (1.3.3)$$
which can be derived from (1.2.22) and the continuity equation (1.2.24). So, to the question "what is the diffusion process, for a given initial distribution and initial forward drift, with its mean acceleration fixed by the dynamical equation (1.3.1)?", one can answer by solving equations (1.3.2) and (1.3.3) for given initial conditions on $u$ and $v$, which by (1.2.22) and (1.2.23) are determined by the initial values of $\rho$ and $b$. Then, by the relation
$$b(x, t) = v(x, t) + u(x, t)$$
which easily follows from the definitions of $u$ and $v$, one finds the expression of the forward drift $b$ for every time $t \in I$. At this point the stochastic differential equation
$$dX_t = b(X_t, t)\, dt + dW_t$$
is defined, and its solution⁵ is the diffusion we looked for. The pair of equations (1.3.2) and (1.3.3) is nonlinear, so solving them could be a difficult task. However, we will now show how to replace them

⁵ Actually a weak solution (see the relevant Section for details), since no Wiener process is initially given.

with a single linear complex equation. We have seen in (1.2.22) that the osmotic velocity $u$ can be written as the gradient of a function $R$:
$$u = \nu\frac{\nabla\rho}{2\rho} = \nu\nabla\log\rho^{1/2} = \nu\nabla R \qquad (1.3.4)$$
with $R = \log\rho^{1/2}$. If we suppose⁶ that $v$ too can be written as the gradient of a smooth function $S$, i.e.
$$v = \nu\nabla S \qquad (1.3.5)$$
then we have
$$(u\cdot\nabla)u = \frac{\nu^2}{2}\nabla\left(|\nabla R|^2\right), \qquad (v\cdot\nabla)v = \frac{\nu^2}{2}\nabla\left(|\nabla S|^2\right)$$
and (1.3.2) and (1.3.3) become
$$\partial_t S = -\frac{V}{m\nu} + \frac{\nu}{2}\left(|\nabla R|^2 - |\nabla S|^2\right) + \frac{\nu}{2}\Delta R \qquad (1.3.6)$$
$$\partial_t R = -\nu\nabla S\cdot\nabla R - \frac{\nu}{2}\Delta S \qquad (1.3.7)$$
At this point, if we define the complex function
$$\psi = e^{R + iS} \qquad (1.3.8)$$
it is straightforward to see that $\psi$ satisfies a complex partial differential equation, namely the Schrödinger equation
$$i\nu\,\partial_t\psi = -\frac{\nu^2}{2}\Delta\psi + \frac{V}{m}\psi \qquad (1.3.9)$$
To see this, just substitute (1.3.8) into (1.3.9), divide by $\psi$ itself and take the real and imaginary parts; they correspond respectively to (1.3.6) and (1.3.7). The potential term $V$ appearing in (1.3.9) is the same potential that gives rise to the field of forces $F$ in (1.3.1). We note that $|\psi|^2 = \rho$ from (1.3.8) and (1.3.4), while from (1.3.4) and (1.3.5) we have

⁶ Relation (1.3.5) can actually be deduced from variational principles.

    b = ∇(R + S) = (Re + Im)(∇ψ/ψ),    b∗ = ∇(S − R) = (Im − Re)(∇ψ/ψ)

since ∇ψ/ψ = ∇R + i∇S, so that Re(∇ψ/ψ) = ∇R = u and Im(∇ψ/ψ) = ∇S = v.
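The equivalence between the Schrödinger equation (1.3.9) and the pair (1.3.6)-(1.3.7) is purely algebraic, so it can be checked symbolically. The following sketch (one space dimension, m = 1, with R, S, V as generic SymPy functions — names chosen here only to mirror the text) substitutes ψ = e^{R+iS} into (1.3.9), imposes (1.3.6) and (1.3.7), and finds that the residual vanishes identically.

```python
import sympy as sp

x, t = sp.symbols('x t')
R = sp.Function('R')(x, t)
S = sp.Function('S')(x, t)
V = sp.Function('V')(x)

psi = sp.exp(R + sp.I * S)   # the wave function (1.3.8)

# residual of the Schrodinger equation (1.3.9): i d_t psi + (1/2) d_xx psi - V psi
residual = sp.I * sp.diff(psi, t) + sp.diff(psi, x, 2) / 2 - V * psi
residual = sp.expand(residual * sp.exp(-R - sp.I * S))   # divide by psi

# impose the coupled real equations (1.3.6)-(1.3.7)
residual = residual.subs({
    sp.diff(S, t): -V + (sp.diff(R, x)**2 - sp.diff(S, x)**2) / 2 + sp.diff(R, x, 2) / 2,
    sp.diff(R, t): -sp.diff(S, x, 2) / 2 - sp.diff(S, x) * sp.diff(R, x),
})
residual = sp.expand(residual)
print(residual)   # 0: (1.3.9) holds identically once (1.3.6)-(1.3.7) do
```

Running the same computation without the substitutions and separating real and imaginary parts of the residual reproduces (1.3.6) and (1.3.7), as described in the text.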


Chapter 2

Lévy processes

2.1 Introduction

In this chapter we give a brief review of Lévy processes, focusing on the properties that will be of use in the following. We start with their definition and with the definition of the associated Lévy measures. Then we describe the infinitely divisible distributions, in particular the stable and self-decomposable ones, and we show the one-to-one correspondence between Lévy processes and these distributions. After some examples of Lévy processes, we give the Lévy-Khintchine formula and the strictly related Lévy-Itô decomposition, which decomposes a Lévy process into the sum of two independent processes, one with continuous paths and the other with pure-jump paths. Then, after remarking that Lévy processes are Markov processes, we focus our attention on the semigroups and the infinitesimal generators associated to them, in particular for the symmetric ones. Finally we give a brief review of semimartingales (every Lévy process is a semimartingale), defining their Itô integrals and the relative SDEs.

2.2 Definitions

Definition 2.2.1. Let (Ω, F, (F_t)_{t≥0}, P) be a filtered complete probability space. A process (Z_t)_{t≥0} on ℝⁿ adapted to (F_t)_{t≥0} is said to be a Lévy process if

1. Z_0 = 0 a.s.

2. Z_t − Z_s is independent from F_s, for 0 ≤ s ≤ t.

3. Z_{t+h} − Z_t = Z_{s+h} − Z_s in distribution, for every 0 ≤ s, t, h.

4. Z_t is stochastically continuous, i.e. Z_{t+Δt} − Z_t → 0 in probability as Δt → 0.

Remark. A process Z_t satisfying just conditions 1-3 is said to be an additive process.

It is also possible to define a Lévy process without involving a filtration.

Definition. Let (Ω, F, P) be a complete probability space. A process (Z_t)_{t≥0} is said to be an intrinsic Lévy process if

1. Z_0 = 0 a.s.

2. Z_t − Z_s is independent from Z_v − Z_u, for 0 ≤ s ≤ t, 0 ≤ u ≤ v, whenever [s, t] ∩ [u, v] = ∅.

3. Z_{t+h} − Z_t = Z_{s+h} − Z_s in distribution, for every 0 ≤ s, t, h.

4. Z_t is stochastically continuous, i.e. Z_{t+Δt} − Z_t → 0 in probability as Δt → 0.

It is straightforward to see that an intrinsic Lévy process is also a Lévy process: in fact, if (Z_t)_{t≥0} is an intrinsic Lévy process, then given the filtrations F_t := σ{Z_s : 0 ≤ s ≤ t} and F̄_t := F_t ∨ N, where N is the σ-algebra of the sets of measure zero w.r.t. P, it results that (Z_t)_{t≥0} is also a Lévy process w.r.t. both F_t and F̄_t.

Remarks. The filtration (F_t)_{t≥0} is often called the natural filtration associated to (Z_t)_{t≥0}; F̄_t is the σ-algebra generated by F_t when it is completed with the sets of measure zero. It can be proved that the filtration (F̄_t)_{t≥0} is right continuous, i.e. F̄_t = ∩_{u>t} F̄_u. Right continuous filtrations are very common: for example, the natural filtrations associated to a large class of Markov processes (once completed with the sets of measure zero) are of this type; besides, when a filtration is right continuous many useful properties hold. For this reason in the following we will tacitly assume that the filtrations contain all the sets of measure zero and are right continuous.

From these definitions it is possible to prove [54, 59] that every Lévy process Z_t has a version¹ with càdlàg paths, i.e. trajectories right continuous with left limits, P-a.s., which is still a Lévy process. In the following we always assume that all Lévy processes are càdlàg.

1 A process (Z̃_t)_{t≥0} is a version of another process (Z_t)_{t≥0} if Z̃_t = Z_t P-a.s. for every t ≥ 0.
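A minimal simulation sketch may help fix ideas. The following (illustrative function name and parameters, Python/NumPy) builds a grid approximation of a simple Lévy process — a Brownian motion plus a compound Poisson part — so that Z_0 = 0 and the increments over disjoint grid cells are independent and identically distributed, as the definition requires.

```python
import numpy as np

rng = np.random.default_rng(0)

def levy_path(T=1.0, n=10_000, sigma=1.0, lam=5.0):
    """Grid approximation of a simple Levy process: a Brownian part plus a
    compound Poisson part with Gaussian N(0, 2^2) jumps arriving at rate lam.
    Increments over the n disjoint cells are i.i.d., and Z_0 = 0."""
    dt = T / n
    dW = sigma * rng.normal(0.0, np.sqrt(dt), n)   # Gaussian increments
    dN = rng.poisson(lam * dt, n)                  # jump counts per cell
    dJ = np.array([rng.normal(0.0, 2.0, k).sum() for k in dN])
    return np.concatenate([[0.0], np.cumsum(dW + dJ)])

Z = levy_path()   # one sample trajectory on [0, 1]
```

The trajectory is piecewise irregular: mostly small diffusive moves, interrupted by the occasional large jump contributed by the compound Poisson part — the qualitative picture behind the Lévy-Itô decomposition of Section 2.4.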

2.3 Lévy measures

Since Lévy processes have in general non-continuous paths, it is interesting to study the distribution of their jumps, i.e. of the discontinuities of their trajectories. This information can be obtained from particular measures derived from Lévy processes, called Lévy measures. Actually a Lévy measure can be defined independently of a Lévy process: let l be a positive measure on the measurable space² (ℝⁿ_0, B(ℝⁿ_0)), for n ∈ ℕ_0. Then l is called a Lévy measure if

    ∫_{ℝⁿ_0} (|y|² ∧ 1) l(dy) < ∞    (2.3.1)

(a ∧ b := min{a, b}). Lévy measures are directly related to Lévy processes first of all because they are connected to the average number of jumps performed by the process in a unit time interval. Let (Ω, F, P) be a probability space, (X_t)_{t≥0} a generic càdlàg process with values in ℝⁿ and (Δ_t X)_{t≥0} the jump process associated to X_t, i.e.

    Δ_t X := X_t − X_{t−}    where    X_{t−} := lim_{s↑t} X_s

For a given Λ ∈ B(ℝⁿ) with 0 ∉ Λ, we can define a new process (N_t^Λ)_{t≥0} with N_0^Λ = 0 and, for t > 0,

    N_t^Λ(ω) := number of jumps in (0, t] with size Δ_s X(ω) ∈ Λ    (2.3.2)

Another way to write the definition of N_t^Λ is given by the following formula

    N_t^Λ = Σ_{0<s≤t} 1_Λ(Δ_s X)    (2.3.3)

where the sum runs over the instants of time at which X_t jumps. Since X_t is càdlàg, (2.3.2) and (2.3.3) are well defined because, in a finite time interval, the number of jumps greater in absolute value than an arbitrary ε > 0 is always finite [38]. N_t^Λ has the following properties: N_0^Λ = 0 as said, and it is positive, increasing and takes integer values; for these reasons it is called a counting process. Besides, in the case of a Lévy process we have the following proposition:

2 With the symbols ℝⁿ_0 and ℕ_0 we indicate respectively the sets ℝⁿ and ℕ without the element zero.

Proposition. If X_t = Z_t is a Lévy process, then the process N_t^Λ is a Poisson process with Poisson parameter E{N_1^Λ}.

Proof. See [54].

The Poisson parameter E{N_1^Λ} turns out to be an additive set function of Λ, which can be extended to a measure on the measurable space (ℝⁿ_0, B(ℝⁿ_0)). It can be proved that this measure satisfies (2.3.1), so that it is a Lévy measure l, which we regard as the Lévy measure associated to Z_t.

Remarks. Together with l we can define other measures on the space (ℝⁿ_0, B(ℝⁿ_0)); for example, for fixed ω ∈ Ω and t ≥ 0, as Λ ∈ B(ℝⁿ), 0 ∉ Λ, varies, the function

    N_t(ω; ·) : Λ ↦ N_t^Λ(ω)    (2.3.4)

is another additive set function that can be extended to a measure on the space (ℝⁿ_0, B(ℝⁿ_0)). Other measures, but on (ℝⁿ_0 × ℝ₊, B(ℝⁿ_0 × ℝ₊)), are derived for fixed ω ∈ Ω from the functions

    N(ω; ·, ·) : Λ × (0, t] ↦ N_t^Λ(ω)    (2.3.5)

These measures are called random measures because of their dependence on ω. In the following we will indicate them as N_t and N, omitting the dependence on ω in order to simplify the notation when no ambiguity can occur. Note that for positive and measurable functions f on ℝⁿ, and Λ ∈ B(ℝⁿ) with 0 ∉ Λ, we can write

    ∫_Λ f(y) N_t(dy) = Σ_{0<s≤t} f(Δ_s Z) 1_Λ(Δ_s Z)    (2.3.6)

while if f is a B(ℝⁿ × ℝ₊) measurable function, then for the same Λ as above

    ∫_0^t ∫_Λ f(y, s) N(dy, ds) = Σ_{0<s≤t} f(Δ_s Z, s) 1_Λ(Δ_s Z)    (2.3.7)

Since N_t^Λ is a Poisson process with parameter E{N_1^Λ} = l(Λ), we have

    E{N_t^Λ} = t l(Λ),    E{[N_t^Λ − t l(Λ)]²} = t l(Λ)

that, in view of (2.3.3) and of (2.3.6), can be rewritten as

    E{ Σ_{0<s≤t} 1_Λ(Δ_s Z) } = E{ ∫_{ℝⁿ_0} 1_Λ(y) N_t(dy) } = t ∫_{ℝⁿ_0} 1_Λ(y) l(dy)

and

    E{ [ ∫_{ℝⁿ_0} 1_Λ(y) N_t(dy) − t ∫_{ℝⁿ_0} 1_Λ(y) l(dy) ]² } = t ∫_{ℝⁿ_0} 1_Λ(y) l(dy)

These relations extend naturally from 1_Λ to every measurable function f on ℝⁿ satisfying some mild conditions:

Proposition. For a Borel subset Λ with 0 ∉ Λ and a function f on ℝⁿ such that f 1_Λ ∈ L¹(ℝⁿ_0, l(dy)) we have

    E{ Σ_{0<s≤t} f(Δ_s Z) 1_{Δ_s Z ∈ Λ} } = E{ ∫_Λ f(y) N_t(dy) } = t ∫_Λ f(y) l(dy)    (2.3.8)

and if f 1_Λ ∈ L²(ℝⁿ_0, l(dy)), then

    E{ [ ∫_Λ f(y) N_t(dy) − t ∫_Λ f(y) l(dy) ]² } = t ∫_Λ f²(y) l(dy)    (2.3.9)

Proof. For a complete proof see [54], Chap. I.

Another important property of Lévy measures is that, given α > 0,

    E{|Z_t|^α} < ∞ for every t ≥ 0   ⟺   ∫_{|y|≥1} |y|^α l(dy) < ∞    (2.3.10)

when l is the Lévy measure associated to Z_t [59]. As a corollary of this last property we have that if Z_t has bounded jumps, then it has finite moments of any order. In this context it is useful to note that by subtracting from a Lévy process its jumps with size in a given set one still obtains a Lévy process, as point 2 of the following theorem shows:

Theorem. Let (Z_t)_{t≥0} be a Lévy process and, for Λ ∈ B(ℝⁿ), 0 ∉ Λ, let (N_t^Λ)_{t≥0} be the associated Poisson counting process. Then we have

1. For every function f with finite values on Λ, the process t ↦ ∫_Λ f(y) N_t(dy) is a Lévy process.

2. The process t ↦ Z_t − ∫_Λ y N_t(dy), i.e. the process (Z_t)_{t≥0} deprived of its jumps with size in Λ, is still a Lévy process.

Proof. See [54], Chap. I.

2.4 Lévy-Itô decomposition

Every Lévy process can be decomposed into the sum of a Wiener process and a pure jump process, the latter being a superposition of compensated Poisson processes³; moreover, these two processes are independent.

Theorem (Lévy-Itô decomposition). Let (Z_t)_{t≥0} be a Lévy process. Then the following decomposition holds:

    Z_t = W_t + ∫_{|y|≤1} y [N_t(dy) − t l(dy)] + t E{ Z_1 − ∫_{|y|≤1} y N_1(dy) } + ∫_{|y|>1} y N_t(dy)
        = W_t + ∫_{|y|≤1} y [N_t(dy) − t l(dy)] + αt + Σ_{0<s≤t} Δ_s Z 1_{|Δ_s Z|>1}    (2.4.1)

where (W_t)_{t≥0} is a Brownian motion, α = E{Z_1 − ∫_{|y|≤1} y N_1(dy)} and, for any set Λ with 0 ∉ Λ, the Poisson process (N_t^Λ)_{t≥0} is independent from (W_t)_{t≥0}; besides, N_t^Λ is independent from N_t^Γ if Λ and Γ are disjoint.

Proof. For a complete proof see [59], Chap. IV. We just note that, by point 2 of the preceding theorem and by (2.3.10), the random variable Z_1 − ∫_{|y|≤1} y N_1(dy) is integrable.

As a corollary we have that the only Lévy processes with continuous paths are the Wiener processes, possibly with an added deterministic drift. Another theorem strictly connected with the Lévy-Itô decomposition is this:

3 If X_t is a Poisson process with parameter λ, then the process X_t − tλ is a martingale; it is called a compensated Poisson process.

Theorem. Every Lévy process (Z_t)_{t≥0} can be decomposed into a sum Z_t = M_t + A_t, where (M_t)_{t≥0} and (A_t)_{t≥0} are both Lévy processes; (M_t)_{t≥0} is a martingale with bounded jumps, with M_t ∈ L^p(Ω) for every p ≥ 1, while (A_t)_{t≥0} is an adapted process with paths of finite variation on compacts.

Proof. See [54] for a detailed proof.

Obviously, from (2.4.1) we will have

    M_t := W_t + ∫_{|y|≤1} y [N_t(dy) − t l(dy)],    A_t := αt + ∫_{|y|>1} y N_t(dy)

The Lévy-Itô decomposition is essentially equivalent to the Lévy-Khintchine formula, which gives the explicit expression of the characteristic function

    ν̂_t(u) := E{e^{iu·Z_t}}    (2.4.2)

of a Lévy process (Z_t)_{t≥0}, ν_t being the distribution of Z_t. We will see this in the next section in connection with the infinitely divisible distributions.

2.5 Infinitely divisible distributions

A way to construct Lévy processes is to start from the infinitely divisible distributions (idd), i.e. those distributions ν on ℝⁿ such that for every m ∈ ℕ there exists a distribution ν_m satisfying the following relation

    ν = ν_m^{∗m} := ν_m ∗ ... ∗ ν_m  (m times)    (2.5.1)

where the symbol ∗ stands for the convolution product between measures. There is in fact a one-to-one correspondence between infinitely divisible distributions and the laws of Lévy processes. This relation is better seen by considering the characteristic functions associated to distributions, defined as [61]

    ν̂ : z ∈ ℝⁿ ↦ ∫_{ℝⁿ} e^{iz·x} ν(dx)

When ν is an idd, it is straightforward to see from (2.5.1) that for every n ∈ ℕ the function ν̂^{1/n} is still a characteristic function; actually this is true with every t ∈ ℝ₊ in place of 1/n [59]. The link between Lévy processes and idd's is then given by the following proposition.

Proposition. There is a one-to-one correspondence between idd's and Lévy processes; in particular, if (Z_t)_{t≥0} is a Lévy process on ℝⁿ, then the distribution of Z_t is an idd for every t ≥ 0 and, denoting by ν̂_t the characteristic function of Z_t, we have

    ν̂_t = ν̂_1^t    (2.5.2)

Conversely, if ν is an idd on ℝⁿ, there is a unique, up to identity in law⁴, Lévy process (Z_t)_{t≥0} such that Z_1 has ν as its distribution.

Proof. A complete proof of this proposition can be found in [59], Chap. II; here we limit ourselves to a sketch. Let (Z_t)_{t≥0} be a Lévy process and t ≥ 0; then for every m ∈ ℕ, fixing h = t/m, we can write

    Z_t = Σ_{k=0}^{m−1} (Z_{h(k+1)} − Z_{hk})

So we can express Z_t as a sum of m independent and identically distributed random variables, respectively by properties 2 and 3 of Definition 2.2.1. So by (2.4.2) the characteristic function ν̂_t of Z_t will be given by

    ν̂_t = ν̂_{t/m}^m    (2.5.3)

with ν̂_{t/m} the characteristic function of Z_{t/m} = Z_h. This is enough to prove that the distribution of Z_t is infinitely divisible. Besides, from (2.5.3), for t ∈ ℚ₊, i.e. t = n/m with n and m positive integers, we have

    ν̂_t = ν̂_{n/m} = ν̂_{1/m}^n = (ν̂_1^n)^{1/m} = ν̂_1^t

Then, by means of stochastic continuity, it is possible to extend the above relation to every t ≥ 0.

As for the reverse implication, given an idd ν, we first define the following family of probability measures:

    P_0 : B ∈ B(ℝⁿ) ↦ δ_0(B)
    P_{t_1,...,t_m} : B_1 × ... × B_m ∈ B(ℝ^{mn}) ↦ ∫ 1_{B_1}(x_1) 1_{B_2}(x_1 + x_2) ⋯ 1_{B_m}(x_1 + ⋯ + x_m) ν_{t_1}(dx_1) ν_{t_2−t_1}(dx_2) ⋯ ν_{t_m−t_{m−1}}(dx_m)

for every m ∈ ℕ and 0 ≤ t_1 ≤ ... ≤ t_m, where δ_0 is the Dirac measure and ν_t denotes the distribution associated to the characteristic function ν̂_1^t.

4 Two processes (X_t)_{t≥0} and (Y_t)_{t≥0} are said to be identical in law if for every m ∈ ℕ_0 and every 0 ≤ t_1 ≤ ... ≤ t_m the random vectors (X_{t_1},..., X_{t_m}) and (Y_{t_1},..., Y_{t_m}) have the same distribution. Denoting the two processes by X and Y, identity in law is often indicated as X =d Y.

Then, by the Kolmogorov extension theorem [38], from these measures one can define a probability measure P on the measurable space ((ℝⁿ)^{[0,∞)}, B((ℝⁿ)^{[0,∞)})), and on this probability space the coordinate process⁵ is a Lévy process with P_{Z_1} = ν.

Idd's, and consequently Lévy processes, are very well characterized: few parameters are needed for their specification, as the following Lévy-Khintchine formula makes clear.

Theorem. Let ν be an idd on ℝⁿ. Then

    ν̂(z) = exp{ −(1/2) z·az + iγ·z + ∫_{ℝⁿ_0} [e^{iz·y} − 1 − i z·y 1_{|y|≤1}(y)] l(dy) }    (2.5.4)

where a is a symmetric nonnegative-definite n×n matrix, γ ∈ ℝⁿ and l is a Lévy measure. This representation of ν̂ by a, l and γ is unique. Besides, if a is a symmetric nonnegative-definite n×n matrix, γ ∈ ℝⁿ and l is a Lévy measure, then (2.5.4) is the characteristic function of an idd.

Proof. See [59], pag. 37.

Definition. We call (a, l, γ) the generating triplet of ν, or of (Z_t)_{t≥0}, the Lévy process associated to ν.

Remarks. The function 1_{|y|≤1} in (2.5.4) can be modified in different ways, for example by any measurable and bounded function c : ℝⁿ → ℝ such that

    c(y) = 1 + o(|y|) as |y| → 0,    c(y) = O(1/|y|) as |y| → ∞

Then (2.5.4) is rewritten as

    ν̂(z) = exp{ −(1/2) z·az + iγ_c·z + ∫_{ℝⁿ_0} [e^{iz·y} − 1 − i z·y c(y)] l(dy) }

5 The coordinate process (Z_t)_{t≥0} on ((ℝⁿ)^{[0,∞)}, B((ℝⁿ)^{[0,∞)})) is the process that to every function

    x : t ∈ [0, ∞) ↦ x(t) ∈ ℝⁿ

i.e. to every x ∈ (ℝⁿ)^{[0,∞)}, associates Z_t(x) := x(t).

where

    γ_c := γ + ∫_{ℝⁿ_0} y [c(y) − 1_{|y|≤1}(y)] l(dy)

In this case the new generating triplet will be indicated with (a, l, γ_c)_c. Actually, if l is symmetric, i.e. l(B) = l(−B) for every set B ∈ B(ℝⁿ_0), then (2.5.4) can be rewritten as

    ν̂(z) = exp{ −(1/2) z·az + iγ·z + ∫_{ℝⁿ_0} [cos(z·y) − 1] l(dy) }

2.5.1 Stable and self-decomposable distributions

There are two particular classes of idd's, the stable and the self-decomposable ones; we define them now.

Definition. Let ν be a distribution on ℝⁿ. ν is said to be stable if for every a > 0 there exists b > 0 such that

    ν̂(z)^a = ν̂(bz),    z ∈ ℝⁿ    (2.5.5)

Remark. Stable distributions are idd's. The corresponding Lévy processes are called stable Lévy processes. Examples of stable processes are given, as one can trivially verify, by the Wiener and the Cauchy processes. The Cauchy process with parameters γ ∈ ℝⁿ and c > 0 is the process generated by the following idd on ℝⁿ:

    ν(B) := (c / π^{(n+1)/2}) Γ((n+1)/2) ∫_B (|x − γ|² + c²)^{−(n+1)/2} dx,    B ∈ B(ℝⁿ)

where Γ is the Euler gamma function. For the characteristic function we have

    ν̂(z) = e^{−c|z| + iγ·z}

If (Z_t)_{t≥0} is a stable process, then it is easy to see that for every a > 0 there exists b > 0 such that

    Z_{at} =d b Z_t    (2.5.6)

for every t ≥ 0. A process that verifies relation (2.5.6) is called self-similar; however stability and self-similarity are not exactly the same concept, since

there are self-similar processes that are not even Lévy processes (see [59]). Remark also that there is a fixed relation between the coefficients a and b of (2.5.5): there exists a coefficient 0 < α ≤ 2 such that

    b = a^{1/α}

α is called the Hurst index of the distribution, or the Hurst index of the process. For a Wiener process α = 2, while for the Cauchy one α = 1; the value α = 1 is, however, also related to the degenerate Lévy process Z_t = vt, with v ∈ ℝⁿ.

The Lévy-Khintchine formula for a symmetric and rotationally invariant⁶ stable distribution on ℝⁿ is very simple: for 0 < α < 2 the generating triplet is (0, l, 0) with the Lévy measure

    l(dy) = dy / |y|^{α+n}    (2.5.7)

Remark that here l is absolutely continuous. After some calculation we have for the characteristic function

    ν̂(z) = e^{−c|z|^α}    (2.5.8)

with c > 0. When α = 2 the distribution is a Gaussian distribution, so that the generating triplet is (a, 0, 0) and the Lévy-Khintchine formula is

    ν̂(z) = e^{−z·az/2}

We consider next the family of the self-decomposable distributions.

Definition. Let ν be a distribution on ℝⁿ: ν is said to be self-decomposable if for every b ≥ 1 there exists a probability measure ρ_b on ℝⁿ such that

    ν̂(z) = ν̂(b⁻¹z) ρ̂_b(z)

where ρ̂_b is the characteristic function of ρ_b.

Remark. Every stable distribution is also self-decomposable: indeed, given b > 1 and applying (2.5.5) with a = b^α (so that the scaling coefficient is b), we have

    ν̂(z) = ν̂(b·b⁻¹z) = [ν̂(b⁻¹z)]^{b^α} = ν̂(b⁻¹z) ρ̂_b(z)

with ρ̂_b := [ν̂(b⁻¹·)]^{b^α−1}, which is a characteristic function since b^α − 1 ≥ 0.

6 A distribution ν on ℝⁿ is rotationally invariant if ν(B) = ν(UB) for every Borel set B and every orthogonal transformation U on ℝⁿ.
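The computation in this remark can be checked symbolically on the symmetric stable characteristic function (2.5.8), restricted to z > 0 for simplicity. The candidate factor ρ̂_b below is itself a stable characteristic function with the reduced scale c(1 − b^{−α}), which is nonnegative precisely because b ≥ 1 — this is where the hypothesis of the definition enters. A small SymPy sketch:

```python
import sympy as sp

z, c, b, alpha = sp.symbols('z c b alpha', positive=True)

nu_hat = sp.exp(-c * z**alpha)                         # (2.5.8), restricted to z > 0
rho_hat = sp.exp(-c * (1 - b**(-alpha)) * z**alpha)    # candidate factor rho_b

# self-decomposability identity on the level of the exponents:
# log nu_hat(z) = log nu_hat(z/b) + log rho_hat(z)
check = sp.expand(sp.log(nu_hat) - sp.log(nu_hat.subs(z, z / b)) - sp.log(rho_hat))
print(sp.simplify(check))   # 0, i.e. nu_hat(z) = nu_hat(z/b) * rho_hat(z)
```

Running the same computation with 0 < b < 1 would make the scale c(1 − b^{−α}) of ρ̂_b negative, so ρ̂_b would no longer be a characteristic function: self-decomposability is genuinely a statement about scaling down.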


More information

Homogenization with stochastic differential equations

Homogenization with stochastic differential equations Homogenization with stochastic differential equations Scott Hottovy shottovy@math.arizona.edu University of Arizona Program in Applied Mathematics October 12, 2011 Modeling with SDE Use SDE to model system

More information

Nelson s early work on probability

Nelson s early work on probability Nelson s early work on probability William G. Faris November 24, 2016; revised September 7, 2017 1 Introduction Nelson s early work on probability treated both general stochastic processes and the particular

More information

Lévy Processes and Infinitely Divisible Measures in the Dual of afebruary Nuclear2017 Space 1 / 32

Lévy Processes and Infinitely Divisible Measures in the Dual of afebruary Nuclear2017 Space 1 / 32 Lévy Processes and Infinitely Divisible Measures in the Dual of a Nuclear Space David Applebaum School of Mathematics and Statistics, University of Sheffield, UK Talk at "Workshop on Infinite Dimensional

More information

Harnack Inequalities and Applications for Stochastic Equations

Harnack Inequalities and Applications for Stochastic Equations p. 1/32 Harnack Inequalities and Applications for Stochastic Equations PhD Thesis Defense Shun-Xiang Ouyang Under the Supervision of Prof. Michael Röckner & Prof. Feng-Yu Wang March 6, 29 p. 2/32 Outline

More information

In terms of measures: Exercise 1. Existence of a Gaussian process: Theorem 2. Remark 3.

In terms of measures: Exercise 1. Existence of a Gaussian process: Theorem 2. Remark 3. 1. GAUSSIAN PROCESSES A Gaussian process on a set T is a collection of random variables X =(X t ) t T on a common probability space such that for any n 1 and any t 1,...,t n T, the vector (X(t 1 ),...,X(t

More information

Lecture 22 Girsanov s Theorem

Lecture 22 Girsanov s Theorem Lecture 22: Girsanov s Theorem of 8 Course: Theory of Probability II Term: Spring 25 Instructor: Gordan Zitkovic Lecture 22 Girsanov s Theorem An example Consider a finite Gaussian random walk X n = n

More information

1. Stochastic Processes and filtrations

1. Stochastic Processes and filtrations 1. Stochastic Processes and 1. Stoch. pr., A stochastic process (X t ) t T is a collection of random variables on (Ω, F) with values in a measurable space (S, S), i.e., for all t, In our case X t : Ω S

More information

On pathwise stochastic integration

On pathwise stochastic integration On pathwise stochastic integration Rafa l Marcin Lochowski Afican Institute for Mathematical Sciences, Warsaw School of Economics UWC seminar Rafa l Marcin Lochowski (AIMS, WSE) On pathwise stochastic

More information

Exercises in stochastic analysis

Exercises in stochastic analysis Exercises in stochastic analysis Franco Flandoli, Mario Maurelli, Dario Trevisan The exercises with a P are those which have been done totally or partially) in the previous lectures; the exercises with

More information

GENERALIZED COVARIATION FOR BANACH SPACE VALUED PROCESSES, ITÔ FORMULA AND APPLICATIONS

GENERALIZED COVARIATION FOR BANACH SPACE VALUED PROCESSES, ITÔ FORMULA AND APPLICATIONS Di Girolami, C. and Russo, F. Osaka J. Math. 51 (214), 729 783 GENERALIZED COVARIATION FOR BANACH SPACE VALUED PROCESSES, ITÔ FORMULA AND APPLICATIONS CRISTINA DI GIROLAMI and FRANCESCO RUSSO (Received

More information

STOCHASTIC DIFFERENTIAL EQUATIONS DRIVEN BY PROCESSES WITH INDEPENDENT INCREMENTS

STOCHASTIC DIFFERENTIAL EQUATIONS DRIVEN BY PROCESSES WITH INDEPENDENT INCREMENTS STOCHASTIC DIFFERENTIAL EQUATIONS DRIVEN BY PROCESSES WITH INDEPENDENT INCREMENTS DAMIR FILIPOVIĆ AND STEFAN TAPPE Abstract. This article considers infinite dimensional stochastic differential equations

More information

Gaussian Processes. 1. Basic Notions

Gaussian Processes. 1. Basic Notions Gaussian Processes 1. Basic Notions Let T be a set, and X : {X } T a stochastic process, defined on a suitable probability space (Ω P), that is indexed by T. Definition 1.1. We say that X is a Gaussian

More information

Separation of Variables in Linear PDE: One-Dimensional Problems

Separation of Variables in Linear PDE: One-Dimensional Problems Separation of Variables in Linear PDE: One-Dimensional Problems Now we apply the theory of Hilbert spaces to linear differential equations with partial derivatives (PDE). We start with a particular example,

More information

Definition: Lévy Process. Lectures on Lévy Processes and Stochastic Calculus, Braunschweig, Lecture 2: Lévy Processes. Theorem

Definition: Lévy Process. Lectures on Lévy Processes and Stochastic Calculus, Braunschweig, Lecture 2: Lévy Processes. Theorem Definition: Lévy Process Lectures on Lévy Processes and Stochastic Calculus, Braunschweig, Lecture 2: Lévy Processes David Applebaum Probability and Statistics Department, University of Sheffield, UK July

More information

4 Sums of Independent Random Variables

4 Sums of Independent Random Variables 4 Sums of Independent Random Variables Standing Assumptions: Assume throughout this section that (,F,P) is a fixed probability space and that X 1, X 2, X 3,... are independent real-valued random variables

More information

STAT 331. Martingale Central Limit Theorem and Related Results

STAT 331. Martingale Central Limit Theorem and Related Results STAT 331 Martingale Central Limit Theorem and Related Results In this unit we discuss a version of the martingale central limit theorem, which states that under certain conditions, a sum of orthogonal

More information

Convoluted Brownian motions: a class of remarkable Gaussian processes

Convoluted Brownian motions: a class of remarkable Gaussian processes Convoluted Brownian motions: a class of remarkable Gaussian processes Sylvie Roelly Random models with applications in the natural sciences Bogotá, December 11-15, 217 S. Roelly (Universität Potsdam) 1

More information

4th Preparation Sheet - Solutions

4th Preparation Sheet - Solutions Prof. Dr. Rainer Dahlhaus Probability Theory Summer term 017 4th Preparation Sheet - Solutions Remark: Throughout the exercise sheet we use the two equivalent definitions of separability of a metric space

More information

An Introduction to Malliavin Calculus. Denis Bell University of North Florida

An Introduction to Malliavin Calculus. Denis Bell University of North Florida An Introduction to Malliavin Calculus Denis Bell University of North Florida Motivation - the hypoellipticity problem Definition. A differential operator G is hypoelliptic if, whenever the equation Gu

More information

Kolmogorov equations in Hilbert spaces IV

Kolmogorov equations in Hilbert spaces IV March 26, 2010 Other types of equations Let us consider the Burgers equation in = L 2 (0, 1) dx(t) = (AX(t) + b(x(t))dt + dw (t) X(0) = x, (19) where A = ξ 2, D(A) = 2 (0, 1) 0 1 (0, 1), b(x) = ξ 2 (x

More information

Rough Burgers-like equations with multiplicative noise

Rough Burgers-like equations with multiplicative noise Rough Burgers-like equations with multiplicative noise Martin Hairer Hendrik Weber Mathematics Institute University of Warwick Bielefeld, 3.11.21 Burgers-like equation Aim: Existence/Uniqueness for du

More information

Lecture Introduction

Lecture Introduction Lecture 1 1.1 Introduction The theory of Partial Differential Equations (PDEs) is central to mathematics, both pure and applied. The main difference between the theory of PDEs and the theory of Ordinary

More information

Bernardo D Auria Stochastic Processes /10. Notes. Abril 13 th, 2010

Bernardo D Auria Stochastic Processes /10. Notes. Abril 13 th, 2010 1 Stochastic Calculus Notes Abril 13 th, 1 As we have seen in previous lessons, the stochastic integral with respect to the Brownian motion shows a behavior different from the classical Riemann-Stieltjes

More information

Verona Course April Lecture 1. Review of probability

Verona Course April Lecture 1. Review of probability Verona Course April 215. Lecture 1. Review of probability Viorel Barbu Al.I. Cuza University of Iaşi and the Romanian Academy A probability space is a triple (Ω, F, P) where Ω is an abstract set, F is

More information

Lecture 19 L 2 -Stochastic integration

Lecture 19 L 2 -Stochastic integration Lecture 19: L 2 -Stochastic integration 1 of 12 Course: Theory of Probability II Term: Spring 215 Instructor: Gordan Zitkovic Lecture 19 L 2 -Stochastic integration The stochastic integral for processes

More information

INVARIANT MANIFOLDS WITH BOUNDARY FOR JUMP-DIFFUSIONS

INVARIANT MANIFOLDS WITH BOUNDARY FOR JUMP-DIFFUSIONS INVARIANT MANIFOLDS WITH BOUNDARY FOR JUMP-DIFFUSIONS DAMIR FILIPOVIĆ, STFAN TAPP, AND JOSF TICHMANN Abstract. We provide necessary and sufficient conditions for stochastic invariance of finite dimensional

More information

Study of Dependence for Some Stochastic Processes

Study of Dependence for Some Stochastic Processes Study of Dependence for Some Stochastic Processes Tomasz R. Bielecki, Andrea Vidozzi, Luca Vidozzi Department of Applied Mathematics Illinois Institute of Technology Chicago, IL 6616, USA Jacek Jakubowski

More information

Applications of Ito s Formula

Applications of Ito s Formula CHAPTER 4 Applications of Ito s Formula In this chapter, we discuss several basic theorems in stochastic analysis. Their proofs are good examples of applications of Itô s formula. 1. Lévy s martingale

More information

Stochastic Processes. Winter Term Paolo Di Tella Technische Universität Dresden Institut für Stochastik

Stochastic Processes. Winter Term Paolo Di Tella Technische Universität Dresden Institut für Stochastik Stochastic Processes Winter Term 2016-2017 Paolo Di Tella Technische Universität Dresden Institut für Stochastik Contents 1 Preliminaries 5 1.1 Uniform integrability.............................. 5 1.2

More information

Wiener Measure and Brownian Motion

Wiener Measure and Brownian Motion Chapter 16 Wiener Measure and Brownian Motion Diffusion of particles is a product of their apparently random motion. The density u(t, x) of diffusing particles satisfies the diffusion equation (16.1) u

More information

Controlled Diffusions and Hamilton-Jacobi Bellman Equations

Controlled Diffusions and Hamilton-Jacobi Bellman Equations Controlled Diffusions and Hamilton-Jacobi Bellman Equations Emo Todorov Applied Mathematics and Computer Science & Engineering University of Washington Winter 2014 Emo Todorov (UW) AMATH/CSE 579, Winter

More information

Solving the Poisson Disorder Problem

Solving the Poisson Disorder Problem Advances in Finance and Stochastics: Essays in Honour of Dieter Sondermann, Springer-Verlag, 22, (295-32) Research Report No. 49, 2, Dept. Theoret. Statist. Aarhus Solving the Poisson Disorder Problem

More information

Backward Stochastic Differential Equations with Infinite Time Horizon

Backward Stochastic Differential Equations with Infinite Time Horizon Backward Stochastic Differential Equations with Infinite Time Horizon Holger Metzler PhD advisor: Prof. G. Tessitore Università di Milano-Bicocca Spring School Stochastic Control in Finance Roscoff, March

More information

PRESENT STATE AND FUTURE PROSPECTS OF STOCHASTIC PROCESS THEORY

PRESENT STATE AND FUTURE PROSPECTS OF STOCHASTIC PROCESS THEORY PRESENT STATE AND FUTURE PROSPECTS OF STOCHASTIC PROCESS THEORY J. L. DOOB The theory of stochastic processes has developed sufficiently in the past two decades so that one can now properly give a survey

More information

On the Converse Law of Large Numbers

On the Converse Law of Large Numbers On the Converse Law of Large Numbers H. Jerome Keisler Yeneng Sun This version: March 15, 2018 Abstract Given a triangular array of random variables and a growth rate without a full upper asymptotic density,

More information

SDE Coefficients. March 4, 2008

SDE Coefficients. March 4, 2008 SDE Coefficients March 4, 2008 The following is a summary of GARD sections 3.3 and 6., mainly as an overview of the two main approaches to creating a SDE model. Stochastic Differential Equations (SDE)

More information

X n D X lim n F n (x) = F (x) for all x C F. lim n F n(u) = F (u) for all u C F. (2)

X n D X lim n F n (x) = F (x) for all x C F. lim n F n(u) = F (u) for all u C F. (2) 14:17 11/16/2 TOPIC. Convergence in distribution and related notions. This section studies the notion of the so-called convergence in distribution of real random variables. This is the kind of convergence

More information

Stochastic integration. P.J.C. Spreij

Stochastic integration. P.J.C. Spreij Stochastic integration P.J.C. Spreij this version: April 22, 29 Contents 1 Stochastic processes 1 1.1 General theory............................... 1 1.2 Stopping times...............................

More information

Gaussian processes for inference in stochastic differential equations

Gaussian processes for inference in stochastic differential equations Gaussian processes for inference in stochastic differential equations Manfred Opper, AI group, TU Berlin November 6, 2017 Manfred Opper, AI group, TU Berlin (TU Berlin) inference in SDE November 6, 2017

More information

Some Terminology and Concepts that We will Use, But Not Emphasize (Section 6.2)

Some Terminology and Concepts that We will Use, But Not Emphasize (Section 6.2) Some Terminology and Concepts that We will Use, But Not Emphasize (Section 6.2) Statistical analysis is based on probability theory. The fundamental object in probability theory is a probability space,

More information

Multi-Factor Lévy Models I: Symmetric alpha-stable (SαS) Lévy Processes

Multi-Factor Lévy Models I: Symmetric alpha-stable (SαS) Lévy Processes Multi-Factor Lévy Models I: Symmetric alpha-stable (SαS) Lévy Processes Anatoliy Swishchuk Department of Mathematics and Statistics University of Calgary Calgary, Alberta, Canada Lunch at the Lab Talk

More information

Lecture 12. F o s, (1.1) F t := s>t

Lecture 12. F o s, (1.1) F t := s>t Lecture 12 1 Brownian motion: the Markov property Let C := C(0, ), R) be the space of continuous functions mapping from 0, ) to R, in which a Brownian motion (B t ) t 0 almost surely takes its value. Let

More information

LECTURE 2: LOCAL TIME FOR BROWNIAN MOTION

LECTURE 2: LOCAL TIME FOR BROWNIAN MOTION LECTURE 2: LOCAL TIME FOR BROWNIAN MOTION We will define local time for one-dimensional Brownian motion, and deduce some of its properties. We will then use the generalized Ray-Knight theorem proved in

More information

Reflected Brownian Motion

Reflected Brownian Motion Chapter 6 Reflected Brownian Motion Often we encounter Diffusions in regions with boundary. If the process can reach the boundary from the interior in finite time with positive probability we need to decide

More information

Stochastic Partial Differential Equations with Levy Noise

Stochastic Partial Differential Equations with Levy Noise Stochastic Partial Differential Equations with Levy Noise An Evolution Equation Approach S..PESZAT and J. ZABCZYK Institute of Mathematics, Polish Academy of Sciences' CAMBRIDGE UNIVERSITY PRESS Contents

More information

Stochastic Analysis. Prof. Dr. Andreas Eberle

Stochastic Analysis. Prof. Dr. Andreas Eberle Stochastic Analysis Prof. Dr. Andreas Eberle March 13, 212 Contents Contents 2 1 Lévy processes and Poisson point processes 6 1.1 Lévy processes.............................. 7 Characteristic exponents.........................

More information

Fast-slow systems with chaotic noise

Fast-slow systems with chaotic noise Fast-slow systems with chaotic noise David Kelly Ian Melbourne Courant Institute New York University New York NY www.dtbkelly.com May 1, 216 Statistical properties of dynamical systems, ESI Vienna. David

More information

1 Brownian Local Time

1 Brownian Local Time 1 Brownian Local Time We first begin by defining the space and variables for Brownian local time. Let W t be a standard 1-D Wiener process. We know that for the set, {t : W t = } P (µ{t : W t = } = ) =

More information

Interest Rate Models:

Interest Rate Models: 1/17 Interest Rate Models: from Parametric Statistics to Infinite Dimensional Stochastic Analysis René Carmona Bendheim Center for Finance ORFE & PACM, Princeton University email: rcarmna@princeton.edu

More information

Jump Processes. Richard F. Bass

Jump Processes. Richard F. Bass Jump Processes Richard F. Bass ii c Copyright 214 Richard F. Bass Contents 1 Poisson processes 1 1.1 Definitions............................. 1 1.2 Stopping times.......................... 3 1.3 Markov

More information

The Smoluchowski-Kramers Approximation: What model describes a Brownian particle?

The Smoluchowski-Kramers Approximation: What model describes a Brownian particle? The Smoluchowski-Kramers Approximation: What model describes a Brownian particle? Scott Hottovy shottovy@math.arizona.edu University of Arizona Applied Mathematics October 7, 2011 Brown observes a particle

More information