Some classical results on stationary distributions of continuous time Markov processes

Chris Janjigian

March 25, 2014

These presentation notes are for my talk in the graduate probability seminar at UW Madison in Spring 2014. The topics covered are some classical results on the existence, uniqueness, and regularity of stationary distributions for Markov processes on nice state spaces. These notes are primarily based on lecture notes by Martin Hairer [5], [6]. The literature on each of these three subjects is massive, so I make no attempt at completeness.

The big picture

If we ignore regularity of the Markov process, it isn't too hard to guess what the relationship between topological properties and measure theoretic properties of a Markov process should be. Compactness properties give existence of the stationary distribution, and irreducibility properties, along with an analogue of aperiodicity, give uniqueness. Working on a general state space makes the technical conditions more difficult to satisfy and provides some interesting counterexamples.

Finite dimensional discrete time Markov chains

As is often the case, we can get some good intuition about the behavior of nicely behaved Markov processes by considering the finite dimensional, discrete time case. We start by recalling the topological properties which give results on existence and uniqueness of stationary distributions.

Existence

Call the Markov transition matrix $P$ and fix any initial distribution $\mu_0$. If the state space is finite we equip it with the discrete topology, so that the sequence of measures $\mu_0 P^n$ is tight. We would like to say that $\mu_0 P^n \to \mu$ for some $\mu$, but of course this is not true in general. As is often the case in analysis, a little averaging helps here, so instead consider
\[
\mu_N = \frac{1}{N}\sum_{n=1}^{N} \mu_0 P^n,
\]
which is also clearly tight. Take a subsequence $N_k$ along which $\mu_{N_k} \to \mu$ and consider a test function $f$:
\[
\langle \mu P - \mu, f \rangle
= \lim_{k\to\infty} \Big\langle \Big(\frac{1}{N_k}\sum_{n=1}^{N_k} \mu_0 P^n\Big) P - \frac{1}{N_k}\sum_{n=1}^{N_k} \mu_0 P^n,\ f \Big\rangle
= \lim_{k\to\infty} \frac{1}{N_k}\big\langle \mu_0\big(P^{N_k+1} - P\big),\ f \big\rangle
\le \lim_{k\to\infty} \frac{2\|f\|_\infty}{N_k} = 0.
\]

A more standard proof of this result would be to explicitly write down a stationary distribution. If $x$ is a recurrent state for the Markov chain, then we can define a stationary measure by fixing $x$ and defining the mass of $y$ to be the expected number of visits to $y$ before returning to $x$,
\[
\mu_x(y) = E_x\Big[\sum_{i=0}^{T_x - 1} \mathbf{1}_{\{X_i = y\}}\Big].
\]
There is a generalization of this for recurrent diffusions (see [7]).

Remark 1. It is actually not hard to show that the Cesàro average $\frac{1}{N}\sum_{n=1}^{N} P^n$ converges without taking subsequences, and the limit of $\frac{1}{N}\sum_{n=1}^{N} p^{(n)}(x,y)$ is the stationary distribution that we would have written down explicitly.

Uniqueness

I will not go into the standard uniqueness proof for Markov chains, but the basic idea is that if a stationary measure exists then uniqueness follows from irreducibility. The additional assumption of aperiodicity implies that the transition probabilities actually converge to the invariant distribution. When we move away from the discrete setting the analogue of aperiodicity (mixing) becomes relevant again. Ergodicity corresponds to the chain being irreducible, and if the chain is reducible, then we can just restrict our attention to the irreducible components. The difference is that on a discrete space, we can throw away states that have probability zero until we get to a chain where all states have positive probability.
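As a concrete illustration of the finite-state picture above (both the Cesàro averaging and uniqueness under irreducibility), here is a small numerical sketch; the transition matrix is an arbitrary irreducible example, not taken from any particular model.

```python
# Cesaro averages (1/N) sum_{n=1}^{N} mu0 P^n for a small irreducible chain,
# cross-checked against the left eigenvector of P with eigenvalue 1.
import numpy as np

P = np.array([[0.5, 0.5, 0.0],
              [0.1, 0.6, 0.3],
              [0.2, 0.2, 0.6]])          # rows sum to 1
mu0 = np.array([1.0, 0.0, 0.0])          # arbitrary initial distribution

def cesaro_average(mu0, P, N):
    """(1/N) * sum_{n=1}^{N} mu0 P^n."""
    avg = np.zeros_like(mu0)
    mu = mu0.copy()
    for _ in range(N):
        mu = mu @ P
        avg += mu
    return avg / N

mu_bar = cesaro_average(mu0, P, N=10_000)

# Stationary distribution as the normalized left eigenvector for eigenvalue 1;
# for this irreducible chain the eigenvalue 1 is simple, so it is unique.
w, V = np.linalg.eig(P.T)
pi = np.real(V[:, np.argmin(np.abs(w - 1))])
pi /= pi.sum()

print(mu_bar, pi, np.max(np.abs(mu_bar - pi)))
```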

In a continuous setting, that no longer works, so we need to introduce stronger assumptions.

Preliminaries

The study of Markov processes is a little overrun with notation, so to spare time, I am going to introduce as little as possible. There are some technical issues in the notes that I based this on; I will assume away as many of them as possible. The gist here is that we want an abstract way to handle the operators $f \mapsto E_x[f(X(t))]$ where $X$ is a Markov process. Part of the issue here is that there really isn't much hope of getting a sharp characterization of when this map will make sense without looking at a particular model.

A Markov operator over a Polish space $E$ is a bounded linear operator $P : B_b(E) \to B_b(E)$ such that

1. $P\mathbf{1} = \mathbf{1}$;
2. if $\varphi \ge 0$ then $P\varphi \ge 0$;
3. $\|P\varphi\| \le \|\varphi\|$;
4. if $\varphi_n \in B_b(E)$ is a uniformly bounded sequence with $\varphi_n \to \varphi \in B_b(E)$ pointwise, then $P\varphi_n \to P\varphi$ pointwise.

Remark 2. This definition deviates from [5] in two ways. First, I include the contraction property, as I could not see how to prove it without some extra regularity. Second, property 4 has the added assumption of uniform boundedness, since I do not think that the usual definition of Markov transition probabilities satisfies the corresponding definition in [5].

As a comment, this is equivalent to the construction of a Markov transition kernel via the identification $P(\mathbf{1}_A)(x) = P(x, A)$. We may define the action of a Markov operator $P$ on a measure $\mu$ via $P\mu(A) = \int_E P(\mathbf{1}_A)(x)\,\mu(dx)$. By density this extends to $P\mu(\varphi) = \int_E P\varphi(x)\,\mu(dx)$.

A family of Markov operators indexed by time, $(P_t)_{t\ge 0}$, is a Markov semigroup if $P_t P_s = P_{t+s}$ and, for all $\varphi \in B_b(E)$, the map $(t,x) \mapsto P_t\varphi(x)$ is Borel measurable with respect to the product sigma algebra.

Remark 3. [5] does not include the measurability requirement, but I cannot see how to prove that the integrals in the first proof of the next section make sense without it.
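As a sanity check on the dual actions just defined, the following toy finite-state computation (an illustration, not from [5]) verifies that $P\mu(\varphi) = \int_E P\varphi(x)\,\mu(dx)$: acting on the measure and then integrating agrees with acting on the function and then integrating.

```python
# Finite-state check of the duality <P mu, phi> = <mu, P phi>.  In matrix form
# a measure is a row vector; the kernel acts on functions by phi -> P phi and
# on measures by mu -> mu P.
import numpy as np

rng = np.random.default_rng(0)
P = rng.random((4, 4)); P /= P.sum(axis=1, keepdims=True)   # Markov kernel
mu = rng.random(4);     mu /= mu.sum()                      # probability measure
phi = rng.random(4)                                         # bounded test function

lhs = (mu @ P) @ phi     # integrate phi against the measure P mu
rhs = mu @ (P @ phi)     # integrate P phi against mu
print(np.isclose(lhs, rhs))   # True
```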

We can verify that the usual definition of Markov transition probabilities agrees with this one via the identification $E_x[f(X(t))] = P_t(f)(x)$. And to see that it is a semigroup, observe that by the Markov property
\[
P_{t+s}(f)(x) = E_x[f(X(t+s))] = E_x\big[E[f(X(t+s)) \mid \mathcal{F}_s]\big] = E_x\big[E_{X_s}[f(X(t))]\big] = E_x\big[P_t(f)(X(s))\big] = P_s P_t(f)(x).
\]

We will say that a Markov semigroup $(P_t)_{t\ge 0}$ is Feller if for all $t \ge 0$ and $\varphi \in C_b(E)$ we have $P_t\varphi \in C_b(E)$. Finally, a probability measure $\mu$ is stationary for the semigroup $(P_t)_{t\ge 0}$ if $P_t\mu = \mu$ for all $t \ge 0$.

Existence

Necessary and sufficient condition in a Polish space

In sufficiently nice cases there is an exact characterization of when a stationary distribution exists. As in the finite dimensional discrete time case, the important fact is compactness. We will essentially repeat the same proof as before in this case.

Theorem 4 (Krylov-Bogoliubov) [5]. Let $(P_t)_{t\ge0}$ be a Feller Markov semigroup over a Polish space $E$. Then there exists an invariant distribution $\mu \in \mathcal{M}_1(E)$ if and only if there exists $\mu_0 \in \mathcal{M}_1(E)$ such that $\{P_t\mu_0\}_{t\ge0}$ is tight.

Proof. Necessity follows from the definition of stationarity and the fact that in a Polish space, all probability measures are tight. For sufficiency, suppose that there is $\mu_0 \in \mathcal{M}_1(E)$ such that $\{P_t\mu_0\}_{t\ge0}$ is tight. Then
\[
\mu_t(A) = \frac{1}{t}\int_0^t (P_s\mu_0)(\mathbf{1}_A)\,ds
\]
is well defined by our measurability assumptions, and the family $\{\mu_t\}$ is tight (take the same compact sets).

Take any weak limit $\mu^\star$ with $\mu_{t_n} \to \mu^\star$ and $t_n \to \infty$. We claim that $\mu^\star$ is invariant. Fix $\varphi \in C_b(E)$:
\[
\begin{aligned}
(P_t\mu^\star)(\varphi) - \mu^\star(\varphi) &= \mu^\star(P_t\varphi) - \mu^\star(\varphi) \\
&= \lim_{n\to\infty} \big[\mu_{t_n}(P_t\varphi) - \mu_{t_n}(\varphi)\big] \\
&= \lim_{n\to\infty} \frac{1}{t_n}\int_0^{t_n} \Big[\int_E P_s P_t\varphi(x)\,\mu_0(dx) - \int_E P_s\varphi(x)\,\mu_0(dx)\Big]\,ds \\
&= \lim_{n\to\infty} \frac{1}{t_n}\Big[\int_0^{t_n}\int_E P_{t+s}\varphi(x)\,\mu_0(dx)\,ds - \int_0^{t_n}\int_E P_s\varphi(x)\,\mu_0(dx)\,ds\Big] \\
&= \lim_{n\to\infty} \frac{1}{t_n}\Big[\int_t^{t+t_n}\int_E P_s\varphi(x)\,\mu_0(dx)\,ds - \int_0^{t_n}\int_E P_s\varphi(x)\,\mu_0(dx)\,ds\Big] \\
&= \lim_{n\to\infty} \frac{1}{t_n}\Big[\int_{t_n}^{t+t_n}\int_E P_s\varphi(x)\,\mu_0(dx)\,ds - \int_0^{t}\int_E P_s\varphi(x)\,\mu_0(dx)\,ds\Big] \\
&\le \lim_{n\to\infty} \frac{2t\,\|\varphi\|_\infty}{t_n} = 0.
\end{aligned}
\]
We needed Feller continuity in the second line. $\square$

Corollary 5. If $E$ is a compact Polish space and $(P_t)_{t\ge0}$ is a Feller Markov semigroup, then there exists a stationary distribution for $(P_t)_{t\ge0}$.

If $E$ is not compact, it can be non-trivial to check the tightness condition. In such cases, one can either proceed directly and produce a measure satisfying the tightness conditions or work with some additional structure of the model.
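To make the time-averaged measures $\mu_t$ concrete, here is a simulation sketch (an illustration, not part of the notes; the Ornstein-Uhlenbeck process is chosen only because its invariant law, the standard normal, is known in closed form). The occupation measure of a single long trajectory plays the role of $\mu_t$ here, and its moments settle down as $t$ grows.

```python
# Euler-Maruyama simulation of dX = -X dt + sqrt(2) dB started far from
# equilibrium; the time-averaged occupation measure over [0, t] should have
# mean ~ 0 and variance ~ 1 (the moments of N(0,1)) for large t.
import numpy as np

rng = np.random.default_rng(1)
dt, T = 1e-3, 200.0
n = int(T / dt)
x = 5.0                       # mu_0 = delta_5
path = np.empty(n)
for i in range(n):
    x += -x * dt + np.sqrt(2 * dt) * rng.standard_normal()
    path[i] = x

for t in (1.0, 10.0, 100.0, 200.0):
    seg = path[: int(t / dt)]
    print(f"t={t:6.1f}  mean={seg.mean():+.3f}  var={seg.var():.3f}")
```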

Example: reflected Brownian motion with drift

It is sometimes possible to prove existence directly from the model without recourse to the abstract construction above. The easiest way is to have the exact distribution of $P_t\mu$ for some measure $\mu$. An example where this is possible is reflected Brownian motion with drift. Let $\mu < 0$ and let $B_t$ be standard Brownian motion. Define
\[
d\beta_t = \mu\,\mathrm{sign}(\beta_t)\,dt + dB_t
\]
and let $Z = |\beta|$. I am not going to prove that this is genuinely a reflected Brownian motion with drift or that such a process exists (though the SDE clearly has a solution). One can use a Girsanov change of measure and the fact that this process satisfies a version of Lévy's distributional identities for the absolute value of Brownian motion [4] to show that the distribution of this process started from zero is
\[
(P_t\delta_0)\big([0,z]\big) = P\big(Z(t) \le z\big) = \Phi\Big(\frac{z - \mu t}{\sqrt{t}}\Big) - e^{2\mu z}\,\Phi\Big(\frac{-z - \mu t}{\sqrt{t}}\Big).
\]
It follows from this that the exponential distribution with parameter $2|\mu|$ is invariant.

Diffusions

One can also use the generator of the diffusion directly to show existence of an invariant measure $\mu$ with $\int P_t\varphi(x)\,\mu(dx) = \int \varphi(x)\,\mu(dx)$. Define the generator to be the pair $(L, D(L))$, where
\[
L\varphi = \lim_{t \downarrow 0} \frac{P_t\varphi - \varphi}{t}
\]
and $D(L)$ is the set of functions in our domain for which this limit makes sense. One can show the previous equation holds if and only if $\int L\varphi(x)\,\mu(dx) = 0$ for all $\varphi$ for which the limit makes sense. It can be shown that if the multidimensional diffusion $X$ satisfies $dX = b(X)\,dt + \sigma(X)\,dB_t$ for Lipschitz $b$ and $\sigma$, the generator of $X$ is given by
\[
L = \sum_{i,j \le d} a_{ij}(x)\,\partial_i\partial_j + \sum_{i \le d} b_i(x)\,\partial_i, \qquad a_{ij} = \big(\tfrac{1}{2}\sigma\sigma^T\big)_{ij}.
\]

Gibbs measures for gradient systems

The next result generalizes quite a bit. It is an exact formula for the stationary distribution of diffusions of the form
\[
dX = -\nabla\Phi(X)\,dt + \sqrt{2\beta^{-1}}\,dB_t.
\]

Theorem 6. Suppose that $L$ is the generator of a Markov process and is of the form $L = \beta^{-1}\Delta - \nabla\Phi\cdot\nabla$, where $\Phi \in C^1(\mathbb{R}^d)$ (so the drift $b = -\nabla\Phi$ is continuous), and suppose that $e^{-\beta\Phi(x)}$ is integrable. Then $\mu = Z_\beta^{-1}\,e^{-\beta\Phi(x)}\,dx$ is stationary.

Proof (mostly). It is necessary and sufficient to show that $\mu(dx) = Z_\beta^{-1} e^{-\beta\Phi(x)}\,dx$ satisfies $\int_{\mathbb{R}^d} L\varphi(x)\,\mu(dx) = 0$ for all $\varphi \in D(L)$. It is true, but not obvious, that it suffices to check this condition on $C_c^\infty(\mathbb{R}^d)$ [2]. The key observation in this proof is that
\[
L\varphi = \frac{1}{\beta}\, e^{\beta\Phi}\, \mathrm{div}\big( e^{-\beta\Phi}\,\nabla\varphi \big).
\]
Since we assumed that $e^{-\beta\Phi} \in L^1$ we know that $Z_\beta < \infty$, and since $\varphi \in C_c^\infty(\mathbb{R}^d)$, we know that $\frac{1}{\beta} e^{\beta\Phi}\,\mathrm{div}(e^{-\beta\Phi}\nabla\varphi)$ is integrable with respect to $\mu$. Then
\[
\int L\varphi\,d\mu = \frac{1}{\beta Z_\beta} \int_{\mathbb{R}^d} \mathrm{div}\big( e^{-\beta\Phi(x)}\,\nabla\varphi(x) \big)\,dx = 0,
\]
where the last equality follows from the divergence theorem. $\square$

Lyapunov functions for diffusions

Theorem 7 (Khasminskii, 1960) [8]. Suppose that $L$ is of the form above. Then a sufficient condition for the existence of an invariant probability measure is that there exists a $C^2(\mathbb{R}^d)$ (Lyapunov) function $V(x)$ with the properties
\[
\lim_{|x|\to\infty} V(x) = \infty, \qquad \lim_{|x|\to\infty} LV(x) = -\infty.
\]

Remark 8. In this setting there are some domain issues to deal with, which I ignore.

Proof (sketch based on [2]). For simplicity, we will assume that transition densities exist. The idea of this proof is to use the identity
\[
\frac{P_T V(x) - V(x)}{T} = \frac{1}{T}\int_0^T P_t L V(x)\,dt.
\]
If we assume (without loss of generality) that $V(x) \ge 0$, then this implies that
\[
-\frac{V(x)}{T} \le \frac{1}{T}\int_0^T P_t L V(x)\,dt.
\]
Recalling that $P_t L V(x) = \int_{\mathbb{R}^d} p(t,x,y)\,LV(y)\,dy$, we can observe that $LV \le M$ uniformly for some $M$. Moreover, for each $\epsilon > 0$ there is a compact set $K_\epsilon$ so that $LV(y) \le -\tfrac{1}{\epsilon}$ for $y \notin K_\epsilon$. Then
\[
\frac{1}{T}\int_0^T\!\int_{\mathbb{R}^d} p(t,x,y)\,LV(y)\,dy\,dt
= \frac{1}{T}\int_0^T\!\int_{K_\epsilon} p(t,x,y)\,LV(y)\,dy\,dt + \frac{1}{T}\int_0^T\!\int_{K_\epsilon^c} p(t,x,y)\,LV(y)\,dy\,dt
\le M - \frac{1}{\epsilon}\cdot\frac{1}{T}\int_0^T\!\int_{K_\epsilon^c} p(t,x,y)\,dy\,dt.
\]

It then follows that
\[
\frac{1}{T}\int_0^T\!\int_{K_\epsilon^c} p(t,x,y)\,dy\,dt \le \epsilon\Big( M + \frac{V(x)}{T} \Big),
\]
which implies that the measures $A \mapsto \frac{1}{T}\int_0^T\!\int_A p(t,x,y)\,dy\,dt$ are tight. $\square$

Remark 9. Using Lyapunov functions to control recurrence properties (which essentially gives you a stationary measure) goes back to at least the 1950s, when Foster [3] showed that the existence of a discrete Lyapunov function implied recurrence properties of a Markov chain.

Examples

Non-compact with no stationary distribution. The classical example of a continuous time process with no stationary distribution is standard Brownian motion. This is an explicit computation using the known semigroup.

Compact state space, but no stationary distribution. Such a process has to look odd, since it cannot be Feller. This example comes from Hairer [5]. Let our state space be $[0,1]$ with transition probabilities $P(x,\cdot) = \delta_{x/2}\,\mathbf{1}_{\{x>0\}} + \delta_1\,\mathbf{1}_{\{x=0\}}$. We can see that $\mu(\{0\}) = 0$ for any invariant measure, from which it follows that $\mu\big((\tfrac12, 1]\big) = 0$, and then inductively that $\mu([0,1]) = 0$, which contradicts $\mu$ being a probability measure.

Lyapunov function for a diffusion. Consider the Ornstein-Uhlenbeck process given by
\[
dX_t = -X_t\,dt + \sqrt{2}\,dB_t;
\]
then the generator of $X$ is
\[
L = -x\,\frac{d}{dx} + \frac{d^2}{dx^2}.
\]
Take a smooth function $V(x)$ which is equal to $e^{|x|}$ near infinity. Then clearly $\lim_{|x|\to\infty} V(x) = \infty$ and $\lim_{|x|\to\infty} LV(x) = -\infty$. Therefore a stationary distribution exists. Notice that this (being one dimensional with a continuous drift) is also a gradient diffusion, with $\Phi(x) = x^2/2$ and $\beta = 1$, and therefore it has a stationary measure proportional to $e^{-\int_0^x y\,dy}\,dx = e^{-x^2/2}\,dx$.
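The gradient-system formula of Theorem 6 is easy to test numerically. The sketch below is an illustration with a potential of my own choosing (the double well $\Phi(x) = x^4/4 - x^2/2$ with $\beta = 1$, not an example from the notes): it simulates $dX = -\Phi'(X)\,dt + \sqrt{2\beta^{-1}}\,dB_t$ and compares a long-run empirical moment with the same moment under $Z_\beta^{-1}e^{-\beta\Phi}$.

```python
# Euler-Maruyama for the gradient diffusion with a double-well potential,
# compared against the Gibbs measure by direct quadrature on a grid.
import numpy as np

beta = 1.0
phi  = lambda x: x**4 / 4 - x**2 / 2
dphi = lambda x: x**3 - x

rng = np.random.default_rng(2)
dt, n = 1e-3, 1_000_000
x, samples = 0.0, np.empty(n)
for i in range(n):
    x += -dphi(x) * dt + np.sqrt(2 * dt / beta) * rng.standard_normal()
    samples[i] = x

burn = samples[n // 10:]                       # discard an initial transient
grid = np.linspace(-4, 4, 4001)
w = np.exp(-beta * phi(grid))                  # unnormalized Gibbs density
gibbs_m2 = (grid**2 * w).sum() / w.sum()       # E[X^2] under the Gibbs measure
print("empirical E[X^2]:", (burn**2).mean())
print("Gibbs     E[X^2]:", gibbs_m2)
```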

Uniqueness

Ergodic theorems

Proofs of uniqueness for invariant measures of Markov chains and processes tend to rely quite heavily on ergodic theorems. The key fact is the observation that the set of invariant probability measures for a dynamical system forms a convex set and that the extreme points of this set are the ergodic measures. The proof of this result is not terribly hard, but it is tedious and I will omit it. I will call this result the structure theorem. We need to fit Markov chains into the framework of dynamical systems to use the ergodic theory (and to define what it means to be ergodic).

Definition 10. Let $\mathbb{T}$ be $\mathbb{N}$, $\mathbb{Z}$, $\mathbb{R}$, or $\mathbb{R}_+$. A dynamical system on a Polish space $E$ is a collection of maps $\{\theta_t\}_{t\in\mathbb{T}}$ with $\theta_t : E \to E$ and the properties

1. $(t,\omega) \mapsto \theta_t(\omega)$ is jointly measurable;
2. $\theta_t \circ \theta_s = \theta_{t+s}$.

A measure $\mu$ is said to be invariant for $\theta_t$ if the pushforward satisfies $(\theta_t)_*\mu = \mu$ for all $t$. The invariant sigma algebra of a dynamical system is $\mathcal{I}(\theta) = \{A \in \mathcal{B}_E : \theta_t^{-1}(A) = A \text{ for all } t \in \mathbb{T}\}$.

We recall Birkhoff's ergodic theorem.

Theorem 11 (Birkhoff). Let $\theta_t$ be a dynamical system and let $\mu$ be invariant for $\theta_t$. Then for each $f \in L^1(d\mu)$ and $\mu$-almost every $x \in E$,
\[
\lim_{N\to\infty} \frac{1}{N}\sum_{n=0}^{N-1} f(\theta_n(x)) = E_\mu[f \mid \mathcal{I}(\theta)](x).
\]

This theorem immediately implies the main result that we can use to prove the uniqueness of invariant measures. Notice that there is a unique stationary distribution if and only if the set of invariant probability measures is a singleton. In this case, the measure must be ergodic, by the structure theorem. We call a measure ergodic for $\theta_t$ if $\mu(A) \in \{0,1\}$ for all $A \in \mathcal{I}(\theta)$. If $\mu$ is ergodic for $\theta$, then for each $f$ and $\mu$-almost every $x$ we have $E_\mu[f \mid \mathcal{I}(\theta)](x) = E_\mu[f]$.

Corollary 12. If $\mu_1$ and $\mu_2$ are distinct measures which are both ergodic for $\theta_t$, then they are mutually singular.

Proof. If $\mu_1 \neq \mu_2$ there is a bounded measurable function $f$ with $E_{\mu_1}[f] \neq E_{\mu_2}[f]$. But for each $i$, there is a set $\Omega_i$ with $\mu_i(\Omega_i) = 1$ such that for all $x \in \Omega_i$,
\[
\lim_{N\to\infty} \frac{1}{N}\sum_{n=0}^{N-1} f(\theta_n(x)) = E_{\mu_i}[f],
\]
from which it follows that $\Omega_1 \cap \Omega_2 = \emptyset$, so that $\mu_1$ and $\mu_2$ are singular. $\square$

Our goal in proving uniqueness is to take this measure theoretic result and turn it into a pointwise result. In particular, we want to show that the sets of measure one in the previous results actually have a point in common. This is difficult, since one cannot generally infer pointwise information from measure theoretic information. The main idea for the general uniqueness result will be to show that a little extra regularity on our Markov processes implies that this measure theoretic condition extends to a topological condition.

Markov processes as dynamical systems

To use the theory in the previous section, we need to construct a dynamical system with the property that being invariant for this dynamical system is the same as being stationary for the Markov process. Before doing this in generality, let's start with an example.

Lemma 13 (SLLN). Suppose that $X_i$ is an iid sequence of real valued random variables and $E[|X_1|] < \infty$. Then $\frac{1}{n}\sum_{i=1}^n X_i \to E[X_1]$ almost surely.

Proof. Since $\mathbb{R}$ is Polish we may apply Kolmogorov's extension theorem to realize this sequence on a single probability space $(\mathbb{R}^{\mathbb{N}}, \mathcal{B}_{\mathbb{R}}^{\otimes\mathbb{N}}, P^{\otimes\mathbb{N}})$, where $P$ is the law of $X_1$. If $f(x) = x_1$ is the projection to the first coordinate, then $E_{P^{\otimes\mathbb{N}}}[|f|] < \infty$ by hypothesis. Let $\theta_i$ be the shift map: if $x = (x_1, x_2, \dots)$ then $\theta_i(x) = (x_{i+1}, x_{i+2}, \dots)$. Then for $P^{\otimes\mathbb{N}}$-almost every $x \in \mathbb{R}^{\mathbb{N}}$,
\[
\frac{1}{n}\sum_{i=1}^n x_i = \frac{1}{n}\sum_{i=0}^{n-1} f(\theta_i x) \to E[f \mid \mathcal{I}(\theta)](x),
\]
but the invariant $\sigma$-algebra of $\theta$ is a subset of the tail $\sigma$-algebra, which is trivial by the Kolmogorov 0-1 law, so for $P^{\otimes\mathbb{N}}$-almost every $x \in \mathbb{R}^{\mathbb{N}}$ we have $E_{P^{\otimes\mathbb{N}}}[f \mid \mathcal{I}(\theta)](x) = E_{P^{\otimes\mathbb{N}}}[f] = E[X_1]$. $\square$

The key idea behind this proof is to realize the entire stochastic process on a single product probability space and then have our dynamical system be the shift map.
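As a quick numerical illustration of Lemma 13 (again an illustration, not part of the original argument): Birkhoff averages of the first-coordinate projection along the shift map are just running sample means of an iid sequence, and they converge to the expectation.

```python
# Birkhoff averages along the shift for an iid exponential(1) sequence:
# running means converge to E[X_1] = 1.
import numpy as np

rng = np.random.default_rng(3)
xs = rng.exponential(scale=1.0, size=1_000_000)
running_means = np.cumsum(xs) / np.arange(1, xs.size + 1)
for n in (10, 1_000, 100_000, 1_000_000):
    print(f"n={n:>9}  Birkhoff average = {running_means[n - 1]:.4f}")
```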

Formally, given a Markov semigroup $P_t$ over a Polish space $E$ and an invariant measure $\mu$ for $P_t$, we can construct the probability space $(E^{\mathbb{T}}, \mathcal{B}^{\otimes\mathbb{T}}, P_\mu)$, where $P_\mu$ is the measure obtained from the extension theorem applied to the finite dimensional distributions on $t_1 < t_2 < \dots < t_n$ given on cylinder sets by the following expression (where $P_\mu(A_{t_1} \times \dots \times A_{t_n})$ really denotes the measure of the set where all but the $t_i$ projections are all of $E$):
\[
P_\mu(A_{t_1} \times \dots \times A_{t_n}) = \int_{A_{t_1}} \dots \int_{A_{t_n}} P_{t_n - t_{n-1}}(x_{n-1}, dx_n) \cdots P_{t_2 - t_1}(x_1, dx_2)\,\mu(dx_1).
\]
These specifications are consistent by the Markov property, so they generate a measure, and it is not hard to see that since $\mu$ is invariant, $P_\mu$ is invariant for the shift maps $(\theta_t x)(s) = x(t+s)$. This now lets us define ergodicity for Markov processes: $\mu$ is ergodic for $P_t$ if and only if $P_\mu$ is ergodic for $\theta_t$. There is a bit of measure theory that needs to be done to explain how this translates to the original space. It is handled in [5].

Return to reflected Brownian motion

There is some general theory for showing uniqueness of stationary distributions, but first I want to go over an example where we can show it directly. This example also serves to highlight the connection between regularity and uniqueness. The goal here will be to show that any invariant measure is equivalent to a single reference measure (for example, Lebesgue measure). This will imply that any two invariant measures must also be equivalent; since distinct ergodic invariant measures are mutually singular, it follows that they are equal.

Recall that we have the exact formula
\[
P\big(Z(t) \le z\big) = \Phi\Big(\frac{z - \mu t}{\sqrt{t}}\Big) - e^{2\mu z}\,\Phi\Big(\frac{-z - \mu t}{\sqrt{t}}\Big).
\]
A similar formula exists for a general starting point $x$, and it is not hard to see from this that $P_x(Z(t) \in A) = 0$ if and only if $m(A) = 0$, where $m$ is Lebesgue measure on $\mathbb{R}_+$. In particular then, if $\pi$ is an invariant measure, then
\[
\pi(A) = \int_{\mathbb{R}_+} P_x\big(Z(1) \in A\big)\,\pi(dx).
\]
Since for every $x \in \mathbb{R}_+$, $P_x(Z(1) \in A) = 0$ if and only if $m(A) = 0$, the above integrand is zero if and only if $m(A) = 0$. It follows that every invariant measure is equivalent to Lebesgue measure in the sense of absolute continuity, and therefore invariant measures are unique.
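The exact formula above, and the exponential limit behind the existence argument, can be cross-checked by simulation. The following sketch is an illustration of my own (the Euler scheme for the discontinuous drift is only approximate): it simulates $d\beta = \mu\,\mathrm{sign}(\beta)\,dt + dB$ started at zero and compares the empirical law of $Z(t) = |\beta_t|$ with the closed-form distribution function and with its $t \to \infty$ limit $1 - e^{2\mu z}$.

```python
# Monte Carlo check of P(Z(t) <= z) for reflected Brownian motion with drift
# mu < 0, against Phi((z - mu t)/sqrt(t)) - exp(2 mu z) Phi((-z - mu t)/sqrt(t))
# and the exponential limit 1 - exp(2 mu z).
import numpy as np
from math import erf, sqrt, exp

Phi = lambda u: 0.5 * (1.0 + erf(u / sqrt(2.0)))

mu, t, dt, n_paths = -1.0, 5.0, 1e-3, 20_000
rng = np.random.default_rng(4)
beta = np.zeros(n_paths)
for _ in range(int(t / dt)):
    beta += mu * np.sign(beta) * dt + sqrt(dt) * rng.standard_normal(n_paths)
Z = np.abs(beta)

for z in (0.25, 0.5, 1.0, 2.0):
    exact = Phi((z - mu * t) / sqrt(t)) - exp(2 * mu * z) * Phi((-z - mu * t) / sqrt(t))
    print(f"z={z:4.2f}  empirical={np.mean(Z <= z):.3f}  "
          f"exact={exact:.3f}  limit={1 - exp(2 * mu * z):.3f}")
```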

The strong Feller property

Typically, if we want to exchange a measure theoretic property which holds at almost every point of a set for one which holds on the entire set, some kind of continuity is necessary. A strong, but not completely unreasonable, condition is the strong Feller property: given a Feller Markov semigroup $\{P_t\}_{t\ge0}$, we say that $\{P_t\}_{t\ge0}$ is strong Feller if $P_t(B_b(E)) \subset C_b(E)$ for all $t > 0$.

Lemma 14. If $P_t$ is a strong Feller semigroup over a Polish space $E$ and $\mu_1$ and $\mu_2$ are mutually singular invariant measures for $P_t$, then $\mu_1$ and $\mu_2$ have disjoint topological supports.

Proof. Let $A$ be a set with $\mu_1(A) = 1$ and $\mu_2(A) = 0$. Since we have (by invariance) $\int_E P_t(\mathbf{1}_A)(x)\,\mu_i(dx) = \mu_i(A)$, we must have $P_t(\mathbf{1}_A)(x) = 1$ for $\mu_1$-almost every $x$ and $P_t(\mathbf{1}_A)(x) = 0$ for $\mu_2$-almost every $x$. But then by continuity, since $P_t(\mathbf{1}_A)$ is constant $\mu_i$-almost everywhere, it is constant with the same value on the topological support of $\mu_i$. Thus the topological supports of the $\mu_i$ must be disjoint. $\square$

With this result in hand, we would like nice conditions that allow us to check the strong Feller property and that every $x$ belongs to the topological support of each invariant measure. The most general way I found to check the strong Feller property is actually to check regularity, so I will postpone that. Ideally, we would like to show the support condition by showing that for each $x \in E$, there is a $t$ so that for any initial condition we can reach $x$ by time $t$ with positive probability. For discrete models, this argument can often be carried out directly (see the sketch below). For models with a continuous state space, it is more subtle.
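For a finite chain the reachability check is a matrix computation. The sketch below (a toy example of my own) tests whether a state $x$ can be reached from every starting state within $n$ steps by looking at the corresponding column of $(I + P)^n$, which has a positive $(y,x)$ entry exactly when $x$ is reachable from $y$ in at most $n$ steps.

```python
# Reachability for a small chain: state x is reachable from y within n steps
# with positive probability iff ((I + P)^n)[y, x] > 0.
import numpy as np

P = np.array([[0.0, 1.0, 0.0, 0.0],
              [0.5, 0.0, 0.5, 0.0],
              [0.0, 0.5, 0.0, 0.5],
              [0.0, 0.0, 1.0, 0.0]])    # nearest-neighbour walk on 4 states

n = P.shape[0]
reach = np.linalg.matrix_power(np.eye(n) + P, n)
for x in range(n):
    print(f"state {x} reachable from every state within {n} steps:",
          bool(np.all(reach[:, x] > 0)))
```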

Stroock-Varadhan support theorem for diffusions

Suppose we are given a stochastic differential equation in Stratonovich form with smooth coefficients,
\[
dX_t = f_0(X_t)\,dt + \sum_{i=1}^m f_i(X_t) \circ d\omega_i(t);
\]
then we have a theorem of Stroock and Varadhan.

Theorem 15 (Stroock and Varadhan) [5]. Given a diffusion of the form above, the support of $P_t(x_0,\cdot)$ is the closure of the set of points in $\mathbb{R}^d$ that can be reached in time $t$ by solutions to the control problem
\[
\dot{x}(t) = f_0(x(t)) + \sum_{i=1}^m f_i(x(t))\,u_i(t), \qquad x(0) = x_0,
\]
where the $u_i$ are arbitrary smooth functions.

Proof (extremely rough sketch; see [1] for an actual proof). This result is hard. The idea behind the proof is to use the Cameron-Martin theorem to find the support of Brownian motion in an appropriate space of continuous functions, and then to push this result forward to the differential equations. The essential difficulty in this scheme lies in the fact that the Ito solution map is extremely discontinuous. The appropriate theory for this version of the proof is called rough paths.

Recall that if $h$ lies in the Cameron-Martin space
\[
W^{1,2}_0 = \Big\{ h \in L^2(0,T) : \exists\, k \in L^2(0,T) \text{ with } h(t) = \int_0^t k(s)\,ds \Big\},
\]
then the Wiener measure $P$ is absolutely continuous with respect to the measure $P_h$ given by $P_h(A) = P(A_h)$, where $A_h = \{x - h : x \in A\}$. Call the support of $B$ in the space of functions we are working on $F$. From the version of Cameron-Martin for this space, we find that if $x \in F$, then $x - h \in F$ for each $h \in W^{1,2}_0$. If we take a smooth sequence $x_n$ converging to $x$, then we see that $0 \in F$, and from this it follows that each $h \in W^{1,2}_0$ is also in $F$. The reverse inclusion uses properties of the topology, which I am not introducing.

For sufficiently regular $h$, consider the ordinary differential equation
\[
dx^h_t = f_0(x^h_t)\,dt + \sum_{i=1}^m f_i(x^h_t)\,dh_i(t).
\]
Recall that Brownian motion is the uniform limit of piecewise linear functions. If we call these approximating functions $B_n$, then in an appropriate sense solutions to the above equations for $x^{B_n}$ will converge to the solution $x^B$. Since these functions are all in $W^{1,2}_0$, we can see that the support of $X$ lies in the closure of $\{x^h : h \in W^{1,2}_0\}$. The reverse inclusion is nontrivial and again uses the properties of the topology. The rough paths interpretation of these differential equations agrees with the Stratonovich formulation of the SDE, which is where that requirement comes from. $\square$

We come to the connection between regularity and uniqueness. A common way to check that the strong Feller property holds is to show that not only is $P_t$ a map from $B_b$ to $C_b$, but that it is actually a map from $B_b$ to $C_b^\infty$. This result is due to Hörmander, though probabilistic proofs of it were the main reason for the development of the Malliavin calculus.
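To see what the smoothing statement $P_t : B_b \to C_b^\infty$ means concretely, here is a toy computation (an illustration of my own, not part of the notes): for the Ornstein-Uhlenbeck semigroup of $dX = -X\,dt + \sqrt{2}\,dB$ one has $X_t \sim N(x e^{-t}, 1 - e^{-2t})$ under $P_x$, so applying $P_t$ to the discontinuous indicator $\mathbf{1}_{[0,\infty)}$ produces the smooth function $\Phi\big(x e^{-t}/\sqrt{1 - e^{-2t}}\big)$.

```python
# P_t applied to the discontinuous function 1_{[0,infty)} for the OU semigroup:
# the jump at 0 is smoothed out for any t > 0.
from math import erf, sqrt, exp

Phi = lambda u: 0.5 * (1.0 + erf(u / sqrt(2.0)))

def Pt_indicator(x, t):
    """P_t 1_{[0,inf)}(x) = P_x(X_t >= 0) for dX = -X dt + sqrt(2) dB."""
    return Phi(x * exp(-t) / sqrt(1.0 - exp(-2.0 * t)))

f = lambda x: 1.0 if x >= 0 else 0.0
for x in (-0.2, -0.1, 0.0, 0.1, 0.2):
    print(f"x={x:+.1f}   f(x)={f(x):.0f}   (P_0.5 f)(x)={Pt_indicator(x, 0.5):.4f}")
```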

Regularity

Unfortunately, regularity is complicated and I will not be able to go into any detail about the proof of this result. It is something that can be checked on a case-by-case basis though.

Hörmander's condition

Again, we will work with the Stratonovich formulation of the model, since that is the framework one uses in the proof of this result using the Malliavin calculus.

Theorem 16. Suppose that
\[
dX_t = f_0(X_t)\,dt + \sum_{i=1}^m f_i(X_t) \circ dB^i_t
\]
and define collections of vector fields $\mathcal{V}_k$ by $\mathcal{V}_0 = \{f_i : 1 \le i \le m\}$ and $\mathcal{V}_{k+1} = \mathcal{V}_k \cup \{[U, f_j] : U \in \mathcal{V}_k,\ 0 \le j \le m\}$, where $[\cdot,\cdot]$ is the Lie bracket $[V,U](x) = (DV)(x)U(x) - (DU)(x)V(x)$. If the vector fields in the $\mathcal{V}_k$ are bounded and $\mathrm{span}\{V(x) : V \in \bigcup_k \mathcal{V}_k\} = \mathbb{R}^d$ for each $x \in \mathbb{R}^d$, then the semigroup admits smooth transition densities and maps bounded measurable functions into smooth functions.

Remark 17. We can translate between Ito and Stratonovich equations in one dimension via
\[
dX = a(t,X)\,dt + b(t,X)\,dB \iff dX = \Big( a(t,X) - \tfrac{1}{2}\, b(t,X)\,\partial_x b(t,X) \Big) dt + b(t,X) \circ dB,
\]
and there is a similar result in higher dimensions.

Example 18 (Ornstein-Uhlenbeck process). Note that this is overkill, since the equation is uniformly elliptic. Let $Y(t)$ be the solution to the Ito form SDE
\[
dY = -Y\,dt + dB_t, \qquad Y(0) = x_0.
\]
Then $Y_t$ also solves the Stratonovich equation
\[
dY = -Y\,dt + {}\circ dB_t.
\]
In the language above, $f_0(x) = -x$ and $f_1(x) = 1$. Then $\mathcal{V}_0 = \{1\}$ and $\mathcal{V}_1 = \{1\} \cup \{[1, g] : g \in \{1, -x\}\}$. We compute $[1, -x] = \frac{d}{dx}(1)\cdot(-x) - \frac{d}{dx}(-x)\cdot 1 = 1$ and $[1,1] = 0$. But since the span of $\{1\}$ is $\mathbb{R}$ for all $x$, this satisfies Hörmander's condition.
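The bracket computations in Hörmander's condition are easy to automate symbolically. The sketch below is my own example (not from the notes); it uses sympy and the hypoelliptic system $dX^1 = dB_t$, $dX^2 = X^1\,dt$, i.e. $f_0 = (0, x_1)$, $f_1 = (1, 0)$, computes $[f_1, f_0]$ with the bracket convention of Theorem 16, and checks the spanning condition: the noise only enters the first coordinate, but the bracket fills the second.

```python
# Symbolic check of Hormander's condition for f0 = (0, x1), f1 = (1, 0),
# using [V, U](x) = DV(x) U(x) - DU(x) V(x) as in Theorem 16.
import sympy as sp

x1, x2 = sp.symbols("x1 x2")
X = sp.Matrix([x1, x2])
f0 = sp.Matrix([0, x1])
f1 = sp.Matrix([1, 0])

def lie_bracket(V, U):
    """[V, U](x) = DV(x) U(x) - DU(x) V(x)."""
    return V.jacobian(X) * U - U.jacobian(X) * V

b = lie_bracket(f1, f0)
print(b.T)                              # Matrix([[0, -1]])
span = sp.Matrix.hstack(f1, b)
print("rank:", span.rank())             # 2 = d, so the condition holds at every x
```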

References

[1] F. Baudoin, Rough paths. Unpublished lecture notes.

[2] S. Fornaro, Regularity properties for second order partial differential operators with unbounded coefficients.

[3] F.G. Foster, On the stochastic matrices associated with certain queuing processes. Ann. Math. Statist., Volume 24, Number 3 (1953).

[4] S.E. Graversen and A.N. Shiryaev, An Extension of P. Lévy's Distributional Properties to the Case of a Brownian Motion with Drift. Bernoulli, Vol. 6, No. 4 (Aug., 2000).

[5] M. Hairer, Ergodic theory for Stochastic PDEs. Unpublished lecture notes, 2008.

[6] M. Hairer, On Malliavin's proof of Hörmander's theorem. Unpublished lecture notes, 2011.

[7] R.Z. Khasminskii, Ergodic Properties of Recurrent Diffusion Processes and Stabilization of the Solution to the Cauchy Problem for Parabolic Equations. Theory Probab. Appl., Volume V, Number 2 (1960).

[8] R.Z. Khasminskii, Stochastic Stability of Differential Equations. Springer-Verlag, Germany, 2nd Edition, 2012.
