A Central Limit Theorem for Fleming-Viot Particle Systems

Frédéric Cérou (corresponding author), INRIA Rennes & IRMAR, France, frederic.cerou@inria.fr
Bernard Delyon, Université Rennes 1 & IRMAR, France, bernard.delyon@univ-rennes1.fr
Arnaud Guyader, LPSM, Sorbonne Université & CERMICS, France, arnaud.guyader@upmc.fr
Mathias Rousset, INRIA Rennes & IRMAR, France, mathias.rousset@inria.fr

Abstract

Fleming-Viot type particle systems represent a classical way to approximate the distribution of a Markov process with killing, given that it is still alive at a final deterministic time. In this context, each particle evolves independently according to the law of the underlying Markov process until its killing, and then branches instantaneously from the state of another randomly chosen particle. While the consistency of this algorithm in the large population limit has been recently studied in several articles, our purpose here is to prove Central Limit Theorems under very general assumptions. For this, the key assumptions are that the particle system does not explode in finite time, and that the jump and killing times have atomless distributions. In particular, this includes the case of elliptic diffusions with hard killing.

Index Terms: Sequential Monte Carlo, Interacting particle systems, Process with killing

2010 Mathematics Subject Classification: 82C22, 82C80, 65C05, 60J25, 60K35, 60K37

This work was partially supported by the French Agence Nationale de la Recherche, under grant ANR-14-CE23-12, and by the European Research Council under the European Union's Seventh Framework Programme (FP7/2007-2013) / ERC Grant Agreement.

Contents

1 Introduction
2 Main result
   2.1 Notation and assumptions
   2.2 Main result
   2.3 Some comments on the asymptotic variance
   2.4 Connection with sequential Monte Carlo methods
   2.5 Example: Diffusion process with hard obstacle
   2.6 Other examples
3 Proof
   3.1 Overview
   3.2 Well-posedness and non-simultaneity of jumps
   3.3 Martingale decomposition
   3.4 Quadratic variation estimates for M and M̄
   3.5 L^2-estimate for γ_T^N(ϕ)
   3.6 Time uniform estimate for p_t^N
   3.7 Approximation of the quadratic variation of γ^N(Q)
   3.8 Convergence of i_t^N
   3.9 Another formulation of σ_T^2(ϕ)
   3.10 Martingale Central Limit Theorem
4 Appendix
   4.1 Preliminary on Feller processes
   4.2 Stopping times and martingales
   4.3 Proof of Lemma 3.1: A) implies A')
   4.4 About Soft Killing and Carré-du-Champ operator
   4.5 Integration rules

1 Introduction

Let X = (X_t)_{t≥0} denote a Markov process evolving in a state space of the form F ∪ {∂}, where ∂ ∉ F is an absorbing state: X evolves in F until it reaches ∂, and then remains trapped there forever. Let us also denote by τ the associated killing time, meaning that

τ := inf{t ≥ 0 : X_t = ∂}.
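To fix ideas, here is a minimal simulation sketch of such a killed process: a Brownian motion on F = (0, 1), sent to the absorbing state (represented below by the Python value None) as soon as it leaves F. The Euler time step and all numerical values are illustrative assumptions added here, not part of the paper.

```python
import random

def killed_brownian_path(x0, T, dt, rng):
    """Simulate X on F = (0, 1) with absorption at the boundary: the path
    evolves as a Brownian motion until it leaves F, after which it stays
    at the absorbing state (represented by None)."""
    x, t = x0, 0.0
    while t < T:
        x += rng.gauss(0.0, dt ** 0.5)   # Euler step of a Brownian motion
        t += dt
        if not 0.0 < x < 1.0:
            return None                   # killed: tau <= T
    return x                              # still alive at time T

rng = random.Random(0)
n, survivors = 2_000, 0
for _ in range(n):
    if killed_brownian_path(0.5, T=0.5, dt=1e-3, rng=rng) is not None:
        survivors += 1
p_hat = survivors / n    # crude estimate of p_T = P(tau > T)
```

For these parameters the exact survival probability is about 0.11 (by eigenfunction expansion of the Dirichlet Laplacian), so p_hat should land nearby, up to Monte Carlo noise and the time-discretization bias of boundary crossings.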

Given a deterministic final time T > 0, we are interested both in the distribution of X_T given that it has still not been killed at time T, denoted

η_T := L(X_T | τ > T),

and in the probability of this event, that is

p_T := P(τ > T),

with the assumption that p_T > 0. We also define η_t and p_t for t < T accordingly. Without loss of generality, we will assume for simplicity that P(X_0 = ∂) = 0 and p_0 = 1, so that η_0 = L(X_0). Let us stress that in all this paper, T is held fixed and finite.

A crude Monte Carlo method approximating these quantities consists in: simulating N i.i.d. random variables, also called particles in the present work, X_0^1, ..., X_0^N i.i.d. ~ η_0, letting them evolve independently according to the dynamic of the underlying process X up to time T, and eventually considering the estimators

η̂_T^N := ( Σ_{i=1}^N 1_{X_T^i ∈ F} δ_{X_T^i} ) / ( Σ_{i=1}^N 1_{X_T^i ∈ F} )   and   p̂_T^N := (1/N) Σ_{i=1}^N 1_{X_T^i ∈ F},

with the convention that 0/0 = 0.

It is readily seen that these estimators are not suitable for large T, typically when T ≫ E[τ], since they lead to a rare event estimation problem. The typical situation of interest corresponds to the case where p_T is positive but very close to zero. In this case, one has to simulate a sample with size of order 1/p_T, which might be intractable in practice. To be more specific, since the random variables X_T^i are i.i.d., we know that N p̂_T^N has a binomial distribution with parameters N and p_T, so that the relative variance of p̂_T^N is equal to (1 − p_T)/(N p_T) ≈ 1/(N p_T). By the Central Limit Theorem, we also have

√N ( p̂_T^N − p_T ) →_D N(0, p_T (1 − p_T)).   (1.1)

Let us emphasize that, even if the probability p_T is very low, one might be interested in a precise estimation of this quantity, as well as in the estimation

of the law of the process given that it is still alive, that is η_T. For example, this is the case in sequential Monte Carlo methods for simulating and estimating rare events, in particular concerning the so-called Adaptive Multilevel Splitting algorithm, which is at the origin of the work presented here. In [4], we make a connection between the latter and Fleming-Viot particle systems that allows us to deduce some asymptotic properties from the results of the present paper.

A possible way to tackle this rare event issue is to approximate the quantities at stake through a Fleming-Viot type particle system [2, 14]. Under Assumptions A) and B) that will be detailed below, the following process is well defined for any number of particles N ≥ 2.

Definition 1.1 (Fleming-Viot particle system). The Fleming-Viot particle system (X_t^1, ..., X_t^N)_{t∈[0,T]} is the Markov process with state space F^N defined by the following set of rules:

Initialization: consider N i.i.d. particles

X_0^1, ..., X_0^N i.i.d. ~ η_0;   (1.2)

Evolution and killing: each particle evolves independently according to the law of the underlying Markov process X until one of them hits ∂ (or the final time T is reached);

Branching (or rebirth, or splitting): the killed particle is taken from ∂, and is given instantaneously the state of one of the (N − 1) other particles (randomly uniformly chosen);

and so on until final time T.

Finally, we consider the estimators

η_T^N := (1/N) Σ_{i=1}^N δ_{X_T^i}   and   p_T^N := (1 − 1/N)^{N N_T^N},

where N N_T^N is the total number of branchings of the particle system until final time T. In other words, N_T^N is the empirical mean number of branchings per particle until time T:

N_T^N := (1/N) card{branching times ≤ T}.

We also define η_t^N, p_t^N and N_t^N for 0 < t < T analogously.
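As a concrete (and deliberately trivial) illustration, the following sketch compares the crude Monte Carlo estimator p̂_T^N with the Fleming-Viot estimator p_T^N = (1 − 1/N)^{N N_T^N} in the special case of a constant killing rate λ, where p_T = e^{−λT} and the particle positions are irrelevant: killings then simply arrive as a Poisson process of intensity Nλ. All numerical values below are illustrative assumptions.

```python
import math
import random

def crude_mc(N, T, lam, rng):
    """Crude Monte Carlo: fraction of N i.i.d. killing times tau ~ Exp(lam)
    exceeding T."""
    return sum(1 for _ in range(N) if rng.expovariate(lam) > T) / N

def fleming_viot(N, T, lam, rng):
    """Fleming-Viot estimator with constant killing rate lam: positions do
    not matter, so it suffices to count branchings, which arrive as a
    Poisson process of intensity N*lam on [0, T]."""
    t, branchings = 0.0, 0
    while True:
        t += rng.expovariate(N * lam)    # waiting time to the next killing
        if t > T:
            break
        branchings += 1                   # the killed particle branches
    return (1.0 - 1.0 / N) ** branchings

rng = random.Random(0)
p_true = math.exp(-2.0)                   # lam = 2, T = 1
mc = crude_mc(100_000, 1.0, 2.0, rng)
fv = sum(fleming_viot(200, 1.0, 2.0, rng) for _ in range(500)) / 500
```

Both estimates should be close to e^{−2} ≈ 0.1353; as discussed in Section 2.3, in this constant-rate case the Fleming-Viot asymptotic variance (−p_T^2 ln p_T) is markedly smaller than the crude Monte Carlo one (p_T(1 − p_T)).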

Under very general assumptions, Villemonais [14] proves among other things that p_T^N (or equivalently e^{−N_T^N}) converges in probability to p_T when N goes to infinity, and that η_T^N converges in law to η_T. In the present paper, we go one step further and establish central limit results for η_T^N and p_T^N. For this, the key assumptions are that the particle system does not explode in finite time, and that the jump and killing times have atomless distributions. In particular, this includes the case of elliptic diffusive processes killed when hitting the boundary of a given domain. As far as we know, this is the first CLT result in that case of hard obstacle, also called hard killing in the literature.

The rest of the paper is organized as follows. Section 2 details our assumptions and exposes the main results of the paper (Theorem 2.6 and Corollary 2.7). It also includes some comments on the asymptotic variance, and ends by exposing examples of applications. Section 3 is dedicated to the proof of the central limit theorem, while Section 4 gathers some technical results.

2 Main result

2.1 Notation and assumptions

For any bounded ϕ : F → R and t ∈ [0, T], we consider the unnormalized measure

γ_t(ϕ) := p_t η_t(ϕ) = E[ϕ(X_t) 1_{t<τ}],

with X_0 ~ η_0 = γ_0. Note that for any t ∈ [0, T], one has p_t = P(τ > t) = γ_t(1_F), and recall that p_0 = 1 by assumption. The associated empirical approximation is then given by γ_t^N := p_t^N η_t^N. We remark that γ_0^N = η_0^N. As will become clear from Theorem 2.6 and Corollary 2.7, the study of the final unnormalized measure γ_T^N allows us to deduce very easily corresponding properties for both p_T^N and η_T^N.

For simplicity, we assume that F is a measurable subset of some reference Polish space, and that for each initial condition, X is a càdlàg process in F ∪ {∂} satisfying the time-homogeneous Markov property, with ∂ being an absorbing state.
Its transition semi-group is denoted Q, meaning that (Q_t)_{t≥0} is a semi-group of operators defined, for any bounded measurable function ϕ : F → R, any x ∈ F and any t ≥ 0, by

Q_t ϕ(x) := E[ϕ(X_t) | X_0 = x].

By convention, in the above, the test function ϕ defined on F is extended to F ∪ {∂} by setting ϕ(∂) = 0. Thus, we have Q_t ϕ(∂) = 0 for all t ≥ 0. This equivalently defines a sub-Markovian semi-group on F, also denoted (Q_t)_{t≥0}. Note that, just as Q_t acts on functions on the right, it can act on finite measures on the left, as any Markovian semi-group. From the Markov property, we get that γ_0 Q_t = γ_t, and for all 0 ≤ t ≤ T,

γ_0(Q_T ϕ) = γ_t(Q_{T−t} ϕ) = γ_T(ϕ),   (2.1)

which is similar to what we have for Markovian semi-groups. Furthermore, for any probability distribution µ on F and any bounded measurable function ϕ : F → R, the standard notation V_µ(ϕ) stands for the variance of the random variable ϕ(Y) when Y is distributed according to µ, i.e.

V_µ(ϕ) := V(ϕ(Y)) = E[ϕ(Y)^2] − E[ϕ(Y)]^2 = µ(ϕ^2) − µ(ϕ)^2.

Our fundamental assumptions can now be detailed. The first one ensures that two different particles never jump nor branch at the same time.

Assumption A). This assumption has two parts:

i) For any initial condition x ∈ F, the jump times of the càdlàg Markov process t ↦ X_t ∈ F ∪ {∂} have an atomless distribution:

P(X_{t−} ≠ X_t | X_0 = x) = 0   for all t ≥ 0.

ii) There exists a linear subspace D of the space C_b(F) of bounded measurable real-valued functions on F, which contains at least the indicator function 1_F, and such that for any ϕ ∈ D, the mapping (x, t) ↦ Q_t ϕ(x) is continuous on F × R_+.

The following elementary result will be useful.

Lemma 2.1. Under Assumption A), the non-increasing mapping t ↦ p_t = P(τ > t) is continuous and strictly positive on [0, T].

Proof. In fact, this result can be deduced using either i) or ii) of Assumption A). We just give a proof based on i). For any initial condition x ∈ F, i) implies that the killing time τ has an atomless distribution in (0, ∞). Indeed, if t = τ, then obviously X_{t−} ≠ X_t, and we conclude that this event happens with probability 0 at any deterministic time t. Note that τ = 0 may have positive probability.
However, under i), the continuity result now comes from the relation

p_t = P(τ > t) = ∫_F P(τ > t | X_0 = x) η_0(dx).

Besides, recall that p_T is strictly positive by assumption.

Our second assumption ensures the existence of the particle system at all times.

Assumption B). The particle system of Definition 1.1 is well-defined, in the sense that P(N_T^N < ∞) = 1.

Since p_t^N = (1 − 1/N)^{N N_t^N}, the mapping t ↦ p_t^N is clearly non-increasing, and P(p_t^N = 0) = P(N_t^N = ∞), hence we get the following result.

Lemma 2.2. Under Assumption B), the non-increasing jump process t ↦ p_t^N is strictly positive on [0, T].

Remark 2.3 (About the minimal assumptions). The main results of this paper, Theorem 2.6 and Corollary 2.7 below, are in fact established under Assumption B) and a different Assumption A'), detailed in Section 3.2. In a nutshell, A') amounts to supposing that, when constructing the Fleming-Viot particle system, both the jumps of some specific martingales and the killing times cannot occur at the same time. In Section 4.3, we prove that A) implies A'). Assumption A') is thus a weaker assumption than A), but it is more cumbersome to describe and it might be more difficult to check in practice, hence we chose to present Assumption A) first.

Remark 2.4 (The results of [3] are weaker). In [3], we prove Theorem 2.6 and Corollary 2.7 under two different assumptions, respectively called CC) ("carré-du-champ operator") and SK) ("soft killing") assumptions. In Section 4.4, we show that these assumptions imply A') and B). This means that the results of the present article encompass those of [3]. However, we refer the interested reader to [3] for further details and examples, as well as for a more elementary proof of Theorem 2.6 in a simpler context.

2.2 Main result

Recall that (X_t^1, ..., X_t^N)_{t≥0} denotes the Fleming-Viot particle system introduced in Section 1.

Definition 2.5. For any n ∈ {1, ..., N} and any k ≥ 1, we denote by τ_{n,k} the k-th killing time of particle n, with the convention τ_{n,0} = 0. Moreover, for any j ≥ 1, we denote by τ_j the j-th killing time for the whole system of particles, with the convention τ_0 = 0.
Accordingly, the processes

N_t^n := Σ_{k≥1} 1_{τ_{n,k} ≤ t}   and   N_t^N := (1/N) Σ_{n=1}^N N_t^n = (1/N) Σ_{j≥1} 1_{τ_j ≤ t}

are càdlàg counting processes that correspond respectively to the number of branchings of particle n before time t, and to the total number of branchings per particle of the whole particle system before time t. As mentioned before, we can then define the empirical measure associated to the particle system as

η_t^N := (1/N) Σ_{n=1}^N δ_{X_t^n},

while the estimate of the probability that the process is still not killed at time t is denoted

p_t^N := (1 − 1/N)^{N N_t^N},

and the unnormalized empirical measure is defined as γ_t^N := p_t^N η_t^N. As will be recalled in Proposition 3.13, and as already noticed by Villemonais in [14], their large N limits are respectively

η_t(ϕ) := E[ϕ(X_t) | X_t ≠ ∂],   p_t := P(X_t ≠ ∂),   and   γ_t(ϕ) := E[ϕ(X_t) 1_{X_t ≠ ∂}].

We clearly have η_t(ϕ) = γ_t(ϕ)/γ_t(1_F) = γ_t(ϕ)/p_t and γ_t(ϕ) = η_0(Q_t ϕ).

We can now expose the main result of the present paper. Recall that V_{η_T}(ϕ) denotes the variance of ϕ with respect to the distribution η_T.

Theorem 2.6. Let us denote by D̄ the closure in C_b(F), with respect to the norm ‖·‖_∞, of the space D satisfying Condition ii) of Assumption A). Then, under Assumptions A) and B), for any ϕ in D̄, one has the convergence in distribution

√N ( γ_T^N(ϕ) − γ_T(ϕ) ) →_D N(0, σ_T^2(ϕ)),

where σ_T^2(ϕ) is defined by

σ_T^2(ϕ) := p_T^2 V_{η_T}(ϕ) − p_T^2 ln(p_T) η_T(ϕ)^2 − 2 ∫_0^T V_{η_t}(Q_{T−t} ϕ) p_t dp_t.

The variance σ_T^2(ϕ) is linked to the asymptotic variance of the final value of the martingale t ↦ γ_t^N(Q_{T−t} ϕ). A sketch of the proof is given in Section 3.1. For now, just notice that since 1_F ∈ D by assumption, and remembering that γ_T(1_F) = p_T, the CLT for η_T^N is then a straightforward application of this result by considering the decomposition

√N ( η_T^N(ϕ) − η_T(ϕ) ) = (1/γ_T^N(1_F)) √N ( γ_T^N(ϕ − η_T(ϕ)) − γ_T(ϕ − η_T(ϕ)) ),

and the fact that p_T^N = γ_T^N(1_F) converges in probability to p_T = γ_T(1_F).

Corollary 2.7. Under Assumptions A) and B), for any ϕ in D̄, one has the convergence in distribution

√N ( η_T^N(ϕ) − η_T(ϕ) ) →_D N(0, σ_T^2(ϕ − η_T(ϕ))/p_T^2).

In addition,

√N ( p_T^N − p_T ) →_D N(0, σ^2),

where

σ^2 := σ_T^2(1_F) = −p_T^2 ln(p_T) − 2 ∫_0^T V_{η_t}(Q_{T−t}(1_F)) p_t dp_t.

Remark 2.8 (Non independent initial conditions). As will be clear from Step i) in the proof of Proposition 3.13 and from the proof of part a) of Proposition 3.24, Theorem 2.6 and Corollary 2.7 both still hold true when the i.i.d. assumption on the initial condition (1.2) is relaxed and replaced by the following set of conditions:

i) the initial particle system (X_0^1, ..., X_0^N) is exchangeable,

ii) its empirical distribution η_0^N = γ_0^N satisfies

E[ ( η_0^N(Q_T ϕ) − η_0(Q_T ϕ) )^2 ] ≤ c ‖ϕ‖_∞^2 / N,

for some constant c > 0, and

iii) the following CLT is satisfied: for any ϕ ∈ D̄,

√N ( η_0^N(Q_T ϕ) − η_0(Q_T ϕ) ) →_D N(0, V_{η_0}(Q_T ϕ)).

In the next subsection, we propose to focus our attention on the estimator p_T^N in order to discuss the asymptotic variance given by Corollary 2.7.

2.3 Some comments on the asymptotic variance

Recall that

√N ( p_T^N − p_T ) →_D N(0, σ^2),

where

σ^2 = −p_T^2 ln(p_T) − 2 ∫_0^T V_{η_t}(Q_{T−t}(1_F)) p_t dp_t.   (2.2)

In this expression, notice that

V_{η_t}(Q_{T−t}(1_F)) = V(P(X_T ≠ ∂ | X_t)) = E[ ( P(X_T ≠ ∂ | X_t) − p_T/p_t )^2 ].   (2.3)

Here P(X_T ≠ ∂ | X_t) is a random variable with values between 0 and 1, and with expectation p_T/p_t. Hence the maximal possible value for the variance is obtained for a Bernoulli random variable with parameter p_T/p_t, so that

V_{η_t}(Q_{T−t}(1_F)) ≤ (p_T/p_t)(1 − p_T/p_t).
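Integrating this Bernoulli bound against p_t dp_t makes the resulting upper bound on σ^2 explicit. A short computation (spelled out here for the reader's convenience), using ∫_0^T dp_t = p_T − 1 and ∫_0^T dp_t/p_t = ln(p_T):

```latex
-2\int_0^T V_{\eta_t}\big(Q_{T-t}(1_F)\big)\, p_t\, dp_t
\;\le\; -2\int_0^T \frac{p_T}{p_t}\Big(1-\frac{p_T}{p_t}\Big)\, p_t\, dp_t
\;=\; -2p_T\int_0^T dp_t + 2p_T^2\int_0^T \frac{dp_t}{p_t}
\;=\; 2p_T(1-p_T) + 2p_T^2\ln(p_T),
```

so that σ^2 ≤ −p_T^2 ln(p_T) + 2p_T(1 − p_T) + 2p_T^2 ln(p_T) = 2p_T(1 − p_T) + p_T^2 ln(p_T).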

Taking into account that the integral term in (2.2) is non-positive, we finally get the following bounds for the asymptotic variance of the probability estimate:

−p_T^2 ln(p_T) ≤ σ^2 ≤ 2 p_T (1 − p_T) + p_T^2 ln(p_T).   (2.4)

According to (2.3), the lower bound is reached when, for each t ∈ [0, T], the probability of being still alive at time T is constant on the support of the law η_t. This situation includes, but is not limited to, the trivial case where the killing intensity is constant and equal to λ on the whole space F, meaning that for any initial condition, τ has an exponential distribution with parameter λ. Then, obviously, we get V_{η_t}(Q_{T−t}(1_F)) = 0. In fact, in this elementary framework, one can be much more precise about the estimator p_T^N = (1 − 1/N)^{N N_T^N}. Indeed, a moment's thought reveals that (N N_t^N)_{t≥0} is just a Poisson process with intensity Nλ, so that N N_T^N has a Poisson distribution with parameter NλT, and p_T^N is a discrete random variable with law

P( p_T^N = (1 − 1/N)^k ) = e^{−NλT} (NλT)^k / k!.

In particular, it is readily seen that this estimator is unbiased:

E[p_T^N] = e^{−λT} = P(X_T ≠ ∂) = p_T,

with variance

V(p_T^N) = p_T^2 ( e^{λT/N} − 1 )   ⟹   lim_{N→∞} N V(p_T^N) = −p_T^2 ln(p_T),

which is exactly the lower bound in (2.4). Let us also notice that, in this ideal case, the asymptotic variance is much better than the one of the crude Monte Carlo estimator p̂_T^N as given by (1.1). Clearly, when p_T → 0,

−p_T^2 ln(p_T) ≪ p_T (1 − p_T) ≈ p_T.

By contrast, the upper bound in (2.4) may be surprising at first sight. Indeed, by (1.1), the crude Monte Carlo estimator p̂_T^N satisfies

√N ( p̂_T^N − p_T ) →_D N(0, p_T (1 − p_T)).

As 2 p_T (1 − p_T) + p_T^2 ln(p_T) ≥ p_T (1 − p_T) for any p_T ∈ [0, 1], this suggests that there are some situations where the Fleming-Viot estimator is less precise than the crude Monte Carlo estimator. More precisely, if p_T is small, then

2 p_T (1 − p_T) + p_T^2 ln(p_T) ≈ 2 p_T (1 − p_T), that is, an asymptotic variance almost twice as large. Although counterintuitive, this phenomenon can in fact be observed on a toy example. Take F = {0, 1} for the state space, η_0 = p δ_0 + (1 − p) δ_1 with 0 < p < 1 for the initial distribution, λ_0 = 0 < λ_1 for the killing rates, and consider the process X_t = X_0 until time τ. In other words, nothing happens before killing, and the process can be killed if and only if X_0 = 1. Suppose that our goal is to estimate the probability p_1 that the process is still alive at time T = 1. Clearly, for all t ≥ 0, one has

p_t = p + (1 − p) exp(−λ_1 t),

and the law of X_t given that the process is still alive at time t writes

η_t = (1/p_t) ( p δ_0 + (1 − p) exp(−λ_1 t) δ_1 ).

Since, for any t ∈ [0, 1],

P(X_1 ≠ ∂ | X_t) = 1_{X_t=0} + exp(−λ_1 (1 − t)) 1_{X_t=1},

we deduce that

V_{η_t}(Q_{T−t}(1_F)) = ( p + (1 − p) e^{−λ_1 (2−t)} )/p_t − (p_1/p_t)^2 = (p_1 − p)^2/(p_t (p_t − p)) + p/p_t − (p_1/p_t)^2.

Therefore, taking T = 1 in (2.2), the asymptotic variance is equal to

σ^2 = 2 p (1 − p_1) + 2 (p_1 − p)^2 ln( (1 − p)/(p_1 − p) ) + p_1^2 ln(p_1).

Finally, remark that p_1 can be made arbitrarily close to p by taking λ_1 sufficiently large, which in turn leads to a variance that is arbitrarily close to the upper bound in (2.4). Therefore, the take-home message is that we can exhibit pathological examples where the application of Fleming-Viot particle systems is in fact counterproductive compared to a crude Monte Carlo method. Intuitively, the branching process in Fleming-Viot simulation improves the focus on rare events, but creates dependency between trajectories, which may be strong.

2.4 Connection with sequential Monte Carlo methods

In this section, we will compare our results with what happens for the following discrete time algorithm, which corresponds to a sequential Monte Carlo algorithm (see for example [5]). We start with a given finite set of

times 0 = t_0 < t_1 < ... < t_n = T. Let us assume, to simplify, that the t_j's are evenly spaced in terms of survival probability, that is, p_{t_j}/p_{t_{j−1}} = p(n) for all j, with p(n) → 1 when n → ∞. We start with N independent copies of the process X and run them until time t_1. The ones having reached ∂ are then killed, and for each one killed, we randomly choose one that is not and duplicate it. Then we run the new and old (not killed) trajectories until time t_2, and iterate until we reach time t_n = T. If at some point all the trajectories are killed, i.e. if they all have reached ∂, then we consider that the run of the algorithm has failed, and we call this phenomenon an extinction.

This discrete version of the algorithm falls in the framework of [5], so we can apply the results therein. Among various convergence results, we will specifically focus on CLT type theorems and compare them to our setting. Let us also mention that the extinction probability is small when N is large: specifically, there exist positive constants a and b such that the probability of extinction is less than a exp(−N/b) (see [5]).

At each t_k, we denote by η_k^N the empirical measure of the particles just before the resampling. We can estimate the probability P(τ > T) by

Π_{k=1}^n η_k^N(1_F) = γ_n^N(1_F) η_n^N(1_F)   with   γ_n^N(1_F) = Π_{k=1}^{n−1} η_k^N(1_F).

We also define the unnormalized measures through their action on test functions ϕ by γ_n^N(ϕ) = γ_n^N(1_F) η_n^N(ϕ). As previously, we will assume that ϕ(∂) = 0, which implies that for all t ≥ 0, Q_t ϕ(∂) = 0. The following CLT is then a straightforward generalization of a theorem of [5] and the discussion following it:

√N 1_{τ^N > n} ( γ_n^N(ϕ) − γ_T(ϕ) ) →_D N(0, σ_n^2(ϕ)),

with τ^N the extinction iteration of the particle system, and σ_n^2(ϕ) = a_n − b_n, where

a_n = η_0( (Q_T ϕ − η_0(Q_T ϕ))^2 ) + Σ_{j=1}^n γ_{t_{j−1}}(1_F)^2 η̃_{t_j}( (Q_{T−t_j} ϕ − η̃_{t_j}(Q_{T−t_j} ϕ))^2 ),

and

b_n = Σ_{j=1}^n γ_{t_{j−1}}(1_F)^2 η̃_{t_{j−1}}( (1_F Q_{T−t_{j−1}} ϕ − η̃_{t_j}(Q_{T−t_j} ϕ))^2 ),

with η̃_{t_j} = p(n) η_{t_j} + (1 − p(n)) δ_∂. We do not have exactly η_{t_j} because it is an updated measure, while the CLT of [5] applies to predicted measures (see [5] for a discussion on the difference). After some very basic algebra, this asymptotic variance can be written as

σ_n^2(ϕ) = Σ_{j=1}^n γ_{t_{j−1}}(1_F)^2 [ p(n) η_{t_j}( (Q_{T−t_j} ϕ)^2 ) − η_{t_{j−1}}( (Q_{T−t_{j−1}} ϕ)^2 )
   − 2 p(n)^2 η_{t_j}(Q_{T−t_j} ϕ) ( η_{t_j}(Q_{T−t_j} ϕ) − η_{t_{j−1}}(Q_{T−t_{j−1}} ϕ) )
   + p(n)^2 η_{t_j}(Q_{T−t_j} ϕ)^2 ( 1 − η_{t_{j−1}}(Q_{t_j − t_{j−1}} 1_F) ) ]
   + η_0( (Q_T ϕ − η_0(Q_T ϕ))^2 ).

Now, we should remember that p_t = γ_t(1_F), and that

1 − η_{t_{j−1}}(Q_{t_j − t_{j−1}} 1_F) = 1 − γ_{t_j}(1_F)/γ_{t_{j−1}}(1_F) = (p_{t_{j−1}} − p_{t_j})/p_{t_{j−1}}.

If we make n → ∞, which implies that sup_j (t_j − t_{j−1}) → 0, we have, at least formally, that σ_n^2(ϕ) → σ_∞^2(ϕ) with

σ_∞^2(ϕ) = ∫_0^T p_t^2 (d/dt) η_t( (Q_{T−t} ϕ)^2 ) dt − 2 ∫_0^T p_t^2 η_t(Q_{T−t} ϕ) (d/dt) η_t(Q_{T−t} ϕ) dt − ∫_0^T p_t η_t( (Q_{T−t} ϕ)^2 ) p_t' dt + η_0( (Q_T ϕ − η_0(Q_T ϕ))^2 ).

By integrating by parts the first two integrals, noticing that

p_t p_t' η_t(Q_{T−t} ϕ)^2 = p_T^2 η_T(ϕ)^2 p_t'/p_t

in the third one, and replacing p_t' dt with dp_t, we get that

σ_∞^2(ϕ) = p_T^2 V_{η_T}(ϕ) − p_T^2 ln(p_T) η_T(ϕ)^2 − 2 ∫_0^T V_{η_t}(Q_{T−t} ϕ) p_t dp_t,

which is exactly the asymptotic variance σ_T^2(ϕ) in Theorem 2.6. In other words, the asymptotic variance of the continuous time algorithm can be interpreted as the limit of the asymptotic variance of the discrete time algorithm when the time mesh becomes infinitely fine, i.e. when the number of resamplings goes to infinity.
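The toy example of Section 2.3 makes σ^2 fully explicit, which gives a convenient numerical cross-check of the formula just obtained. The sketch below (our own verification code; the parameter values are arbitrary) evaluates the closed form and, independently, the defining expression −p_1^2 ln(p_1) − 2 ∫_0^1 V_{η_t}(Q_{1−t} 1_F) p_t dp_t by a midpoint rule.

```python
import math

def sigma2_closed_form(p, lam1):
    """Closed form of the asymptotic variance in the two-state toy example
    (T = 1, killing rate lam1 on state 1 and 0 on state 0)."""
    p1 = p + (1 - p) * math.exp(-lam1)           # survival probability p_1
    return (2 * p * (1 - p1)
            + 2 * (p1 - p) ** 2 * math.log((1 - p) / (p1 - p))
            + p1 ** 2 * math.log(p1))

def sigma2_integral(p, lam1, n=100_000):
    """Same quantity from sigma^2 = -p_1^2 ln(p_1) - 2 int_0^1 V_t p_t dp_t,
    with V_t = p (p_t - p_1)^2 / (p_t^2 (p_t - p)) and dp_t = p_t' dt."""
    p1 = p + (1 - p) * math.exp(-lam1)
    total = -p1 ** 2 * math.log(p1)
    dt = 1.0 / n
    for i in range(n):
        t = (i + 0.5) * dt                        # midpoint rule
        pt = p + (1 - p) * math.exp(-lam1 * t)
        V = p * (pt - p1) ** 2 / (pt ** 2 * (pt - p))
        total += -2.0 * V * pt * (-lam1 * (pt - p)) * dt
    return total

s1 = sigma2_closed_form(0.2, 5.0)
s2 = sigma2_integral(0.2, 5.0)
```

Here s1 ≈ s2 ≈ 0.251, while the crude Monte Carlo asymptotic variance p_1(1 − p_1) ≈ 0.163: on this example the Fleming-Viot estimator is indeed the less precise of the two.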

2.5 Example: Diffusion process with hard obstacle

Before proceeding with the proof of Theorem 2.6, let us give some examples of applications. We show in this section how our CLT applies to Fleming-Viot particle systems based on a Feller process killed when hitting a hard obstacle. As far as we know, this is the first CLT result in that case of hard obstacle. Yet, there is a cluster of papers studying the hard obstacle case where the underlying process is a diffusion in a bounded domain of R^d, killed when it hits the domain boundary. Among other questions, the convergence of the empirical measures as N goes to infinity is addressed in [1, 7, 11] (see also references therein). This case is also included in the general convergence results of [14].

Let t ↦ X̃_t be a Feller process in a locally compact Polish space E, and let F ⊂ E be a bounded open domain with boundary ∂F = F̄ \ F. Let τ be the hitting time of E \ F, and set

X_t := X̃_t if t < τ,   X_t := ∂ if t ≥ τ.

We consider the set of continuous and bounded functions D = C_b(F), extended as usual to F ∪ {∂} by setting ϕ(∂) = 0 if ϕ ∈ D. Note that 1_F ∈ D. The difficulty in checking Assumption A) is the continuity with respect to t of the mapping (x, t) ↦ Q_t ϕ(x), because of the indicator function in

Q_t ϕ(x) = E[ϕ(X̃_t) 1_{t<τ} | X̃_0 = x].

However, we have the following general result:

Proposition 2.9. Assume that F is open, that the process X̃ is Feller, and that the following two conditions hold:

i) For all x ∈ F and all t ≥ 0, P(X̃_t ∈ ∂F | X̃_0 = x) = 0.

ii) For all x ∈ ∂F, P(τ > 0 | X̃_0 = x) = 0.

Then Assumption A) is fulfilled with D = C_b(F).

The proof is given in Section 4.1. Using the latter, we can prove Assumption A) for regular elliptic diffusions.

Proposition 2.10. Assume that F is open and bounded in R^d with smooth boundary ∂F, and that X̃ is a diffusion with smooth and uniformly elliptic coefficients. Then Assumption A) holds true.

Proof. This is a direct application of Proposition 2.9. First, the fact that X̃ is a Feller process can be found for example in [6], Chapter 8, Theorem 1.6. Next, point i) is obviously true because the first passage time through ∂F of an elliptic diffusion has a density with respect to Lebesgue's measure. Finally, point ii) is also satisfied since the entrance time in the interior of a smooth domain from its boundary by an elliptic diffusion is 0. This classical fact can for example be proved by applying Itô's formula to a smooth level function defining the domain, and then using the law of the iterated logarithm for the Brownian motion.

Assumption B) is more technical and depends on the specific case at hand. For instance, in [8], the authors prove that it is satisfied for regular diffusions and smooth boundaries. Specifically, they give a general set of sufficient assumptions for non-explosion, some of them being further generalized in [14]. The upcoming result is exactly Theorem 1 of Section 2.1 in [8], in the simple case of smooth domains.

Proposition 2.11. Assume that F is open and bounded in R^d with smooth boundary ∂F, and that X̃ is a diffusion with smooth and uniformly elliptic coefficients. Then Assumption B) is satisfied.

Putting all things together, we conclude that if F is open and bounded in R^d with smooth boundary ∂F, and if the diffusion X̃ has smooth and uniformly elliptic coefficients, then one can apply the CLT results of the present paper.

2.6 Other examples

The assumptions in [3] are more restrictive than the ones required in the present paper. Therefore, we refer the reader to [3] for examples of Piecewise Deterministic Markov Processes and Diffusions with Soft Killing to which Theorem 2.6 and Corollary 2.7 apply as well.

3 Proof

3.1 Overview

The key object of the proof is the càdlàg martingale

t ↦ γ_t^N(Q) := γ_t^N(Q_{T−t} ϕ),

the fixed parameters T and ϕ being implicit in order to lighten the notation.
Since γ_0^N = η_0^N and γ_0 = η_0, the difference

γ_T^N(ϕ) − γ_T(ϕ) = ( γ_T^N(Q) − γ_0^N(Q) ) + ( η_0^N(Q_T ϕ) − η_0(Q_T ϕ) )

is the final value of the centered martingale γ_t^N(Q) − γ_0^N(Q), with the addition of a second term depending on the initial condition. Note that this second term satisfies a CLT since the initial conditions X_0^1, ..., X_0^N are i.i.d. according to Definition 1.1. We will handle the distribution of γ_T^N(Q) in the limit N → ∞ by using a Central Limit Theorem for continuous time martingales, namely the Martingale Central Limit Theorem presented in Section 3.10. However, this requires several intermediate steps, mainly for the calculation of the quadratic variation [γ^N(Q), γ^N(Q)]_t.

Unfortunately, showing the convergence of this quadratic variation is not easy. Specifically, it is much more difficult than in [3] where, thanks to the so-called carré-du-champ operator and soft killing assumptions, we could write the predictable quadratic variation as an integral against Lebesgue's measure in time, with bounded integrand. We could then easily show the pointwise convergence of the integrand and apply dominated convergence. Here we cannot do that. Instead, the key idea is to replace the quadratic variation by an adapted increasing process i_t^N such that [γ^N(Q), γ^N(Q)]_t − i_t^N is a local martingale. Finally, the convergence of i_T^N to σ_T^2(ϕ) − V_{η_0}(Q_T ϕ) requires some appropriate timewise integration by parts formulas, as well as the uniform convergence in time of p_t^N to p_t.

Let us finally mention that, in the sequel, we will make extensive use of stochastic calculus for càdlàg semimartingales, as presented for example in [12], Chapter II, or [9].

3.2 Well-posedness and non-simultaneity of jumps

In the remainder, we adopt the standard notation ΔX_t = X_t − X_{t−} and, to lighten the notation, we will denote, for l = 1, 2,

γ_t^N(Q^l) := γ_t^N( [Q_{T−t} ϕ]^l ).   (3.1)

Recall that X_t^n stands for the position of the particle with index n at time t. Let us fix T and ϕ, and denote, for each 1 ≤ n ≤ N and any t ∈ [0, T],

L_t^n := Q_{T−t} ϕ(X_t^n)   and   L_t := (1/N) Σ_{n=1}^N L_t^n,

where the parameters T, ϕ and N are omitted in order to lighten the notation.
We first present a weaker but less practical version of Assumption A), named Assumption A'). Lemma 3.1 ensures that A) implies A'). All the results of the present paper are obtained under Assumption A'). This technical assumption is in fact the minimal requirement on the non-simultaneity of the

branching and jump times. In particular, Condition A')i) states that a single particle branches at each killing time, making the Fleming-Viot branching rule well-defined.

Assumption A'). There exists a space D of bounded measurable real-valued functions on F, which contains at least the indicator function 1_F, and such that for any ϕ ∈ D, t ↦ L_t^n is càdlàg for each 1 ≤ n ≤ N, and:

i) Only one particle is killed at each branching time: if m ≠ n, then τ_{m,j} ≠ τ_{n,k} almost surely for any j, k ≥ 1.

ii) The processes L_t^m and L_t^n never jump at the same time: if m ≠ n, then

P( ∃t ≥ 0, ΔL_t^m ΔL_t^n ≠ 0 ) = 0.

iii) The process L_t^n never jumps at a branching time of another particle: if m ≠ n, then

P( ∃j ≥ 1, ΔL_{τ_{m,j}}^n ≠ 0 ) = 0.

Recall that, by Lemma 2.1, under Condition i) or ii) of Assumption A), the non-increasing mapping t ↦ p_t = P(τ > t) is continuous and strictly positive on [0, T]. This property still holds under Assumption A')i), and the proof is similar. Besides, as will be shown in Section 4.3, it turns out that A) implies A'). This is the purpose of the following lemma.

Lemma 3.1. Under Assumption A), the system of particles satisfies Assumption A') with the same set D of test functions.

Then, under Assumption A) or A'), it is easy to upper-bound the jumps of γ_t^N(Q) and γ_t^N(Q^2).

Corollary 3.2. Under Assumption A'), one has

|Δγ_t^N(Q)| ≤ 3 ‖ϕ‖_∞ / N   as well as   |Δγ_t^N(Q^2)| ≤ 3 ‖ϕ‖_∞^2 / N.

Proof. Since γ_t^N(Q) = p_t^N L_t, one has

Δγ_t^N(Q) = Δ(p_t^N L_t) = p_t^N L_t − p_{t−}^N L_{t−}.

First case: if p_t^N ≠ p_{t−}^N, then a branching occurs at time t, so that p_t^N = (1 − 1/N) p_{t−}^N, with p_{t−}^N ≤ 1, hence we have

|Δγ_t^N(Q)| = p_{t−}^N |(1 − 1/N) L_t − L_{t−}| ≤ |(1 − 1/N) L_t − L_t| + |L_t − L_{t−}| = |L_t|/N + |ΔL_t|.

Recall that L_t = ( Σ_{n=1}^N L_t^n )/N, and since the jumps of L_t^n and L_t^m do not coincide, we deduce that

|ΔL_t| ≤ (1/N) max_{1≤n≤N} |ΔL_t^n| = (1/N) max_{1≤n≤N} |Δ Q_{T−t} ϕ(X_t^n)| ≤ 2 ‖ϕ‖_∞ / N.

Since |L_t| ≤ ‖ϕ‖_∞, we finally get |Δγ_t^N(Q)| ≤ 3 ‖ϕ‖_∞ / N. Second case: if p_t^N = p_{t−}^N but L_t ≠ L_{t−}, then, by the same reasoning as above,

|Δγ_t^N(Q)| = |Δ(p_t^N L_t)| = p_{t−}^N |L_t − L_{t−}| ≤ |ΔL_t| ≤ 2 ‖ϕ‖_∞ / N.

The last and trivial situation is p_t^N = p_{t−}^N and L_t = L_{t−}, in which case Δγ_t^N(Q) = 0. Finally, since (Q_{T−t} ϕ(X_t^n))^2 ≤ ‖ϕ‖_∞^2, one has |Δ{(Q_{T−t} ϕ(X_t^n))^2}| ≤ 2 ‖ϕ‖_∞^2, and the same reasoning applies to γ_t^N(Q^2).

The rest of the paper is mainly devoted to the proof of the following result. We recall that D̄ is the closure of D in C_b(F) with respect to the norm ‖·‖_∞.

Proposition 3.3. Under Assumptions A') and B), for any ϕ in D̄, one has

√N ( γ_T^N(ϕ) − γ_T(ϕ) ) →_D N(0, σ_T^2(ϕ)),

where

σ_T^2(ϕ) = p_T^2 V_{η_T}(ϕ) − p_T^2 ln(p_T) η_T(ϕ)^2 − 2 ∫_0^T V_{η_t}(Q_{T−t} ϕ) p_t dp_t.

Thanks to Lemma 3.1, Proposition 3.3 implies Theorem 2.6.

3.3 Martingale decomposition

This section will build upon the martingale representation of [14]. We decompose the process t ↦ γ_t^N(Q) into the martingale contributions of the Markovian evolution of particle n between branchings k and k+1, which will be denoted t ↦ M_t^{n,k}, and the martingale contributions of the k-th branching of particle n, which will be denoted t ↦ M̄_t^{n,k}.

Remark 3.4. Throughout the paper, all the local martingales are local with respect to the sequence of stopping times (τ_j)_{j≥1}. As required, this sequence of stopping times satisfies lim_j τ_j > T almost surely by Assumption B).

Recall that we have defined, for each 1 ≤ n ≤ N and any t ∈ [0, T], L_t^n := Q_{T−t} ϕ(X_t^n) and L_t := (1/N) Σ_{n=1}^N L_t^n, so that γ_t^N(Q) = γ_t^N(Q_{T−t} ϕ) = p_t^N L_t. If X̃_t is any particle evolving according to the dynamic of the underlying Markov process for (and only for) t < τ, then it is still true that Q_{T−t} ϕ(X̃_t) 1_{t<τ} is a martingale. As a consequence, for any n ∈ {1, ..., N} and any k ≥ 1, Doob's optional sampling theorem ensures that, by construction of the particle system, the process

M_t^{n,k} := 1_{t≥τ_{n,k−1}} ( 1_{t<τ_{n,k}} L_t^n − L_{τ_{n,k−1}}^n )
   = 0 if t < τ_{n,k−1},
   = L_t^n − L_{τ_{n,k−1}}^n if τ_{n,k−1} ≤ t < τ_{n,k},
   = −L_{τ_{n,k−1}}^n if τ_{n,k} ≤ t,   (3.2)

is a bounded martingale. Accordingly, under Assumption B), the processes

M_t^n := Σ_{k≥1} M_t^{n,k} = L_t^n − Σ_{k≥0 : τ_{n,k} ≤ t} L_{τ_{n,k}}^n,   (3.3)

M_t := (1/N) Σ_{n=1}^N M_t^n,   (3.4)

are local martingales. For any n ∈ {1, ..., N} and any k ≥ 1, we also consider the process

M̄_t^{n,k} := (1 − 1/N) ( L_{τ_{n,k}}^n − (1/(N−1)) Σ_{m≠n} L_{τ_{n,k}−}^m ) 1_{t≥τ_{n,k}} = ( L_{τ_{n,k}}^n − L_{τ_{n,k}} ) 1_{t≥τ_{n,k}},   (3.5)

which by Lemma 4.6 is a martingale, constant except for a single jump at t = τ_{n,k}, and which is clearly bounded by 2‖ϕ‖_∞. Then, under Assumption B), the processes

M̄_t^n := Σ_{k≥1} M̄_t^{n,k} = Σ_{τ_{n,k} ≤ t} ( L_{τ_{n,k}}^n − L_{τ_{n,k}} ),   M̄_t := (1/N) Σ_{n=1}^N M̄_t^n,

are also local martingales.

Recalling the notation $N^n_t := \sum_{k\ge1}1_{\tau_{n,k}\le t}$ for the number of branchings of particle $n$ before time $t$, and $\Lambda^N_t := \frac1N\sum_{n=1}^N N^n_t = \frac1N\sum_{j\ge1}1_{\tau_j\le t}$ for the total number of branchings per particle before time $t$, (3.2) and (3.5) respectively imply that, for each $1\le n\le N$,
$$d\mathcal M^n_t = dL^n_t - L^n_t\,dN^n_t\tag{3.6}$$
and
$$dM^n_t = \big(L^n_t - L_t\big)\,dN^n_t,\tag{3.7}$$
so that the sum yields
$$d\mathcal M_t + dM_t = dL_t - L_t\,d\Lambda^N_t.\tag{3.8}$$
Let us emphasize that, in the above equations, $L_t = L_{t^+}$ since the process $L_t$ is right-continuous.

Remark 3.5. By definition, the jumps of the martingales $\mathcal M^n_t$ are included in the union of the set of the jumps of $L^n_t$ and the set of the branching times $\tau_{n,k}$, $k\ge1$. The jumps of $M^n_t$ are included in the set of the branching times $\tau_{n,k}$, $k\ge1$. Therefore, Assumption (A') implies that, for $m\neq n$, the jumps of $\mathcal M^m_t$ and $\mathcal M^n_t$ cannot happen at the same time, the same being true for $M^m_t$ and $M^n_t$.

The following rule will be useful throughout the paper.

Lemma 3.6. Recalling that $p^N_t = \big(1-\frac1N\big)^{N\Lambda^N_t}$, it holds that
$$dp^N_t = -p^N_{t^-}\,d\Lambda^N_t.\tag{3.9}$$

Proof. When $\Delta\Lambda^N_t \neq 0$, one has
$$\Delta p^N_t = \Big(1-\frac1N\Big)^{N\Lambda^N_t} - \Big(1-\frac1N\Big)^{N\Lambda^N_{t^-}} = \Big(1-\frac1N\Big)^{N\Lambda^N_{t^-}}\Big(\Big(1-\frac1N\Big)-1\Big) = -\frac{p^N_{t^-}}{N},$$
while $N\,\Delta\Lambda^N_t = 1$ for $t=\tau_j$, $j\ge1$. Hence the result follows.

The upcoming result attests that the process $t\mapsto\gamma^N_t(Q)$ is indeed a martingale and details its decomposition.
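Lemma 3.6's jump rule can be sanity-checked numerically: after $k$ branchings, $p^N = (1-1/N)^k$, and each branching decrements it by exactly $p^N_{t^-}/N$. A minimal check (the value $N = 50$ is arbitrary):

```python
N = 50
p = [(1 - 1/N) ** k for k in range(10)]   # p^N after k branchings
for k in range(9):
    # jump at the (k+1)-th branching: dp^N = -p^N_{t-} * dLambda^N = -p^N_{t-}/N
    assert abs((p[k + 1] - p[k]) + p[k] / N) < 1e-12
```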

Lemma 3.7. We have the decomposition
$$\gamma^N_t(Q) = \gamma^N_0(Q) + \int_0^t p^N_{u^-}\,\big(d\mathcal M_u + dM_u\big).\tag{3.10}$$

Proof. Recalling that $p^N_t$ is a piecewise constant process, one has by plain integration by parts
$$\gamma^N_t(Q) = p^N_t L_t = \gamma^N_0(Q) + \int_0^t p^N_{u^-}\,dL_u + \int_0^t L_u\,dp^N_u,$$
where we emphasize that, in the above equation, the last integrand is indeed $L_u = L_{u^+}$. Besides, by (3.9), we are led to
$$\gamma^N_t(Q) - \gamma^N_0(Q) = \int_0^t p^N_{u^-}\,\big(dL_u - L_u\,d\Lambda^N_u\big).$$
The result is then a direct consequence of (3.8).

Remark 3.8. Since $\gamma^N_T(\varphi) = \gamma^N_T(Q)$, this implies the unbiasedness property $E\big[\gamma^N_T(\varphi)\big] = \gamma_T(\varphi)$ for all $N\ge2$. In particular, the case $\varphi = 1_F$ gives $E\big[p^N_T\big] = p_T > 0$.

3.4 Quadratic variation estimates for $\mathcal M$ and $M$

The remarkable fact is that the $2N$ martingales $\{\mathcal M^n_t, M^m_t\}_{1\le n,m\le N}$ are mutually orthogonal. We recall that two local martingales are orthogonal if their quadratic covariation is again a local martingale.

Lemma 3.9. Under Assumptions (A') and (B), the $2N$ local martingales $\{\mathcal M^n_t, M^m_t\}_{1\le n,m\le N}$ are mutually orthogonal. In addition,
$$[M,M]_t = \frac{1}{N^2}\sum_{n=1}^N[M^n,M^n]_t.$$
Orthogonality implies that the process $[\mathcal M,M]_t$ is a local martingale, and, denoting
$$A_t := \frac1N\sum_{n=1}^N[\mathcal M^n,\mathcal M^n]_t,$$
that the process $N[\mathcal M,\mathcal M]_t - A_t$ is also a local martingale. In addition, the jumps of $A$ are controlled by
$$|\Delta A_t| \le \frac{(2\|\varphi\|_\infty)^2}{N}.$$

Proof. By Assumption (A') (see also Remark 3.5), for $n\neq m$, the piecewise constant martingales $M^n_t$ and $M^m_t$ do not vary at the same times, so that $[M^n,M^m]_t = 0$ and the two martingales are a fortiori orthogonal. In the same manner, for $n\neq m$, the martingales $\mathcal M^n_t$ and $M^m_t$ do not vary at the same times, so that $[\mathcal M^n,M^m]_t = 0$ and the two martingales are a fortiori orthogonal. Moreover, since $M^n$ is a pure jump martingale, we have by definition of $M^n_t$
$$d[\mathcal M^n, M^n]_t = \Delta\mathcal M^n_t\,dM^n_t,$$
which defines a local martingale, so that $\mathcal M^n$ and $M^n$ are orthogonal.

Next, we claim that the product $\mathcal M^m\mathcal M^n$ is a martingale, implying the orthogonality. Indeed, for a given $s\in[0,T]$, let us define $\sigma_i := (\tau_i\wedge T)\vee s$, the stopping time in $[s,T]$ closest to the $i$-th branching time. For any $i\ge1$, conditionally on $\mathcal F_{\sigma_{i-1}}$, $(\mathcal M^n_t 1_{t<\sigma_i})_t$ and $(\mathcal M^m_t 1_{t<\sigma_i})_t$ are by construction independent, hence we have
$$E\big[\mathcal M^m_{\sigma_i^-}\mathcal M^n_{\sigma_i^-}\,\big|\,\mathcal F_{\sigma_{i-1}}\big] = \mathcal M^m_{\sigma_{i-1}}\mathcal M^n_{\sigma_{i-1}}.$$
In addition, since the martingales $\mathcal M^m$ and $\mathcal M^n$ do not jump simultaneously, it yields
$$\mathcal M^m_{\sigma_i}\mathcal M^n_{\sigma_i} = \mathcal M^m_{\sigma_i}\mathcal M^n_{\sigma_i^-} + \mathcal M^m_{\sigma_i^-}\mathcal M^n_{\sigma_i} - \mathcal M^m_{\sigma_i^-}\mathcal M^n_{\sigma_i^-},$$
so that
$$E\big[\mathcal M^m_{\sigma_i}\mathcal M^n_{\sigma_i}\,\big|\,\mathcal F_{\sigma_i^-}\big] = E\big[\mathcal M^n_{\sigma_i^-}\big(\mathcal M^m_{\sigma_i}-\mathcal M^m_{\sigma_i^-}\big) + \mathcal M^m_{\sigma_i^-}\big(\mathcal M^n_{\sigma_i}-\mathcal M^n_{\sigma_i^-}\big) + \mathcal M^m_{\sigma_i^-}\mathcal M^n_{\sigma_i^-}\,\big|\,\mathcal F_{\sigma_i^-}\big] = \mathcal M^m_{\sigma_i^-}\mathcal M^n_{\sigma_i^-},$$
and combining these equations gives
$$E\big[\mathcal M^m_{\sigma_i}\mathcal M^n_{\sigma_i}\,\big|\,\mathcal F_{\sigma_{i-1}}\big] = \mathcal M^m_{\sigma_{i-1}}\mathcal M^n_{\sigma_{i-1}}.$$
By iterating on $i\ge1$ and taking into account that $\sigma_0 = s$ and $\lim_i\sigma_i = T$, we obtain
$$E\big[\mathcal M^m_T\mathcal M^n_T\,\big|\,\mathcal F_s\big] = \mathcal M^m_s\mathcal M^n_s,$$
which shows the claimed result.

For the last point, Assumption (A') guarantees that
$$\Delta A_t = \frac1N\max_{1\le n\le N}\Delta[\mathcal M^n,\mathcal M^n]_t = \frac1N\max_{1\le n\le N}\big(\Delta\mathcal M^n_t\big)^2,$$
and the indicated result is now a direct consequence of (3.2) and (3.3).

In the same way as in (3.1), for each $t\in[0,T]$, we use in the upcoming lemma the notation
$$V_{\eta^N_t}(Q) := V_{\eta^N_t}\big(Q_{T-t}(\varphi)\big) = \frac1N\sum_{n=1}^N\big(L^n_t\big)^2 - \Big(\frac1N\sum_{n=1}^N L^n_t\Big)^2.\tag{3.12}$$

Lemma 3.10. One has
$$N\,d[M,M]_t \le 4\|\varphi\|^2_\infty\,d\Lambda^N_t.\tag{3.13}$$
Moreover, there exist a piecewise constant local martingale $\bar M_t$ and a piecewise constant process $R_t$, both with jumps at branching times, such that
$$N\,d[M,M]_t = V_{\eta^N_{t^-}}(Q)\,d\Lambda^N_t + \frac1N\,dR_t + \frac{1}{\sqrt N}\,d\bar M_t,\tag{3.14}$$
with the following estimate:
$$|\Delta R_t| \le \frac{14\|\varphi\|^2_\infty}{N}.\tag{3.15}$$

Proof. Considering the orthogonality property in Lemma 3.9, and taking into account that the martingales $M^{n,k}$ are piecewise constant with a single jump at time $\tau_{n,k}$, we have
$$[M,M]_t = \frac{1}{N^2}\sum_{n=1}^N\sum_{k=1}^\infty\big(M^{n,k}_{\tau_{n,k}}\big)^2\,1_{t\ge\tau_{n,k}}.$$
This implies (3.13) since $\big|M^{n,k}_{\tau_{n,k}}\big| = \big|L^n_{\tau_{n,k}} - L_{\tau_{n,k}}\big| \le 2\|\varphi\|_\infty$. This equation also implies that (3.14) holds true with
$$\bar M_t := \frac{1}{\sqrt N}\sum_{n=1}^N\sum_{k=1}^\infty\Big(\big(M^{n,k}_{\tau_{n,k}}\big)^2 - E\Big[\big(M^{n,k}_{\tau_{n,k}}\big)^2\,\Big|\,\mathcal F_{\tau_{n,k}^-}\Big]\Big)1_{t\ge\tau_{n,k}},$$
$$R_t := \sum_{n=1}^N\sum_{k=1}^\infty\Big(E\Big[\big(M^{n,k}_{\tau_{n,k}}\big)^2\,\Big|\,\mathcal F_{\tau_{n,k}^-}\Big] - V_{\eta^N_{\tau_{n,k}^-}}(Q)\Big)1_{t\ge\tau_{n,k}}.$$
On the one hand, Lemma 4.6 ensures that $\bar M_t$ is a càdlàg local martingale. On the other hand, by Assumption (A'), we have $L^l_{\tau_{n,k}} = L^l_{\tau_{n,k}^-}$ for all $l\neq n$, so that (3.5) becomes
$$M^{n,k}_{\tau_{n,k}} = \Big(1-\frac1N\Big)\Big(L^n_{\tau_{n,k}} - \frac{1}{N-1}\sum_{l\neq n}L^l_{\tau_{n,k}^-}\Big).$$

Then, by construction of the branching rule, given $\mathcal F_{\tau_{n,k}^-}$, $L^n_{\tau_{n,k}}$ is uniformly drawn among the $\big(L^m_{\tau_{n,k}^-}\big)_{m\neq n}$, which yields
$$E\Big[\big(M^{n,k}_{\tau_{n,k}}\big)^2\,\Big|\,\mathcal F_{\tau_{n,k}^-}\Big] = \Big(1-\frac1N\Big)^2\frac{1}{N-1}\sum_{m\neq n}\Big(L^m_{\tau_{n,k}^-} - \frac{1}{N-1}\sum_{l\neq n}L^l_{\tau_{n,k}^-}\Big)^2.$$
If we temporarily denote the empirical distribution without particle $n$ by
$$\eta^{(n),N}_t := \frac{1}{N-1}\sum_{m\neq n}\delta_{X^m_t},$$
we can now reformulate the latter using notation (3.12) as
$$E\Big[\big(M^{n,k}_{\tau_{n,k}}\big)^2\,\Big|\,\mathcal F_{\tau_{n,k}^-}\Big] = \Big(1-\frac1N\Big)^2 V_{\eta^{(n),N}_{\tau_{n,k}^-}}(Q).$$
In other words, we have
$$R_t = \sum_{n=1}^N\sum_{k=1}^\infty\Big[\Big(1-\frac1N\Big)^2 V_{\eta^{(n),N}_{\tau_{n,k}^-}}(Q) - V_{\eta^N_{\tau_{n,k}^-}}(Q)\Big]1_{t\ge\tau_{n,k}}.$$
For the last statement, notice that for two probability measures $\mu$ and $\nu$ with total variation distance $\|\mu-\nu\|_{tv}$ and for any test function $f$,
$$\big|V_\mu(f) - V_\nu(f)\big| \le \big|(\mu-\nu)(f^2)\big| + \big|(\mu-\nu)(f)\big|\,\big|(\mu+\nu)(f)\big| \le 6\,\|\mu-\nu\|_{tv}\,\|f\|^2_\infty,$$
so that, for any $n$ and $k$,
$$\big|\Delta R_{\tau_{n,k}}\big| \le \Big(1-\frac1N\Big)^2\Big|V_{\eta^{(n),N}_{\tau_{n,k}^-}}(Q) - V_{\eta^N_{\tau_{n,k}^-}}(Q)\Big| + \Big|\Big(1-\frac1N\Big)^2 - 1\Big|\,V_{\eta^N_{\tau_{n,k}^-}}(Q) \le \frac{12\|\varphi\|^2_\infty}{N} + \frac{2\|\varphi\|^2_\infty}{N} = \frac{14\|\varphi\|^2_\infty}{N}.$$

Remark 3.11. A byproduct of the previous proof is the following equation, which will prove helpful in Definition 3.15 below:
$$\frac1N R_t + \int_0^t V_{\eta^N_{s^-}}(Q)\,d\Lambda^N_s = \frac1N\Big(1-\frac1N\Big)^2\sum_{n=1}^N\int_0^t V_{\eta^{(n),N}_{s^-}}(Q)\,dN^n_s.\tag{3.16}$$
This identity will be useful to show that the left-hand side is increasing, because the right-hand side obviously is. This will be used to construct a suitable quadratic variation for the martingale in (3.10).

The next lemma is a very important step of the analysis. It relates the quadratic variation of the local martingale $t\mapsto\mathcal M_t$ (given, up to a martingale additive term, by the increasing process $t\mapsto A_t$ defined in Lemma 3.9) with the process $t\mapsto\gamma^N_t(Q^2)$. This will yield estimates on $A_t$. Note that this idea is inspired by the fact that, by definition of the quadratic variation, for any Markov process $X$, the process $t\mapsto\big[Q_{T-t}(\varphi)(X_t)\big]^2$ equals the quadratic variation of the martingale $t\mapsto Q_{T-t}(\varphi)(X_t)$ up to a martingale additive term.

Lemma 3.12. There exists a local martingale $(\widetilde M_t)_{t\ge0}$ such that
$$d\gamma^N_t(Q^2) = p^N_{t^-}\,dA_t + \frac{1}{\sqrt N}\,p^N_{t^-}\,d\widetilde M_t.\tag{3.17}$$
In particular, this implies that
$$E\Big[\int_0^T p^N_{s^-}\,dA_s\Big] = E\big[\gamma^N_T(Q^2) - \gamma^N_0(Q^2)\big] \le \|\varphi\|^2_\infty.\tag{3.18}$$
Moreover, we have
$$E\Big[\int_0^T p^N_{u^-}\,d[\widetilde M,\widetilde M]_u\Big] \le 5\|\varphi\|^4_\infty,\tag{3.19}$$
as well as
$$\big|\Delta\widetilde M_u\big| \le \frac{5\|\varphi\|^2_\infty}{\sqrt N}.\tag{3.20}$$

Proof. Differentiating $\gamma^N_t(Q^2) = p^N_t\,\frac1N\sum_{n=1}^N(L^n_t)^2$ yields
$$d\gamma^N_t(Q^2) = \frac1N\sum_{n=1}^N\Big[p^N_{t^-}\,d\big((L^n_t)^2\big) + (L^n_t)^2\,dp^N_t\Big].$$
Since $dp^N_t = -p^N_{t^-}\,d\Lambda^N_t$, one gets
$$d\gamma^N_t(Q^2) = \frac1N\,p^N_{t^-}\sum_{n=1}^N\Big[d\big((L^n_t)^2\big) - (L^n_t)^2\,d\Lambda^N_t\Big].\tag{3.21}$$
Next we claim that
$$d\big((L^n_t)^2\big) - (L^n_t)^2\,dN^n_t = d[\mathcal M^n,\mathcal M^n]_t + 2L^n_{t^-}\,d\mathcal M^n_t.\tag{3.22}$$

We know from (3.6) that $d\mathcal M^n_t = dL^n_t - L^n_t\,dN^n_t$, so that the bilinearity of the quadratic variation gives
$$d[\mathcal M^n,\mathcal M^n]_t = d[L^n,L^n]_t + (L^n_t)^2\,dN^n_t - 2\,d\Big[\int_0^\cdot L^n_u\,dN^n_u,\,L^n\Big]_t = d[L^n,L^n]_t + (L^n_t)^2\,dN^n_t - 2L^n_t\,\Delta L^n_t\,dN^n_t = d[L^n,L^n]_t - \big(L^n_t - 2L^n_{t^-}\big)L^n_t\,dN^n_t.$$
Then, using again (3.6) through $dL^n_t = d\mathcal M^n_t + L^n_t\,dN^n_t$, it comes
$$d\big((L^n_t)^2\big) = 2L^n_{t^-}\,dL^n_t + d[L^n,L^n]_t = 2L^n_{t^-}\,d\mathcal M^n_t + 2L^n_{t^-}L^n_t\,dN^n_t + d[\mathcal M^n,\mathcal M^n]_t + \big(L^n_t - 2L^n_{t^-}\big)L^n_t\,dN^n_t,$$
which immediately simplifies into (3.22).

Putting (3.21) and (3.22) together, considering in Lemma 3.9 the definition $A := \frac1N\sum_n[\mathcal M^n,\mathcal M^n]$, and recalling that $\Lambda^N := \frac1N\sum_n N^n$, we obtain
$$d\gamma^N_t(Q^2) = p^N_{t^-}\,dA_t + \frac{p^N_{t^-}}{N}\sum_{n=1}^N\Big[(L^n_t)^2\big(dN^n_t - d\Lambda^N_t\big) + 2L^n_{t^-}\,d\mathcal M^n_t\Big] = p^N_{t^-}\,dA_t + \frac{p^N_{t^-}}{N}\sum_{n=1}^N\Big[\Big((L^n_t)^2 - \frac1N\sum_{m=1}^N(L^m_t)^2\Big)dN^n_t + 2L^n_{t^-}\,d\mathcal M^n_t\Big],$$
and we see that (3.17) is satisfied with
$$d\widetilde M_t = \frac{1}{\sqrt N}\sum_{n=1}^N\Big[J^n_t\,dN^n_t + 2L^n_{t^-}\,d\mathcal M^n_t\Big],\tag{3.23}$$
where we have defined
$$J^n_t := (L^n_t)^2 - \frac1N\sum_{m=1}^N(L^m_t)^2 = \Big(1-\frac1N\Big)\Big[(L^n_t)^2 - \frac{1}{N-1}\sum_{m\neq n}(L^m_t)^2\Big].$$
Note that, in the same fashion as $(M^n_t)_t$, $\big(\int_0^t J^n_s\,dN^n_s\big)_t$ is a local martingale since, for each $k\ge1$, $t\mapsto J^n_{\tau_{n,k}}1_{t\ge\tau_{n,k}}$ is a bounded martingale by definition of the branching rule and Lemma 4.6. Moreover, in the same way as in Lemma 3.9, the $2N$ local martingales
$$\Big\{\Big(\int_0^t L^m_{s^-}\,d\mathcal M^m_s\Big)_t,\ \Big(\int_0^t J^n_s\,dN^n_s\Big)_t\Big\}_{1\le n,m\le N}$$

are all orthogonal to each other. Indeed, for any pair $(n,m)$: (i) $[\mathcal M^n,\mathcal M^m]$ is a local martingale by Lemma 3.9; (ii) $[N^n,N^m] = 0$ and $[\mathcal M^n,N^m] = 0$ if $n\neq m$ by Assumption (A'). The only new point to check is that the quadratic covariation
$$d\Big[\int_0^\cdot L^n_{s^-}\,d\mathcal M^n_s,\ \int_0^\cdot J^n_s\,dN^n_s\Big]_t = L^n_{t^-}\,\Delta\mathcal M^n_t\,J^n_t\,dN^n_t$$
is indeed a local martingale, which is a consequence of the branching rule implying $E\big[J^n_{\tau_{n,k}}\,\big|\,\mathcal F_{\tau_{n,k}^-}\big] = 0$ and Lemma 4.6.

To establish (3.19) and (3.20), we recall that for any $1\le n\le N$,
$$\sup_t|J^n_t| \le \|\varphi\|^2_\infty,\qquad \sup_t|L^n_t| \le \|\varphi\|_\infty,\qquad \sup_t|\Delta\mathcal M^n_t| \le 2\|\varphi\|_\infty.$$
For (3.19), we apply Itô's isometry to (3.23) and use orthogonality to get
$$E\Big[\int_0^t p^N_{u^-}\,d[\widetilde M,\widetilde M]_u\Big] = \frac1N\sum_{n=1}^N E\Big[\int_0^t p^N_{u^-}\,(J^n_u)^2\,dN^n_u + 4\int_0^t p^N_{u^-}\,(L^n_{u^-})^2\,d[\mathcal M^n,\mathcal M^n]_u\Big] \le \|\varphi\|^4_\infty\,E\Big[\int_0^t p^N_{u^-}\,d\Lambda^N_u\Big] + 4\|\varphi\|^2_\infty\,E\Big[\int_0^t p^N_{u^-}\,dA_u\Big] \le \frac{\|\varphi\|^4_\infty}{N}\sum_{j\ge1}\Big(1-\frac1N\Big)^{j-1} + 4\|\varphi\|^2_\infty\,E\big[\gamma^N_t(Q^2) - \gamma^N_0(Q^2)\big] \le \|\varphi\|^4_\infty + 4\|\varphi\|^4_\infty.$$
In order to obtain (3.20), consider (3.23) and recall from Assumption (A') that, for $n\neq m$, $\Delta N^n_t\,\Delta N^m_t = 0$ and $\Delta\mathcal M^n_t\,\Delta\mathcal M^m_t = 0$. We then deduce that
$$\big|\Delta\widetilde M_t\big| \le \frac{1}{\sqrt N}\Big(\sup_t|J^n_t| + 2\sup_t|L^n_t|\,\sup_t|\Delta\mathcal M^n_t|\Big) \le \frac{5\|\varphi\|^2_\infty}{\sqrt N}.$$

3.5 $L^2$-estimate for $\gamma^N_T(\varphi)$

The convergence of $\gamma^N_T(\varphi)$ to $\gamma_T(\varphi)$ when $N$ goes to infinity is now a direct consequence of the previous results. This kind of estimate was already noticed by Villemonais in [14].

Proposition 3.13. For any $\varphi\in D$, we have
$$E\Big[\big(\gamma^N_T(\varphi) - \gamma_T(\varphi)\big)^2\Big] \le \frac{6\|\varphi\|^2_\infty}{N}.$$
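Proposition 3.13 is a non-asymptotic bound, so it directly converts into a back-of-the-envelope sample-size rule: to reach a target root mean square error it suffices to take $N \ge 6\|\varphi\|^2_\infty/\mathrm{RMSE}^2$. A minimal helper (the function name is ours):

```python
import math

def particles_for_rmse(phi_sup, target_rmse):
    """Crude particle count from the non-asymptotic bound
    E[(gamma_T^N(phi) - gamma_T(phi))^2] <= 6 * ||phi||^2 / N."""
    return math.ceil(6 * phi_sup ** 2 / target_rmse ** 2)

print(particles_for_rmse(1.0, 0.01))  # 60000 particles suffice when ||phi|| = 1
```

Since the bound is conservative (the asymptotic variance $\sigma^2_T(\varphi)$ of Proposition 3.3 is typically much smaller than $6\|\varphi\|^2_\infty$), this is an upper estimate, not a sharp requirement.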

Proof. Thanks to Lemma 3.7 and the fact that $\gamma_T(\varphi) = \gamma_0(Q_T\varphi)$, we have the orthogonal decomposition
$$\gamma^N_T(\varphi) - \gamma_T(\varphi) = \int_0^T p^N_{t^-}\,d\mathcal M_t + \int_0^T p^N_{t^-}\,dM_t + \gamma^N_0(Q_T\varphi) - \gamma_0(Q_T\varphi),$$
and it is easy to upper-bound the individual contribution of each term to the total variance.

(i) Initial condition. Since $\gamma^N_0 = \eta^N_0$ and $\gamma_0 = \eta_0$, we have
$$E\Big[\big(\gamma^N_0(Q_T\varphi) - \gamma_0(Q_T\varphi)\big)^2\Big] = \frac1N\,V_{\eta_0}\big(Q_T(\varphi)(X)\big) \le \frac{\|Q_T\varphi\|^2_\infty}{N} \le \frac{\|\varphi\|^2_\infty}{N}.$$

(ii) $M$-terms. Using Itô's isometry and (3.13), we obtain
$$E\Big[\Big(\int_0^T p^N_{t^-}\,dM_t\Big)^2\Big] = E\Big[\int_0^T (p^N_{t^-})^2\,d[M,M]_t\Big] \le \frac{4\|\varphi\|^2_\infty}{N^2}\sum_{j\ge1}\Big(1-\frac1N\Big)^{2(j-1)} \le \frac{4\|\varphi\|^2_\infty}{N}.$$

(iii) $\mathcal M$-terms. In the same way, applying Itô's isometry and (3.17), we get
$$E\Big[\Big(\int_0^T p^N_{t^-}\,d\mathcal M_t\Big)^2\Big] = E\Big[\int_0^T (p^N_{t^-})^2\,d[\mathcal M,\mathcal M]_t\Big] \le \frac1N\,E\Big[\int_0^T p^N_{t^-}\,dA_t\Big] = \frac1N\,E\big[\gamma^N_T(Q^2) - \gamma^N_0(Q^2)\big] \le \frac{\|\varphi\|^2_\infty}{N}.$$

In particular, Proposition 3.13 implies that for any $\varphi$ in $D$, $\gamma^N_t(\varphi)$ converges in probability to $\gamma_t(\varphi)$ when $N$ goes to infinity. Since we have assumed that $1_F$ belongs to $D$, the probability estimate $p^N_t = \gamma^N_t(1_F)$ goes to its deterministic target $p_t$ in probability. The next subsection provides a stronger result.

3.6 Time uniform estimate for $p^N_t$

In this section, we prove the convergence of $\sup_{t\in[0,T]}|p^N_t - p_t|$ to $0$ in probability by using the time-marginal convergence of Proposition 3.13. Recall that, by Assumption (A) or (A'), the mapping $t\mapsto p_t$ is continuous (see Lemma 2.1). Hence, the proof only uses this argument and the monotonicity of $t\mapsto p^N_t$. One can merely see it as a Dini-like result.
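The mechanism of this Dini-like argument (uniform continuity of the limit on a grid, plus monotonicity of the approximation) can be illustrated numerically. Below, a decreasing step function `q` plays the role of $p^N$ and the continuous decreasing function `p(t) = exp(-t)` plays the role of $p_t$; the specific functions and the grid size are our own illustrative choices:

```python
import math

# Continuous decreasing target p, and a decreasing step approximation q
# that matches p on the grid t_j = j/J (a stand-in for p^N).
J = 8
p = lambda t: math.exp(-t)
q = lambda t: math.exp(-math.floor(J * t) / J)

# Modulus of continuity of p on the grid (the epsilon of the proof).
eps = max(p((j - 1) / J) - p(j / J) for j in range(1, J + 1))

# Uniform error on a fine mesh versus the bound from the proof:
# sup_t |q(t) - p(t)| <= eps + max_j |q(t_j) - p(t_j)|.
lhs = max(abs(q(k / 1000) - p(k / 1000)) for k in range(1001))
rhs = eps + max(abs(q(j / J) - p(j / J)) for j in range(J + 1))
assert lhs <= rhs + 1e-12
```

The point is exactly the one in the proof of the next lemma: monotonicity lets a finite grid control the whole interval, so pointwise convergence at the grid points upgrades to uniform convergence.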

Lemma 3.14. One has
$$\sup_{t\in[0,T]}\big|p^N_t - p_t\big| \xrightarrow[N\to\infty]{P} 0.$$

Proof. Since the mapping $t\mapsto p_t$ is continuous on $[0,T]$ by Lemma 2.1, it is uniformly continuous. Hence, for any $\varepsilon>0$, there exists a subdivision $\{t_0 = 0 < t_1 < \cdots < t_J = T\}$ such that, for any $1\le j\le J$ and any $t$ in $[t_{j-1},t_j]$, one has
$$\max\big(|p_t - p_{t_{j-1}}|,\ |p_t - p_{t_j}|\big) \le \varepsilon.$$
Hence, since $t\mapsto p^N_t$ is decreasing, it is readily seen that
$$\big|p^N_t - p_t\big| \le \max\big(p^N_{t_{j-1}} - p_t,\ p_t - p^N_{t_j}\big) \le \varepsilon + \max\big(\big|p^N_{t_{j-1}} - p_{t_{j-1}}\big|,\ \big|p^N_{t_j} - p_{t_j}\big|\big).$$
Consequently, with probability $1$, uniformly in $t\in[0,T]$, we get
$$\big|p^N_t - p_t\big| \le \varepsilon + \max_{0\le j\le J}\big|p^N_{t_j} - p_{t_j}\big|.$$
Taking $\varphi = 1_F$ and $T = t_j$ in Proposition 3.13 ensures that
$$\max_{0\le j\le J}\big|p^N_{t_j} - p_{t_j}\big| \xrightarrow[N\to\infty]{P} 0.$$
Therefore, we have
$$P\Big(\sup_{t\in[0,T]}\big|p^N_t - p_t\big| > 2\varepsilon\Big) \le P\Big(\max_{0\le j\le J}\big|p^N_{t_j} - p_{t_j}\big| > \varepsilon\Big).$$
Since $\varepsilon$ is arbitrary, we get the desired result.

3.7 Approximation of the quadratic variation of $\gamma^N(Q)$

As will become clear later, the following process represents a useful approximation of $N\big[\gamma^N(Q),\gamma^N(Q)\big]_t$.

Definition 3.15. For each $\varphi\in D$ and $T>0$, we define for $t\in[0,T]$ the càdlàg increasing process
$$i^N_t := \int_0^t (p^N_{u^-})^2\,dA_u - \int_0^t V_{\eta^N_{u^-}}(Q)\,p^N_{u^-}\,dp^N_u + \frac1N\int_0^t (p^N_{u^-})^2\,dR_u.\tag{3.24}$$
The fact that this process is increasing comes from (3.16) and $dp^N_t = -p^N_{t^-}\,d\Lambda^N_t$, which yield the alternative formulation
$$-\int_0^t V_{\eta^N_{u^-}}(Q)\,p^N_{u^-}\,dp^N_u + \frac1N\int_0^t (p^N_{u^-})^2\,dR_u = \frac1N\Big(1-\frac1N\Big)^2\sum_{n=1}^N\int_0^t (p^N_{u^-})^2\,V_{\eta^{(n),N}_{u^-}}(Q)\,dN^n_u,$$

where the empirical distribution without particle $n$ is denoted by
$$\eta^{(n),N}_t := \frac{1}{N-1}\sum_{m\neq n}\delta_{X^m_t}.$$
The estimation of $i^N_t$ is in fact easier than the estimation of $N\big[\gamma^N(Q),\gamma^N(Q)\big]_t$, and these two increasing processes are equal up to a martingale term.

Lemma 3.16. The process $t\mapsto N\big[\gamma^N(Q),\gamma^N(Q)\big]_t - i^N_t$ is a local martingale.

Proof. From (3.10) and Lemma 3.9, we know that
$$N\big[\gamma^N(Q),\gamma^N(Q)\big]_t - \int_0^t (p^N_{u^-})^2\,dA_u - N\int_0^t (p^N_{u^-})^2\,d[M,M]_u\tag{3.25}$$
is a local martingale. The result is then a direct consequence of (3.14).

The next step is just a reformulation of $i^N_t$ through an integration by parts.

Lemma 3.17. The increasing process $i^N_t$ can be decomposed as
$$i^N_t = p^N_t\,\gamma^N_t(Q^2) - \gamma^N_0(Q^2) + \big[\gamma^N_t(Q)\big]^2\ln p^N_t - 2\int_0^t \gamma^N_{u^-}(Q^2)\,dp^N_u + m^N_t + l^N_t + O\Big(\frac1N\Big),$$
where
$$m^N_t := -\frac{1}{\sqrt N}\int_0^t (p^N_{u^-})^2\,d\widetilde M_u$$
is a local martingale, and
$$l^N_t := -\int_0^t \ln p^N_{u^-}\,d\big(\gamma^N_u(Q)\big)^2.$$

Proof. Starting from (3.24), we apply Lemma 3.12 to get
$$i^N_t = \int_0^t p^N_{u^-}\,d\gamma^N_u(Q^2) - \int_0^t V_{\eta^N_{u^-}}(Q)\,p^N_{u^-}\,dp^N_u + m^N_t + \frac1N\int_0^t (p^N_{u^-})^2\,dR_u.$$
Using (3.15), we are led to
$$\Big|\int_0^t (p^N_{u^-})^2\,dR_u\Big| \le \frac{14\|\varphi\|^2_\infty}{N}\sum_{i=0}^\infty\Big(1-\frac1N\Big)^{2i} \le 14\|\varphi\|^2_\infty,$$
so that the last term above is indeed $O(1/N)$.

We claim now that a first timewise integration by parts yields
$$\int_0^t p^N_{u^-}\,d\gamma^N_u(Q^2) = -\int_0^t \gamma^N_{u^-}(Q^2)\,dp^N_u + \gamma^N_t(Q^2)\,p^N_t - \gamma^N_0(Q^2) + O\Big(\frac1N\Big).$$
Indeed, Corollary 3.2 ensures that $\big|\Delta\gamma^N_{\tau_j}(Q^2)\big| \le 2\|\varphi\|^2_\infty/N$, so that Condition (i) of Lemma 4.7 is satisfied with $z^N_t = \gamma^N_t(Q^2)$, and the integration by parts rule (4.3) can therefore be applied. Next, remarking that
$$-\int_0^t V_{\eta^N_{u^-}}(Q)\,p^N_{u^-}\,dp^N_u = -\int_0^t \gamma^N_{u^-}(Q^2)\,dp^N_u + \int_0^t \big(\gamma^N_{u^-}(Q)\big)^2\,(p^N_{u^-})^{-1}\,dp^N_u,$$
a second timewise integration by parts yields
$$\int_0^t \big(\gamma^N_{u^-}(Q)\big)^2\,(p^N_{u^-})^{-1}\,dp^N_u = \int_0^t \big(\gamma^N_{u^-}(Q)\big)^2\,d\ln p^N_u + O\Big(\frac1N\Big) = \big[\gamma^N_t(Q)\big]^2\ln p^N_t - \int_0^t \ln p^N_{u^-}\,d\big(\gamma^N_u(Q)\big)^2 + O\Big(\frac1N\Big).$$
Indeed, Assumption (A') also implies, for $N\ge2$,
$$\big|\gamma^N_{\tau_j}(Q)\big|^2 \le \|\varphi\|^2_\infty\Big(1-\frac1N\Big)^{2j} \quad\text{and}\quad \big|\Delta\big(\gamma^N_{\tau_j}(Q)\big)^2\big| \le \frac{6\|\varphi\|^2_\infty}{N}\Big(1-\frac1N\Big)^{2(j-1)}.$$
As a consequence, Conditions (ii) and (iii) of Lemma 4.7 are satisfied for $z^N_t = \big(\gamma^N_t(Q)\big)^2$, so that we can apply successively rules (4.4) and (4.5) of Lemma 4.7. Finally, putting all estimates together gives the desired result.

Lemma 3.18. One has $E\big[(m^N_t)^2\big] = O(1/N)$ as well as $E\big|l^N_t\big| = O(1/\sqrt N)$.

Proof. The first assertion is an immediate consequence of Itô's isometry for martingales, together with (3.19). For the second one, Itô's formula yields
$$l^N_t = -2\int_0^t \ln p^N_{u^-}\,\gamma^N_{u^-}(Q)\,d\gamma^N_u(Q) - \int_0^t \ln p^N_{u^-}\,d\big[\gamma^N(Q),\gamma^N(Q)\big]_u.$$
Therefore,
$$E\big|l^N_t\big| \le 2\,E\Big|\int_0^t \ln p^N_{u^-}\,\gamma^N_{u^-}(Q)\,d\gamma^N_u(Q)\Big| + E\Big[\int_0^t \big|\ln p^N_{u^-}\big|\,d\big[\gamma^N(Q),\gamma^N(Q)\big]_u\Big].$$

Then, the Cauchy-Schwarz inequality and Itô's isometry provide
$$E\big|l^N_t\big| \le 2\Big(E\Big[\int_0^t \big(\ln p^N_{u^-}\,\gamma^N_{u^-}(Q)\big)^2\,d\big[\gamma^N(Q),\gamma^N(Q)\big]_u\Big]\Big)^{1/2} + E\Big[\int_0^t \big|\ln p^N_{u^-}\big|\,d\big[\gamma^N(Q),\gamma^N(Q)\big]_u\Big].$$
Since $p^2|\ln p| \le 1$ for any $p\in(0,1]$, we have
$$\big(\ln p^N_{u^-}\,\gamma^N_{u^-}(Q)\big)^2 = \big(\ln p^N_{u^-}\,p^N_{u^-}\,\eta^N_{u^-}(Q)\big)^2 \le \big|\ln p^N_{u^-}\big|\,\|\varphi\|^2_\infty.$$
Hence, if we denote
$$c(N) := E\Big[\int_0^T \big|\ln p^N_{u^-}\big|\,d\big[\gamma^N(Q),\gamma^N(Q)\big]_u\Big],$$
it comes $E|l^N_t| \le 2\|\varphi\|_\infty\sqrt{c(N)} + c(N)$. Next, the basic decomposition of Lemma 3.7 yields
$$d\big[\gamma^N(Q),\gamma^N(Q)\big]_t = (p^N_{t^-})^2\,d[M,M]_t + (p^N_{t^-})^2\,d[\mathcal M,\mathcal M]_t,$$
so that the orthogonality property of Lemma 3.9 allows us to reformulate $c(N)$ as
$$c(N) = \frac1N\,E\Big[\int_0^T \big|\ln p^N_{u^-}\big|\,(p^N_{u^-})^2\,\big(dA_u + N\,d[M,M]_u\big)\Big].$$
Using the fact that $p\,|\ln p| \le 1$ together with (3.13), this yields
$$c(N) \le \frac1N\,E\Big[\int_0^T p^N_{t^-}\,dA_t\Big] + \frac{4\|\varphi\|^2_\infty}{N^2}\sum_{j\ge1}\Big(1-\frac1N\Big)^{j-1}.$$
Finally, (3.18) gives $c(N) \le 5\|\varphi\|^2_\infty/N$ and the proof is complete.

3.8 Convergence of $i^N_t$

For the forthcoming calculations, we recall that
$$p_t\,V_{\eta_t}\big(Q_{T-t}(\varphi)\big) = \gamma_t\big((Q_{T-t}(\varphi))^2\big) - p_t^{-1}\big(\gamma_t(Q_{T-t}(\varphi))\big)^2 = \gamma_t\big((Q_{T-t}(\varphi))^2\big) - p_t^{-1}\big(\gamma_T(\varphi)\big)^2 = \gamma_t(Q^2) - p_t^{-1}\big(\gamma_T(\varphi)\big)^2.$$
The asymptotic variance formula will be denoted as follows:

Definition 3.19. For any $t\in[0,T]$ and any $\varphi\in D$, let us define
$$i_t(\varphi) := p_t\,\gamma_t(Q^2) - \gamma_0(Q^2) + \big[\gamma_t(Q)\big]^2\ln p_t - 2\int_0^t \gamma_u(Q^2)\,dp_u.\tag{3.26}$$
Our next purpose is to show that $i_t(\varphi)$ corresponds to the asymptotic variance of interest, as suggested by Lemma 3.17.

Proposition 3.20. For any $t\in[0,T]$, one has $i^N_t \xrightarrow{P} i_t(\varphi)$.

Proof. By Lemma 3.17 and the relation $\gamma_t\big(Q_{T-t}(\varphi)\big) = \gamma_t(Q) = \gamma_T(\varphi)$, we can write
$$i^N_t - i_t(\varphi) = \big(p^N_t\,\gamma^N_t(Q^2) - p_t\,\gamma_t(Q^2)\big) - \big(\gamma^N_0(Q^2) - \gamma_0(Q^2)\big) + \big(\big[\gamma^N_t(Q)\big]^2\ln p^N_t - \big[\gamma_t(Q)\big]^2\ln p_t\big) + m^N_t + l^N_t + O\Big(\frac1N\Big) - 2\Big(\int_0^t \gamma^N_{u^-}(Q^2)\,dp^N_u - \int_0^t \gamma_u(Q^2)\,dp_u\Big).$$
Clearly, by Proposition 3.13 and Lemma 3.18, the boundary terms and the rest terms all tend to $0$ in probability. So we just have to show that
$$\int_0^t \gamma^N_{u^-}(Q^2)\,dp^N_u - \int_0^t \gamma_u(Q^2)\,dp_u = a^N_t + b^N_t$$
goes to $0$ as well, where we have defined
$$a^N_t := \int_0^t \gamma^N_{u^-}(Q^2)\,d\big(p^N_u - p_u\big) \quad\text{and}\quad b^N_t := \int_0^t \big(\gamma^N_{u^-}(Q^2) - \gamma_u(Q^2)\big)\,dp_u.$$
The convergence $b^N_t \xrightarrow{P} 0$ is a direct consequence of Proposition 3.13. The proof of $a^N_t \xrightarrow{P} 0$ requires more attention. Since $\big|\Delta\gamma^N_{\tau_j}(Q^2)\big| \le 2\|\varphi\|^2_\infty/N$ by Corollary 3.2, the timewise integration by parts rule (4.3) of Lemma 4.7 enables us to rewrite the first term as
$$a^N_t = -\int_0^t \big(p^N_{u^-} - p_u\big)\,d\gamma^N_u(Q^2) + \gamma^N_t(Q^2)\,\big(p^N_t - p_t\big) + O\Big(\frac1N\Big),$$
where we have used that $p^N_0 = p_0 = 1$. Since $\gamma^N_t(Q^2)$ is bounded, the boundary term goes to $0$ by Proposition 3.13. For the integral term, equation (3.17) leads to the decomposition
$$\int_0^t \big(p^N_{u^-} - p_u\big)\,d\gamma^N_u(Q^2) = \int_0^t \big(p^N_{u^-} - p_u\big)\,p^N_{u^-}\,dA_u + \frac{1}{\sqrt N}\int_0^t \big(p^N_{u^-} - p_u\big)\,p^N_{u^-}\,d\widetilde M_u.\tag{3.27}$$


Ergodic Theorems. Samy Tindel. Purdue University. Probability Theory 2 - MA 539. Taken from Probability: Theory and examples by R. Ergodic Theorems Samy Tindel Purdue University Probability Theory 2 - MA 539 Taken from Probability: Theory and examples by R. Durrett Samy T. Ergodic theorems Probability Theory 1 / 92 Outline 1 Definitions

More information

ON THE POLICY IMPROVEMENT ALGORITHM IN CONTINUOUS TIME

ON THE POLICY IMPROVEMENT ALGORITHM IN CONTINUOUS TIME ON THE POLICY IMPROVEMENT ALGORITHM IN CONTINUOUS TIME SAUL D. JACKA AND ALEKSANDAR MIJATOVIĆ Abstract. We develop a general approach to the Policy Improvement Algorithm (PIA) for stochastic control problems

More information

The Skorokhod problem in a time-dependent interval

The Skorokhod problem in a time-dependent interval The Skorokhod problem in a time-dependent interval Krzysztof Burdzy, Weining Kang and Kavita Ramanan University of Washington and Carnegie Mellon University Abstract: We consider the Skorokhod problem

More information

Asymptotics for posterior hazards

Asymptotics for posterior hazards Asymptotics for posterior hazards Pierpaolo De Blasi University of Turin 10th August 2007, BNR Workshop, Isaac Newton Intitute, Cambridge, UK Joint work with Giovanni Peccati (Université Paris VI) and

More information

Stochastic flows associated to coalescent processes

Stochastic flows associated to coalescent processes Stochastic flows associated to coalescent processes Jean Bertoin (1) and Jean-François Le Gall (2) (1) Laboratoire de Probabilités et Modèles Aléatoires and Institut universitaire de France, Université

More information

Krzysztof Burdzy Robert Ho lyst Peter March

Krzysztof Burdzy Robert Ho lyst Peter March A FLEMING-VIOT PARTICLE REPRESENTATION OF THE DIRICHLET LAPLACIAN Krzysztof Burdzy Robert Ho lyst Peter March Abstract: We consider a model with a large number N of particles which move according to independent

More information

Erdős-Renyi random graphs basics

Erdős-Renyi random graphs basics Erdős-Renyi random graphs basics Nathanaël Berestycki U.B.C. - class on percolation We take n vertices and a number p = p(n) with < p < 1. Let G(n, p(n)) be the graph such that there is an edge between

More information

Pathwise construction of tree-valued Fleming-Viot processes

Pathwise construction of tree-valued Fleming-Viot processes Pathwise construction of tree-valued Fleming-Viot processes Stephan Gufler November 9, 2018 arxiv:1404.3682v4 [math.pr] 27 Dec 2017 Abstract In a random complete and separable metric space that we call

More information

HILBERT SPACES AND THE RADON-NIKODYM THEOREM. where the bar in the first equation denotes complex conjugation. In either case, for any x V define

HILBERT SPACES AND THE RADON-NIKODYM THEOREM. where the bar in the first equation denotes complex conjugation. In either case, for any x V define HILBERT SPACES AND THE RADON-NIKODYM THEOREM STEVEN P. LALLEY 1. DEFINITIONS Definition 1. A real inner product space is a real vector space V together with a symmetric, bilinear, positive-definite mapping,

More information

Part III. 10 Topological Space Basics. Topological Spaces

Part III. 10 Topological Space Basics. Topological Spaces Part III 10 Topological Space Basics Topological Spaces Using the metric space results above as motivation we will axiomatize the notion of being an open set to more general settings. Definition 10.1.

More information

On the martingales obtained by an extension due to Saisho, Tanemura and Yor of Pitman s theorem

On the martingales obtained by an extension due to Saisho, Tanemura and Yor of Pitman s theorem On the martingales obtained by an extension due to Saisho, Tanemura and Yor of Pitman s theorem Koichiro TAKAOKA Dept of Applied Physics, Tokyo Institute of Technology Abstract M Yor constructed a family

More information

Exercises. T 2T. e ita φ(t)dt.

Exercises. T 2T. e ita φ(t)dt. Exercises. Set #. Construct an example of a sequence of probability measures P n on R which converge weakly to a probability measure P but so that the first moments m,n = xdp n do not converge to m = xdp.

More information

ELEMENTS OF PROBABILITY THEORY

ELEMENTS OF PROBABILITY THEORY ELEMENTS OF PROBABILITY THEORY Elements of Probability Theory A collection of subsets of a set Ω is called a σ algebra if it contains Ω and is closed under the operations of taking complements and countable

More information

Branching Processes II: Convergence of critical branching to Feller s CSB

Branching Processes II: Convergence of critical branching to Feller s CSB Chapter 4 Branching Processes II: Convergence of critical branching to Feller s CSB Figure 4.1: Feller 4.1 Birth and Death Processes 4.1.1 Linear birth and death processes Branching processes can be studied

More information

Introductory Analysis I Fall 2014 Homework #9 Due: Wednesday, November 19

Introductory Analysis I Fall 2014 Homework #9 Due: Wednesday, November 19 Introductory Analysis I Fall 204 Homework #9 Due: Wednesday, November 9 Here is an easy one, to serve as warmup Assume M is a compact metric space and N is a metric space Assume that f n : M N for each

More information

Solving the Poisson Disorder Problem

Solving the Poisson Disorder Problem Advances in Finance and Stochastics: Essays in Honour of Dieter Sondermann, Springer-Verlag, 22, (295-32) Research Report No. 49, 2, Dept. Theoret. Statist. Aarhus Solving the Poisson Disorder Problem

More information

Stability of Stochastic Differential Equations

Stability of Stochastic Differential Equations Lyapunov stability theory for ODEs s Stability of Stochastic Differential Equations Part 1: Introduction Department of Mathematics and Statistics University of Strathclyde Glasgow, G1 1XH December 2010

More information

Convergence exponentielle uniforme vers la distribution quasi-stationnaire en dynamique des populations

Convergence exponentielle uniforme vers la distribution quasi-stationnaire en dynamique des populations Convergence exponentielle uniforme vers la distribution quasi-stationnaire en dynamique des populations Nicolas Champagnat, Denis Villemonais Colloque Franco-Maghrébin d Analyse Stochastique, Nice, 25/11/2015

More information

Metric Spaces and Topology

Metric Spaces and Topology Chapter 2 Metric Spaces and Topology From an engineering perspective, the most important way to construct a topology on a set is to define the topology in terms of a metric on the set. This approach underlies

More information

Statistics 612: L p spaces, metrics on spaces of probabilites, and connections to estimation

Statistics 612: L p spaces, metrics on spaces of probabilites, and connections to estimation Statistics 62: L p spaces, metrics on spaces of probabilites, and connections to estimation Moulinath Banerjee December 6, 2006 L p spaces and Hilbert spaces We first formally define L p spaces. Consider

More information

I forgot to mention last time: in the Ito formula for two standard processes, putting

I forgot to mention last time: in the Ito formula for two standard processes, putting I forgot to mention last time: in the Ito formula for two standard processes, putting dx t = a t dt + b t db t dy t = α t dt + β t db t, and taking f(x, y = xy, one has f x = y, f y = x, and f xx = f yy

More information

Figure 10.1: Recording when the event E occurs

Figure 10.1: Recording when the event E occurs 10 Poisson Processes Let T R be an interval. A family of random variables {X(t) ; t T} is called a continuous time stochastic process. We often consider T = [0, 1] and T = [0, ). As X(t) is a random variable

More information

Introduction to Random Diffusions

Introduction to Random Diffusions Introduction to Random Diffusions The main reason to study random diffusions is that this class of processes combines two key features of modern probability theory. On the one hand they are semi-martingales

More information

The Pedestrian s Guide to Local Time

The Pedestrian s Guide to Local Time The Pedestrian s Guide to Local Time Tomas Björk, Department of Finance, Stockholm School of Economics, Box 651, SE-113 83 Stockholm, SWEDEN tomas.bjork@hhs.se November 19, 213 Preliminary version Comments

More information

(B(t i+1 ) B(t i )) 2

(B(t i+1 ) B(t i )) 2 ltcc5.tex Week 5 29 October 213 Ch. V. ITÔ (STOCHASTIC) CALCULUS. WEAK CONVERGENCE. 1. Quadratic Variation. A partition π n of [, t] is a finite set of points t ni such that = t n < t n1

More information

Part III Stochastic Calculus and Applications

Part III Stochastic Calculus and Applications Part III Stochastic Calculus and Applications Based on lectures by R. Bauerschmidt Notes taken by Dexter Chua Lent 218 These notes are not endorsed by the lecturers, and I have modified them often significantly

More information

Exponential martingales: uniform integrability results and applications to point processes

Exponential martingales: uniform integrability results and applications to point processes Exponential martingales: uniform integrability results and applications to point processes Alexander Sokol Department of Mathematical Sciences, University of Copenhagen 26 September, 2012 1 / 39 Agenda

More information

A Note on the Central Limit Theorem for a Class of Linear Systems 1

A Note on the Central Limit Theorem for a Class of Linear Systems 1 A Note on the Central Limit Theorem for a Class of Linear Systems 1 Contents Yukio Nagahata Department of Mathematics, Graduate School of Engineering Science Osaka University, Toyonaka 560-8531, Japan.

More information

is a Borel subset of S Θ for each c R (Bertsekas and Shreve, 1978, Proposition 7.36) This always holds in practical applications.

is a Borel subset of S Θ for each c R (Bertsekas and Shreve, 1978, Proposition 7.36) This always holds in practical applications. Stat 811 Lecture Notes The Wald Consistency Theorem Charles J. Geyer April 9, 01 1 Analyticity Assumptions Let { f θ : θ Θ } be a family of subprobability densities 1 with respect to a measure µ on a measurable

More information

Wiener Measure and Brownian Motion

Wiener Measure and Brownian Motion Chapter 16 Wiener Measure and Brownian Motion Diffusion of particles is a product of their apparently random motion. The density u(t, x) of diffusing particles satisfies the diffusion equation (16.1) u

More information

On continuous time contract theory

On continuous time contract theory Ecole Polytechnique, France Journée de rentrée du CMAP, 3 octobre, 218 Outline 1 2 Semimartingale measures on the canonical space Random horizon 2nd order backward SDEs (Static) Principal-Agent Problem

More information

LECTURE 15: COMPLETENESS AND CONVEXITY

LECTURE 15: COMPLETENESS AND CONVEXITY LECTURE 15: COMPLETENESS AND CONVEXITY 1. The Hopf-Rinow Theorem Recall that a Riemannian manifold (M, g) is called geodesically complete if the maximal defining interval of any geodesic is R. On the other

More information

Math 6810 (Probability) Fall Lecture notes

Math 6810 (Probability) Fall Lecture notes Math 6810 (Probability) Fall 2012 Lecture notes Pieter Allaart University of North Texas September 23, 2012 2 Text: Introduction to Stochastic Calculus with Applications, by Fima C. Klebaner (3rd edition),

More information

Sequential Monte Carlo methods for filtering of unobservable components of multidimensional diffusion Markov processes

Sequential Monte Carlo methods for filtering of unobservable components of multidimensional diffusion Markov processes Sequential Monte Carlo methods for filtering of unobservable components of multidimensional diffusion Markov processes Ellida M. Khazen * 13395 Coppermine Rd. Apartment 410 Herndon VA 20171 USA Abstract

More information

Two viewpoints on measure valued processes

Two viewpoints on measure valued processes Two viewpoints on measure valued processes Olivier Hénard Université Paris-Est, Cermics Contents 1 The classical framework : from no particle to one particle 2 The lookdown framework : many particles.

More information

1. Stochastic Process

1. Stochastic Process HETERGENEITY IN QUANTITATIVE MACROECONOMICS @ TSE OCTOBER 17, 216 STOCHASTIC CALCULUS BASICS SANG YOON (TIM) LEE Very simple notes (need to add references). It is NOT meant to be a substitute for a real

More information

Notes 1 : Measure-theoretic foundations I

Notes 1 : Measure-theoretic foundations I Notes 1 : Measure-theoretic foundations I Math 733-734: Theory of Probability Lecturer: Sebastien Roch References: [Wil91, Section 1.0-1.8, 2.1-2.3, 3.1-3.11], [Fel68, Sections 7.2, 8.1, 9.6], [Dur10,

More information

A D VA N C E D P R O B A B I L - I T Y

A D VA N C E D P R O B A B I L - I T Y A N D R E W T U L L O C H A D VA N C E D P R O B A B I L - I T Y T R I N I T Y C O L L E G E T H E U N I V E R S I T Y O F C A M B R I D G E Contents 1 Conditional Expectation 5 1.1 Discrete Case 6 1.2

More information

STOCHASTIC CALCULUS JASON MILLER AND VITTORIA SILVESTRI

STOCHASTIC CALCULUS JASON MILLER AND VITTORIA SILVESTRI STOCHASTIC CALCULUS JASON MILLER AND VITTORIA SILVESTRI Contents Preface 1 1. Introduction 1 2. Preliminaries 4 3. Local martingales 1 4. The stochastic integral 16 5. Stochastic calculus 36 6. Applications

More information

Continuous Functions on Metric Spaces

Continuous Functions on Metric Spaces Continuous Functions on Metric Spaces Math 201A, Fall 2016 1 Continuous functions Definition 1. Let (X, d X ) and (Y, d Y ) be metric spaces. A function f : X Y is continuous at a X if for every ɛ > 0

More information

Functional Analysis. Franck Sueur Metric spaces Definitions Completeness Compactness Separability...

Functional Analysis. Franck Sueur Metric spaces Definitions Completeness Compactness Separability... Functional Analysis Franck Sueur 2018-2019 Contents 1 Metric spaces 1 1.1 Definitions........................................ 1 1.2 Completeness...................................... 3 1.3 Compactness......................................

More information

Probability and Measure

Probability and Measure Probability and Measure Robert L. Wolpert Institute of Statistics and Decision Sciences Duke University, Durham, NC, USA Convergence of Random Variables 1. Convergence Concepts 1.1. Convergence of Real

More information

Applications of Ito s Formula

Applications of Ito s Formula CHAPTER 4 Applications of Ito s Formula In this chapter, we discuss several basic theorems in stochastic analysis. Their proofs are good examples of applications of Itô s formula. 1. Lévy s martingale

More information

S chauder Theory. x 2. = log( x 1 + x 2 ) + 1 ( x 1 + x 2 ) 2. ( 5) x 1 + x 2 x 1 + x 2. 2 = 2 x 1. x 1 x 2. 1 x 1.

S chauder Theory. x 2. = log( x 1 + x 2 ) + 1 ( x 1 + x 2 ) 2. ( 5) x 1 + x 2 x 1 + x 2. 2 = 2 x 1. x 1 x 2. 1 x 1. Sep. 1 9 Intuitively, the solution u to the Poisson equation S chauder Theory u = f 1 should have better regularity than the right hand side f. In particular one expects u to be twice more differentiable

More information

Lecture 4 Lebesgue spaces and inequalities

Lecture 4 Lebesgue spaces and inequalities Lecture 4: Lebesgue spaces and inequalities 1 of 10 Course: Theory of Probability I Term: Fall 2013 Instructor: Gordan Zitkovic Lecture 4 Lebesgue spaces and inequalities Lebesgue spaces We have seen how

More information

Importance splitting for rare event simulation

Importance splitting for rare event simulation Importance splitting for rare event simulation F. Cerou Frederic.Cerou@inria.fr Inria Rennes Bretagne Atlantique Simulation of hybrid dynamical systems and applications to molecular dynamics September

More information

Universal examples. Chapter The Bernoulli process

Universal examples. Chapter The Bernoulli process Chapter 1 Universal examples 1.1 The Bernoulli process First description: Bernoulli random variables Y i for i = 1, 2, 3,... independent with P [Y i = 1] = p and P [Y i = ] = 1 p. Second description: Binomial

More information

Banach Spaces V: A Closer Look at the w- and the w -Topologies

Banach Spaces V: A Closer Look at the w- and the w -Topologies BS V c Gabriel Nagy Banach Spaces V: A Closer Look at the w- and the w -Topologies Notes from the Functional Analysis Course (Fall 07 - Spring 08) In this section we discuss two important, but highly non-trivial,

More information

ONLINE APPENDIX TO: NONPARAMETRIC IDENTIFICATION OF THE MIXED HAZARD MODEL USING MARTINGALE-BASED MOMENTS

ONLINE APPENDIX TO: NONPARAMETRIC IDENTIFICATION OF THE MIXED HAZARD MODEL USING MARTINGALE-BASED MOMENTS ONLINE APPENDIX TO: NONPARAMETRIC IDENTIFICATION OF THE MIXED HAZARD MODEL USING MARTINGALE-BASED MOMENTS JOHANNES RUF AND JAMES LEWIS WOLTER Appendix B. The Proofs of Theorem. and Proposition.3 The proof

More information

Problem List MATH 5143 Fall, 2013

Problem List MATH 5143 Fall, 2013 Problem List MATH 5143 Fall, 2013 On any problem you may use the result of any previous problem (even if you were not able to do it) and any information given in class up to the moment the problem was

More information

L p Functions. Given a measure space (X, µ) and a real number p [1, ), recall that the L p -norm of a measurable function f : X R is defined by

L p Functions. Given a measure space (X, µ) and a real number p [1, ), recall that the L p -norm of a measurable function f : X R is defined by L p Functions Given a measure space (, µ) and a real number p [, ), recall that the L p -norm of a measurable function f : R is defined by f p = ( ) /p f p dµ Note that the L p -norm of a function f may

More information