CONTINUOUS-STATE BRANCHING PROCESSES
Zenghu Li (April 2, 2018)
Zenghu Li
School of Mathematical Sciences, Beijing Normal University, Beijing 100875, China
URL: lizh/

Title: Continuous-state branching processes

Mathematics Subject Classification (2010): 60J80, 60J85, 60H10, 60H20
Contents

Introduction
1 Laplace transforms of measures
2 Construction of CB-processes
3 Some basic properties
4 Positive integral functionals
5 Construction of CBI-processes
6 Martingale problem formulations
7 Stochastic equations for CBI-processes
8 Local and global maximal jumps
9 Reconstructions of the processes
References
Introduction

Continuous-state branching processes (CB-processes) and continuous-state branching processes with immigration (CBI-processes) constitute important classes of Markov processes taking values in the positive real line. They were introduced as probabilistic models describing the evolution of large populations with small individuals. The study of CB-processes was initiated by Feller (1951), who noticed that a branching diffusion process may arise in a limit theorem of Galton–Watson discrete branching processes; see also Aliev and Shchurenkov (1982), Grimvall (1974) and Lamperti (1967a). A characterization of CB-processes by random time changes of Lévy processes was given by Lamperti (1967b). The convergence of rescaled discrete branching processes with immigration to CBI-processes was studied in Aliev (1985), Kawazu and Watanabe (1971) and Li (2006). From a mathematical point of view, the continuous-state processes are usually easier to deal with because both their time and state spaces are smooth, and the distributions that appear are infinitely divisible. For general treatments and backgrounds of CB- and CBI-processes, the reader may refer to Kyprianou (2014) and Li (2011). More complicated probabilistic population models involving competition mechanisms were studied in Pardoux (2016), which extend the stochastic logistic growth model of Lambert (2005).

A continuous CBI-process with subcritical branching mechanism was used by Cox et al. (1985) to describe the evolution of interest rates, and it has been known in mathematical finance as the Cox–Ingersoll–Ross model (CIR-model). Compared with other financial models introduced before, the CIR-model is more appealing as it is positive and mean-reverting. The asymptotic behavior of the estimators of the parameters in this model was studied by Overbeck and Rydén (1997); see also Li and Ma (2015).
Applications of stochastic calculus to finance, including those of the CIR-model, were discussed systematically in Lamberton and Lapeyre (1996). A natural generalization of the CBI-process is the so-called affine Markov process, which has also been used a lot in mathematical finance; see Bernis and Scotti (2018+), Duffie et al. (2003) and the references therein.

The approach of stochastic equations has proved useful in the recent developments in the theory and applications of CB- and CBI-processes. A flow of CB-processes was constructed in Bertoin and Le Gall (2006) by weak solutions to a jump-type stochastic equation. The strong existence and uniqueness for a stochastic equation of general CBI-processes were established in Dawson and Li (2006).
The results of Bertoin and Le Gall (2006) were extended to flows of CBI-processes in Dawson and Li (2012) using strong solutions to a stochastic equation driven by time-space noises. For the stable branching CBI-process, a strong stochastic differential equation was established in Fu and Li (2010). Based on the stochastic equations established in Dawson and Li (2006), some characterizations are given in He and Li (2016) for the distributional properties of large jumps of the CB- and CBI-processes.

The purpose of these notes is to provide a brief introduction to CBI-processes accessible to graduate students with a reasonable background in probability theory and stochastic processes. In particular, we give a quick development of the stochastic equations of the processes. The proofs given here are more elementary than those appearing in the literature before. We have made them readable without requiring too much preliminary knowledge of branching processes and stochastic analysis.

In Section 1, we review some properties of Laplace transforms of finite measures on the positive half line. In Section 2, a construction of CB-processes is given as the rescaling limits of Galton–Watson branching processes. This approach also gives the physical interpretation of the CB-processes. Some basic properties of the processes are developed in Section 3. The Laplace transforms of some positive integral functionals are characterized explicitly in Section 4. In Section 5, we construct the CBI-processes as scaling limits of Galton–Watson branching processes with immigration. Several equivalent formulations of martingale problems for the CBI-processes are given in Section 6. From those martingale problems we derive stochastic equations of the processes in Section 7. Using those equations, characterizations of the local and global maximal jumps of the CB- and CBI-processes are given in Section 8.
In Section 9, we present some reconstructions of the models by Poisson random measures determined by entrance laws. Those reveal the structures of the trajectories of the CB- and CBI-processes.
1 Laplace transforms of measures

Let $\mathcal{B}[0,\infty)$ be the Borel $\sigma$-algebra on the positive half line $[0,\infty)$. Let $B[0,\infty) = b\mathcal{B}[0,\infty)$ be the set of bounded Borel functions on $[0,\infty)$. Given a finite measure $\mu$ on $[0,\infty)$, we define the Laplace transform $L_\mu$ of $\mu$ by

$$L_\mu(\lambda) = \int_{[0,\infty)} e^{-\lambda x}\,\mu(dx), \qquad \lambda\ge 0. \tag{1.1}$$

Theorem 1.1  A finite measure on $[0,\infty)$ is uniquely determined by its Laplace transform.

Proof. Suppose that $\mu_1$ and $\mu_2$ are finite measures on $[0,\infty)$ and $L_{\mu_1}(\lambda) = L_{\mu_2}(\lambda)$ for all $\lambda\ge 0$. Let $\mathcal{K} = \{x\mapsto e^{-\lambda x}: \lambda\ge 0\}$ and let $\mathcal{L}$ be the class of functions $F\in B[0,\infty)$ so that

$$\int_{[0,\infty)} F(x)\,\mu_1(dx) = \int_{[0,\infty)} F(x)\,\mu_2(dx).$$

Then $\mathcal{K}$ is closed under multiplication and $\mathcal{L}$ is a monotone vector space containing $\mathcal{K}$. It is easy to see $\sigma(\mathcal{K}) = \mathcal{B}[0,\infty)$. Then the monotone class theorem implies $\mathcal{L}\supseteq b\sigma(\mathcal{K}) = B[0,\infty)$. That proves the desired result.

Theorem 1.2  Let $\{\mu_n\}$ be a sequence of finite measures on $[0,\infty)$ and $\lambda\mapsto L(\lambda)$ a continuous function on $[0,\infty)$. If $\lim_{n\to\infty} L_{\mu_n}(\lambda) = L(\lambda)$ for every $\lambda\ge 0$, then there is a finite measure $\mu$ on $[0,\infty)$ such that $L_\mu = L$ and $\lim_{n\to\infty}\mu_n = \mu$ by weak convergence.

Proof. We can regard each $\mu_n$ as a finite measure on $[0,\infty]$, the one-point compactification of $[0,\infty)$. Let $F_n$ denote the distribution function of $\mu_n$. By applying Helly's theorem one can see that any subsequence of $\{F_n\}$ contains a weakly convergent subsequence $\{F_{n_k}\}$. Then the corresponding subsequence $\{\mu_{n_k}\}$ converges weakly on $[0,\infty]$ to a finite measure $\mu$. It follows that

$$\mu[0,\infty] = \lim_{k\to\infty}\mu_{n_k}[0,\infty] = \lim_{k\to\infty}\mu_{n_k}[0,\infty) = \lim_{k\to\infty}L_{\mu_{n_k}}(0) = L(0).$$

Moreover, for $\lambda > 0$ we have

$$\int_{[0,\infty]} e^{-\lambda x}\,\mu(dx) = \lim_{k\to\infty}\int_{[0,\infty]} e^{-\lambda x}\,\mu_{n_k}(dx) = \lim_{k\to\infty}\int_{[0,\infty)} e^{-\lambda x}\,\mu_{n_k}(dx) = L(\lambda), \tag{1.2}$$

where $e^{-\lambda\cdot\infty} = 0$ by convention. By letting $\lambda\to 0+$ in (1.2) and using the continuity of $L$ at $\lambda = 0$ we find $\mu[0,\infty) = L(0) = \mu[0,\infty]$, so $\mu$ is supported by $[0,\infty)$. By Theorem 1.7 of Li (2011, p.4) we have $\lim_{k\to\infty}\mu_{n_k} = \mu$ weakly on $[0,\infty)$. It is easy to see that (1.2) in fact holds for all $\lambda\ge 0$, so $L_\mu = L$. By a standard argument one sees $\lim_{n\to\infty}\mu_n = \mu$ weakly on $[0,\infty)$.

Corollary 1.3  Let $\mu_1,\mu_2,\ldots$ and $\mu$ be finite measures on $[0,\infty)$. Then $\mu_n\to\mu$ weakly if and only if $L_{\mu_n}(\lambda)\to L_\mu(\lambda)$ for every $\lambda\ge 0$.

Proof. If $\mu_n\to\mu$ weakly, we have $\lim_{n\to\infty} L_{\mu_n}(\lambda) = L_\mu(\lambda)$ for every $\lambda\ge 0$ by dominated convergence. The converse assertion is a consequence of Theorem 1.2.

Given two probability measures $\mu_1$ and $\mu_2$ on $[0,\infty)$, we denote by $\mu_1\times\mu_2$ their product measure on $[0,\infty)^2$. The image of $\mu_1\times\mu_2$ under the mapping $(x_1,x_2)\mapsto x_1+x_2$ is called the convolution of $\mu_1$ and $\mu_2$ and is denoted by $\mu_1*\mu_2$, which is a probability measure on $[0,\infty)$. According to the definition, for any $F\in B[0,\infty)$ we have

$$\int_{[0,\infty)} F(x)\,(\mu_1*\mu_2)(dx) = \int_{[0,\infty)}\mu_1(dx_1)\int_{[0,\infty)} F(x_1+x_2)\,\mu_2(dx_2). \tag{1.3}$$

Clearly, if $\xi_1$ and $\xi_2$ are independent random variables with distributions $\mu_1$ and $\mu_2$ on $[0,\infty)$, respectively, then the random variable $\xi_1+\xi_2$ has distribution $\mu_1*\mu_2$. It is easy to show that

$$L_{\mu_1*\mu_2}(\lambda) = L_{\mu_1}(\lambda)L_{\mu_2}(\lambda), \qquad \lambda\ge 0. \tag{1.4}$$

Let $\mu^{*0} = \delta_0$ and define $\mu^{*n} = \mu^{*(n-1)}*\mu$ inductively for integers $n\ge 1$. We say a probability distribution $\mu$ on $[0,\infty)$ is infinitely divisible if for each integer $n\ge 1$, there is a probability $\mu_n$ such that $\mu = \mu_n^{*n}$. In this case, we call $\mu_n$ the $n$-th root of $\mu$. A positive random variable $\xi$ is said to be infinitely divisible if it has an infinitely divisible distribution on $[0,\infty)$. Write $\psi\in\mathscr{I}$ if $\lambda\mapsto\psi(\lambda)$ is a positive function on $[0,\infty)$ with the Lévy–Khintchine representation:

$$\psi(\lambda) = h\lambda + \int_{(0,\infty)}\big(1 - e^{-\lambda u}\big)\,l(du), \tag{1.5}$$

where $h\ge 0$ and $l(du)$ is a $\sigma$-finite measure on $(0,\infty)$ satisfying

$$\int_{(0,\infty)}(1\wedge u)\,l(du) < \infty.$$

The relation $\psi = -\log L_\mu$ establishes a one-to-one correspondence between the functions $\psi\in\mathscr{I}$ and infinitely divisible probability measures $\mu$ on $[0,\infty)$; see, e.g., Theorem 1.39 in Li (2011, p.20).
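As a quick numerical illustration of the factorization (1.4) (this example and its parameters are mine, not part of the notes): for exponential distributions with rates $a_1$ and $a_2$, the Laplace transforms are $a_i/(a_i+\lambda)$ in closed form, so the transform of the convolution can be checked by Monte Carlo against their product.

```python
import numpy as np

rng = np.random.default_rng(0)

# Exponential distributions with rates a1, a2: L_mu(lam) = a / (a + lam).
a1, a2, lam = 2.0, 3.0, 1.0
x1 = rng.exponential(1.0 / a1, size=200_000)
x2 = rng.exponential(1.0 / a2, size=200_000)

# Laplace transform of mu1 * mu2, estimated from samples of xi1 + xi2.
mc = np.exp(-lam * (x1 + x2)).mean()

# Product of the two closed-form transforms, as predicted by (1.4).
exact = (a1 / (a1 + lam)) * (a2 / (a2 + lam))
print(mc, exact)  # the two values agree to Monte Carlo accuracy
```

Here `exact` equals $(2/3)(3/4) = 0.5$, and the empirical average matches it up to the usual $O(N^{-1/2})$ sampling error.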
2 Construction of CB-processes

Let $\{p(j): j\in\mathbb{N}\}$ be a probability distribution on the space of positive integers $\mathbb{N} := \{0,1,2,\ldots\}$. It is well-known that $\{p(j): j\in\mathbb{N}\}$ is uniquely determined by its generating function $g$ defined by

$$g(z) = \sum_{j=0}^\infty p(j)z^j, \qquad 0\le z\le 1.$$

To avoid triviality, we assume $p(0) = g(0) < 1$. Suppose that $\{\xi_{n,i}: n\ge 1,\ i = 1,2,\ldots\}$ is a family of $\mathbb{N}$-valued i.i.d. random variables with distribution $\{p(j): j\in\mathbb{N}\}$. Given an $\mathbb{N}$-valued random variable $x(0)$ independent of $\{\xi_{n,i}\}$, we define inductively

$$x(n) = \sum_{i=1}^{x(n-1)}\xi_{n,i}, \qquad n = 1,2,\ldots. \tag{2.1}$$

Here we understand $\sum_{i=1}^0 = 0$. Let $\{Q(i,j): j\in\mathbb{N}\}$ denote the $i$-fold convolution of $\{p(j): j\in\mathbb{N}\}$. For any $n\ge 1$ and $\{i_0,\ldots,i_{n-1},j\}\subset\mathbb{N}$ it is easy to see that

$$P\big(x(n) = j\,\big|\,x(n-1) = i_{n-1}\big) = P\big(x(n) = j\,\big|\,x(0) = i_0,\ldots,x(n-1) = i_{n-1}\big) = P\Big(\sum_{k=1}^{i_{n-1}}\xi_{n,k} = j\Big) = Q(i_{n-1},j).$$

Then $\{x(n): n\ge 0\}$ is an $\mathbb{N}$-valued Markov chain with one-step transition matrix $Q = (Q(i,j): i,j\in\mathbb{N})$. The random variable $x(n)$ can be thought of as the number of individuals in generation $n$ of an evolving particle system. After one unit time, each of the particles splits independently of the others into a random number of offspring according to the distribution $\{p(j): j\in\mathbb{N}\}$. Clearly, we have

$$\sum_{j=0}^\infty Q(i,j)z^j = g(z)^i, \qquad i = 0,1,2,\ldots,\ 0\le z\le 1. \tag{2.2}$$

The transition matrix $Q$ satisfies the branching property:

$$Q(i_1+i_2,\cdot) = Q(i_1,\cdot)*Q(i_2,\cdot), \qquad i_1,i_2 = 0,1,2,\ldots, \tag{2.3}$$

where $*$ denotes the convolution operation. This means that different individuals in the population propagate independently of each other. By a general result in the theory of Markov chains, for any $n\ge 1$ the $n$-step transition matrix of the GW-process is just the $n$-fold product $Q^n = (Q^n(i,j): i,j\in\mathbb{N})$.

Proposition 2.1  For any $n\ge 1$ and $i\in\mathbb{N}$ we have

$$\sum_{j=0}^\infty Q^n(i,j)z^j = g^{\circ n}(z)^i, \qquad 0\le z\le 1, \tag{2.4}$$

where $g^{\circ n}(z)$ is defined by $g^{\circ n}(z) = g\circ g^{\circ(n-1)}(z) = g(g^{\circ(n-1)}(z))$ successively with $g^{\circ 0}(z) = z$ by convention.

Proof. From (2.2) we know (2.4) holds for $n = 1$. Now suppose that (2.4) holds for some $n\ge 1$. We have

$$\sum_{j=0}^\infty Q^{n+1}(i,j)z^j = \sum_{j=0}^\infty\sum_{k=0}^\infty Q(i,k)Q^n(k,j)z^j = \sum_{k=0}^\infty Q(i,k)g^{\circ n}(z)^k = g^{\circ(n+1)}(z)^i.$$

Then (2.4) also holds when $n$ is replaced by $n+1$. That proves the result by induction.

If $g'(1-) < \infty$, by differentiating both sides of (2.4) we see the first moment of the distribution $\{Q^n(i,j): j\in\mathbb{N}\}$ is given by

$$\sum_{j=1}^\infty jQ^n(i,j) = ig'(1-)^n. \tag{2.5}$$

We call any positive integer-valued Markov chain with transition matrix given by (2.2) or (2.4) a Galton–Watson branching process (GW-process); see, e.g., Athreya and Ney (1972) and Harris (1963). It is easy to see that $0$ is a trap for the GW-process.

Example 2.1  Given a GW-process $\{x(n): n\ge 0\}$, we can define its extinction time $\tau_0 = \inf\{n\ge 0: x(n) = 0\}$. In view of (2.1), we have $x(n) = 0$ on the event $\{n\ge\tau_0\}$. Let $q = P(\tau_0 < \infty\,|\,x(0) = 1)$ be the extinction probability. By the independence of the propagation of different individuals we have $P(\tau_0 < \infty\,|\,x(0) = i) = q^i$ for any $i = 0,1,2,\ldots$. By the total probability formula,

$$q = \sum_{i=0}^\infty P(x(1) = i\,|\,x(0) = 1)\,P(\tau_0 < \infty\,|\,x(0) = 1,\,x(1) = i) = \sum_{i=0}^\infty P(\xi_{1,1} = i)\,P(\tau_0 < \infty\,|\,x(1) = i) = \sum_{i=0}^\infty P(\xi_{1,1} = i)\,q^i = g(q).$$

Then the extinction probability $q$ is a solution of the equation $z = g(z)$ on $[0,1]$.

Now suppose we have a sequence of GW-processes $\{x_k(n): n\ge 0\}$ with offspring distributions given by the probability generating functions $g_k$, $k = 1,2,\ldots$. Let $z_k(n) = x_k(n)/k$ for $n\ge 0$ and $k\ge 1$. Then $\{z_k(n): n\ge 0\}$ is a Markov chain with state space $E_k := \{0,1/k,2/k,\ldots\}$ and $n$-step transition probability $Q_k^n(x,dy)$ determined by

$$\int_{E_k} e^{-\lambda y}\,Q_k^n(x,dy) = g_k^{\circ n}\big(e^{-\lambda/k}\big)^{kx}, \qquad \lambda\ge 0. \tag{2.6}$$

Suppose that $\{\gamma_k\}$ is a positive sequence so that $\gamma_k\to\infty$ as $k\to\infty$. Let $\lfloor\gamma_kt\rfloor$ denote the integer part of $\gamma_kt$. By (2.6) we have

$$\int_{E_k} e^{-\lambda y}\,Q_k^{\lfloor\gamma_kt\rfloor}(x,dy) = \exp\{-xv_k(t,\lambda)\}, \tag{2.7}$$

where

$$v_k(t,\lambda) = -k\log g_k^{\circ\lfloor\gamma_kt\rfloor}\big(e^{-\lambda/k}\big). \tag{2.8}$$

Clearly, if $z_k(0) = x\in E_k$, then $Q_k^{\lfloor\gamma_kt\rfloor}(x,\cdot)$ is the distribution of $\xi_k(t) = z_k(\lfloor\gamma_kt\rfloor)$ on $E_k\subset[0,\infty)$. We are interested in the asymptotic behavior of the sequence of continuous-time processes $\{\xi_k(t): t\ge 0\} = \{z_k(\lfloor\gamma_kt\rfloor): t\ge 0\}$ as $k\to\infty$. By (2.8), for $\gamma_k^{-1}(i-1)\le t < \gamma_k^{-1}i$ we have

$$v_k(t,\lambda) = v_k\big(\gamma_k^{-1}(i-1),\lambda\big) = -k\log g_k^{\circ(i-1)}\big(e^{-\lambda/k}\big).$$

It follows that

$$\begin{aligned}
v_k(t,\lambda) &= v_k(0,\lambda) + \sum_{i=1}^{\lfloor\gamma_kt\rfloor}\big[v_k(\gamma_k^{-1}i,\lambda) - v_k(\gamma_k^{-1}(i-1),\lambda)\big] \\
&= \lambda - k\sum_{i=1}^{\lfloor\gamma_kt\rfloor}\big[\log g_k^{\circ i}(e^{-\lambda/k}) - \log g_k^{\circ(i-1)}(e^{-\lambda/k})\big] \\
&= \lambda - \gamma_k^{-1}\sum_{i=1}^{\lfloor\gamma_kt\rfloor}k\gamma_k\log\big[g_k\big(g_k^{\circ(i-1)}(e^{-\lambda/k})\big)\,g_k^{\circ(i-1)}(e^{-\lambda/k})^{-1}\big] \\
&= \lambda - \gamma_k^{-1}\sum_{i=1}^{\lfloor\gamma_kt\rfloor}\bar G_k\big(-k\log g_k^{\circ(i-1)}(e^{-\lambda/k})\big) \\
&= \lambda - \gamma_k^{-1}\sum_{i=1}^{\lfloor\gamma_kt\rfloor}\bar G_k\big(v_k(\gamma_k^{-1}(i-1),\lambda)\big) \\
&= \lambda - \int_0^{\gamma_k^{-1}\lfloor\gamma_kt\rfloor}\bar G_k(v_k(s,\lambda))\,ds,
\end{aligned} \tag{2.9}$$

where

$$\bar G_k(z) = k\gamma_k\log\big[g_k(e^{-z/k})e^{z/k}\big], \qquad z\ge 0. \tag{2.10}$$

It is simple to see that

$$\bar G_k(z) = k\gamma_k\log\big[1 + (k\gamma_k)^{-1}G_k(z)e^{z/k}\big], \tag{2.11}$$

where

$$G_k(z) = k\gamma_k\big[g_k(e^{-z/k}) - e^{-z/k}\big]. \tag{2.12}$$

Proposition 2.2  (i) We have $\lim_{k\to\infty}|\bar G_k(z) - G_k(z)| = 0$ uniformly on each bounded interval. (ii) The sequence $\{\bar G_k\}$ is uniformly Lipschitz on each bounded interval if and only if so is $\{G_k\}$.

Proof. The first assertion follows immediately from (2.11). By the same relation we have

$$\bar G_k'(z) = \frac{\big[G_k'(z) + k^{-1}G_k(z)\big]e^{z/k}}{1 + (k\gamma_k)^{-1}G_k(z)e^{z/k}}, \qquad z\ge 0.$$

Then the sequence $\{\bar G_k'\}$ is uniformly bounded on each bounded interval if and only if so is $\{G_k'\}$. That gives the second assertion.
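Returning to Example 2.1: the extinction probability $q$ is the smallest root of $z = g(z)$ on $[0,1]$, and by Proposition 2.1 the iterates $g^{\circ n}(0) = P(x(n) = 0\,|\,x(0) = 1)$ increase to it. A minimal sketch with an illustrative offspring law (my choice, not from the notes):

```python
# Offspring generating function g(z) = sum_j p(j) z^j for an illustrative
# supercritical law p(0)=0.3, p(1)=0.3, p(2)=0.4 (mean 1.1 > 1).
def g(z):
    return 0.3 + 0.3 * z + 0.4 * z ** 2

# Iterating q -> g(q) from q = 0 produces the n-step extinction
# probabilities g^{(n)}(0), which increase to the smallest root of
# z = g(z) on [0, 1].
q = 0.0
for _ in range(200):
    q = g(q)

# Here z = g(z) reads 0.4 z^2 - 0.7 z + 0.3 = 0, with roots 0.75 and 1.
print(q)  # ≈ 0.75
```

For a subcritical or critical law the same iteration converges to $1$, consistent with almost-sure extinction.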
By Proposition 2.2, if either $\{G_k\}$ or $\{\bar G_k\}$ is uniformly Lipschitz on each bounded interval, then they converge or diverge simultaneously, and in the convergent case they have the same limit. For the convenience of statement of the results, we formulate the following condition:

Condition 2.3  The sequence $\{G_k\}$ is uniformly Lipschitz on $[0,a]$ for every $a\ge 0$ and there is a function $\phi$ on $[0,\infty)$ so that $G_k(z)\to\phi(z)$ uniformly on $[0,a]$ for every $a\ge 0$ as $k\to\infty$.

Proposition 2.4  Suppose that Condition 2.3 is satisfied. Then the limit function $\phi$ has the representation

$$\phi(z) = bz + cz^2 + \int_{(0,\infty)}\big(e^{-zu} - 1 + zu\big)\,m(du), \qquad z\ge 0, \tag{2.13}$$

where $c\ge 0$ and $b$ are constants and $m(du)$ is a $\sigma$-finite measure on $(0,\infty)$ satisfying

$$\int_{(0,\infty)}\big(u\wedge u^2\big)\,m(du) < \infty.$$

Proof. For each $k\ge 1$ let us define the function $\phi_k$ on $[0,k]$ by

$$\phi_k(z) = k\gamma_k\big[g_k(1 - z/k) - (1 - z/k)\big], \qquad 0\le z\le k. \tag{2.14}$$

From (2.12) and (2.14) we have

$$G_k'(z) = \gamma_ke^{-z/k}\big[1 - g_k'(e^{-z/k})\big], \qquad z\ge 0,$$

and

$$\phi_k'(z) = \gamma_k\big[1 - g_k'(1 - z/k)\big], \qquad 0\le z\le k.$$

Since $\{G_k\}$ is uniformly Lipschitz on each bounded interval, the sequence $\{G_k'\}$ is uniformly bounded on each bounded interval. Then $\{\phi_k'\}$ is also uniformly bounded on each bounded interval, and so the sequence $\{\phi_k\}$ is uniformly Lipschitz on each bounded interval. Let $a\ge 0$. By the mean-value theorem, for $k\ge a$ and $0\le z\le a$ we have

$$G_k(z) - \phi_k(z) = k\gamma_k\big[g_k(e^{-z/k}) - g_k(1 - z/k) - e^{-z/k} + (1 - z/k)\big] = k\gamma_k\big[g_k'(\eta_k) - 1\big]\big(e^{-z/k} - 1 + z/k\big), \tag{2.15}$$

where $1 - a/k\le 1 - z/k\le\eta_k\le e^{-z/k}\le 1$. Choose $k_0\ge a$ so that $e^{-2a/k}\le 1 - a/k$ for $k\ge k_0$, and hence

$$\gamma_k\big|g_k'(\eta_k) - 1\big|\le\sup_{0\le z\le 2a}\gamma_k\big|g_k'(e^{-z/k}) - 1\big|, \qquad k\ge k_0.$$

Since $\{G_k'\}$ is uniformly bounded on $[0,2a]$, the sequence $\{\gamma_k|g_k'(\eta_k) - 1|: k\ge k_0\}$ is bounded. Since $\lim_{k\to\infty}k\big(e^{-z/k} - 1 + z/k\big) = 0$ uniformly on each bounded interval, we have $\lim_{k\to\infty}|\phi_k(z) - G_k(z)| = 0$ uniformly on each bounded interval. It follows that $\lim_{k\to\infty}\phi_k(z) = \phi(z)$ uniformly on each bounded interval. Then the result follows by Corollary 1.46 in Li (2011, p.26).

Proposition 2.5  For any function $\phi$ with the representation (2.13) there is a sequence $\{G_k\}$ in the form of (2.12) satisfying Condition 2.3.

Proof. By Proposition 2.2 it suffices to construct a sequence $\{\phi_k\}$ in the form of (2.14) uniformly Lipschitz on $[0,a]$ with $\phi_k(z)\to\phi(z)$ uniformly on $[0,a]$ for every $a\ge 0$. To simplify the formulations we decompose the function $\phi$ into two parts. Let $\phi_0(z) = \phi(z) - bz$. We first define

$$\gamma_{0,k} = (1 + 2c)k + \int_{(0,\infty)}u\big(1 - e^{-ku}\big)\,m(du)$$

and

$$g_{0,k}(z) = z + k^{-1}\gamma_{0,k}^{-1}\phi_0\big(k(1 - z)\big), \qquad 0\le z\le 1.$$

It is easy to see that $z\mapsto g_{0,k}(z)$ is an analytic function in $(-1,1)$ satisfying $g_{0,k}(1) = 1$ and

$$\frac{d^n}{dz^n}g_{0,k}(0)\ge 0, \qquad n\ge 0.$$

Therefore $g_{0,k}(\cdot)$ is a probability generating function. Let $\phi_{0,k}$ be defined by (2.14) with $(\gamma_k,g_k)$ replaced by $(\gamma_{0,k},g_{0,k})$. Then $\phi_{0,k}(z) = \phi_0(z)$ for $0\le z\le k$. That completes the proof if $b = 0$. In the case $b\neq 0$, we set

$$g_{1,k}(z) = \frac{1}{2}\Big(1 + \frac{b}{|b|}\Big) + \frac{1}{2}\Big(1 - \frac{b}{|b|}\Big)z^2.$$

Let $\gamma_{1,k} = |b|$ and let $\phi_{1,k}(z)$ be defined by (2.14) with $(\gamma_k,g_k)$ replaced by $(\gamma_{1,k},g_{1,k})$. Thus we have

$$\phi_{1,k}(z) = bz + \frac{1}{2k}\big(|b| - b\big)z^2.$$

Finally, let $\gamma_k = \gamma_{0,k} + \gamma_{1,k}$ and $g_k = \gamma_k^{-1}(\gamma_{0,k}g_{0,k} + \gamma_{1,k}g_{1,k})$. Then the sequence $\phi_k(z)$ defined by (2.14) is equal to $\phi_{0,k}(z) + \phi_{1,k}(z)$, which satisfies the required condition.

Lemma 2.6  Suppose that the sequence $\{G_k\}$ defined by (2.12) is uniformly Lipschitz on $[0,1]$. Then there are constants $B\ge 0$ and $N\ge 1$ such that $v_k(t,\lambda)\le\lambda e^{Bt}$ for every $t\ge 0$, $\lambda\ge 0$ and $k\ge N$.

Proof. Let $b_k := G_k'(0+)$ for $k\ge 1$. Since $\{G_k\}$ is uniformly Lipschitz on $[0,1]$, the sequence $\{b_k\}$ is bounded. From (2.12) we have $b_k = \gamma_k[1 - g_k'(1-)]$. By (2.5) it is not hard to obtain

$$\int_{E_k}y\,Q_k^{\lfloor\gamma_kt\rfloor}(x,dy) = xg_k'(1-)^{\lfloor\gamma_kt\rfloor} = x\Big(1 - \frac{b_k}{\gamma_k}\Big)^{\lfloor\gamma_kt\rfloor}.$$

Let $B\ge 0$ be a constant such that $2|b_k|\le B$ for all $k\ge 1$. Since $\gamma_k\to\infty$ as $k\to\infty$, there is $N\ge 1$ so that

$$\Big(1 - \frac{b_k}{\gamma_k}\Big)^{\gamma_k}\le\Big(1 + \frac{B}{2\gamma_k}\Big)^{\gamma_k}\le e^B, \qquad k\ge N.$$

It follows that, for $t\ge 0$ and $k\ge N$,

$$\int_{E_k}y\,Q_k^{\lfloor\gamma_kt\rfloor}(x,dy)\le x\exp\Big\{\frac{B}{\gamma_k}\lfloor\gamma_kt\rfloor\Big\}\le xe^{Bt}.$$

Then the desired estimate follows from (2.7) and Jensen's inequality.

Theorem 2.7  Suppose that Condition 2.3 holds. Then for every $a\ge 0$ we have $v_k(t,\lambda)\to$ some $v_t(\lambda)$ uniformly on $[0,a]^2$ as $k\to\infty$ and the limit function solves the integral equation

$$v_t(\lambda) = \lambda - \int_0^t\phi(v_s(\lambda))\,ds, \qquad \lambda\ge 0,\ t\ge 0. \tag{2.16}$$

Proof. The following argument is a modification of that of Aliev and Shchurenkov (1982) and Aliev (1985). In view of (2.9), we can write

$$v_k(t,\lambda) = \lambda + \varepsilon_k(t,\lambda) - \int_0^t\bar G_k(v_k(s,\lambda))\,ds, \tag{2.17}$$

where

$$\varepsilon_k(t,\lambda) = \big(t - \gamma_k^{-1}\lfloor\gamma_kt\rfloor\big)\,\bar G_k\big(v_k(\gamma_k^{-1}\lfloor\gamma_kt\rfloor,\lambda)\big).$$

By (2.12) it is simple to see that

$$\bar G_k(z) = k\gamma_k\log\big\{1 + \big[g_k(e^{-z/k}) - e^{-z/k}\big]e^{z/k}\big\} = k\gamma_k\log\big[1 + (k\gamma_k)^{-1}G_k(z)e^{z/k}\big].$$

Then for any $0 < \varepsilon\le 1$ we can enlarge $N\ge 1$ so that

$$\big|\bar G_k(z) - G_k(z)\big|\le\varepsilon, \qquad 0\le z\le ae^{Ba},\ k\ge N.$$

By Condition 2.3, we can enlarge $N\ge 1$ again so that

$$\big|\bar G_k(z) - \phi(z)\big|\le\varepsilon, \qquad 0\le z\le ae^{Ba},\ k\ge N. \tag{2.18}$$

It then follows that

$$|\varepsilon_k(t,\lambda)|\le\gamma_k^{-1}M, \qquad t\ge 0,\ 0\le\lambda\le a, \tag{2.19}$$

where

$$M = 1 + \sup_{0\le z\le ae^{Ba}}|\phi(z)|.$$

For $n\ge k\ge N$ let

$$K_{k,n}(t,\lambda) = \sup_{0\le s\le t}|v_n(s,\lambda) - v_k(s,\lambda)|.$$

By (2.17), (2.18) and (2.19) we obtain

$$K_{k,n}(t,\lambda)\le 2\big(\gamma_k^{-1}M + \varepsilon a\big) + L\int_0^tK_{k,n}(s,\lambda)\,ds, \qquad 0\le t,\lambda\le a,$$

where $L = \sup_{0\le z\le ae^{Ba}}|\phi'(z)|$. By Gronwall's inequality,

$$K_{k,n}(t,\lambda)\le 2\big(\gamma_k^{-1}M + \varepsilon a\big)\exp\{Lt\}, \qquad 0\le t,\lambda\le a.$$

Then $v_k(t,\lambda)\to$ some $v_t(\lambda)$ uniformly on $[0,a]^2$ as $k\to\infty$ for every $a\ge 0$. From (2.17) we get (2.16).
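For the quadratic branching mechanism $\phi(z) = bz + cz^2$ (the Feller case, used here purely as an illustration), equation (2.16) has the well-known closed-form solution $v_t(\lambda) = b\lambda e^{-bt}/\big(b + c\lambda(1 - e^{-bt})\big)$. The sketch below checks this against a direct numerical integration of the equivalent equation $\tfrac{d}{dt}v_t = -\phi(v_t)$, and also checks the composition identity $v_{r+t}(\lambda) = v_r(v_t(\lambda))$ of (2.20):

```python
import numpy as np

b, c = 1.0, 0.5   # illustrative quadratic branching mechanism phi(z) = b z + c z^2

def phi(z):
    return b * z + c * z ** 2

def v_exact(t, lam):
    # Candidate closed form for the solution of (2.16) with this phi
    # (assumes b != 0; it is verified numerically below).
    e = np.exp(-b * t)
    return b * lam * e / (b + c * lam * (1.0 - e))

def v_rk4(t, lam, n=10_000):
    # Integrate dv/dt = -phi(v), v_0 = lam, by classical 4th-order Runge-Kutta.
    h, v = t / n, lam
    for _ in range(n):
        k1 = -phi(v)
        k2 = -phi(v + 0.5 * h * k1)
        k3 = -phi(v + 0.5 * h * k2)
        k4 = -phi(v + h * k3)
        v += (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return v

lam, t, r = 2.0, 1.0, 0.7
print(v_rk4(t, lam), v_exact(t, lam))                    # numerical vs closed form
print(v_exact(r + t, lam), v_exact(r, v_exact(t, lam)))  # semigroup property (2.20)
```

The semigroup identity holds exactly for the closed form, which reflects the flow property of the ordinary differential equation.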
Theorem 2.8  Suppose that $\phi$ is given by (2.13). Then for any $\lambda\ge 0$ there is a unique locally bounded positive solution $t\mapsto v_t(\lambda)$ to (2.16). Moreover, the solution satisfies the semigroup property

$$v_{r+t}(\lambda) = v_r\circ v_t(\lambda) = v_r(v_t(\lambda)), \qquad r,t\ge 0,\ \lambda\ge 0. \tag{2.20}$$

Proof. By Proposition 2.5 and Theorem 2.7 there is a locally bounded positive solution to (2.16). The proof of the uniqueness of the solution is a standard application of Gronwall's inequality. The relation (2.20) follows from the uniqueness of the solution to (2.16).

Proposition 2.9  For every $t\ge 0$ the function $\lambda\mapsto v_t(\lambda)$ is strictly increasing on $[0,\infty)$.

Proof. By the continuity of $t\mapsto v_t(\lambda)$, for any $\lambda_0 > 0$ there is $t_0 > 0$ so that $v_t(\lambda_0) > 0$ for $0\le t\le t_0$. Then (2.21) implies $Q_t(x,\{0\}) < 1$ for $x > 0$ and $0\le t\le t_0$, and so $\lambda\mapsto v_t(\lambda)$ is strictly increasing for $0\le t\le t_0$. By the semigroup property of $(v_t)_{t\ge 0}$ we infer $\lambda\mapsto v_t(\lambda)$ is strictly increasing for all $t\ge 0$.

Theorem 2.10  Suppose that $\phi$ is given by (2.13). For any $\lambda\ge 0$ let $t\mapsto v_t(\lambda)$ be the unique locally bounded positive solution to (2.16). Then we can define a Feller transition semigroup $(Q_t)_{t\ge 0}$ on $[0,\infty)$ by

$$\int_{[0,\infty)}e^{-\lambda y}\,Q_t(x,dy) = e^{-xv_t(\lambda)}, \qquad \lambda\ge 0,\ x\ge 0. \tag{2.21}$$

Proof. By Proposition 2.5, there is a sequence $\{G_k\}$ in the form of (2.12) satisfying Condition 2.3. By Theorem 2.7 we have $v_k(t,\lambda)\to v_t(\lambda)$ uniformly on $[0,a]^2$ as $k\to\infty$ for every $a\ge 0$. Take $x_k\in E_k$ satisfying $x_k\to x$ as $k\to\infty$. By Theorem 1.2, there is a probability measure $Q_t(x,dy)$ on $[0,\infty)$ defined by (2.21) and $\lim_{k\to\infty}Q_k^{\lfloor\gamma_kt\rfloor}(x_k,\cdot) = Q_t(x,\cdot)$ weakly. By a monotone class argument one can see that $Q_t(x,dy)$ is a kernel on $[0,\infty)$. The semigroup property of the family of kernels $(Q_t)_{t\ge 0}$ follows from (2.20) and (2.21).

For $\lambda > 0$ and $x\ge 0$ set $e_\lambda(x) = e^{-\lambda x}$. We denote by $D$ the linear span of $\{e_\lambda: \lambda > 0\}$. By Proposition 2.9, the operator $Q_t$ preserves $D$ for every $t\ge 0$. By the continuity of $t\mapsto v_t(\lambda)$ it is easy to show that $t\mapsto Q_te_\lambda(x)$ is continuous for $\lambda > 0$ and $x\ge 0$. Then $t\mapsto Q_tf(x)$ is continuous for every $f\in D$ and $x\ge 0$. Let $C_0[0,\infty)$ be the space of continuous functions on $[0,\infty)$ vanishing at infinity. By the Stone–Weierstrass theorem, the set $D$ is uniformly dense in $C_0[0,\infty)$; see, e.g., Hewitt and Stromberg (1965, pp.98-99). Then each operator $Q_t$ preserves $C_0[0,\infty)$ and $t\mapsto Q_tf(x)$ is continuous for every $f\in C_0[0,\infty)$ and $x\ge 0$. That gives the Feller property of the semigroup $(Q_t)_{t\ge 0}$.

A Markov process is called a continuous-state branching process (CB-process) with branching mechanism $\phi$ if it has transition semigroup $(Q_t)_{t\ge 0}$ defined by (2.21). It is simple to see that

$$Q_t(x_1+x_2,\cdot) = Q_t(x_1,\cdot)*Q_t(x_2,\cdot), \qquad t\ge 0,\ x_1,x_2\ge 0, \tag{2.22}$$

which is called the branching property of $(Q_t)_{t\ge 0}$. The family of functions $(v_t)_{t\ge 0}$ is called the cumulant semigroup of the CB-process. Since $(Q_t)_{t\ge 0}$ is a Feller semigroup, the CB-process has a Hunt realization; see, e.g., Chung (1982, p.75).

Proposition 2.11  Suppose that $\{(x_1(t),\mathscr{F}_t^1): t\ge 0\}$ and $\{(x_2(t),\mathscr{F}_t^2): t\ge 0\}$ are two independent CB-processes with branching mechanism $\phi$. Let $x(t) = x_1(t) + x_2(t)$ and $\mathscr{F}_t = \sigma(\mathscr{F}_t^1\cup\mathscr{F}_t^2)$. Then $\{(x(t),\mathscr{F}_t): t\ge 0\}$ is also a CB-process with branching mechanism $\phi$.

Proof. Let $t\ge r\ge 0$ and for $i = 1,2$ let $F_i$ be a bounded positive $\mathscr{F}_r^i$-measurable random variable. For any $\lambda\ge 0$ we have

$$P\big[F_1F_2e^{-\lambda x(t)}\big] = P\big[F_1e^{-\lambda x_1(t)}\big]P\big[F_2e^{-\lambda x_2(t)}\big] = P\big[F_1e^{-x_1(r)v_{t-r}(\lambda)}\big]P\big[F_2e^{-x_2(r)v_{t-r}(\lambda)}\big] = P\big[F_1F_2e^{-x(r)v_{t-r}(\lambda)}\big].$$

Then $\{(x(t),\mathscr{F}_t): t\ge 0\}$ is a CB-process with branching mechanism $\phi$.

Let $D[0,\infty)$ denote the space of positive càdlàg paths on $[0,\infty)$ furnished with the Skorokhod topology. The following theorem is a slight modification of Theorem 2.1 of Li (2006), which gives a physical interpretation of the CB-process as the approximation of the GW-process.

Theorem 2.12  Suppose that Condition 2.3 holds. Let $\{x(t): t\ge 0\}$ be a CB-process with transition semigroup $(Q_t)_{t\ge 0}$ defined by (2.21). If $z_k(0)$ converges to $x(0)$ in distribution, then $\{z_k(\lfloor\gamma_kt\rfloor): t\ge 0\}$ converges as $k\to\infty$ to $\{x(t): t\ge 0\}$ in distribution on $D[0,\infty)$.

Proof. For $\lambda > 0$ and $x\ge 0$ set $e_\lambda(x) = e^{-\lambda x}$. Let $C_0[0,\infty)$ be the space of continuous functions on $[0,\infty)$ vanishing at infinity. By (2.7), (2.21) and Theorem 2.7 it is easy to show

$$\lim_{k\to\infty}\sup_{x\in E_k}\big|Q_k^{\lfloor\gamma_kt\rfloor}e_\lambda(x) - Q_te_\lambda(x)\big| = 0, \qquad \lambda > 0.$$

Then the Stone–Weierstrass theorem implies

$$\lim_{k\to\infty}\sup_{x\in E_k}\big|Q_k^{\lfloor\gamma_kt\rfloor}f(x) - Q_tf(x)\big| = 0, \qquad f\in C_0[0,\infty).$$

By Ethier and Kurtz (1986, p.226 and pp. ) we conclude that $\{z_k(\lfloor\gamma_kt\rfloor): t\ge 0\}$ converges to the CB-process $\{x(t): t\ge 0\}$ in distribution on $D[0,\infty)$.

For any $w\in D[0,\infty)$ let $\tau_0(w) = \inf\{s > 0: w(s) = 0\}$. Let $D_0[0,\infty)$ be the set of paths $w\in D[0,\infty)$ such that $w(t) = 0$ for $t\ge\tau_0(w)$. Then $D_0[0,\infty)$ is a Borel subset of $D[0,\infty)$. Clearly, the distributions of the processes in Theorem 2.12 are all supported by $D_0[0,\infty)$. By Theorem 1.7 of Li (2011, p.4) we have the following:

Corollary 2.13  Under the conditions of Theorem 2.12, the sequence $\{z_k(\lfloor\gamma_kt\rfloor): t\ge 0\}$ converges as $k\to\infty$ to $\{x(t): t\ge 0\}$ in distribution on $D_0[0,\infty)$.

The convergence of rescaled Galton–Watson branching processes to diffusion processes was first studied by Feller (1951). Lamperti (1967a) showed that CB-processes are weak limits of rescaled Galton–Watson branching processes. A characterization of CB-processes by random time changes of Lévy processes was given by Lamperti (1967b). We have followed Aliev and Shchurenkov (1982) and Li (2006, 2011) in some of the above calculations.
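Theorem 2.12 can be watched numerically; the parameters below are mine, chosen for illustration. For binary branching $g(z) = (1+z^2)/2$ with $\gamma_k = k$, one gets $G_k(z) = k^2\big(1 - e^{-z/k}\big)^2/2\to z^2/2$, so the limiting branching mechanism is $\phi(z) = z^2/2$ with cumulant $v_t(\lambda) = \lambda/(1 + \lambda t/2)$. The empirical Laplace functional of $z_k(\lfloor kt\rfloor)$ started from $z_k(0) = 1$ should then be close to $e^{-v_t(\lambda)}$:

```python
import numpy as np

rng = np.random.default_rng(1)

k, t, lam, n_paths = 50, 1.0, 1.0, 20_000

# Binary branching: each particle leaves 0 or 2 offspring with probability
# 1/2 each, so the next generation size is 2 * Binomial(n, 1/2).
pop = np.full(n_paths, k)          # x_k(0) = k particles, i.e. z_k(0) = 1
for _ in range(int(k * t)):        # gamma_k = k generations per unit of time
    pop = 2 * rng.binomial(pop, 0.5)

# Empirical Laplace functional of z_k(floor(k t)) = pop / k ...
mc = np.mean(np.exp(-lam * pop / k))

# ... against the CB-limit with phi(z) = z^2/2: v_t(lam) = lam / (1 + lam t / 2).
exact = np.exp(-lam / (1.0 + lam * t / 2.0))
print(mc, exact)
```

With $k = 50$ the remaining discrepancy is a mix of the $O(\gamma_k^{-1})$ discretization bias and Monte Carlo noise, both small at this scale.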
3 Some basic properties

In this section we prove some basic properties of CB-processes. Most of the results presented here can be found in Grey (1974) and Li (2000). We shall mainly follow the treatments in Li (2011). Suppose that $\phi$ is a branching mechanism defined by (2.13). This is a convex function on $[0,\infty)$. In fact, it is easy to see that

$$\phi'(z) = b + 2cz + \int_{(0,\infty)}u\big(1 - e^{-zu}\big)\,m(du), \qquad z\ge 0, \tag{3.1}$$

which is an increasing function. In particular, we have $\phi'(0) = b$. The limit $\phi(\infty) := \lim_{z\to\infty}\phi(z)$ exists in $[-\infty,\infty]$ and $\phi'(\infty) := \lim_{z\to\infty}\phi'(z)$ exists in $(-\infty,\infty]$. In particular, we have

$$\phi'(\infty) = b + 2c\cdot\infty + \int_{(0,\infty)}u\,m(du) \tag{3.2}$$

with $2c\cdot\infty = 0$ for $c = 0$ by convention. Observe also that $\phi(\infty)\le 0$ if and only if $\phi'(\infty)\le 0$, and $\phi(\infty) = \infty$ if and only if $\phi'(\infty) > 0$.

The transition semigroup $(Q_t)_{t\ge 0}$ of the CB-process is defined by (2.16) and (2.21). From the branching property (2.22), it is easy to see that the probability measure $Q_t(x,\cdot)$ is infinitely divisible. Then $(v_t)_{t\ge 0}$ can be expressed canonically as

$$v_t(\lambda) = h_t\lambda + \int_{(0,\infty)}\big(1 - e^{-\lambda u}\big)\,l_t(du), \qquad t\ge 0,\ \lambda\ge 0, \tag{3.3}$$

where $h_t\ge 0$ and $l_t(du)$ is a $\sigma$-finite measure on $(0,\infty)$ satisfying

$$\int_{(0,\infty)}(1\wedge u)\,l_t(du) < \infty.$$

By differentiating both sides of (3.3) and using (2.16) it is easy to find

$$h_t + \int_{(0,\infty)}u\,l_t(du) = \frac{\partial}{\partial\lambda}v_t(0+) = e^{-bt}, \qquad t\ge 0. \tag{3.4}$$

Then we infer that $l_t(du)$ is a $\sigma$-finite measure on $(0,\infty)$ satisfying

$$\int_{(0,\infty)}u\,l_t(du) < \infty.$$

From (2.21) and (3.4) we get

$$\int_{[0,\infty)}y\,Q_t(x,dy) = xe^{-bt}, \qquad t\ge 0,\ x\ge 0. \tag{3.5}$$

We say the CB-process is critical, subcritical or supercritical according as $b = 0$, $b > 0$ or $b < 0$, respectively. From (2.16) we see that $t\mapsto v_t(\lambda)$ is first continuous and then continuously differentiable. Moreover, it is easy to show that

$$\frac{\partial}{\partial t}v_t(\lambda)\Big|_{t=0} = -\phi(\lambda), \qquad \lambda\ge 0.$$

By the semigroup property $v_{t+s} = v_s\circ v_t = v_t\circ v_s$ we get the backward differential equation

$$\frac{\partial}{\partial t}v_t(\lambda) = \frac{\partial}{\partial s}v_s(v_t(\lambda))\Big|_{s=0} = -\phi(v_t(\lambda)), \qquad v_0(\lambda) = \lambda, \tag{3.6}$$

and the forward differential equation

$$\frac{\partial}{\partial t}v_t(\lambda) = \frac{\partial}{\partial s}v_t(v_s(\lambda))\Big|_{s=0} = -\phi(\lambda)\frac{\partial}{\partial\lambda}v_t(\lambda), \qquad v_0(\lambda) = \lambda. \tag{3.7}$$

Proposition 3.1  For any $t\ge 0$ and $\lambda\ge 0$ let $v_t'(\lambda) = (\partial/\partial\lambda)v_t(\lambda)$. Then we have

$$v_t'(\lambda) = \exp\Big\{-\int_0^t\phi'(v_s(\lambda))\,ds\Big\}, \tag{3.8}$$

where $\phi'(z)$ is given by (3.1).

Proof. Based on (2.16) and (3.6) it is elementary to see that

$$\frac{\partial}{\partial t}v_t'(\lambda) = \frac{\partial}{\partial\lambda}\frac{\partial}{\partial t}v_t(\lambda) = -\phi'(v_t(\lambda))v_t'(\lambda).$$

It follows that

$$\frac{\partial}{\partial t}\big[-\log v_t'(\lambda)\big] = -v_t'(\lambda)^{-1}\frac{\partial}{\partial t}v_t'(\lambda) = \phi'(v_t(\lambda)).$$

Then we have (3.8) since $v_0'(\lambda) = 1$.
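Proposition 3.1 and the first-moment formula (3.5) can be checked concretely for the quadratic mechanism $\phi(z) = bz + cz^2$, using the closed-form cumulant $v_t(\lambda) = b\lambda e^{-bt}/\big(b + c\lambda(1 - e^{-bt})\big)$ (an illustration of mine, not a formula stated in the notes):

```python
import numpy as np

b, c = 1.0, 0.5   # illustrative quadratic mechanism phi(z) = b z + c z^2

def v(t, lam):
    # Closed-form cumulant for phi(z) = b z + c z^2; it solves (3.6).
    e = np.exp(-b * t)
    return b * lam * e / (b + c * lam * (1.0 - e))

def dphi(z):
    return b + 2.0 * c * z   # phi'(z) from (3.1), here with m = 0

t, lam, h = 1.0, 2.0, 1e-6

# Left-hand side of (3.8): (d/dlam) v_t(lam), by central differences.
lhs = (v(t, lam + h) - v(t, lam - h)) / (2.0 * h)

# Right-hand side of (3.8): exp(-int_0^t phi'(v_s(lam)) ds), trapezoidal rule.
s = np.linspace(0.0, t, 2001)
f = dphi(v(s, lam))
integral = (s[1] - s[0]) * (0.5 * f[0] + f[1:-1].sum() + 0.5 * f[-1])
rhs = np.exp(-integral)
print(lhs, rhs)

# First-moment identity (3.4)/(3.5): (d/dlam) v_t(0+) = e^{-b t}.
print(v(t, h) / h, np.exp(-b * t))
```

Both comparisons agree to well within the discretization error of the difference quotient and the quadrature.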
Proposition 3.2  Suppose that $\lambda > 0$ and $\phi(\lambda)\neq 0$. Then the equation $\phi(z) = 0$ has no root between $\lambda$ and $v_t(\lambda)$ for every $t\ge 0$. Moreover, we have

$$\int_{v_t(\lambda)}^\lambda\phi(z)^{-1}\,dz = t, \qquad t\ge 0. \tag{3.9}$$

Proof. By (2.13) we see $\phi(0) = 0$ and $z\mapsto\phi(z)$ is a convex function. Since $\phi(\lambda)\neq 0$ for some $\lambda > 0$ according to the assumption, the equation $\phi(z) = 0$ has at most one root in $(0,\infty)$. Suppose that $\lambda_0 > 0$ is a root of $\phi(z) = 0$. Then (3.7) implies $v_t(\lambda_0) = \lambda_0$ for all $t\ge 0$. By Proposition 2.9 we have $v_t(\lambda) > \lambda_0$ for $\lambda > \lambda_0$ and $0 < v_t(\lambda) < \lambda_0$ for $0 < \lambda < \lambda_0$. Then $\lambda > 0$ and $\phi(\lambda)\neq 0$ imply there is no root of $\phi(z) = 0$ between $\lambda$ and $v_t(\lambda)$. From (3.6) we get (3.9).

Corollary 3.3  Suppose that $\phi(z_0)\neq 0$ for some $z_0 > 0$ and let $\theta_0 = \inf\{z > 0: \phi(z)\ge 0\}$ with the convention $\inf\varnothing = \infty$. Then $\lim_{t\to\infty}v_t(\lambda) = \theta_0$ for every $\lambda > 0$.

Proof. If $\theta_0 < \infty$, we have clearly $\phi(\theta_0) = 0$, $\phi(z) < 0$ for $z\in(0,\theta_0)$ and $\phi(z) > 0$ for $z\in(\theta_0,\infty)$. From (3.6) we see $v_t(\theta_0) = \theta_0$ for all $t\ge 0$. Then (3.9) implies that $\lim_{t\to\infty}v_t(\lambda) = \theta_0$ increasingly for $\lambda\in(0,\theta_0)$ and decreasingly for $\lambda\in(\theta_0,\infty)$.

Corollary 3.4  Suppose that $\phi(z_0)\neq 0$ for some $z_0 > 0$. Then for any $x > 0$ we have

$$\lim_{t\to\infty}Q_t(x,\cdot) = e^{-x\theta_0}\delta_0 + \big(1 - e^{-x\theta_0}\big)\delta_\infty$$

by weak convergence of probability measures on $[0,\infty]$.

Proof. Since $[0,\infty]$ is a compact space, the family $\{Q_t(x,\cdot): t\ge 0\}$ is relatively compact. Let $\{t_n\}\subset(0,\infty)$ be any sequence so that $t_n\to\infty$ and $Q_{t_n}(x,\cdot)\to$ some $Q_\infty(x,\cdot)$ as $n\to\infty$. By (2.21) and Corollary 3.3, for every $\lambda > 0$ we have

$$\int_{[0,\infty]}e^{-\lambda y}\,Q_\infty(x,dy) = e^{-x\theta_0}.$$

It follows that

$$Q_\infty(x,\{0\}) = \lim_{\lambda\to\infty}\int_{[0,\infty]}e^{-\lambda y}\,Q_\infty(x,dy) = e^{-x\theta_0}$$

and

$$Q_\infty(x,\{\infty\}) = \lim_{\lambda\to 0+}\int_{[0,\infty]}\big(1 - e^{-\lambda y}\big)\,Q_\infty(x,dy) = 1 - e^{-x\theta_0}.$$

Then we have $Q_\infty(x,\cdot) = e^{-x\theta_0}\delta_0 + (1 - e^{-x\theta_0})\delta_\infty$, which is independent of the particular choice of the sequence $\{t_n\}$. Then we have the desired result.

Clearly, we have $\theta_0 = \infty$ if and only if $\phi(z) < 0$ for all $z > 0$, and this extremal case is included in the two corollaries above. Since $(Q_t)_{t\ge 0}$ is a Feller transition semigroup, the CB-process has a Hunt process realization $X = (\Omega,\mathscr{F},\mathscr{F}_t,x(t),\mathbf{Q}_x)$; see, e.g., Chung (1982, p.75). Let $\tau_0 := \inf\{s > 0: x(s) = 0\}$ denote the extinction time of the CB-process.

Theorem 3.5  For every $t > 0$ the limit $\bar v_t = \lim_{\lambda\to\infty}v_t(\lambda)$ exists in $(0,\infty]$. Moreover, the mapping $t\mapsto\bar v_t$ is decreasing and for any $t > 0$ and $x > 0$ we have

$$\mathbf{Q}_x\{\tau_0\le t\} = \mathbf{Q}_x\{x(t) = 0\} = \exp\{-x\bar v_t\}. \tag{3.10}$$

Proof. By Proposition 2.9 the limit $\bar v_t = \lim_{\lambda\to\infty}v_t(\lambda)$ exists in $(0,\infty]$ for every $t > 0$. For $t\ge r > 0$ we have

$$\bar v_t = \lim_{\lambda\to\infty}v_r(v_{t-r}(\lambda)) = v_r(\bar v_{t-r})\le\bar v_r. \tag{3.11}$$

Since zero is a trap for the CB-process, we get (3.10) by letting $\lambda\to\infty$ in (2.21).

For the convenience of statement of the results in the sequel, we formulate the following condition on the branching mechanism, which is known as Grey's condition:

Condition 3.6  There is some constant $\theta > 0$ so that $\phi(z) > 0$ for $z\ge\theta$ and

$$\int_\theta^\infty\phi(z)^{-1}\,dz < \infty.$$

Theorem 3.7  We have $\bar v_t < \infty$ for some and hence all $t > 0$ if and only if Condition 3.6 holds.

Proof. By (3.11) it is simple to see that $\bar v_t = \lim_{\lambda\to\infty}v_t(\lambda) < \infty$ for all $t > 0$ if and only if this holds for some $t > 0$. If Condition 3.6 holds, we can let $\lambda\to\infty$ in (3.9) to obtain

$$\int_{\bar v_t}^\infty\phi(z)^{-1}\,dz = t \tag{3.12}$$
and hence $\bar v_t < \infty$ for $t > 0$. For the converse, suppose that $\bar v_t < \infty$ for some $t > 0$. By (3.6) there exists some $\theta > 0$ so that $\phi(\theta) > 0$, for otherwise we would have $\bar v_t\ge v_t(\lambda)\ge\lambda$ for every $\lambda\ge 0$, yielding a contradiction. Then $\phi(z) > 0$ for all $z\ge\theta$ by the convexity of the branching mechanism. As in the above we see that (3.12) still holds, so Condition 3.6 is satisfied.

Theorem 3.8  Let $\bar v = \lim_{t\to\infty}\bar v_t\in[0,\infty]$. Then for any $x > 0$ we have

$$\mathbf{Q}_x\{\tau_0 < \infty\} = \exp\{-x\bar v\}. \tag{3.13}$$

Moreover, we have $\bar v < \infty$ if and only if Condition 3.6 holds, and in this case $\bar v$ is the largest root of $\phi(z) = 0$.

Proof. The first assertion follows immediately from Theorem 3.5. By Theorem 3.7 we have $\bar v_t < \infty$ for some and hence all $t > 0$ if and only if Condition 3.6 holds. This is clearly equivalent to $\bar v < \infty$. From (3.12) it is easy to see that $\bar v$ is the largest root of $\phi(z) = 0$.

Corollary 3.9  Suppose that Condition 3.6 holds. Then for any $x > 0$ we have $\mathbf{Q}_x\{\tau_0 < \infty\} = 1$ if and only if $b\ge 0$.

By Corollary 3.3 and Theorem 3.8 we see $\theta_0\le\bar v$. In particular, we have $\theta_0 < \bar v = \infty$ if there is some constant $\theta > 0$ so that $\phi(z) > 0$ for $z\ge\theta$ and

$$\int_\theta^\infty\phi(z)^{-1}\,dz = \infty.$$

Under Condition 3.6, we have $\theta_0 = \bar v < \infty$.

Theorem 3.10  Let $\phi_0'(z) = \phi'(z) - b$ for $z\ge 0$. We can define a Feller transition semigroup $(Q_t^b)_{t\ge 0}$ on $[0,\infty)$ by

$$\int_{[0,\infty)}e^{-\lambda y}\,Q_t^b(x,dy) = \exp\Big\{-xv_t(\lambda) - \int_0^t\phi_0'(v_s(\lambda))\,ds\Big\}, \qquad \lambda\ge 0. \tag{3.14}$$

Proof. It is easy to check that $Q_t^b(x,dy) := e^{bt}x^{-1}y\,Q_t(x,dy)$ defines a Markov transition semigroup $(Q_t^b)_{t\ge 0}$ on $(0,\infty)$. Let $q_t(\lambda) = e^{bt}v_t(\lambda)$ and let $q_t'(\lambda) = (\partial/\partial\lambda)q_t(\lambda)$. By differentiating both sides of (2.21) we see

$$\int_{(0,\infty)}e^{-\lambda y}\,Q_t^b(x,dy) = \exp\{-xv_t(\lambda)\}\,q_t'(\lambda), \qquad \lambda\ge 0.$$

From (3.3) and (3.8) we have

$$q_t'(\lambda) = e^{bt}\Big[h_t + \int_{(0,\infty)}e^{-\lambda u}u\,l_t(du)\Big] = \exp\Big\{-\int_0^t\phi_0'(v_s(\lambda))\,ds\Big\}.$$

Then we can extend $(Q_t^b)_{t\ge 0}$ to a Markov transition semigroup on $[0,\infty)$ by setting

$$Q_t^b(0,dy) = e^{bt}\big[h_t\delta_0(dy) + y1_{\{y>0\}}l_t(dy)\big], \qquad y\ge 0. \tag{3.15}$$

The Feller property of the semigroup is immediate by (3.14).

Recall that zero is a trap for the CB-process. Let $(Q_t^\circ)_{t\ge 0}$ be the restriction to $(0,\infty)$ of the semigroup $(Q_t)_{t\ge 0}$. For a $\sigma$-finite measure $\eta$ on $(0,\infty)$ we define

$$\eta Q_t^\circ(dy) = \int_{(0,\infty)}\eta(dx)\,Q_t^\circ(x,dy), \qquad t\ge 0,\ y > 0.$$

A family of $\sigma$-finite measures $(\kappa_t)_{t>0}$ on $(0,\infty)$ is called an entrance law for $(Q_t^\circ)_{t\ge 0}$ if $\kappa_rQ_t^\circ = \kappa_{r+t}$ for all $r,t > 0$.

Theorem 3.11  (1) If $\phi'(\infty) = \infty$, the cumulant semigroup admits the representation (3.3) with $h_t = 0$ for all $t > 0$, that is,

$$v_t(\lambda) = \int_{(0,\infty)}\big(1 - e^{-\lambda u}\big)\,l_t(du), \qquad t > 0,\ \lambda\ge 0, \tag{3.16}$$

where $(l_t)_{t>0}$ is an entrance law for $(Q_t^\circ)_{t\ge 0}$. (2) If $\delta := \phi'(\infty) < \infty$, then

$$v_t(\lambda) = e^{-\delta t}\lambda + \int_0^te^{-\delta s}\,ds\int_{(0,\infty)}\big(1 - e^{-uv_{t-s}(\lambda)}\big)\,m(du), \qquad t\ge 0,\ \lambda\ge 0, \tag{3.17}$$

that is, the cumulant semigroup admits the representation (3.3) with

$$h_t = e^{-\delta t}, \qquad l_t = \int_0^te^{-\delta s}mQ_{t-s}^\circ\,ds, \qquad t\ge 0. \tag{3.18}$$

Proof. (1) If Condition 3.6 holds, we have $\bar v_t = v_t(\infty) < \infty$ for $t > 0$ by Theorem 3.7, so $h_t = 0$ by (3.3). If Condition 3.6 does not hold, we have $\bar v_t = \infty$ by Theorem 3.7. By differentiating both sides of (3.3) we get

$$v_t'(\lambda) = h_t + \int_{(0,\infty)}ue^{-\lambda u}\,l_t(du).$$
Since φ′(∞) = ∞, by (3.8) we see h_t = v_t′(∞) = 0 for t > 0. In view of (2.2), for r, t > 0 and λ ≥ 0 we have

    ∫_{(0,∞)} (1 − e^{−λu}) l_{r+t}(du) = ∫_{(0,∞)} (1 − e^{−u v_t(λ)}) l_r(du)
        = ∫_{(0,∞)} l_r(dx) ∫_{(0,∞)} (1 − e^{−λu}) Q_t°(x, du).

Then (l_t)_{t>0} is an entrance law for (Q_t°)_{t≥0}.

(2) If δ := φ′(∞) < ∞, we must have c = 0. Moreover, we can write the branching mechanism into

    φ(λ) = δλ + ∫_{(0,∞)} (e^{−λz} − 1) m(dz),  λ ≥ 0.  (3.19)

Then we can use (2.16) and integration by parts to see

    v_t(λ) e^{δt} = λ + ∫_0^t δ v_s(λ) e^{δs} ds − ∫_0^t φ(v_s(λ)) e^{δs} ds
        = λ + ∫_0^t e^{δs} ds ∫_{(0,∞)} (1 − e^{−u v_s(λ)}) m(du).

That gives (3.17).

Corollary 3.12 If Condition 3.6 holds, the cumulant semigroup admits the representation (3.16) and t ↦ v̄_t = l_t(0, ∞) is the unique solution to the differential equation

    (d/dt) v̄_t = −φ(v̄_t),  t > 0,  (3.20)

with singular initial condition v̄_{0+} = ∞.

Proof. Under Condition 3.6, for every t > 0 we have v̄_t < ∞ by Theorem 3.7. Moreover, the condition and the convexity of z ↦ φ(z) imply φ′(∞) = ∞. Then we have the representation (3.16) by Theorem 3.11. The semigroup property of (v_t)_{t≥0} implies v̄_{t+s} = v_t(v̄_s) for t ≥ 0 and s > 0. Then t ↦ v̄_t satisfies (3.20). From (3.12) it is easy to see v̄_{0+} = ∞. Suppose that t ↦ ū_t and t ↦ v̄_t are two solutions to (3.20) with ū_{0+} = v̄_{0+} = ∞. For any ε > 0 there exists δ > 0 so that ū_s ≥ v̄_ε for every 0 < s ≤ δ. Since both t ↦ ū_{s+t} and t ↦ v̄_{ε+t} are solutions to (3.20), we have ū_{s+t} ≥ v̄_{ε+t} for t ≥ 0 by Proposition 2.9. Then
we can let s → 0 and ε → 0 to see ū_t ≥ v̄_t for t > 0. By symmetry we get the uniqueness of the solution.

In the case of φ′(∞) = ∞, from (3.16) and (2.21) we see, for t > 0 and λ ≥ 0,

    ∫_{(0,∞)} (1 − e^{−yλ}) l_t(dy) = lim_{x→0} x^{−1} ∫_{(0,∞)} (1 − e^{−yλ}) Q_t(x, dy).

Then, formally,

    l_t = lim_{x→0} x^{−1} Q_t(x, ·).  (3.21)

Under Condition 3.6, the above relation holds rigorously by the convergence of finite measures on (0, ∞).

In Theorem 3.11 one usually cannot extend (l_t)_{t>0} to a σ-finite entrance law for the semigroup (Q_t)_{t≥0} on [0, ∞). For example, let us assume Condition 3.6 holds and (l̄_t)_{t>0} is such an extension. For any 0 < r < ε < t we have

    l̄_t({0}) ≥ ∫_{(0,∞)} Q_{t−r}(x, {0}) l̄_r(dx) ≥ ∫_{(0,∞)} e^{−x v̄_{t−ε}} l̄_r(dx)
        = l̄_r(0, ∞) − ∫_{(0,∞)} (1 − e^{−u v̄_{t−ε}}) l̄_r(du) = v̄_r − v_r(v̄_{t−ε}).

The right-hand side tends to infinity as r → 0. Then l̄_t(dx) cannot be a σ-finite measure on [0, ∞).

Example 3.1 For any 0 < δ ≤ 1 the function φ(λ) = λ^{1+δ} can be represented in the form of (2.13). In particular, for 0 < δ < 1 we can use integration by parts to see

    ∫_{(0,∞)} (e^{−λu} − 1 + λu) u^{−2−δ} du = λ^{1+δ} ∫_{(0,∞)} (e^{−v} − 1 + v) v^{−2−δ} dv
        = (λ^{1+δ}/(1+δ)) ∫_{(0,∞)} (1 − e^{−v}) v^{−1−δ} dv
        = (λ^{1+δ}/(δ(1+δ))) ∫_{(0,∞)} e^{−v} v^{−δ} dv
        = (Γ(1−δ)/(δ(1+δ))) λ^{1+δ}.

Thus we have

    λ^{1+δ} = (δ(1+δ)/Γ(1−δ)) ∫_{(0,∞)} (e^{−λu} − 1 + λu) u^{−2−δ} du,  λ ≥ 0.  (3.22)
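The identity (3.22) can also be checked numerically. The following sketch (not part of the original text) approximates the integral by the substitution u = e^x and the trapezoidal rule; the integration bounds, step counts and test values of λ and δ are ad hoc choices.

```python
import math

def check_stable_mechanism(lam, delta):
    """Numerically check (3.22):
    lam^{1+delta} = delta(1+delta)/Gamma(1-delta)
                    * int_0^infty (e^{-lam u} - 1 + lam u) u^{-2-delta} du,
    for 0 < delta < 1.  The integral is computed with u = e^x and the
    trapezoidal rule on a wide (ad hoc) window of x values."""
    a, b, n = -120.0, 40.0, 40000
    dx = (b - a) / n
    total = 0.0
    for i in range(n + 1):
        x = a + i * dx
        u = math.exp(x)
        t = lam * u
        # Series for small t avoids cancellation in e^{-t} - 1 + t.
        core = 0.5 * t * t if t < 1e-6 else math.expm1(-t) + t
        f = core * u ** (-2.0 - delta) * u  # trailing u is du/dx = e^x
        w = 0.5 if i in (0, n) else 1.0
        total += w * f * dx
    lhs = lam ** (1.0 + delta)
    rhs = delta * (1.0 + delta) / math.gamma(1.0 - delta) * total
    return lhs, rhs
```

For moderate λ the two sides agree to well within the quadrature error of this crude grid.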
Example 3.2 Suppose that there are constants c > 0, 0 < δ ≤ 1 and b ∈ ℝ so that φ(z) = cz^{1+δ} + bz. Then Condition 3.6 is satisfied. Let q_δ^0(t) = δt and

    q_δ^b(t) = b^{−1}(1 − e^{−δbt}),  b ≠ 0.

By solving the equation

    (∂/∂t) v_t(λ) = −c v_t(λ)^{1+δ} − b v_t(λ),  v_0(λ) = λ,

we get

    v_t(λ) = e^{−bt} λ [1 + c q_δ^b(t) λ^δ]^{−1/δ},  t ≥ 0, λ ≥ 0.  (3.23)

Thus v̄_t = c^{−1/δ} e^{−bt} q_δ^b(t)^{−1/δ} for t > 0. In particular, if δ = 1, then (3.16) holds with

    l_t(du) = (e^{−bt}/(c² q_1^b(t)²)) exp{−u/(c q_1^b(t))} du,  t > 0, u > 0.
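The closed form (3.23) can be verified against a direct numerical integration of the defining equation; the sketch below (the parameter values are illustrative choices) compares it with a Runge–Kutta solution of v′ = −c v^{1+δ} − b v, and also checks the limit v̄_t = c^{−1/δ} e^{−bt} q_δ^b(t)^{−1/δ} by taking λ large.

```python
import math

def v_closed(t, lam, c=2.0, delta=0.5, b=0.3):
    """Closed form (3.23): v_t(lam) = e^{-bt} lam [1 + c q_delta^b(t) lam^delta]^{-1/delta}."""
    q = delta * t if b == 0 else (1.0 - math.exp(-delta * b * t)) / b
    return math.exp(-b * t) * lam * (1.0 + c * q * lam ** delta) ** (-1.0 / delta)

def v_rk4(t, lam, c=2.0, delta=0.5, b=0.3, n=2000):
    """Integrate v' = -c v^{1+delta} - b v, v(0) = lam, with classical RK4."""
    f = lambda v: -c * v ** (1.0 + delta) - b * v
    h, v = t / n, lam
    for _ in range(n):
        k1 = f(v)
        k2 = f(v + 0.5 * h * k1)
        k3 = f(v + 0.5 * h * k2)
        k4 = f(v + h * k3)
        v += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
    return v
```

The two computations agree to near machine precision for step sizes of this order.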
4 Positive integral functionals

In this section, we give some characterizations of a class of positive integral functionals of the CB-process. The corresponding results in the measure-valued setting can be found in Li (2011). For our purpose, it is more convenient to start the process from an arbitrary initial time r ≥ 0. Let X = (Ω, F, F_{r,t}, x(t), Q_{r,x}) be a càdlàg realization of the CB-process. In view of (2.16) and (2.21), for any t ≥ r and λ ≥ 0 we have

    Q_{r,x} exp{−λx(t)} = exp{−x u_r(λ)},  (4.1)

where r ↦ u_r(λ) is the unique bounded positive solution to

    u_r(λ) + ∫_r^t φ(u_s(λ)) ds = λ,  0 ≤ r ≤ t.  (4.2)

Proposition 4.1 For {t_1 < ⋯ < t_n} ⊂ [0, ∞) and {λ_1, ..., λ_n} ⊂ [0, ∞) we have

    Q_{r,x} exp{−∑_{j=1}^n λ_j x(t_j) 1_{{r ≤ t_j}}} = exp{−x u(r)},  0 ≤ r ≤ t_n,  (4.3)

where r ↦ u(r) is a bounded positive function on [0, t_n] solving

    u(r) + ∫_r^{t_n} φ(u(s)) ds = ∑_{j=1}^n λ_j 1_{{r ≤ t_j}}.  (4.4)

Proof. We shall give the proof by induction in n ≥ 1. For n = 1 the result follows from (4.1) and (4.2). Now supposing (4.3) and (4.4) are satisfied when n is replaced by n − 1, we prove they are also true for n. It is clearly sufficient to consider the case with r ≤ t_1 < ⋯ < t_n. By the Markov property,

    Q_{r,x} exp{−∑_{j=1}^n λ_j x(t_j)} = Q_{r,x} exp{−x(t_1)λ_1 − x(t_1)w(t_1)},

where r ↦ w(r) is a bounded positive Borel function on [0, t_n] satisfying

    w(r) + ∫_r^{t_n} φ(w(s)) ds = ∑_{j=2}^n λ_j 1_{{r ≤ t_j}}.  (4.5)
Then the result for n = 1 implies that

    Q_{r,x} exp{−∑_{j=1}^n λ_j x(t_j)} = exp{−x u(r)}

with r ↦ u(r) being a bounded positive Borel function on [0, t_1] satisfying

    u(r) + ∫_r^{t_1} φ(u(s)) ds = λ_1 + w(t_1).  (4.6)

Setting u(r) = w(r) for t_1 < r ≤ t_n, from (4.5) and (4.6) one checks that r ↦ u(r) is a bounded positive solution to (4.4) on [0, t_n].

Theorem 4.2 Suppose that t ≥ 0 and μ is a finite measure supported by [0, t]. Let s ↦ λ(s) be a bounded positive Borel function on [0, t]. Then we have

    Q_{r,x} exp{−∫_{[r,t]} λ(s)x(s) μ(ds)} = exp{−x u(r)},  0 ≤ r ≤ t,  (4.7)

where r ↦ u(r) is the unique bounded positive solution on [0, t] of

    u(r) + ∫_r^t φ(u(s)) ds = ∫_{[r,t]} λ(s) μ(ds).  (4.8)

Proof. Step 1. We first assume s ↦ λ(s) is continuous on [0, t]. For any integer n ≥ 1 let

    μ_n(ds) = ∑_{k=0}^∞ μ(t − γ_n(k+1), t − γ_n(k)] δ_{t−γ_n(k)}(ds),

where γ_n(k) = k/2^n. From Proposition 4.1 we see that

    Q_{r,x} exp{−∫_{[r,t]} λ(s)x(s) μ_n(ds)} = exp{−x u_n(r)},  (4.9)

where r ↦ u_n(r) is a bounded positive solution on [0, t] to

    u_n(r) + ∫_r^t φ(u_n(s)) ds = ∫_{[r,t]} λ(s) μ_n(ds).  (4.10)

For any 0 ≤ s ≤ t let p_n(s) = t − γ_n([(t−s)2^n] + 1) and q_n(s) = t − γ_n([(t−s)2^n]), where [(t−s)2^n] denotes the integer part of (t−s)2^n. Then we have s − 2^{−n} ≤ p_n(s) < s ≤ q_n(s) < s + 2^{−n}. It is easy to see that

    ∫_{[r,t]} λ(s) μ_n(ds) = ∫_{[r,t]} λ(q_n(s)) μ(ds) + λ(q_n(r)) μ(p_n(r), r)
and the second term on the right-hand side tends to zero as n → ∞. By the continuity of s ↦ λ(s) we have

    lim_{n→∞} ∫_{[r,t]} λ(s) μ_n(ds) = ∫_{[r,t]} λ(s) μ(ds).

A similar argument shows that

    lim_{n→∞} ∫_{[r,t]} λ(s)x(s) μ_n(ds) = ∫_{[r,t]} λ(s)x(s) μ(ds).

From (4.9) we see the limit u(r) = lim_{n→∞} u_n(r) exists and (4.7) holds. It is not hard to show that {u_n} is uniformly bounded on [0, t]. Then we get (4.8) by letting n → ∞ in (4.10).

Step 2. Let B_1 be the set of bounded positive Borel functions s ↦ λ(s) for which there exist bounded positive solutions r ↦ u(r) of (4.8) such that (4.7) holds. Then B_1 is closed under bounded pointwise convergence. The result of the first step shows that B_1 contains all positive continuous functions on [0, t]. By Proposition 1.3 in Li (2011, p.3) we infer that B_1 contains all bounded positive Borel functions on [0, t].

Step 3. To show the uniqueness of the solution to (4.8), suppose that r ↦ v(r) is another bounded positive Borel function on [0, t] satisfying this equation. It is easy to find a constant K ≥ 0 such that

    |u(r) − v(r)| ≤ ∫_r^t |φ(u(s)) − φ(v(s))| ds ≤ K ∫_r^t |u(s) − v(s)| ds.

We may rewrite the above inequality into

    |u(t−r) − v(t−r)| ≤ K ∫_0^r |u(t−s) − v(t−s)| ds,  0 ≤ r ≤ t,

so Gronwall's inequality implies u(t−r) − v(t−r) = 0 for every 0 ≤ r ≤ t.

Suppose that μ(ds) is a locally bounded Borel measure on [0, ∞) and s ↦ λ(s) is a locally bounded positive Borel function on [0, ∞). We define the positive integral functional:

    A[r, t] := ∫_{[r,t]} λ(s)x(s) μ(ds),  t ≥ r ≥ 0.

By replacing λ(s) with θλ(s) in Theorem 4.2 for θ ≥ 0 we get a characterization of the Laplace transform of the random variable A[r, t].
Corollary 4.3 Let t ≥ 0 be given. Let λ ≥ 0 and let s ↦ θ(s) be a bounded positive Borel function on [0, t]. Then for 0 ≤ r ≤ t we have

    Q_{r,x} exp{−λx(t) − ∫_r^t θ(s)x(s) ds} = exp{−x u(r)},  (4.11)

where r ↦ u(r) is the unique bounded positive solution on [0, t] of

    u(r) + ∫_r^t φ(u(s)) ds = λ + ∫_r^t θ(s) ds.  (4.12)

Proof. This follows by an application of Theorem 4.2 to the measure μ(ds) = ds + δ_t(ds) and the function λ(s) = 1_{{s<t}} θ(s) + 1_{{s=t}} λ.

Corollary 4.4 Let X = (Ω, F, F_t, x(t), Q_x) be a Hunt realization of the CB-process started from time zero. Then for t ≥ 0, λ ≥ 0, θ ≥ 0 we have

    Q_x exp{−λx(t) − θ ∫_0^t x(s) ds} = exp{−x u(t)},  (4.13)

where t ↦ u(t) = u(t, λ, θ) is the unique positive solution of

    (∂/∂t) u(t) = θ − φ(u(t)),  u(0) = λ.  (4.14)

Proof. By Corollary 4.3 we have (4.13) with t ↦ u(t) defined by

    u(t) + ∫_0^t φ(u(s)) ds = λ + tθ.

By differentiating both sides of the above equation we get (4.14). The uniqueness of the solution follows by Gronwall's inequality.

Recall that φ′(∞) is given by (3.2). Under the condition φ′(∞) > 0, we have φ(z) → ∞ as z → ∞, so the inverse

    φ^{−1}(θ) := inf{z ≥ 0 : φ(z) > θ}

is well-defined for θ ≥ 0.

Proposition 4.5 For θ > 0 let t ↦ u_t(θ) be the unique locally bounded positive solution to

    (∂/∂t) u_t(θ) = θ − φ(u_t(θ)),  u_0 = 0.  (4.15)

Then lim_{t→∞} u_t(θ) = ∞ if φ′(∞) ≤ 0, and lim_{t→∞} u_t(θ) = φ^{−1}(θ) if φ′(∞) > 0.
Proof. By Proposition 2.9 we have Q_x(x(t) > 0) > 0 for every t ≥ 0. By Corollary 4.4,

    Q_x exp{−θ ∫_0^t x(s) ds} = exp{−x u_t(θ)}.  (4.16)

Then t ↦ u_t(θ) is strictly increasing, so (∂/∂t) u_t(θ) > 0 for all t ≥ 0. Let u_∞(θ) = lim_{t→∞} u_t(θ) ∈ [0, ∞]. In the case φ′(∞) ≤ 0, we clearly have φ(z) ≤ 0 for all z ≥ 0. Then (∂/∂t) u_t(θ) ≥ θ and u_∞(θ) = ∞. In the case φ′(∞) > 0, we note φ(u_t(θ)) = θ − (∂/∂t) u_t(θ) < θ, and hence u_t(θ) ≤ φ^{−1}(θ), implying u_∞(θ) ≤ φ^{−1}(θ) < ∞. It follows that

    0 = lim_{t→∞} (∂/∂t) u_t(θ) = θ − lim_{t→∞} φ(u_t(θ)) = θ − φ(u_∞(θ)).

Then we have u_∞(θ) = φ^{−1}(θ).

Theorem 4.6 Let X = (Ω, F, F_t, x(t), Q_x) be a Hunt realization of the CB-process started from time zero. If φ′(∞) ≤ 0, for x > 0 we have

    Q_x{∫_0^∞ x(s) ds = ∞} = 1.

If φ′(∞) > 0, for x > 0 and θ > 0 we have

    Q_x exp{−θ ∫_0^∞ x(s) ds} = exp{−x φ^{−1}(θ)}.

Proof. In view of (4.16), this follows easily from Proposition 4.5.
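The dichotomy in Proposition 4.5 is easy to observe numerically. The sketch below (the two test mechanisms are illustrative choices) integrates (4.15) with Runge–Kutta: for φ(z) = z² − z (so φ′(∞) = ∞ > 0, and with θ = 2 the root of φ(z) = θ is z = 2) the solution stabilizes at φ^{−1}(θ) = 2, while for φ(z) = −z (so φ′(∞) = −1 ≤ 0) it grows without bound.

```python
def u_at(phi, theta, t_max, n=100000):
    """Integrate (4.15), u' = theta - phi(u) with u(0) = 0, by classical
    RK4 and return the value u(t_max)."""
    f = lambda u: theta - phi(u)
    h, u = t_max / n, 0.0
    for _ in range(n):
        k1 = f(u)
        k2 = f(u + 0.5 * h * k1)
        k3 = f(u + 0.5 * h * k2)
        k4 = f(u + h * k3)
        u += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
    return u

# phi(z) = z^2 - z corresponds to b = -1, c = 1, m = 0 in (2.13);
# phi(z) = -z corresponds to b = -1, c = 0, m = 0.
```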
5 Construction of CBI-processes

Let {p(j) : j ∈ N} and {q(j) : j ∈ N} be probability distributions on N = {0, 1, 2, ...} with generating functions g and h, respectively. Suppose that {ξ_{n,i} : n, i = 1, 2, ...} is a family of N-valued i.i.d. random variables with distribution {p(j) : j ∈ N} and {η_n : n = 1, 2, ...} is a family of N-valued i.i.d. random variables with distribution {q(j) : j ∈ N}. We assume the two families are independent of each other. Given an N-valued random variable y(0) independent of {ξ_{n,i}} and {η_n}, we define inductively

    y(n) = ∑_{k=1}^{y(n−1)} ξ_{n,k} + η_n,  n = 1, 2, ....  (5.1)

Let P(i, j) = (p^{*i} * q)(j) for i, j ∈ N, where p^{*i} denotes the i-fold convolution of p. For any n ≥ 1 and {i_0, ..., i_{n−1}, j} ⊂ N it is easy to see that

    P(y(n) = j | y(n−1) = i_{n−1}) = P(y(n) = j | y(0) = i_0, ..., y(n−1) = i_{n−1})
        = P(∑_{k=1}^{i_{n−1}} ξ_{n,k} + η_n = j) = P(i_{n−1}, j).

Then {y(n) : n ≥ 0} is a Markov chain with one-step transition matrix P = (P(i, j) : i, j ∈ N). The random variable y(n) can be thought of as the number of individuals in generation n of an evolving particle system with immigration. After one unit time, each of the y(n) particles splits independently of the others into a random number of offspring according to the distribution {p(j) : j ∈ N}, and a random number of immigrants are added to the system according to the distribution {q(j) : j ∈ N}. Clearly, we have

    ∑_{j=0}^∞ P(i, j) z^j = g(z)^i h(z),  |z| ≤ 1.  (5.2)

We call any N-valued Markov chain with transition probabilities given by (5.2) a Galton–Watson branching process with immigration (GWI-process) with parameters (g, h). When h ≡ 1, this reduces to the GW-process defined before. By a general result in the theory of Markov chains, for any n ≥ 1 the n-step transition matrix of the GWI-process is just the n-fold product P^n = (P^n(i, j) : i, j ∈ N).
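The recursion (5.1) translates directly into a simulation. The sketch below (not from the text; the concrete offspring and immigration laws are illustrative choices) samples one GWI path; for an offspring law with mean m = 1 (the critical case) the expectation grows linearly, E[y(n)] = E[y(0)] + n · (mean immigration), which a Monte Carlo average reproduces.

```python
import random

def gwi_path(n_steps, y0, offspring, immigration, rng):
    """Simulate n_steps of the GWI recursion (5.1):
    y(n) = sum_{k=1}^{y(n-1)} xi_{n,k} + eta_n,
    where offspring(rng) and immigration(rng) draw one sample each."""
    y = y0
    for _ in range(n_steps):
        y = sum(offspring(rng) for _ in range(y)) + immigration(rng)
    return y

# Critical offspring law p(0) = p(2) = 1/2 (mean 1) and immigration law
# q(0) = q(1) = 1/2 (mean 1/2) -- illustrative choices.
offspring = lambda rng: 2 * rng.randrange(2)
immigration = lambda rng: rng.randrange(2)
```

With these laws and y(0) = 10, the average of y(20) over many runs should be close to 10 + 20 · 0.5 = 20.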
Proposition 5.1 For any n ≥ 1 and i ∈ N we have

    ∑_{j=0}^∞ P^n(i, j) z^j = g^{∘n}(z)^i ∏_{j=1}^n h(g^{∘(j−1)}(z)),  |z| ≤ 1,  (5.3)

where g^{∘n} denotes the n-fold composition of g with itself.

Proof. From (5.2) we know (5.3) holds for n = 1. Now suppose that (5.3) holds for some n ≥ 1. We have

    ∑_{j=0}^∞ P^{n+1}(i, j) z^j = ∑_{j=0}^∞ ∑_{k=0}^∞ P(i, k) P^n(k, j) z^j
        = ∑_{k=0}^∞ P(i, k) g^{∘n}(z)^k ∏_{j=1}^n h(g^{∘(j−1)}(z))
        = g(g^{∘n}(z))^i h(g^{∘n}(z)) ∏_{j=1}^n h(g^{∘(j−1)}(z))
        = g^{∘(n+1)}(z)^i ∏_{j=1}^{n+1} h(g^{∘(j−1)}(z)).

Then (5.3) also holds when n is replaced by n + 1. That proves the result by induction.

Suppose that for each integer k ≥ 1 we have a GWI-process {y_k(n) : n ≥ 0} with parameters (g_k, h_k). Let z_k(n) = y_k(n)/k. Then {z_k(n) : n ≥ 0} is a Markov chain with state space E_k := {0, 1/k, 2/k, ...} and n-step transition probability P_k^n(x, dy) determined by

    ∫_{E_k} e^{−λy} P_k^n(x, dy) = g_k^{∘n}(e^{−λ/k})^{kx} ∏_{j=1}^n h_k(g_k^{∘(j−1)}(e^{−λ/k})).  (5.4)

Suppose that {γ_k} is a positive real sequence so that γ_k → ∞ increasingly as k → ∞. Let ⌊γ_k t⌋ denote the integer part of γ_k t. In view of (5.4), given z_k(0) = x the conditional distribution P_k^{⌊γ_k t⌋}(x, ·) of ξ_k(t) := z_k(⌊γ_k t⌋) on E_k is determined by

    ∫_{E_k} e^{−λy} P_k^{⌊γ_k t⌋}(x, dy) = g_k^{∘⌊γ_k t⌋}(e^{−λ/k})^{kx} ∏_{j=1}^{⌊γ_k t⌋} h_k(g_k^{∘(j−1)}(e^{−λ/k}))
        = exp{kx log g_k^{∘⌊γ_k t⌋}(e^{−λ/k})} exp{∑_{j=1}^{⌊γ_k t⌋} log h_k(g_k^{∘(j−1)}(e^{−λ/k}))}
        = exp{−x v_k(t, λ) − ∫_0^{⌊γ_k t⌋/γ_k} H_k(v_k(s, λ)) ds},  (5.5)

where v_k(t, λ) is given by (2.8) (i.e., v_k(t, λ) = −k log g_k^{∘⌊γ_k t⌋}(e^{−λ/k})) and

    H_k(λ) = −γ_k log h_k(e^{−λ/k}),  λ ≥ 0.  (5.6)

Condition 5.2 There is a function ψ on [0, ∞) such that H_k(z) → ψ(z) uniformly on [0, a] for every a ≥ 0 as k → ∞.

The following results can be proved by arguments similar to those in Section 2; see also Li (2011):

Proposition 5.3 Suppose that Condition 5.2 is satisfied. Then the limit function ψ has the representation

    ψ(z) = βz + ∫_{(0,∞)} (1 − e^{−zu}) ν(du),  z ≥ 0,  (5.7)

where β ≥ 0 is a constant and ν(du) is a σ-finite measure on (0, ∞) satisfying

    ∫_{(0,∞)} (1 ∧ u) ν(du) < ∞.

Proof. It is well-known that ψ has the representation (5.7) if and only if e^{−ψ} = L_μ is the Laplace transform of an infinitely divisible probability distribution μ on [0, ∞). For any z ≥ 0 we have

    H_k(z) = −γ_k log[1 − γ_k^{−1} H̄_k(z)],  where  H̄_k(z) = γ_k [1 − h_k(e^{−z/k})].  (5.8)

Then H̄_k can be represented in the form (5.7), and so e^{−H̄_k} = L_{μ_k} is the Laplace transform of an infinitely divisible distribution μ_k on [0, ∞). From (5.8) and Condition 5.2 it follows that H̄_k(z) → ψ(z) uniformly on [0, a] for every a ≥ 0 as k → ∞. That yields the existence of a probability distribution μ on [0, ∞) so that μ = lim_{k→∞} μ_k weakly and e^{−ψ} = L_μ. Clearly, μ is infinitely divisible, so ψ has the representation (5.7).
Proposition 5.4 For any function ψ with the representation (5.7) there is a sequence {H_k} in the form of (5.6) satisfying Condition 5.2.

Proof. (Exercise.)

Theorem 5.5 Suppose that φ and ψ are given by (2.13) and (5.7), respectively. For any λ ≥ 0 let t ↦ v_t(λ) be the unique locally bounded positive solution to (2.16). Then there is a Feller transition semigroup (P_t)_{t≥0} on [0, ∞) defined by

    ∫_{[0,∞)} e^{−λy} P_t(x, dy) = exp{−x v_t(λ) − ∫_0^t ψ(v_s(λ)) ds}.  (5.9)

If a Markov process has transition semigroup (P_t)_{t≥0} defined by (5.9), we call it a continuous-state branching process with immigration (CBI-process) with branching mechanism φ and immigration mechanism ψ. In particular, if

    ∫_{(0,∞)} u ν(du) < ∞,

one can differentiate both sides of (5.9) and use (3.4) to see

    ∫_{[0,∞)} y P_t(x, dy) = x e^{−bt} + ψ′(0) ∫_0^t e^{−bs} ds,  (5.10)

where

    ψ′(0) = β + ∫_{(0,∞)} u ν(du).  (5.11)

Proposition 5.6 Suppose that {(y_1(t), G_t^1) : t ≥ 0} and {(y_2(t), G_t^2) : t ≥ 0} are two independent CBI-processes with branching mechanism φ and immigration mechanisms ψ_1 and ψ_2, respectively. Let y(t) = y_1(t) + y_2(t) and G_t = σ(G_t^1 ∪ G_t^2). Then {(y(t), G_t) : t ≥ 0} is a CBI-process with branching mechanism φ and immigration mechanism ψ = ψ_1 + ψ_2.

Proof. Let t ≥ r ≥ 0 and for i = 1, 2 let F_i be a bounded positive G_r^i-measurable random variable. For any λ ≥ 0 we have

    P[F_1 F_2 e^{−λy(t)}] = P[F_1 e^{−λy_1(t)}] P[F_2 e^{−λy_2(t)}]
        = P[F_1 exp{−y_1(r) v_{t−r}(λ) − ∫_0^{t−r} ψ_1(v_s(λ)) ds}]
            × P[F_2 exp{−y_2(r) v_{t−r}(λ) − ∫_0^{t−r} ψ_2(v_s(λ)) ds}]
        = P[F_1 F_2 exp{−y(r) v_{t−r}(λ) − ∫_0^{t−r} ψ(v_s(λ)) ds}].

Then {(y(t), G_t) : t ≥ 0} is a CBI-process with branching mechanism φ and immigration mechanism ψ.

The next theorem follows by a modification of the proof of Theorem 2.5.

Theorem 5.7 Suppose that Conditions 2.3 and 5.2 are satisfied. Let {y(t) : t ≥ 0} be a CBI-process with transition semigroup (P_t)_{t≥0} defined by (5.9). If ξ_k(0) = z_k(0) converges to y(0) in distribution, then {ξ_k(t) : t ≥ 0} = {z_k(⌊γ_k t⌋) : t ≥ 0} converges to {y(t) : t ≥ 0} in distribution on D[0, ∞).

The convergence of rescaled GWI-processes to CBI-processes has been studied by a number of authors; see, e.g., Aliev (1985), Kawazu and Watanabe (1971) and Li (2006, 2011) among many others. The deep structures of the CBI-processes were investigated in Pitman and Yor (1982) and Shiga and Watanabe (1973).

Example 5.1 The transition semigroup (Q_t^b)_{t≥0} defined by (3.14) corresponds to a CBI-process with branching mechanism φ and immigration mechanism φ_0′.

Example 5.2 Suppose that c > 0, 0 < α ≤ 1 and b ∈ ℝ are constants and let φ(z) = cz^{1+α} + bz for z ≥ 0. In this case the cumulant semigroup (v_t)_{t≥0} is given by (3.23). Let β ≥ 0 and let ψ(z) = βz^α for z ≥ 0. We can use (5.9) to define the transition semigroup (P_t)_{t≥0}. It is easy to show that

    ∫_{[0,∞)} e^{−λy} P_t(x, dy) = [1 + c q_α^b(t) λ^α]^{−β/(cα)} e^{−x v_t(λ)},  λ ≥ 0.  (5.12)
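The immigration factor in (5.12) can be confirmed numerically: by (5.9) it should equal exp{−∫_0^t ψ(v_s(λ)) ds} with v given by (3.23). The following sketch (the parameter values are illustrative choices, and Simpson's rule is an ad hoc quadrature) compares the two expressions.

```python
import math

def check_cbi_laplace(t=1.2, lam=0.8, c=1.5, alpha=0.5, b=0.4, beta=2.0, n=20000):
    """With psi(z) = beta z^alpha and v_s(lam) from (3.23), compare
    exp{-int_0^t psi(v_s(lam)) ds}  (Simpson's rule, n even)
    with the closed form [1 + c q_alpha^b(t) lam^alpha]^{-beta/(c alpha)}
    appearing in (5.12)."""
    q = lambda s: alpha * s if b == 0 else (1.0 - math.exp(-alpha * b * s)) / b
    v = lambda s: math.exp(-b * s) * lam * (1.0 + c * q(s) * lam ** alpha) ** (-1.0 / alpha)
    psi = lambda z: beta * z ** alpha
    h = t / n
    integral = psi(v(0.0)) + psi(v(t))
    for i in range(1, n):
        integral += (4 if i % 2 else 2) * psi(v(i * h))
    integral *= h / 3.0
    lhs = math.exp(-integral)
    rhs = (1.0 + c * q(t) * lam ** alpha) ** (-beta / (c * alpha))
    return lhs, rhs
```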
6 Martingale problem formulations

In this section we give several formulations of the CBI-process in terms of martingale problems. Throughout the section, we assume (φ, ψ) are given by (2.13) and (5.7), respectively, with

    ψ′(0) := β + ∫_{(0,∞)} u ν(du) < ∞.  (6.1)

Let C²[0, ∞) denote the set of bounded continuous real functions on [0, ∞) with bounded continuous derivatives up to the second order. For f ∈ C²[0, ∞) define

    Lf(x) = cxf″(x) + x ∫_{(0,∞)} [f(x+z) − f(x) − zf′(x)] m(dz)
        + (β − bx)f′(x) + ∫_{(0,∞)} [f(x+z) − f(x)] ν(dz).  (6.2)

We shall identify the operator L as the generator of the CBI-process.

Proposition 6.1 Let (P_t)_{t≥0} be the transition semigroup defined by (2.21) and (5.9). Then for any t ≥ 0 and λ ≥ 0 we have

    ∫_{[0,∞)} e^{−λy} P_t(x, dy) = e^{−xλ} + ∫_0^t ds ∫_{[0,∞)} [yφ(λ) − ψ(λ)] e^{−yλ} P_s(x, dy).  (6.3)

Proof. Recall that v_t′(λ) = (∂/∂λ) v_t(λ). By differentiating both sides of (5.9) we get

    ∫_{[0,∞)} y e^{−yλ} P_t(x, dy) = ∫_{[0,∞)} e^{−yλ} P_t(x, dy) [x v_t′(λ) + ∫_0^t ψ′(v_s(λ)) v_s′(λ) ds].

From this and (3.7) it follows that

    (∂/∂t) ∫_{[0,∞)} e^{−yλ} P_t(x, dy)
        = −[x (∂/∂t) v_t(λ) + ψ(v_t(λ))] ∫_{[0,∞)} e^{−yλ} P_t(x, dy)
        = [xφ(λ) v_t′(λ) − ψ(λ)] ∫_{[0,∞)} e^{−yλ} P_t(x, dy)
            − ∫_0^t ψ′(v_s(λ)) (∂/∂s) v_s(λ) ds ∫_{[0,∞)} e^{−yλ} P_t(x, dy)
        = [xφ(λ) v_t′(λ) − ψ(λ)] ∫_{[0,∞)} e^{−yλ} P_t(x, dy)
            + φ(λ) ∫_0^t ψ′(v_s(λ)) v_s′(λ) ds ∫_{[0,∞)} e^{−yλ} P_t(x, dy)
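Applying (6.2) to an exponential f(y) = e^{−λy} gives Lf(x) = e^{−λx}[xφ(λ) − ψ(λ)], which is the integrand appearing in (6.3). The sketch below verifies this algebra for a concrete mechanism in which m and ν are single atoms (an illustrative choice, so the integrals in (6.2) reduce to single terms).

```python
import math

def check_generator_on_exponentials(lam=0.9, x=1.7):
    """Check L e_lam(x) = e^{-lam x} [x phi(lam) - psi(lam)] for the
    operator (6.2), with the (illustrative) atomic choices
    m = m0 delta_{z0} and nu = n0 delta_{z1}."""
    b, c, beta = -0.3, 0.8, 1.1
    m0, z0 = 2.0, 0.6
    n0, z1 = 1.5, 0.4
    phi = lambda z: b * z + c * z * z + m0 * (math.exp(-z * z0) - 1.0 + z * z0)
    psi = lambda z: beta * z + n0 * (1.0 - math.exp(-z * z1))
    f = lambda y: math.exp(-lam * y)
    f1 = lambda y: -lam * math.exp(-lam * y)        # f'
    f2 = lambda y: lam * lam * math.exp(-lam * y)   # f''
    Lf = (c * x * f2(x)
          + x * m0 * (f(x + z0) - f(x) - z0 * f1(x))
          + (beta - b * x) * f1(x)
          + n0 * (f(x + z1) - f(x)))
    rhs = f(x) * (x * phi(lam) - psi(lam))
    return Lf, rhs
```

The agreement is exact up to floating-point roundoff, since no quadrature is involved.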
MS&E 321 Spring 12-13 Stochastic Systems June 1, 2013 Prof. Peter W. Glynn Page 1 of 10 Section 3: Regenerative Processes Contents 3.1 Regeneration: The Basic Idea............................... 1 3.2
More informationFunctional Analysis. Franck Sueur Metric spaces Definitions Completeness Compactness Separability...
Functional Analysis Franck Sueur 2018-2019 Contents 1 Metric spaces 1 1.1 Definitions........................................ 1 1.2 Completeness...................................... 3 1.3 Compactness......................................
More informationErdős-Renyi random graphs basics
Erdős-Renyi random graphs basics Nathanaël Berestycki U.B.C. - class on percolation We take n vertices and a number p = p(n) with < p < 1. Let G(n, p(n)) be the graph such that there is an edge between
More informationJump Processes. Richard F. Bass
Jump Processes Richard F. Bass ii c Copyright 214 Richard F. Bass Contents 1 Poisson processes 1 1.1 Definitions............................. 1 1.2 Stopping times.......................... 3 1.3 Markov
More informationX n D X lim n F n (x) = F (x) for all x C F. lim n F n(u) = F (u) for all u C F. (2)
14:17 11/16/2 TOPIC. Convergence in distribution and related notions. This section studies the notion of the so-called convergence in distribution of real random variables. This is the kind of convergence
More informationCIMPA SCHOOL, 2007 Jump Processes and Applications to Finance Monique Jeanblanc
CIMPA SCHOOL, 27 Jump Processes and Applications to Finance Monique Jeanblanc 1 Jump Processes I. Poisson Processes II. Lévy Processes III. Jump-Diffusion Processes IV. Point Processes 2 I. Poisson Processes
More informationFeller Processes and Semigroups
Stat25B: Probability Theory (Spring 23) Lecture: 27 Feller Processes and Semigroups Lecturer: Rui Dong Scribe: Rui Dong ruidong@stat.berkeley.edu For convenience, we can have a look at the list of materials
More informationStochastic Processes. Winter Term Paolo Di Tella Technische Universität Dresden Institut für Stochastik
Stochastic Processes Winter Term 2016-2017 Paolo Di Tella Technische Universität Dresden Institut für Stochastik Contents 1 Preliminaries 5 1.1 Uniform integrability.............................. 5 1.2
More informationTwo viewpoints on measure valued processes
Two viewpoints on measure valued processes Olivier Hénard Université Paris-Est, Cermics Contents 1 The classical framework : from no particle to one particle 2 The lookdown framework : many particles.
More information1 Independent increments
Tel Aviv University, 2008 Brownian motion 1 1 Independent increments 1a Three convolution semigroups........... 1 1b Independent increments.............. 2 1c Continuous time................... 3 1d Bad
More informationMonte-Carlo MMD-MA, Université Paris-Dauphine. Xiaolu Tan
Monte-Carlo MMD-MA, Université Paris-Dauphine Xiaolu Tan tan@ceremade.dauphine.fr Septembre 2015 Contents 1 Introduction 1 1.1 The principle.................................. 1 1.2 The error analysis
More informationApplications of Ito s Formula
CHAPTER 4 Applications of Ito s Formula In this chapter, we discuss several basic theorems in stochastic analysis. Their proofs are good examples of applications of Itô s formula. 1. Lévy s martingale
More informationAn essay on the general theory of stochastic processes
Probability Surveys Vol. 3 (26) 345 412 ISSN: 1549-5787 DOI: 1.1214/1549578614 An essay on the general theory of stochastic processes Ashkan Nikeghbali ETHZ Departement Mathematik, Rämistrasse 11, HG G16
More informationReal Analysis Math 131AH Rudin, Chapter #1. Dominique Abdi
Real Analysis Math 3AH Rudin, Chapter # Dominique Abdi.. If r is rational (r 0) and x is irrational, prove that r + x and rx are irrational. Solution. Assume the contrary, that r+x and rx are rational.
More informationA Dynamic Contagion Process with Applications to Finance & Insurance
A Dynamic Contagion Process with Applications to Finance & Insurance Angelos Dassios Department of Statistics London School of Economics Angelos Dassios, Hongbiao Zhao (LSE) A Dynamic Contagion Process
More information1. Stochastic Processes and filtrations
1. Stochastic Processes and 1. Stoch. pr., A stochastic process (X t ) t T is a collection of random variables on (Ω, F) with values in a measurable space (S, S), i.e., for all t, In our case X t : Ω S
More informationn! (k 1)!(n k)! = F (X) U(0, 1). (x, y) = n(n 1) ( F (y) F (x) ) n 2
Order statistics Ex. 4.1 (*. Let independent variables X 1,..., X n have U(0, 1 distribution. Show that for every x (0, 1, we have P ( X (1 < x 1 and P ( X (n > x 1 as n. Ex. 4.2 (**. By using induction
More informationB. Appendix B. Topological vector spaces
B.1 B. Appendix B. Topological vector spaces B.1. Fréchet spaces. In this appendix we go through the definition of Fréchet spaces and their inductive limits, such as they are used for definitions of function
More informationPotential theory of subordinate killed Brownian motions
Potential theory of subordinate killed Brownian motions Renming Song University of Illinois AMS meeting, Indiana University, April 2, 2017 References This talk is based on the following paper with Panki
More informationOn the distributional divergence of vector fields vanishing at infinity
Proceedings of the Royal Society of Edinburgh, 141A, 65 76, 2011 On the distributional divergence of vector fields vanishing at infinity Thierry De Pauw Institut de Recherches en Mathématiques et Physique,
More informationAn introduction to Mathematical Theory of Control
An introduction to Mathematical Theory of Control Vasile Staicu University of Aveiro UNICA, May 2018 Vasile Staicu (University of Aveiro) An introduction to Mathematical Theory of Control UNICA, May 2018
More informationEXISTENCE RESULTS FOR QUASILINEAR HEMIVARIATIONAL INEQUALITIES AT RESONANCE. Leszek Gasiński
DISCRETE AND CONTINUOUS Website: www.aimsciences.org DYNAMICAL SYSTEMS SUPPLEMENT 2007 pp. 409 418 EXISTENCE RESULTS FOR QUASILINEAR HEMIVARIATIONAL INEQUALITIES AT RESONANCE Leszek Gasiński Jagiellonian
More informationn [ F (b j ) F (a j ) ], n j=1(a j, b j ] E (4.1)
1.4. CONSTRUCTION OF LEBESGUE-STIELTJES MEASURES In this section we shall put to use the Carathéodory-Hahn theory, in order to construct measures with certain desirable properties first on the real line
More informationWeak Subordination for Convex Univalent Harmonic Functions
Weak Subordination for Convex Univalent Harmonic Functions Stacey Muir Abstract For two complex-valued harmonic functions f and F defined in the open unit disk with f() = F () =, we say f is weakly subordinate
More informationfor all f satisfying E[ f(x) ] <.
. Let (Ω, F, P ) be a probability space and D be a sub-σ-algebra of F. An (H, H)-valued random variable X is independent of D if and only if P ({X Γ} D) = P {X Γ}P (D) for all Γ H and D D. Prove that if
More informationWeak solutions of mean-field stochastic differential equations
Weak solutions of mean-field stochastic differential equations Juan Li School of Mathematics and Statistics, Shandong University (Weihai), Weihai 26429, China. Email: juanli@sdu.edu.cn Based on joint works
More informationUniformly Uniformly-ergodic Markov chains and BSDEs
Uniformly Uniformly-ergodic Markov chains and BSDEs Samuel N. Cohen Mathematical Institute, University of Oxford (Based on joint work with Ying Hu, Robert Elliott, Lukas Szpruch) Centre Henri Lebesgue,
More informationµ X (A) = P ( X 1 (A) )
1 STOCHASTIC PROCESSES This appendix provides a very basic introduction to the language of probability theory and stochastic processes. We assume the reader is familiar with the general measure and integration
More informationExercises in stochastic analysis
Exercises in stochastic analysis Franco Flandoli, Mario Maurelli, Dario Trevisan The exercises with a P are those which have been done totally or partially) in the previous lectures; the exercises with
More informationLeast Squares Estimators for Stochastic Differential Equations Driven by Small Lévy Noises
Least Squares Estimators for Stochastic Differential Equations Driven by Small Lévy Noises Hongwei Long* Department of Mathematical Sciences, Florida Atlantic University, Boca Raton Florida 33431-991,
More informationStochastic Integration.
Chapter Stochastic Integration..1 Brownian Motion as a Martingale P is the Wiener measure on (Ω, B) where Ω = C, T B is the Borel σ-field on Ω. In addition we denote by B t the σ-field generated by x(s)
More informationCorrection to: Yield curve shapes and the asymptotic short rate distribution in affine one-factor models
Finance Stoch (218) 22:53 51 https://doi.org/1.17/s78-18-359-5 CORRECTION Correction to: Yield curve shapes and the asymptotic short rate distribution in affine one-factor models Martin Keller-Ressel 1
More informationSome basic elements of Probability Theory
Chapter I Some basic elements of Probability Theory 1 Terminology (and elementary observations Probability theory and the material covered in a basic Real Variables course have much in common. However
More informationp 1 ( Y p dp) 1/p ( X p dp) 1 1 p
Doob s inequality Let X(t) be a right continuous submartingale with respect to F(t), t 1 P(sup s t X(s) λ) 1 λ {sup s t X(s) λ} X + (t)dp 2 For 1 < p
More informationStability of Feedback Solutions for Infinite Horizon Noncooperative Differential Games
Stability of Feedback Solutions for Infinite Horizon Noncooperative Differential Games Alberto Bressan ) and Khai T. Nguyen ) *) Department of Mathematics, Penn State University **) Department of Mathematics,
More informationBOUNDEDNESS AND EXPONENTIAL STABILITY OF SOLUTIONS TO DYNAMIC EQUATIONS ON TIME SCALES
Electronic Journal of Differential Equations, Vol. 20062006, No. 12, pp. 1 14. ISSN: 1072-6691. URL: http://ejde.math.txstate.edu or http://ejde.math.unt.edu ftp ejde.math.txstate.edu login: ftp BOUNDEDNESS
More informationSMSTC (2007/08) Probability.
SMSTC (27/8) Probability www.smstc.ac.uk Contents 12 Markov chains in continuous time 12 1 12.1 Markov property and the Kolmogorov equations.................... 12 2 12.1.1 Finite state space.................................
More informationStochastic Process (ENPC) Monday, 22nd of January 2018 (2h30)
Stochastic Process (NPC) Monday, 22nd of January 208 (2h30) Vocabulary (english/français) : distribution distribution, loi ; positive strictement positif ; 0,) 0,. We write N Z,+ and N N {0}. We use the
More informationThe Skorokhod problem in a time-dependent interval
The Skorokhod problem in a time-dependent interval Krzysztof Burdzy, Weining Kang and Kavita Ramanan University of Washington and Carnegie Mellon University Abstract: We consider the Skorokhod problem
More informationHAIYUN ZHOU, RAVI P. AGARWAL, YEOL JE CHO, AND YONG SOO KIM
Georgian Mathematical Journal Volume 9 (2002), Number 3, 591 600 NONEXPANSIVE MAPPINGS AND ITERATIVE METHODS IN UNIFORMLY CONVEX BANACH SPACES HAIYUN ZHOU, RAVI P. AGARWAL, YEOL JE CHO, AND YONG SOO KIM
More informationPreliminary Exam: Probability 9:00am 2:00pm, Friday, January 6, 2012
Preliminary Exam: Probability 9:00am 2:00pm, Friday, January 6, 202 The exam lasts from 9:00am until 2:00pm, with a walking break every hour. Your goal on this exam should be to demonstrate mastery of
More informationChapter 6. Markov processes. 6.1 Introduction
Chapter 6 Markov processes 6.1 Introduction It is not uncommon for a Markov process to be defined as a sextuple (, F, F t,x t, t, P x ), and for additional notation (e.g.,,, S,P t,r,etc.) tobe introduced
More informationBessel Functions Michael Taylor. Lecture Notes for Math 524
Bessel Functions Michael Taylor Lecture Notes for Math 54 Contents 1. Introduction. Conversion to first order systems 3. The Bessel functions J ν 4. The Bessel functions Y ν 5. Relations between J ν and
More information