
CONTINUOUS STATE BRANCHING PROCESSES

BY JOHN LAMPERTI¹

Communicated by Henry McKean, January 20, 1967

¹ This research was partially supported by the NSF.

1. Introduction. A class of Markov processes having properties resembling those of ordinary branching processes, but with continuous instead of discrete states, was introduced by M. Jirina in [3]. Recently it has developed that these processes (assuming continuous time parameter, stationary transition probabilities, and one-dimensional state space) form precisely the class of possible limiting processes for a sequence of Galton-Watson, or simple branching, processes which have their time and space units expanding at suitable rates to infinity [5]. The purpose of this announcement is to describe the construction of the most general process of the above sort. It turns out that every such process can be obtained by a random time change from a process with stationary independent increments which cannot jump to the left. We will state these results precisely in §3 below, and discuss some details and examples in §4. Proofs of the main results will appear elsewhere.

2. Definitions and examples. The exact class of processes we will consider has been defined in [5], where some elementary properties and examples are also given. We repeat here only the essentials.

DEFINITION. A "C.B. function" is a Markov transition function $P_t(x, E)$ on the Borel sets of $[0, \infty)$ with $P_t(x, [0, \infty)) = 1$, such that $P_t(x, E)$ is jointly measurable in $t$ and $x$ for each $E$, is nontrivial in the sense that $P_t(x, \{0\}) < 1$ for some $t > 0$, $x > 0$, and satisfies the "branching property"

(1)    $P_t(x + y,\,\cdot\,) = P_t(x,\,\cdot\,) * P_t(y,\,\cdot\,)$

for all $t, x, y \ge 0$, where $*$ denotes convolution.

DEFINITION. A "C.B. process" is a Markov process on $[0, \infty)$ with right-continuous paths whose transition probabilities are given by a C.B. function.

In [5] it is shown that a C.B. function must be stochastically continuous and map the space of continuous functions on $[0, \infty]$ into itself; it is a consequence that every C.B. process is automatically strong Markov. We will work with the spatial Laplace transforms of C.B. functions, which can be written in the form

(2)    $\int_0^\infty e^{-\lambda y}\,P_t(x, dy) = \exp(-x\,\psi_t(\lambda)), \qquad \lambda \ge 0.$

The Chapman-Kolmogorov equation is then equivalent to the functional equation

(3)    $\psi_{t+s}(\lambda) = \psi_t(\psi_s(\lambda)).$

These facts are easy consequences of (1). Perhaps the most important examples, and certainly the best known ones, are given by the formulae

(4)    $\psi_t(\lambda) = \lambda e^{at}\,[1 - (\beta\lambda/2a)(1 - e^{at})]^{-1}$, if $a \ne 0$;
       $\psi_t(\lambda) = \lambda\,[1 + \beta t\lambda/2]^{-1}$, if $a = 0$.

(See [2].) The transition functions with transforms (4) correspond to diffusion processes having the (backward) Kolmogorov equations

(5)    $\partial\phi/\partial t = ax\,(\partial\phi/\partial x) + (\beta x/2)(\partial^2\phi/\partial x^2).$

A somewhat more general class of processes have transforms given by

(6)    $\psi_t(\lambda) = \lambda e^{at/p}\,[1 - (\beta\lambda^p/2a)(1 - e^{at})]^{-1/p}$, if $a \ne 0$;
       $\psi_t(\lambda) = \lambda\,[1 + (\beta t/2)\lambda^p]^{-1/p}$, if $a = 0$,

where $0 < p \le 1$. The case $a = 0$ is especially important, for then (6) determines the totality of limiting processes which can arise from a single Galton-Watson process as the units and the initial population size tend to infinity [4]. We will return to these examples in §4.
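Two facts about the transforms (4) lend themselves to a quick numerical check: the semigroup relation (3), and the derivative $c(\lambda) = d\psi_t(\lambda)/dt\,|_{t=0} = a\lambda - \beta\lambda^2/2$, which reappears in §4. The sketch below is purely illustrative; the parameter values $a = 0.7$, $\beta = 1.3$ and the tolerances are arbitrary choices, not taken from the paper.

```python
import math

def psi(t, lam, a=0.7, beta=1.3):
    """Transform (4) for the C.B. diffusions (5), a != 0 branch;
    a = 0.7, beta = 1.3 are arbitrary sample parameters."""
    return lam * math.exp(a * t) / (
        1.0 - (beta * lam / (2.0 * a)) * (1.0 - math.exp(a * t)))

# Functional equation (3), i.e. Chapman-Kolmogorov in transform form:
# psi_{t+s}(lambda) = psi_t(psi_s(lambda)).
t, s, lam = 0.3, 0.5, 2.0
assert abs(psi(t + s, lam) - psi(t, psi(s, lam))) < 1e-9

# Derivative at t = 0 (the c(lambda) of Section 4): for (4) it equals
# a*lambda - beta*lambda**2/2, the Gaussian exponent appearing in (15).
h = 1e-6
c_num = (psi(h, lam) - psi(-h, lam)) / (2.0 * h)
assert abs(c_num - (0.7 * lam - 1.3 * lam ** 2 / 2.0)) < 1e-4
```

The same two checks go through verbatim for the $a = 0$ branch, $\psi_t(\lambda) = \lambda/(1 + \beta t\lambda/2)$.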

3. The main theorems. The examples (5), as well as the C.B. processes with step-function paths constructed by Jirina, suggest that a C.B. function would be translation invariant, at least away from the absorbing state 0, were it not for the fact that the "local speed" of the process at $x$ is not constant but proportional to $x$. It is therefore quite plausible to attempt removing this factor by means of a random time change. Let $\{x_t\}$ be any C.B. process, and define $J(\omega) = \sup\{t : x_t(\omega) > 0\}$. The functional

(7)    $\phi_T(\omega) = \int_0^T x_u(\omega)\,du$

is then strictly increasing as long as $T < J$; let $\tau(t)$ ($= \tau(t, \omega)$) be its inverse function, defined for each $t$ such that

(8)    $t < \int_0^J x_u(\omega)\,du.$

Finally, we define a new random process $\{y_t\}$ by means of

(9)    $y_t = x_{\tau(t)}$ if $t$ satisfies (8), and $y_t = 0$ otherwise.

From the general theory due to Volkonskii (as presented in slightly more generality by Dynkin [1, Chapter 10]), it follows that $\{y_t\}$ is also a right-continuous strong Markov process with stationary transition probabilities.

THEOREM 1. The process $\{y_t\}$ defined above is a process with stationary independent increments, unable to jump to the left, which has been stopped upon the instant of first reaching state 0.

COROLLARY 1 (JIRINA). A C.B. process of the purely-discontinuous type has increasing paths.

COROLLARY 2. If a C.B. process has continuous paths, it must be one of those satisfying (5).

PROOF OF COROLLARY 2. The effect of the time change is to multiply the generator of $\{x_t\}$ by $x^{-1}$ for $x > 0$, and so the processes satisfying (5) go over into Brownian motion with a drift. Since there are no other additive processes with continuous paths and the correspondence is unique, there can be no additional C.B. diffusions beyond those of (5).

Turning from analysis to construction, we now assume that $\{y_t\}$ is a process with independent increments, unable to jump to the left, which has been stopped when (if) it reaches 0 ($y_0 > 0$). We further assume that

(10)    $\int_0^\infty \frac{du}{y_u} = \infty$ a.s.

The only way (10) can fail occurs when $y_t \to \infty$ "too fast"; it is easy to see that a sufficient condition for (10) is that the increments of the (unstopped) additive process have finite expectations. This time we set $J'(\omega) = \sup\{t : y_t(\omega) > 0\}$, and for $T < J'$ define the additive functional

(11)    $\phi_T(\omega) = \int_0^T \frac{du}{y_u}.$

This function is strictly increasing (as long as $T < J'$) and its inverse will again be denoted $\tau(t)$. A new Markov process is then defined by

(12)    $x_t(\omega) = y_{\tau(t)}$ if $\tau(t) < J'$, and $x_t(\omega) = 0$ otherwise.

THEOREM 2. The process $\{x_t\}$ constructed above is a C.B. process.

It is easy to see that the two transformations we have described are inverse to each other. Therefore a one-to-one correspondence has been established between C.B. processes and additive ones without negative jumps satisfying (10). Since the latter class can all be represented through the Lévy-Khintchine formula for their characteristic functions, we may now claim to have given a construction of all (one-dimensional) C.B. processes.

4. Corresponding processes. In this section we shall examine some examples of the correspondence between $\{x_t\}$ and $\{y_t\}$. Let $\{y_t\}$ be any suitable additive process, and suppose $y_0$ is a "large" state $x$. During the initial time segment $[0, t]$, the ratio $y_t/y_0$ remains close to 1 with high probability. During this time, therefore, the corresponding C.B. process $\{x_t\}$ behaves as if its transition function were the same as that of $\{y_t\}$ except for a change of time scale by the fixed factor $x$; the interval $[0, t/x]$ for $\{x_t\}$ corresponds to $[0, t]$ for $\{y_t\}$. These considerations strongly suggest the truth of the following proposition: If $P_t$ and $Q_t$ are the transition functions of a C.B. process and the corresponding additive process, then

(13)    $Q_t(0, E) = \lim_{x\to\infty} P_{t/x}(x, E + x).$

We shall apply (13) to the examples mentioned in §2. First it is convenient to recast it by taking Laplace transforms:

(14)    $\int_0^\infty e^{-\lambda y}\,Q_t(0, dy) = \lim_{x\to\infty} \exp(-x\,\psi_{t/x}(\lambda) + \lambda x) = \exp(-t\,c(\lambda)),$

where $c(\lambda) = d\psi_t(\lambda)/dt\,|_{t=0}$. Applying (14) to the C.B. diffusions whose $\psi$ functions are given in (4), we obtain

(15)    $\int_0^\infty e^{-\lambda y}\,Q_t(0, dy) = \exp(-at\lambda + \beta t\lambda^2/2),$

which is the moment generating function of a shifted normal law. Thus (as expected) $\{y_t\}$ in this case is Brownian motion with a drift. Treating (6) the same way, we find that the corresponding additive processes satisfy

(16)    $\int_0^\infty e^{-\lambda y}\,Q_t(0, dy) = \exp(-ap^{-1}t\lambda + \beta(2p)^{-1}t\lambda^{p+1}), \qquad \lambda > 0.$

These functions are the transforms of stable laws of order $p + 1$, with a drift term if $a \ne 0$. The formula is momentarily deceptive in that $\lambda$ is real rather than imaginary; in fact, these laws are the ones whose canonical (Lévy-Khintchine) measure has no mass on $(-\infty, 0)$. In particular, the C.B. processes given by (6) with $a = 0$, which are the possible limits of a single Galton-Watson process in the manner studied in [4], are obtained from a drift-free, maximally unsymmetric stable process via the time change determined by (11). We will return to these examples and give further applications to the limiting theory of branching processes in a future publication.

REFERENCES

1. E. B. Dynkin, Markov processes. I, Springer, Berlin, 1965.
2. W. Feller, Diffusion processes in genetics, Proc. Second Berkeley Symposium on Mathematical Statistics and Probability, pp. 227-246, Univ. California Press, Berkeley, Calif., 1951.
3. M. Jirina, Stochastic branching processes with continuous state space, Czech. Math. J. 8 (1958), pp. 292-312.
4. J. W. Lamperti, Limiting distributions for branching processes, Proc. Fifth Berkeley Symposium on Mathematical Statistics and Probability (to appear).
5. J. W. Lamperti, The limit of a sequence of branching processes, Z. Wahrscheinlichkeitstheorie und Verw. Gebiete.

DARTMOUTH COLLEGE AND UNIVERSITY OF CAMBRIDGE
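The random time change of §3 is concrete enough to sketch numerically. The illustration below is an assumption-laden discretization of my own, not a construction from the paper: it implements (7)-(9) on a uniform grid, accumulating $\phi_T = \int_0^T x_u\,du$, inverting it, and reading off $y_t = x_{\tau(t)}$. As the test path it takes the deterministic $x_u = e^u$, the noiseless case $a = 1$, $\beta = 0$ of (5) with generator $x\,d/dx$; the time change divides the generator by $x$, leaving unit drift, so $y_t = 1 + t$.

```python
import numpy as np

def time_change(x, dt):
    """Discrete version of (7)-(9): phi_T = int_0^T x_u du on a grid of
    spacing dt, its inverse tau, and the time-changed path y_t = x_{tau(t)}.
    Assumes x > 0 on the grid, so phi is strictly increasing."""
    times = np.arange(len(x)) * dt
    phi = np.concatenate([[0.0], np.cumsum(x[:-1] * dt)])  # left Riemann sums
    t_new = np.arange(0.0, phi[-1], dt)        # the new clock
    tau = np.interp(t_new, phi, times)         # inverse function of phi
    return t_new, np.interp(tau, times, x)     # y_t = x_{tau(t)}

dt = 1e-3
x = np.exp(np.arange(0.0, 1.0, dt))            # x_u = e^u solves x' = x
t_new, y = time_change(x, dt)
# the time change removes the factor x from the local speed: y_t = 1 + t
assert np.allclose(y, 1.0 + t_new, atol=0.01)
```

The inverse construction (11)-(12) replaces the integrand $x_u$ by $1/y_u$; applied to the output $y$, it recovers $x$, in line with Theorem 2.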