epl draft

Full characterization of the fractional Poisson process

Mauro Politi^{1,3}, Taisei Kaizoji^1 and Enrico Scalas^{2,3}

1 SSRI & Department of Economics and Business, International Christian University, 3-10-2 Osawa, Mitaka, Tokyo 181-8585, Japan.
2 Dipartimento di Scienze e Tecnologie Avanzate, Università del Piemonte Orientale "Amedeo Avogadro", Viale T. Michel, Alessandria, Italy.
3 Basque Center for Applied Mathematics, Bizkaia Technology Park, Derio, Spain

PACS 02.50.Ey – Stochastic processes
PACS 05.10.Ln – Monte Carlo methods

Abstract – The fractional Poisson process (FPP) is a counting process with independent and identically distributed inter-event times following the Mittag-Leffler distribution. This process is very useful in several fields of applied and theoretical physics, including models for anomalous diffusion. Contrary to the well-known Poisson process, the fractional Poisson process does not have stationary and independent increments. It is not a Lévy process and it is not a Markov process. In this letter, we present formulae for its finite-dimensional distribution functions, fully characterizing the process. These exact analytical results are compared to Monte Carlo simulations.

From a loose mathematical point of view, counting processes N(A) are stochastic processes that count the random number of points in a set A. They are used in many fields of physics and other applied sciences. In this letter, we will consider one-dimensional real sets with the physical meaning of time intervals. The points will be incoming events whose duration is much smaller than the inter-event or inter-arrival waiting time. For instance, counts from a Geiger-Müller counter can be described in this way. The number of counts, N(Δt), in a given time interval Δt is known to follow the Poisson distribution

P(N(Δt) = n) = exp(−λΔt) (λΔt)^n / n!,   (1)

where λ is the constant rate of arrival of ionizing particles.
Together with the assumption of independent and stationary increments, Eq. (1) is sufficient to define the homogeneous Poisson process. Curiously, one of the first occurrences of this process in the scientific literature was connected to the number of casualties caused by horse kicks in the Prussian army cavalry [1]. The Poisson process is strictly related to the exponential distribution: the inter-arrival times τ_i are independent random variables that identically follow the exponential distribution. This means that the Poisson process is a prototypical renewal process. A justification for the ubiquity of the Poisson process has to do with its relationship with the binomial distribution. Suppose that the time interval of interest (t, t + Δt) is divided into n equally spaced sub-intervals. Further assume that a counting event appears in each sub-interval with probability p and does not appear with probability 1 − p. Then P(N(Δt) = k) = Bin(k; p, n) is a binomial distribution of parameters p and n, and the expected number of events in the time interval is given by E[N(Δt)] = np. If this expected number is kept constant as n → ∞, the binomial distribution converges to the Poisson distribution of parameter λ = E[N(Δt)]/Δt, while, in the meantime, p → 0. However, it can be shown that many counting processes with non-stationary increments converge to the Poisson process after a transient period. It is sufficient to require that they are renewal processes (i.e. they have independent and identically distributed (iid) inter-arrival times) and that E(τ_i) < ∞. In other words, many counting processes with non-independent and non-stationary increments behave as the Poisson process if observed long after the transient period. In recent times, it has been shown that heavy-tailed distributed inter-arrival times (for which E(τ_i) = ∞) do play a role in many phenomena such as blinking nano-dots [2, 3], human dynamics [4, 5] and the related inter-trade times in financial markets [6, 7].
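The binomial-to-Poisson limit recalled above is easy to check numerically. The sketch below (plain Python, standard library only; function names are ours) fixes the expected number of events λ = np and compares Bin(k; p, n) with the Poisson pmf as n grows:

```python
import math

def binom_pmf(k, n, p):
    """Bin(k; p, n): probability of k events in n sub-intervals."""
    return math.comb(n, k) * p**k * (1.0 - p)**(n - k)

def poisson_pmf(k, lam):
    """Poisson pmf of parameter lam = E[N(dt)]."""
    return math.exp(-lam) * lam**k / math.factorial(k)

lam = 2.0  # expected number of events, kept fixed while n grows
for n in (10, 100, 1000):
    p = lam / n  # p -> 0 while n * p stays constant
    err = max(abs(binom_pmf(k, n, p) - poisson_pmf(k, lam)) for k in range(15))
    print(f"n = {n:5d}  max |Bin - Poisson| = {err:.2e}")
```

The maximum pointwise discrepancy shrinks roughly like 1/n, illustrating the convergence stated in the text.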
Among the counting processes with non-stationary increments, the so-called fractional Poisson process [8], N_β(t), is particularly important because it is the thinning limit of counting processes related to renewal processes with power-law distributed inter-arrival times [9, 10]. Moreover, it can be used to approximate anomalous diffusion ruled by space-time fractional diffusion equations [9–16]. It is a straightforward generalization of the Poisson process, defined as follows. Let {τ_i}_{i=1}^∞ be a sequence of independent and identically distributed positive random variables with the meaning of inter-arrival times, and let their common cumulative distribution function (cdf) be

F_τ(t) = P(τ ≤ t) = 1 − E_β(−t^β),   (2)

where E_β(−t^β) is the one-parameter Mittag-Leffler function, E_β(z), defined in the complex plane as

E_β(z) = Σ_{n=0}^∞ z^n / Γ(nβ + 1),   (3)

evaluated in the point z = −t^β and with the prescription 0 < β ≤ 1. In equation (3), Γ(·) is Euler's Gamma function. The sequence of the epochs, {T_n}_{n=1}^∞, is given by the sums of the inter-arrival times

T_n = Σ_{i=1}^n τ_i.   (4)

The epochs represent the times in which events arrive or occur. Let f_τ(t) = dF_τ(t)/dt denote the probability density function (pdf) of the inter-arrival times; then the probability density function of the n-th epoch is simply given by the n-fold convolution of f_τ(t), written as f_τ^{*n}(t). In Ref. [10], it is shown that

f_{T_n}(t) = f_τ^{*n}(t) = β t^{nβ−1}/(n−1)! E_β^{(n)}(−t^β),   (5)

where E_β^{(n)}(−t^β) is the n-th derivative of E_β(z) evaluated in z = −t^β. The counting process N_β(t) counts the number of epochs (events) up to time t, assuming that T_0 = 0 is an epoch as well, or, in other words, that the process begins from a renewal point. This assumption will be used all over this paper. N_β(t) is given by

N_β(t) = max{n : T_n ≤ t}.   (6)

In Ref. [9], the fractional Poisson distribution is derived and it is given by

P(N_β(t) = n) = t^{βn}/n! E_β^{(n)}(−t^β).   (7)

Eq. (7) coincides with the Poisson distribution of parameter λ = 1 for β = 1. In principle, equations (3) and (7) can be directly used to derive the fractional Poisson distribution, but convergence of the series is slow.
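Mittag-Leffler distributed waiting times can be sampled by a transformation method (the inversion formula of Kozubowski and Rachev, as used for the Monte Carlo simulations of the fractional Poisson process in this letter). The sketch below (plain Python, our function names) draws such waiting times and counts events; as a sanity check it compares the empirical survival probability with the β = 1/2 closed form E_{1/2}(−√t) = e^t erfc(√t):

```python
import math
import random

def ml_rand(beta, rng):
    """One Mittag-Leffler(beta) waiting time, 0 < beta < 1, via the
    Kozubowski-Rachev inversion formula (transformation method)."""
    u = 1.0 - rng.random()  # uniform in (0, 1], avoids log(0)
    v = rng.random()
    return -math.log(u) * (math.sin(beta * math.pi) / math.tan(beta * math.pi * v)
                           - math.cos(beta * math.pi)) ** (1.0 / beta)

def fpp_count(beta, t, rng):
    """N_beta(t): number of epochs up to time t (T_0 = 0 is a renewal point)."""
    n, s = 0, ml_rand(beta, rng)
    while s <= t:
        n += 1
        s += ml_rand(beta, rng)
    return n

rng = random.Random(42)
beta = 0.5
samples = [ml_rand(beta, rng) for _ in range(200_000)]
emp_surv = sum(1 for x in samples if x > 1.0) / len(samples)
# For beta = 1/2 the survival has the closed form E_{1/2}(-sqrt(t)) = exp(t) erfc(sqrt(t))
exact = math.exp(1.0) * math.erfc(1.0)
print(emp_surv, exact)
```

Note that for β = 1 the formula degenerates and the waiting times are simply exponential, recovering the standard Poisson process.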
Fortunately, in a recent paper, Beghin and Orsingher proved that

E_β^{(n)}(−t^β) = (n!/t^{βn}) ∫_0^∞ [exp(−u) u^{n−1}/(n−1)! − exp(−u) u^n/n!] F_{S_β}(t; u) du,   (8)

where F_{S_β}(t; u) is the cdf of a stable random variable S_β(ν, γ, δ) with index β, skewness parameter ν = 1, scale parameter γ = (u cos(πβ/2))^{1/β} and location δ = 0 [17]. The integral in equation (8) can be evaluated numerically, and Fig. 1 shows P(N_β(t) = n) for three different values of β. The Monte Carlo simulation of the fractional Poisson process is based on the algorithm presented in equation (12) of Ref. [14].

Figure 1: P(N_β(t_1) = n_1) as a function of n_1 for three different values of β. The crosses are estimations obtained from 10^5 Monte Carlo samples and the lines are given to guide the eye.

As a consequence of Kolmogorov's extension theorem, in order to fully characterize the stochastic process N_β(t), one has to derive its finite-dimensional distributions. A further requirement on the process paths uniquely determines the process, namely that they are right-continuous step functions with left limits [18]. The finite-dimensional distributions are the multivariate probability distribution functions P(N_β(t_1) = n_1, N_β(t_2) = n_2, ..., N_β(t_k) = n_k) with t_1 < t_2 < ... < t_k and n_1 ≤ n_2 ≤ ... ≤ n_k. We have already given the formula for the one-point functions in Eq. (7). The general finite-dimensional distribution can be computed observing that the event {N_β(t_1) = n_1, N_β(t_2) = n_2, ..., N_β(t_k) = n_k} is equivalent to {0 < T_{n_1} ≤ t_1, T_{n_1+1} > t_1, t_1 < T_{n_2} ≤ t_2, T_{n_2+1} > t_2, ..., t_{k−1} < T_{n_k} ≤ t_k, T_{n_k+1} > t_k}. Therefore, we find

P(N_β(t_1) = n_1, N_β(t_2) = n_2, ..., N_β(t_k) = n_k)
= P(0 < T_{n_1} ≤ t_1, T_{n_1+1} > t_1, t_1 < T_{n_2} ≤ t_2, T_{n_2+1} > t_2, ..., t_{k−1} < T_{n_k} ≤ t_k, T_{n_k+1} > t_k)
= ∫_0^{t_1} du_1 f_τ^{*n_1}(u_1) ∫_{t_1−u_1}^{t_2−u_1} du_2 f_τ(u_2) ∫_0^{t_2−u_1−u_2} du_3 f_τ^{*(n_2−n_1−1)}(u_3) ∫_{t_2−Σ_{i=1}^3 u_i}^{t_3−Σ_{i=1}^3 u_i} du_4 f_τ(u_4) ... ∫_0^{t_k−Σ_{i=1}^{2k−2} u_i} du_{2k−1} f_τ^{*(n_k−n_{k−1}−1)}(u_{2k−1}) [1 − F_τ(t_k − Σ_{i=1}^{2k−1} u_i)].   (9)

For instance, the two-point function is given by

P(N_β(t_1) = n_1, N_β(t_2) = n_2) = P(0 < T_{n_1} ≤ t_1, T_{n_1+1} > t_1, t_1 < T_{n_2} ≤ t_2, T_{n_2+1} > t_2)
= ∫_0^{t_1} du_1 f_τ^{*n_1}(u_1) ∫_{t_1−u_1}^{t_2−u_1} du_2 f_τ(u_2) ∫_0^{t_2−u_1−u_2} du_3 f_τ^{*(n_2−n_1−1)}(u_3) [1 − F_τ(t_2 − u_1 − u_2 − u_3)].   (10)

Let us focus on the two-point case for the sake of illustration. As N_β(t) is a counting process, one has P(N_β(t_1) = n_1, N_β(t_2) = n_2) = P(N_β(t_1) = n_1, N_β(t_2) − N_β(t_1) = n_2 − n_1) and, as a consequence of the definition of conditional probability,

P(N_β(t_1) = n_1, N_β(t_2) − N_β(t_1) = n_2 − n_1) = P(N_β(t_2) − N_β(t_1) = n_2 − n_1 | N_β(t_1) = n_1) P(N_β(t_1) = n_1).   (11)

Figure 2: (Color online) Pictorial illustration of the random variables used in the text. The light blue dots represent the observation points t_1, t_2 and t_3. The red squares are the epochs T_0 = 0, T_1, ..., T_5. The conditional residual life-time is the time elapsed between t_i and the next epoch T_{n_i+1}. It depends on the previous values of n_i; this is the number of events between 0 and t_i, with the event at t = T_0 = 0 not considered. Here, we have n_1 = 1, n_2 = 2 and n_3 = 4. All the equations in this paper can be derived by analyzing this figure.

For β = 1, when the fractional Poisson process coincides with the standard Poisson process, the increments are iid random variables and one has

P(N_1(t_2) − N_1(t_1) = n_2 − n_1 | N_1(t_1) = n_1) = P(N_1(t_2) − N_1(t_1) = n_2 − n_1) = exp(−(t_2 − t_1)) (t_2 − t_1)^{n_2−n_1} / (n_2 − n_1)!.   (12)
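The factorization of the two-point function in the standard Poisson case (β = 1) discussed above can be verified by direct simulation. The following sketch (plain Python; names and parameters are ours) estimates the conditional increment probability for different values of N(t_1) and checks that it does not depend on them:

```python
import math
import random
from collections import Counter

rng = random.Random(7)
t1, t2, runs = 1.0, 2.0, 200_000
joint = Counter()
for _ in range(runs):
    s, n1 = rng.expovariate(1.0), 0
    while s <= t1:          # count epochs in (0, t1]
        n1 += 1
        s += rng.expovariate(1.0)
    m = 0
    while s <= t2:          # count epochs in (t1, t2]
        m += 1
        s += rng.expovariate(1.0)
    joint[(n1, m)] += 1

def cond_prob(n1, m):
    """Empirical P(N(t2) - N(t1) = m | N(t1) = n1)."""
    tot = sum(c for (a, _), c in joint.items() if a == n1)
    return joint[(n1, m)] / tot

# For beta = 1 the conditional pmf is Poisson(t2 - t1), independent of n1
exact = math.exp(-1.0)  # P(increment = 0) for t2 - t1 = 1
print(cond_prob(0, 0), cond_prob(2, 0), exact)
```

For 0 < β < 1 the same experiment (with Mittag-Leffler waiting times) shows conditional probabilities that do depend on n_1, which is the point of the remainder of the letter.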
On the contrary, for 0 < β < 1, the increment N_β(t_2) − N_β(t_1) and N_β(t_1) are not independent. Note that N_β(t_1) can itself be seen as an increment, as N_β(0) = 0 by definition. From Eq. (11), the conditional probability of having n_2 − n_1 epochs in the interval (t_1, t_2], conditional on the observation of n_1 epochs in the interval (0, t_1], can be written as a ratio of two finite-dimensional distributions:

P(N_β(t_2) − N_β(t_1) = n_2 − n_1 | N_β(t_1) = n_1) = P(N_β(t_1) = n_1, N_β(t_2) = n_2) / P(N_β(t_1) = n_1).   (13)

This probability can be evaluated by means of an alternative method, more appealing for a direct and practical understanding of the dependence structure. Let

Y_{n_1} = [T_{n_1+1} − t_1 | N_β(t_1) = n_1]   (14)

denote the residual life-time at time t_1 (that is, the time to the next epoch or renewal) conditional on N_β(t_1) = n_1. With reference to Fig. 2, one can see that the conditional probability P(N_β(t_2) − N_β(t_1) = n_2 − n_1 | N_β(t_1) = n_1) is given by the following convolution integral for n_2 − n_1 ≥ 1:

P(N_β(t_2) − N_β(t_1) = n_2 − n_1 | N_β(t_1) = n_1) = ∫_0^{t_2−t_1} P(N_β(t_2 − t_1 − y) = n_2 − n_1 − 1) f_{Y_{n_1}}(y) dy,   (15)

where f_{Y_{n_1}}(t) is the pdf of Y_{n_1}. In the case n_2 − n_1 = 0, one has

P(N_β(t_2) − N_β(t_1) = 0 | N_β(t_1) = n_1) = 1 − F_{Y_{n_1}}(t_2 − t_1),   (16)

where F_{Y_{n_1}}(y) is the cdf of Y_{n_1}. The distribution of the conditional residual life-time Y_{n_1} can be evaluated in several ways. For instance, one can notice that it can be decomposed as follows:

Y_{n_1} = τ̃_{n_1+1} + U_{n_1},   (17)
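The dependence of the residual life-time on the past can be probed by Monte Carlo. The sketch below (our naming; the waiting-time sampler is the Kozubowski-Rachev inversion formula, and the closed form E_{1/2}(−√t) = e^t erfc(√t) is standard) estimates the survival of Y_0, the residual life-time given that no event occurred up to t_1, and compares it with Eq. (24) integrated, i.e. P(Y_0 > y) = [1 − F_τ(t_1 + y)] / [1 − F_τ(t_1)]:

```python
import math
import random

def ml_rand(beta, rng):
    """Mittag-Leffler(beta) waiting time (Kozubowski-Rachev inversion)."""
    u = 1.0 - rng.random()  # uniform in (0, 1], avoids log(0)
    v = rng.random()
    return -math.log(u) * (math.sin(beta * math.pi) / math.tan(beta * math.pi * v)
                           - math.cos(beta * math.pi)) ** (1.0 / beta)

def ml_survival_half(t):
    """P(tau > t) = E_{1/2}(-sqrt(t)) = exp(t) erfc(sqrt(t)) for beta = 1/2."""
    return math.exp(t) * math.erfc(math.sqrt(t))

beta, t1, y = 0.5, 1.0, 1.0
rng = random.Random(123)
hits = tail = 0
for _ in range(200_000):
    s = ml_rand(beta, rng)
    if s > t1:                  # no epoch in (0, t1]: N_beta(t1) = 0
        hits += 1
        if s - t1 > y:          # residual life-time Y_0 exceeds y
            tail += 1
emp = tail / hits
exact = ml_survival_half(t1 + y) / ml_survival_half(t1)
print(emp, exact)
```

For β = 1 the same ratio collapses to e^{−y}, independent of t_1; for β < 1 it does not, which is the memory effect the text describes.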
where U_{n_1} is defined as

U_{n_1} = [T_{n_1} | N_β(t_1) = n_1],   (18)

and is the position of the last epoch before t_1 conditional on N_β(t_1) = n_1, and

τ̃_{n_1+1} = [τ_{n_1+1} − t_1 | T_{n_1+1} > t_1]   (19)

is the difference between τ_{n_1+1} and t_1 conditional on T_{n_1+1} > t_1. The pdf of U_{n_1} is given by the following chain of equalities:

f_{U_{n_1}}(t) dt = P(T_{n_1} ∈ dt | N_β(t_1) = n_1)
= P(T_{n_1} ∈ dt | T_{n_1} ≤ t_1, T_{n_1} + τ_{n_1+1} > t_1)
= P(T_{n_1} ∈ dt | T_{n_1} ≤ t_1, τ_{n_1+1} > t_1 − T_{n_1})
= P(T_{n_1} ∈ dt) P(τ_{n_1+1} > t_1 − t) / P(T_{n_1} ≤ t_1, τ_{n_1+1} > t_1 − T_{n_1})
= f_τ^{*n_1}(t) [1 − F_τ(t_1 − t)] dt / ∫_0^{t_1} du f_τ^{*n_1}(u) [1 − F_τ(t_1 − u)],   (20)

where we used the independence between T_{n_1} and τ_{n_1+1} and f_{T_{n_1}}(x) = f_τ^{*n_1}(x). The pdf of τ̃_{n_1+1} is

f_{τ̃_{n_1+1}}(t | U_{n_1}) dt = P(τ_{n_1+1} − t_1 ∈ dt | T_{n_1+1} > t_1) = P(τ_{n_1+1} ∈ dt + t_1) / P(τ_{n_1+1} > t_1 − U_{n_1}) = f_τ(t + t_1) dt / [1 − F_τ(t_1 − U_{n_1})].   (21)

From Eq. (17), one can write

f_{Y_{n_1}}(t) = ∫_0^{t_1} f_{τ̃_{n_1+1}}(t − u | u) f_{U_{n_1}}(u) du,   (22)

and this equation leads to

f_{Y_{n_1}}(t) = ∫_0^{t_1} du f_τ^{*n_1}(u) f_τ(t + t_1 − u) / ∫_0^{t_1} du f_τ^{*n_1}(u) [1 − F_τ(t_1 − u)],   (23)

that, together with Eq. (7), gives us the probability of the conditional increments in Eq. (15). Notice that, for n_1 = 0, one has f_τ^{*0}(u) = δ(u) and Eq. (23) reduces to the familiar equation for the residual life-time pdf in the absence of previous renewals:

f_{Y_0}(t) = f_τ(t + t_1) / [1 − F_τ(t_1)].   (24)

This method can be applied to the general multidimensional case. As in Eq. (11), we can write

P(N_β(t_1) = n_1, ..., N_β(t_k) = n_k, N_β(t_{k+1}) = n_{k+1})
= P(N_β(t_{k+1}) − N_β(t_k) = n_{k+1} − n_k | N_β(t_1) = n_1, ..., N_β(t_k) = n_k) P(N_β(t_1) = n_1, ..., N_β(t_k) = n_k),   (25)

and the predictive probabilities can be evaluated as

P(N_β(t_{k+1}) − N_β(t_k) = n_{k+1} − n_k | N_β(t_1) = n_1, ..., N_β(t_k) = n_k)
= ∫_0^{t_{k+1}−t_k} P(N_β(t_{k+1} − t_k − y) = n_{k+1} − n_k − 1) f_{Y_{n_1,...,n_k}}(y) dy,   (26)

where we defined

Y_{n_1,...,n_k} = [T_{n_k+1} − t_k | N_β(t_1) = n_1, ..., N_β(t_k) = n_k].   (27)

Again, we can use a decomposition of Y_{n_1,...,n_k}:

Y_{n_1,...,n_k} = τ̃_{n_k+1} + U_{n_k},   (28)

where

U_{n_k} = [T_{n_k} | N_β(t_1) = n_1, ..., N_β(t_k) = n_k],   (29)

and

τ̃_{n_k+1} = [τ_{n_k+1} − t_k | T_{n_k+1} > t_k].   (30)
The difference with the two-point case is that U_{n_1} = [T_{n_1} | N_β(t_1) = n_1] = [Σ_{i=1}^{n_1} τ_i | N_β(t_1) = n_1] must be replaced by

U_{n_k} = [t_{k−1} + Y_{n_1,...,n_{k−1}} + Σ_{i=n_{k−1}+2}^{n_k} τ_i | N_β(t_k) = n_k].   (31)

The time between t_{k−1} and the next renewal epoch is Y_{n_1,...,n_{k−1}} and it is independent from Σ_{i=n_{k−1}+2}^{n_k} τ_i. Therefore, the convolution

q(n_1, ..., n_k; t) = f_{Y_{n_1,...,n_{k−1}}} * f_τ^{*(n_k−n_{k−1}−1)}(t)   (32)

replaces f_τ^{*n_1}(t) in Eq. (20). This leads to

f_{U_{n_k}}(t) = q(n_1, ..., n_k; t − t_{k−1}) [1 − F_τ(t_k − t)] / ∫_{t_{k−1}}^{t_k} q(n_1, ..., n_k; u − t_{k−1}) [1 − F_τ(t_k − u)] du.   (33)

On the other hand, f_{τ̃_{n_k+1}}(t) has the same functional form as f_{τ̃_{n_1+1}}(t) given in Eq. (21), with U_{n_k} replacing U_{n_1}. Therefore, Y_{n_1,...,n_k} has the following pdf:

f_{Y_{n_1,...,n_k}}(t) = ∫_{t_{k−1}}^{t_k} du q(n_1, ..., n_k; u − t_{k−1}) f_τ(t + t_k − u) / ∫_{t_{k−1}}^{t_k} du q(n_1, ..., n_k; u − t_{k−1}) [1 − F_τ(t_k − u)].   (34)

In practice, the random variable Y_{n_1,...,n_k} carries the memory of the observations made at times t_1, ..., t_k; the knowledge of f_{Y_{n_1,...,n_{k−1}}} allows the computation of f_{Y_{n_1,...,n_k}}, and, via Eqs. (25) and (26), the (k+1)-dimensional distribution can be derived as well.
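The structure of the ratio in Eq. (23) can be checked numerically in the one case where the answer is known in closed form: for β = 1, f_τ(t) = e^{−t}, the conditional residual life-time pdf must collapse to f_{Y_{n_1}}(t) = e^{−t} for every n_1, by memorylessness of the exponential distribution. A quadrature sketch (trapezoidal rule; function names are ours):

```python
import math

def f_tau(t):           # exponential inter-arrival pdf (beta = 1 case)
    return math.exp(-t)

def surv_tau(t):        # 1 - F_tau(t)
    return math.exp(-t)

def f_tau_nfold(n, t):  # n-fold convolution of f_tau: Erlang(n) pdf
    return t**(n - 1) * math.exp(-t) / math.factorial(n - 1)

def trapz(g, a, b, m=4000):
    """Trapezoidal rule for the integrals in Eq. (23)."""
    h = (b - a) / m
    return h * (0.5 * g(a) + sum(g(a + i * h) for i in range(1, m)) + 0.5 * g(b))

def f_Y(t, t1, n1):
    """Eq. (23): conditional residual life-time pdf at t, given N(t1) = n1."""
    num = trapz(lambda u: f_tau_nfold(n1, u) * f_tau(t + t1 - u), 0.0, t1)
    den = trapz(lambda u: f_tau_nfold(n1, u) * surv_tau(t1 - u), 0.0, t1)
    return num / den

t1 = 2.0
for n1 in (1, 3, 5):
    print(n1, f_Y(0.5, t1, n1), math.exp(-0.5))
```

For a Mittag-Leffler f_τ the same quadrature applies once f_τ and f_τ^{*n} are evaluated numerically (e.g. via Eq. (5)); only the closed-form benchmark is lost.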
Figure 3: (Color online) Pdf of the random variable U_{n_1} as given in Eq. (20) (solid black lines) compared to Monte Carlo simulations (colored step lines) for three values of β and two different values of t_1. 10^7 different paths were simulated for each value of β and the bin width is 0.5. Time is in arbitrary units.
Figure 4: (Color online) Pdf of the random variable Y_{n_1} as given in Eqs. (23) and (24) (solid black lines) compared to Monte Carlo simulations (colored step lines) for three values of β and two different values of t_1. 10^7 different paths were simulated for each value of β and the bin width is 0.1. Time is in arbitrary units.
Figs. 3 and 4 compare the theoretical results of Eqs. (20), (23) and (24) with those of a Monte Carlo simulation based on the algorithm presented in equation (12) of Ref. [14].

The Japanese Society for the Promotion of Science (grant N. PE943) supported MP during his stay at the International Christian University in Tokyo, Japan.

References

[1] Bortkiewicz L. J., Das Gesetz der kleinen Zahlen (B. G. Teubner) 1898.
[2] Margolin G. and Barkai E., Physical Review Letters, 94 (2005) 080601.
[3] Margolin G., Protasenko V., Kuno M. and Barkai E., Journal of Physical Chemistry B, 110 (2006) 19053.
[4] Barabási A.-L., Nature, 435 (2005) 207.
[5] Barabási A.-L., Bursts: The Hidden Pattern Behind Everything We Do (Dutton) 2010.
[6] Scalas E., Gorenflo R., Luckock H., Mainardi F., Mantelli M. and Raberto M., Quantitative Finance, 4 (2004) 695.
[7] Scalas E., Kaizoji T., Kirchler M., Huber J. and Tedeschi A., Physica A, 366 (2006) 463.
[8] Laskin N., Communications in Nonlinear Science and Numerical Simulation, 8 (2003) 201.
[9] Scalas E., Gorenflo R. and Mainardi F., Physical Review E, 69 (2004) 011107.
[10] Mainardi F., Gorenflo R. and Scalas E., Vietnam Journal of Mathematics, 32 (2004) 53.
[11] Metzler R. and Klafter J., Journal of Physics A: Mathematical and General, 37 (2004) R161.
[12] Heinsalu E., Patriarca M., Goychuk I., Schmid G. and Hänggi P., Physical Review E, 73 (2006) 046133.
[13] Magdziarz M., Weron A. and Weron K., Physical Review E, 75 (2007) 016708.
[14] Fulger D., Scalas E. and Germano G., Physical Review E, 77 (2008) 021122.
[15] Germano G., Politi M., Scalas E. and Schilling R. L., Physical Review E, 79 (2009) 066102.
[16] Meerschaert M. M., Nane E. and Vellaisamy P., http://arxiv.org/abs/1007.5051, (2010).
[17] Beghin L. and Orsingher E., Electronic Journal of Probability, 14 (2009) 1790.
[18] Billingsley P., Probability and Measure, 2nd Edition (John Wiley & Sons) 1986.