STOCHASTIC COAGULATION-FRAGMENTATION PROCESSES WITH A FINITE NUMBER OF PARTICLES AND APPLICATIONS


Submitted to the Annals of Applied Probability

STOCHASTIC COAGULATION-FRAGMENTATION PROCESSES WITH A FINITE NUMBER OF PARTICLES AND APPLICATIONS

By Nathanael Hoze and David Holcman

Institute of Integrative Biology, ETH Zurich, Zurich, Switzerland; Newton Institute and Department of Applied Mathematics and Theoretical Physics (DAMTP), University of Cambridge, CB3 0DS, United Kingdom; and Ecole Normale Supérieure, 46 rue d'Ulm, Paris, France

Coagulation-fragmentation processes describe the stochastic association and dissociation of particles in clusters. Cluster dynamics with cluster-cluster interactions for a finite number of particles has recently attracted attention, especially in stochastic analysis and in the statistical physics of cellular biology, where novel experimental data are now available but their interpretation remains challenging. We derive here probability distribution functions for clusters that can either aggregate upon binding to form clusters of arbitrary size, or dissociate into two sub-clusters. Using combinatorial properties and a Markov chain representation, we compute steady-state distributions and moments for the number of particles per cluster in the case where the coagulation and fragmentation rates satisfy a detailed balance condition. We obtain explicit and asymptotic formulas for the cluster size and the number of clusters in terms of hypergeometric functions. To further characterize clustering, we introduce and discuss two mean times: one is the mean time two particles spend together before they separate, and the other is the mean time they spend separated before they meet again for the first time. Finally, we discuss applications of the present stochastic coagulation-fragmentation framework in cell biology.

Introduction. Clustering appears in various areas of science, such as astrophysics, where masses aggregate under gravitation; biochemistry, where molecules have to meet in order to react; colloid science, where particles aggregate in solution; or ecology,
where prey and predator have to meet to stabilize populations. A century ago, Von Smoluchowski [29] described an irreversible aggregation of many particles into clusters. Later on, a set of coagulation-fragmentation equations was proposed by Becker-Döring for the case where clusters can lose or gain only one particle at a time [6, 7, 2, 33]. Nowadays, continuous limit analysis [3, 26],

David Holcman is supported by a Marie Curie Award and a Simons fellowship. He thanks the Institute of Mathematics, Oxford University, for hospitality.
MSC 2010 subject classifications: Primary 60J20, Markov chain, Coagulation-fragmentation processes, Stochastic processes; secondary 05A17.

deterministic, stochastic, asymptotic and numerical methods are being developed to study clustering and to estimate the number of clusters [2, 9, 32, 27, 8] and their sizes. The coagulation models mentioned above have been extended to cluster growth with a finite number of particles [23, 24], known as the Marcus-Lushnikov process, and more recently to discrete coagulation-fragmentation models with an infinite number of particles [5]. Many statistical and probabilistic studies have been devoted to the Marcus-Lushnikov process [8]; however, much less is known about the statistical properties of the coagulation-fragmentation of a finite number of particles [3]. Of particular interest in stochastic biology are models of coagulation-fragmentation where the cluster size cannot exceed a given threshold [4, 34]. These models are used in genetics to describe the organization in clusters of the chromosome ends [4] or to model viral capsid assembly in cells [35, 6, 7]. This article presents several exact and asymptotic results, and it aims to attract attention toward developments of applied probability with direct applications to the interpretation of data.

We recall that the Smoluchowski equations for coagulation-fragmentation consist of an infinite system of differential equations for the number n_j(t) of clusters of size j at time t in a population of infinite size [29]. The index j can take values between 1 and \infty, and

(1)  \frac{dn_j(t)}{dt} = \frac{1}{2} \sum_{k=1}^{j-1} A(k, j-k) n_k(t) n_{j-k}(t) - n_j(t) \sum_{k=1}^{\infty} A(j, k) n_k(t) - \frac{n_j(t)}{2} \sum_{k=1}^{j-1} B(k, j-k) + \sum_{k=1}^{\infty} B(j, k) n_{k+j}(t),

where the first two terms in the right-hand side correspond to the coagulation and the last two account for the fragmentation. The coagulation kernel A(i, j) is the rate at which two clusters of size i and j coalesce to form a cluster of size i + j, while the fragmentation kernel B(i, j) is the rate at which a cluster of size i + j dissociates into a cluster of size i and a cluster of size j. When cluster sizes cannot exceed a threshold M, the system of
equations (1) is truncated and the coagulation kernel satisfies A(i, j) = 0 if i + j > M [36, 37]. This system of equations is a mean-field deterministic model of the coagulation-fragmentation process that describes a discrete number of particles aggregating and dissociating, but it does not allow a complete analysis of the cluster distribution in the small size limit [3].

The goal of this paper is to study a stochastic system of coagulation-fragmentation with a limited number of particles. We present exact solutions and expressions for the distribution and the statistical moments of the clusters and of the number of particles per cluster, using the following generic rules of coagulation-fragmentation: a cluster of size i + j can give rise to two clusters of size i and j at a rate F(i, j), and two clusters of size i and j form a new cluster of size i + j at a rate C(i, j). We focus our analysis on coagulation-fragmentation processes (CFP) that verify the detailed balance condition [2], for which there exists a function a(i) = a_i such that for all i, j \in N,

(2)  \frac{C(i, j)}{F(i, j)} = \frac{a(i + j)}{a(i) a(j)}.

The detailed balance condition ensures that, at equilibrium, each elementary coagulation or fragmentation process is equilibrated by its reciprocal process. Using exact formulas for the probabilities of these configurations when the total number of clusters is fixed, we compute the probability distribution function of the number of clusters. We also compute the probability that the number of clusters of size i is m_i, so that the distribution of sizes of the ensemble of clusters is (m_1, ..., m_N). When there are exactly N particles and the total number of clusters is fixed to K, we have the following identity for number conservation:

(3)  \sum_{i=1}^{N} m_i = K.

We shall show that when the total number of clusters is K, the conditional probability distribution function is given by

(4)  p(m_1, ..., m_N | K) = \frac{1}{C_{N,K}} \frac{a(1)^{m_1} \cdots a(N)^{m_N}}{m_1! \cdots m_N!},
where the normalization constant can be computed explicitly (see formula (26)). We will use this formula to compute the statistical moments of the cluster distributions. Using a combinatorial approach, we present several explicit formulas for the steady-state distributions of clusters. Some of the results presented here were previously announced without proofs in the short letter [4].

The paper is organized as follows. In section 1, we present the stochastic model of coagulation-fragmentation for N independent particles, which we analyze by a Markov chain. We obtain explicit formulas for the cluster configuration combinatorics, based on the partitions of the integer N, for the distribution of particles in clusters. We derive the time evolution equations

for the number of clusters. These equations represent a novel Markov chain, which allows us to determine the number of clusters at steady state. By combining the expression for the number of clusters with the distribution of cluster configurations conditioned on the number of clusters, we obtain the distribution of the particles in clusters. In section 3, we introduce two characteristic times for studying the distribution of the time during which two particles are in the same cluster. These two times characterize the dynamics of exchange of particles between clusters: the first one is the mean time that two particles spend together before they separate, and the second is the mean time that they spend separated before they meet again for the first time. The colocalization probability of two particles is defined as the fraction of time the particles spend together. Sections 4-6 provide direct applications of these results to specific CFPs: we focus specifically on the case of constant coagulation and fragmentation kernels in section 4, for which we obtain analytical expressions for the cluster distributions. We also obtain several formulas for the case where the cluster size is limited.

In this article, to determine the statistics of the cluster distribution, we alternatively study two different systems. First, we study the distribution of clusters using the integer partitions of the total number of particles, and we obtain the probability distribution of cluster configurations. In this process, we use the term coagulation when two clusters of a given size coalesce and form a new cluster, and the term fragmentation to describe the separation of a cluster into two smaller ones. The coagulation and fragmentation kernels chosen here allow us to perform another analysis: when the number of clusters is fixed, the overall rates of coagulation and fragmentation are independent of the configurations of the clusters. We will thus also study aggregation-fragmentation when the number of clusters is known. In
that case, we shall use the following terminology: formation describes the change by which a distribution of K + 1 clusters becomes K clusters, and separation describes the process by which a distribution of K clusters is transformed into a distribution of K + 1 clusters.

1. Coagulation-fragmentation with a finite number of independent particles.

1.1. Stochastic coagulation-fragmentation equations for a finite number of particles. To describe the steady-state distribution for a CFP stochastic model with a finite number N of particles, we shall use a continuous-time Markov chain in the space of cluster configurations. The N particles distributed in clusters of size (n_1, ..., n_N) can undergo coagulation or fragmentation events under the constraint that

(5)  \sum_{k=1}^{N} n_k = N.

To study the distribution of particles in clusters, we use the decomposition of the integer N as a sum of positive integers (integer partition) [4]. The partitions of the integer N are described in N dimensions by the ensemble

(6)  P_N = \{(n_1, ..., n_N) \in N^N ; \sum_{i=1}^{N} n_i = N and n_1 \geq ... \geq n_N \geq 0\}.

The probability P(n_1, ..., n_N, t) of the configuration (n_1, ..., n_N) at time t satisfies an ensemble of closed equations. Indeed, the Master equation is obtained by considering all the possible coagulation or fragmentation events occurring between time t and t + \Delta t:

- Two clusters of size n_i and n_j coagulate with probability C(n_i, n_j) \Delta t to form a cluster of size n_i + n_j.
- A cluster of size n_i dissociates into two clusters of size k and n_i - k with probability F(k, n_i - k) \Delta t.
- Nothing happens with probability 1 - \sum_{i=1}^{N} \sum_{j=i+1}^{N} C(n_i, n_j) \Delta t - \sum_{i=1}^{N} \sum_{k=1}^{n_i - 1} F(k, n_i - k) \Delta t.

Thus, the Master equations are

(7)  \frac{d}{dt} P(n_1, ..., n_N, t) = -\left( \sum_{i=1}^{N} \sum_{j=i+1}^{N} C(n_i, n_j) + \sum_{i=1}^{N} \sum_{k=1}^{n_i - 1} F(k, n_i - k) \right) P(n_1, ..., n_N, t) + \sum_{n_i > 0, n_j > 0, n_i + n_j = n_k} C(n_i, n_j) P(n_1, ..., n_i, ..., n_j, ..., n_N, t) + \sum_{i=1}^{N} \sum_{j=i+1}^{N} F(n_i, n_j) P(n_1, ..., n_i + n_j, ..., n_N, t).

Moreover, C(n_i, n_j) = 0 if either n_i or n_j is equal to 0. We now introduce an ensemble in N^N which consists of integer decompositions counting the clusters of each given size:

(8)  \tilde{P}_N = \{(m_1, ..., m_N) \in N^N ; \sum_{i=1}^{N} i m_i = N and m_1, ..., m_N \geq 0\}.
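The dynamics underlying the master equation (7) can also be sampled pathwise with a standard Gillespie scheme. The following is a minimal sketch, not the authors' implementation; the constant kernels passed at the bottom are hypothetical test choices.

```python
import random

def gillespie_cfp(N, C, F, t_max, seed=0):
    """Sample one trajectory of the coagulation-fragmentation chain
    behind the master equation (7). The state is the list of cluster
    sizes, always summing to N. C and F are the coagulation and
    fragmentation kernels (user-supplied assumptions)."""
    rng = random.Random(seed)
    clusters = [1] * N               # start from N singletons
    t = 0.0
    while t < t_max:
        events, rates = [], []
        K = len(clusters)
        for i in range(K):           # coagulation of the pair (i, j)
            for j in range(i + 1, K):
                events.append(('c', i, j))
                rates.append(C(clusters[i], clusters[j]))
        for i in range(K):           # split cluster i into (k, n_i - k)
            for k in range(1, clusters[i]):
                events.append(('f', i, k))
                rates.append(F(k, clusters[i] - k))
        total = sum(rates)
        if total == 0:
            break
        t += rng.expovariate(total)  # exponential waiting time
        kind, i, j = rng.choices(events, weights=rates)[0]
        if kind == 'c':
            clusters[i] += clusters[j]
            del clusters[j]
        else:
            clusters.append(clusters[i] - j)
            clusters[i] = j
    return clusters

final = gillespie_cfp(10, lambda i, j: 1.0, lambda i, j: 1.0, t_max=5.0)
```

Each event preserves the total particle number, so the constraint (5) holds along the whole trajectory.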

In the ensemble \tilde{P}_N, m_i is the number of occurrences of the integer i in the partition of the integer N. The two ensembles P_N and \tilde{P}_N correspond to different representations of the cluster distributions. For example, consider N = 9 particles distributed in two clusters of one particle, two clusters of two and one cluster of three. Then the distributions are (3, 2, 2, 1, 1, 0, 0, 0, 0) \in P_9 and (2, 2, 1, 0, 0, 0, 0, 0, 0) \in \tilde{P}_9.

A sufficient condition to obtain an invariant measure for the steady-state probability is the reversibility of the CFP [20, 22], where the coagulation-fragmentation kernel satisfies the detailed balance condition: there exists a function a(i) = a_i such that [2]

(9)  C(i, j) a_i a_j = F(i, j) a_{i+j}.

The functions a_i characterize the ratio of the coagulation and fragmentation rates, and any function of the form \tilde{a}_i = \alpha^i a_i with \alpha > 0 satisfies the above condition. This condition insures the reversibility of the Markov chain and guarantees the existence of an invariant measure [2], where the steady-state probability of a given configuration (m_1, ..., m_N) \in \tilde{P}_N is

(10)  P(m_1, ..., m_N) = \frac{1}{C_N} \frac{a_1^{m_1} \cdots a_N^{m_N}}{m_1! \cdots m_N!},

where C_N is a normalization constant. An explicit computation of the normalization constant is difficult [3]. Here, we propose to estimate the probability of occurrence of a certain cluster configuration (m_1, ..., m_N) by limiting the study to the configurations of a given number of particles.

At this stage, we shall explain the rationale for computing expression (10). This expression is the distribution at equilibrium of particles in clusters, where the dissociation (resp. association) rate is proportional to the number of elements minus one (resp. the number of pairs of particles). The equilibrium probability distribution associated to the Markov chain configuration (m_1, ..., m_N) is computed by analyzing the transition between the two neighboring states (m_1, ..., m_i - 1, ..., m_j - 1, ..., m_{i+j} + 1, ..., m_N) and (m_1, ..., m_N). It is obtained first from the coagulation rate \psi(i, j) of a cluster of size i with one of size j, given the distribution (m_1, ..., m_N). The rate \psi(i, j) is given by

(11)  \psi(i, j) = C(i, j) m_i m_j if i \neq j,
(12)  \psi(i, i) = \frac{1}{2} C(i, i) m_i (m_i - 1) otherwise.

The factor 1/2 in (12) avoids double counting of the symmetric pairs of clusters of equal size. The fragmentation rate \varphi(i, j) from i + j to (i, j), that accounts for the transition from the configuration (m_1, ..., m_i - 1, ..., m_j - 1, ..., m_{i+j} + 1, ..., m_N) to (m_1, ..., m_N) (Fig. 1), is

(13)  \varphi(i, j) = F(i, j)(m_{i+j} + 1).

Thus, for i \neq j, the stationary probability \pi satisfies the relation

(14)  \frac{\pi(m_1, ..., m_i - 1, ..., m_j - 1, ..., m_{i+j} + 1, ..., m_N)}{\pi(m_1, ..., m_N)} = \frac{\psi(i, j)}{\varphi(i, j)} = \frac{C(i, j)}{F(i, j)} \frac{m_i m_j}{m_{i+j} + 1} = \frac{a_{i+j}}{a_i a_j} \frac{m_i m_j}{m_{i+j} + 1}.

A direct computation shows that the probability P(m_1, ..., m_N) defined in eq. (10) satisfies equation (14).

Fig 1. Markov chain representation of the transitions. Starting with a configuration (m_1, ..., m_N), m_i is the number of clusters of size i and N is the total number of particles. The fragmentation rate of a cluster of size i into one cluster of size k and one of size i - k is F(k, i - k) m_i, while the rate of formation of a cluster of size i + j from two clusters of size i and j is equal to C(i, j) m_i m_j.

1.2. Cluster partitions with a finite number of particles. To determine the cluster distribution at equilibrium, we compute here the probability of a configuration when the number of clusters K is fixed. We also find the probability of having K clusters. The number of distributions of N particles into K clusters is the cardinal of the ensemble

(15)  P_{N,K} = \{(n_1, ..., n_K) \in N^K ; \sum_{i=1}^{K} n_i = N and n_1 \geq ... \geq n_K \geq 1\},

which is also the ensemble of the partitions of the integer N as a sum of K integers. This ensemble is in bijection with

(16)  \tilde{P}_{N,K} = \{(m_1, ..., m_N) \in N^N ; \sum_{i=1}^{N} i m_i = N and \sum_{i=1}^{N} m_i = K\},

where the transformation P_{N,K} \to \tilde{P}_{N,K} defined by

(17)  (n_1, ..., n_K) \to (m_1, ..., m_N) = \left( \sum_{i=1}^{K} 1_{\{n_i = 1\}}, ..., \sum_{i=1}^{K} 1_{\{n_i = N\}} \right)

maps the partition (n_1, ..., n_K), where N is written as a sum of K positive integers, to the number of occurrences of each integer in the image partition. The partitions of N are written as

(18)  P_N = \bigcup_{K=1}^{N} P_{N,K} and \tilde{P}_N = \bigcup_{K=1}^{N} \tilde{P}_{N,K}.

In sections 4, 5 and 6, we derive explicit expressions for the probabilities of configurations in P_{N,K}.

1.3. Statistical moments for the cluster configurations when the number of clusters is fixed. We show now that the probability of a configuration (m_1, ..., m_N), when the total number of clusters is equal to K, is given by

(19)  p(m_1, ..., m_N | K) = \frac{1}{C_{N,K}} \frac{a_1^{m_1} \cdots a_N^{m_N}}{m_1! \cdots m_N!},

where

(20)  C_{N,K} = \sum_{(m_i) \in \tilde{P}_{N,K}} \frac{a_1^{m_1} \cdots a_N^{m_N}}{m_1! \cdots m_N!}.

The normalization factor of eq. (19) is computed using the following result:

Remark 1. We consider the function

(21)  S(x) = \sum_{i=1}^{\infty} a_i x^i

and the partial sums

(22)  S_N(x) = \sum_{i=1}^{N} a_i x^i.

The K-th powers of these functions, that is S^K and S_N^K, have the same N-th order coefficient, and this coefficient determines C_{N,K}. Moreover, the function

(23)  g(x, y) = \exp(S(x) y)

is a generating function of the C_{N,K}.

Proof: The number of configurations (m_1, ..., m_N) that satisfy the conditions \sum_i m_i = K and \sum_i i m_i = N is obtained from the N-th order coefficient of the multinomial expansion

(24)  (a_1 x_1 + ... + a_N x_N)^K = \sum_{(m_1,...,m_N), \sum m_i = K} \frac{K!}{m_1! \cdots m_N!} (a_1 x_1)^{m_1} \cdots (a_N x_N)^{m_N}.

Using the N-tuple (x_1, x_2, ..., x_N) = (X, X^2, ..., X^N), we obtain

(25)  (a_1 X + ... + a_N X^N)^K = K! \sum_{(m_1,...,m_N), \sum m_i = K} \prod_{i=1}^{N} \frac{a_i^{m_i}}{m_i!} X^{\sum_i i m_i}.

We can group the terms by the exponents of X, which are equal to \sum_i i m_i. In particular, for a partition (m_1, ..., m_N) \in \tilde{P}_{N,K}, the exponent is \sum_i i m_i = N, and the N-th order coefficient in expression (25), divided by K!, is equal to

(26)  C_{N,K} = \sum_{(m_1,...,m_N) \in \tilde{P}_{N,K}} \frac{a_1^{m_1} \cdots a_N^{m_N}}{m_1! \cdots m_N!}.

More generally,

(27)  \frac{S^K(X)}{K!} = \sum_{n \geq K} \sum_{(m_1,...,m_N) \in \tilde{P}_{n,K}} \frac{a_1^{m_1} \cdots a_N^{m_N}}{m_1! \cdots m_N!} X^n
(28)  = \sum_{n \geq K} C_{n,K} X^n.
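For small N, the identification (28) of C_{N,K} with the N-th order coefficient of S_N(X)^K / K! can be checked numerically against the partition sum (26). The following is a minimal sketch; the test kernel a_i = 1/i is an arbitrary assumption.

```python
from math import factorial

def parts(n, k, mx=None):
    """Yield partitions of n into exactly k parts, non-increasing."""
    mx = n if mx is None else mx
    if k == 1:
        if 1 <= n <= mx:
            yield (n,)
        return
    for f in range(min(n - k + 1, mx), 0, -1):
        for rest in parts(n - f, k - 1, f):
            yield (f,) + rest

def C_enum(N, K, a):
    """C_{N,K} from definition (26): sum over partitions in P~_{N,K}."""
    total = 0.0
    for p in parts(N, K):
        w = 1.0
        for i in set(p):
            w *= a(i) ** p.count(i) / factorial(p.count(i))
        total += w
    return total

def C_poly(N, K, a):
    """C_{N,K} as the N-th coefficient of S_N(X)^K / K!, eq. (28)."""
    S = [0.0] + [a(i) for i in range(1, N + 1)]   # S[i] = a_i
    power = [1.0]                                  # the polynomial "1"
    for _ in range(K):                             # multiply K times by S_N
        new = [0.0] * (len(power) + N)
        for d1, c1 in enumerate(power):
            for d2 in range(1, N + 1):
                new[d1 + d2] += c1 * S[d2]
        power = new
    return power[N] / factorial(K)

a = lambda i: 1.0 / i
check = [(C_enum(n, k, a), C_poly(n, k, a))
         for n in range(1, 8) for k in range(1, n + 1)]
```

The two routes agree to floating-point precision for every pair (n, k) with n <= 7.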

It follows that the function defined by

(29)  g(X, Y) = \exp(S(X) Y)
(30)  = \sum_{K=0}^{\infty} \frac{S^K(X) Y^K}{K!}
(31)  = \sum_{K=0}^{\infty} \sum_{n \geq K} C_{n,K} X^n Y^K

is a generating function of the C_{n,K}.

Remark 2. The coefficients defined by relation (20) satisfy the induction formula

(32)  (N + 1) C_{N+1,K} = \sum_{k=0}^{N-K+1} (k + 1) a_{k+1} C_{N-k,K-1},

with

(33)  C_{N,N} = \frac{a_1^N}{N!}, \quad C_{N,1} = a_N.

Proof: We start with the formulas (33). The coefficient C_{N,N} is obtained from the unique partition in \tilde{P}_{N,N}, which is given by m_1 = N and m_i = 0 if i > 1; therefore C_{N,N} = a_1^N / N!. The coefficient C_{N,1} is obtained from the partition m_N = 1 and m_i = 0 if i < N, and thus C_{N,1} = a_N.

We now prove relation (32). Differentiating the function \sigma(x) = S^K(x)/K!, we get

(34)  \sigma'(x) = S'(x) \frac{S^{K-1}(x)}{(K-1)!}.

We evaluate the left-hand side of (34) using eq. (28) and obtain

(35)  \sigma'(x) = \sum_{n \geq K} n C_{n,K} x^{n-1} = \sum_{n \geq K-1} (n + 1) C_{n+1,K} x^n = x^{K-1} \sum_{n \geq 0} (n + K) C_{n+K,K} x^n.

We evaluate the right-hand side of (34) using the definition (21) of S and obtain

(36)  \sigma'(x) = \left( \sum_{i \geq 0} (i + 1) a_{i+1} x^i \right) \left( \sum_{i \geq K-1} C_{i,K-1} x^i \right) = x^{K-1} \sum_{n \geq 0} \left( \sum_{k=0}^{n} (k + 1) a_{k+1} C_{n-k+K-1,K-1} \right) x^n.

Thus, by equating the N-th order coefficients of \sigma'(x), we have

(37)  (N + 1) C_{N+1,K} = \sum_{k=0}^{N-K+1} (k + 1) a_{k+1} C_{N-k,K-1}.

Next, we estimate various moments when the number of clusters is fixed. We summarize the main result in the

Theorem 1. When the number of clusters is equal to K for a total of N particles, the mean number of clusters of size i is

(38)  \langle M_i \rangle_{N,K} = a_i \frac{C_{N-i,K-1}}{C_{N,K}},

where the a_i and C_{N,K} are defined in (9) and (20), respectively.

Proof: The mean number of clusters of size i when the total number of clusters is K is given by the sum

(39)  \langle M_i \rangle_{N,K} = \sum_{\tilde{P}_{N,K}} m_i \, p(m_1, ..., m_N | K)
(40)  = \frac{1}{C_{N,K}} \sum_{\tilde{P}_{N,K}, m_i > 0} \frac{a_1^{m_1} \cdots a_i^{m_i} \cdots a_N^{m_N}}{m_1! \cdots (m_i - 1)! \cdots m_N!},

where the subscript \tilde{P}_{N,K}, m_i > 0 in the sum characterizes the partitions in \tilde{P}_{N,K} containing at least one occurrence of the integer i. By considering the partitions of \tilde{P}_{N,K} where i appears at least once and removing one occurrence of i, we obtain exactly the partitions of \tilde{P}_{N-i,K-1}, and the corresponding partitions in both sets have the same number of repetitions of every other integer, i.e.

(41)  for j \neq i, m_j in (m_1, ..., m_{N-i}) \in \tilde{P}_{N-i,K-1}

is equal to

(42)  m_j in (m_1, ..., m_N) \in \tilde{P}_{N,K}, and m_i in (m_1, ..., m_{N-i}) \in \tilde{P}_{N-i,K-1} is equal to m_i + 1 in (m_1, ..., m_N) \in \tilde{P}_{N,K}.

There are no clusters larger than N - i in these partitions (m_j = 0 for j > N - i when (m_1, ..., m_N) \in \tilde{P}_{N,K} with m_i > 0). Thus

(43)  \langle M_i \rangle_{N,K} = \frac{a_i}{C_{N,K}} \sum_{\tilde{P}_{N,K}, m_i > 0} \frac{a_1^{m_1} \cdots a_i^{m_i - 1} \cdots a_N^{m_N}}{m_1! \cdots (m_i - 1)! \cdots m_N!} = \frac{a_i}{C_{N,K}} \sum_{\tilde{P}_{N-i,K-1}} \frac{a_1^{m_1} \cdots a_i^{m_i} \cdots a_N^{m_N}}{m_1! \cdots m_i! \cdots m_N!} = a_i \frac{C_{N-i,K-1}}{C_{N,K}}.

Remark 3. \langle M_i \rangle_{N,K} = 0 if i > N - K + 1.

Proof: When the N particles are distributed in K clusters, the largest cluster contains at most N - K + 1 particles. The corresponding partition in \tilde{P}_{N,K} is given by m_1 = K - 1, m_{N-K+1} = 1, and m_j = 0 otherwise.

In the following, we determine the second moment of the number of clusters of size i,

(44)  \langle M_i^2 \rangle_{N,K} = \frac{1}{C_{N,K}} \sum_{\tilde{P}_{N,K}} m_i^2 \frac{a_1^{m_1} \cdots a_N^{m_N}}{m_1! \cdots m_N!},

and the covariance term for the numbers of clusters of size i and j,

(45)  \langle M_{i,j}^2 \rangle_{N,K} = \frac{1}{C_{N,K}} \sum_{\tilde{P}_{N,K}} m_i m_j \frac{a_1^{m_1} \cdots a_N^{m_N}}{m_1! \cdots m_N!}.

Theorem 2. The second moment of the number of clusters of size i is

(46)  \langle M_i^2 \rangle_{N,K} = \frac{a_i^2 C_{N-2i,K-2} + a_i C_{N-i,K-1}}{C_{N,K}},

and the cross moment is

(47)  \langle M_{i,j}^2 \rangle_{N,K} = a_i a_j \frac{C_{N-i-j,K-2}}{C_{N,K}}.
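Theorems 1 and 2 can be verified by brute-force enumeration for small systems. A sketch follows; the test kernel a_i = 1/i and the values of N, K, i are arbitrary assumptions.

```python
from math import factorial

def parts(n, k, mx=None):
    """Yield partitions of n into exactly k parts, non-increasing."""
    mx = n if mx is None else mx
    if k == 1:
        if 1 <= n <= mx:
            yield (n,)
        return
    for f in range(min(n - k + 1, mx), 0, -1):
        for rest in parts(n - f, k - 1, f):
            yield (f,) + rest

def weight(p, a):
    """Unnormalized weight of a partition, as in (19)-(20)."""
    w = 1.0
    for i in set(p):
        w *= a(i) ** p.count(i) / factorial(p.count(i))
    return w

def C(N, K, a):
    """Normalization C_{N,K} of (20); zero outside the valid range."""
    if K < 1 or N < K:
        return 0.0
    return sum(weight(p, a) for p in parts(N, K))

a = lambda i: 1.0 / i
N, K, i = 9, 4, 2
Z = C(N, K, a)
# first and second moments of m_i by direct enumeration of (19)
m1 = sum(p.count(i) * weight(p, a) for p in parts(N, K)) / Z
m2 = sum(p.count(i) ** 2 * weight(p, a) for p in parts(N, K)) / Z
# Theorem 1, eq. (38), and Theorem 2, eq. (46)
t1 = a(i) * C(N - i, K - 1, a) / Z
t2 = (a(i) ** 2 * C(N - 2 * i, K - 2, a) + a(i) * C(N - i, K - 1, a)) / Z
```

The enumerated moments match the closed forms to floating-point precision.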

Proof: The second moment of the number of clusters of size i is given by

(48)  \langle M_i^2 \rangle_{N,K} = \frac{1}{C_{N,K}} \sum_{\tilde{P}_{N,K}} m_i^2 \frac{a_1^{m_1} \cdots a_N^{m_N}}{m_1! \cdots m_N!} = \frac{1}{C_{N,K}} \sum_{\tilde{P}_{N,K}} [m_i(m_i - 1) + m_i] \frac{a_1^{m_1} \cdots a_N^{m_N}}{m_1! \cdots m_N!} = \frac{a_i^2}{C_{N,K}} \sum_{\tilde{P}_{N,K}, m_i > 1} \frac{a_1^{m_1} \cdots a_i^{m_i - 2} \cdots a_N^{m_N}}{m_1! \cdots (m_i - 2)! \cdots m_N!} + \langle M_i \rangle_{N,K}.

Using an argument similar to the proof of Theorem 1, we obtain

(49)  \langle M_i^2 \rangle_{N,K} = \frac{a_i^2}{C_{N,K}} \sum_{\tilde{P}_{N-2i,K-2}} \frac{a_1^{m_1} \cdots a_i^{m_i} \cdots a_N^{m_N}}{m_1! \cdots m_i! \cdots m_N!} + \langle M_i \rangle_{N,K} = \frac{a_i^2 C_{N-2i,K-2} + a_i C_{N-i,K-1}}{C_{N,K}}.

The covariance of the numbers of clusters of size i and j is obtained from the term

(50)  \langle M_{i,j}^2 \rangle_{N,K} = \frac{1}{C_{N,K}} \sum_{\tilde{P}_{N,K}} m_i m_j \frac{a_1^{m_1} \cdots a_N^{m_N}}{m_1! \cdots m_N!},

which, by the same reasoning, leads to

(51)  \langle M_{i,j}^2 \rangle_{N,K} = a_i a_j \frac{C_{N-i-j,K-2}}{C_{N,K}}.

2. Distribution of the number of clusters. In the previous section, we computed the probability distribution of a cluster configuration and determined the statistical moments for a fixed number of clusters. In this section, we study the statistics of the entire cluster configurations. We focus here on the probability distribution of the number of clusters, and we shall compute the time-dependent probability density function

(52)  P_K(t) = P\{K clusters at time t\},

which defines a birth-and-death process that we investigate using a Markov chain. The probability of having K clusters at time t + \Delta t is the sum of the probability of starting at time t with K - 1 clusters, one of which dissociates into two smaller ones, plus the probability of starting with K + 1 clusters, two of which associate, plus the probability of starting with K clusters and nothing happens (Fig. 2). The first probability is the product of P_{K-1} by the transition rate s_{K-1} \Delta t of going from the state with K - 1 clusters to K, while the second is the product of P_{K+1} by the transition rate f_{K+1} \Delta t of going from K + 1 clusters to K.

Fig 2. Markov chain representation for the number of clusters. s_K (respectively f_K) is the separation (respectively formation) rate of a cluster when there are K clusters.

Thus the master equations are given by

(53)  \dot{P}_1(t) = -s_1 P_1(t) + f_2 P_2(t),
      \dot{P}_K(t) = -(f_K + s_K) P_K(t) + f_{K+1} P_{K+1}(t) + s_{K-1} P_{K-1}(t),
      \dot{P}_N(t) = -f_N P_N(t) + s_{N-1} P_{N-1}(t).

In sections 4, 5 and 6, we will solve this system of equations explicitly at steady state for particular formation and separation kernels. We now derive a general formula for the steady-state probability

(54)  \Pi_K = \lim_{t \to \infty} P_K(t)

of having K clusters and express it in terms of the a_i. The steady-state probabilities of the number of clusters are solutions of the system

(55)  0 = -s_1 \Pi_1 + f_2 \Pi_2,
      0 = -(f_K + s_K) \Pi_K + f_{K+1} \Pi_{K+1} + s_{K-1} \Pi_{K-1},
      0 = -f_N \Pi_N + s_{N-1} \Pi_{N-1},

with the normalization condition

(56)  \sum_{K=1}^{N} \Pi_K = 1.

The probabilities \Pi_K are given by the ratio

(57)  \frac{\Pi_K}{\Pi_{K-1}} = \frac{s_{K-1}}{f_K} for K \geq 2,

where the coefficients s_K and f_K are the mean-field separation and formation rates, respectively. Whereas the cluster configurations for a fixed number of clusters depend only on the kernel a_i, the statistics of the number of clusters depend on the cluster fragmentation and coagulation rates F and C. In the following, we will focus on the coagulation kernel C(i, j) = 1 and the fragmentation kernel F(i, j) = a_i a_j / a_{i+j}, for which we state the following theorem.

Theorem 3. When C(i, j) = 1 and F(i, j) = a_i a_j / a_{i+j}, the separation rate when there are K clusters is given by

(58)  s_K = \frac{1}{C_{N,K}} \sum_{i=1}^{N} \sum_{j=1}^{i-1} a_j a_{i-j} C_{N-i,K-1},

and the formation rate when there are K clusters is

(59)  f_K = \frac{K(K - 1)}{2}.

Proof: The total dissociation rate d(n) of a cluster of size n is obtained by summing over all the possible sizes resulting from the dissociation, and is given by

(60)  d(n) = \sum_{i=1}^{n-1} F(i, n - i) = \sum_{i=1}^{n-1} \frac{a_i a_{n-i}}{a_n}.

The rate at which a given configuration (m_1, ..., m_N) \in \tilde{P}_{N,K} dissociates is \sum_{i=1}^{N} d(i) m_i. The separation rate for K clusters is thus

(61)  s_K = \sum_{\tilde{P}_{N,K}} \left( \sum_{i=1}^{N} d(i) m_i \right) p(m_1, ..., m_N | K) = \sum_{i=1}^{N} d(i) \langle M_i \rangle_{N,K}.

Using relation (43), the separation rate becomes

(62)  s_K = \frac{1}{C_{N,K}} \sum_{i=1}^{N} d(i) a_i C_{N-i,K-1}.

The separation rate of a distribution of K clusters, eq. (62), can thus be expressed as a function of the a_i:

(63)  s_K = \frac{1}{C_{N,K}} \sum_{i=1}^{N} \sum_{j=1}^{i-1} a_j a_{i-j} C_{N-i,K-1}.

For K = 1, the only cluster is of size N and the separation rate is

(64)  s_1 = d(N).

The formation rates are given by

(65)  f_K = \sum_{\tilde{P}_{N,K}} \left( \frac{1}{2} \sum_{i=1}^{N} m_i (m_i - 1) C(i, i) + \sum_{i=1}^{N} \sum_{j=i+1}^{N} m_i m_j C(i, j) \right) p(m_1, ..., m_N | K).

For a distribution of K clusters with C(i, j) = 1, this is equal to the number of cluster pairs:

(66)  f_K = \frac{K(K - 1)}{2}.

We are now in position to study the statistics of the entire cluster configurations. Indeed, using the Bayes rule, the probability of a configuration (m_1, ..., m_N) that contains K clusters is the product of the conditional probability p(m_1, ..., m_N | K) by the probability of having K clusters:

(67)  p(m_1, ..., m_N, K) = p(m_1, ..., m_N | K) \Pi_K.

The mean number of clusters of size i is thus

(68)  \langle M_i \rangle_N = \sum_{K=1}^{N} \Pi_K \langle M_i \rangle_{N,K}.

3. Invariants of cluster dynamics. We introduce and compute here several measures of the cluster configurations that appear when the system is in a global steady state. First, we compute the probability of finding two particles in the same cluster; second, we measure two time scales associated to the particle dynamics in an ensemble of clusters: 1) the mean time that

two particles spend together before they separate, and 2) the mean time that they spend separated before they meet again for the first time (Fig. 3). The probability that two given particles are together is of interest in several cell biology examples: for instance, some genes can be silenced if the telomeres that carry them form a cluster [38].

Fig 3. Cluster formation and dissociation. The time to separation T_S is the mean time two specific particles (black) spend in the same cluster. Before physical separation, the cluster can aggregate with other clusters (grey), or particles other than the two tagged ones can separate from it. The time to association T_R is the mean time for the two particles to meet again for the first time after separation.

3.1. The probability to find two particles in the same cluster. When the mean number of clusters has reached its equilibrium, particles can still be exchanged between clusters. To characterize this exchange, we compute the probability of finding two particles in the same cluster. When the distribution of the clusters is (n_1, ..., n_K), the probability P_2(n_1, ..., n_K) of finding two given particles in the same cluster is obtained by using the probability of choosing the first particle in cluster i, which is equal to the number of particles in the cluster divided by the total number of particles, n_i / N. The probability that the second particle is in the same cluster is then (n_i - 1)/(N - 1). Summing over all possibilities, we get

(69)  P_2(n_1, ..., n_K) = \sum_{i=1}^{K} \frac{n_i}{N} \frac{n_i - 1}{N - 1} = \frac{1}{N(N - 1)} \left( \sum_{i=1}^{K} n_i^2 - N \right).

This probability is similar to Simpson's diversity index [28], a measure frequently used to quantify the diversity of ecosystems. We note that

(70)  \sum_{j=1}^{K} n_j^2 = \sum_{i=1}^{N} i^2 m_i.
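Formula (69) and the identity (70) are straightforward to check numerically. A minimal sketch on a hypothetical configuration:

```python
def p2_config(config):
    """Probability (69) that two tagged particles share a cluster,
    for a configuration of cluster sizes (n_1, ..., n_K)."""
    N = sum(config)
    return (sum(n * n for n in config) - N) / (N * (N - 1))

cfg = (3, 2, 2, 1, 1)          # N = 9 particles in K = 5 clusters
prob = p2_config(cfg)
# identity (70): sum of n_j^2 equals sum over sizes i of i^2 m_i
lhs = sum(n * n for n in cfg)
rhs = sum(i * i * cfg.count(i) for i in set(cfg))
```

For this configuration, prob equals 5/36, in agreement with a direct pair count: 3 + 1 + 1 = 5 within-cluster pairs out of 36 pairs in total.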

Thus, when the distribution (n_1, ..., n_K) contains K clusters, we use (n_1, ..., n_K) \in P_{N,K} and obtain, by summing over all configurations of K clusters,

(71)  \left\langle \sum_{j=1}^{K} n_j^2 \right\rangle_K = \sum_{(n_1,...,n_K) \in P_{N,K}} \left( \sum_{j=1}^{K} n_j^2 \right) p(n_1, ..., n_K | K)
(72)  = \sum_{(m_i) \in \tilde{P}_{N,K}} \left( \sum_{j=1}^{N} j^2 m_j \right) p(m_i | K)
(73)  = \sum_{j=1}^{N} j^2 \langle M_j \rangle_{N,K},

where \langle M_j \rangle_{N,K} is the mean number of clusters of size j when there are N particles distributed in K clusters (eq. (38)). Taking into account all possible distributions of clusters, we obtain that the probability P_2 of finding two particles in the same cluster is

(74)  P_2 = \sum_{K=1}^{N} \sum_{(n_1,...,n_K) \in P_{N,K}} P_2(n_1, ..., n_K) p(n_i | K) \Pi_K,

which can be written, using expressions (69) and (73), as

(75)  P_2 = \frac{1}{N(N - 1)} \left( \sum_{K=1}^{N} \Pi_K \sum_{j=1}^{N} j^2 \langle M_j \rangle_{N,K} - N \right).

This approach can be generalized to the probability of having n > 2 particles together.

3.2. Mean time for two particles to stay together in a cluster. We define the mean time to separation (MTS) as the mean time between the arrival of two given particles in a cluster and their separation after dissociation of the cluster. We first compute the probability of a configuration (m_1, ..., m_N) conditioned on having two particles in the same cluster. We then derive the transition rates from this particular configuration to any of the states accessible by a single dissociation or association event (see Fig. 1). The accessible states are divided into two classes: first, the configurations for which the particles are no longer in the same cluster (separated state), and second, the ones where they are still together. The current state is the configuration (m_1, ..., m_N), for which the particles are in the

same cluster. Upon a coagulation event, the particles stay in the same cluster. A dissociation event can occur either for a cluster that does not contain the two tracked particles or for the cluster that contains them. In the latter case, there are two possibilities: either the particles stay together after dissociation, or they are redistributed into two different clusters (separated state). The rate of dissociation from the configuration (m_1, ..., m_N) to (m_1, ..., m_i + 1, ..., m_{k-i} + 1, ..., m_k - 1, ..., m_N) is 2F(i, k - i) m_k if i \neq k/2, and F(k/2, k/2) m_k otherwise.

Proposition 1. The probability that two particles in a cluster of size k, given a configuration (m_1, ..., m_N) \in \tilde{P}_{N,K}, separate after a dissociation event is

(76)  p_S(k; m_1, ..., m_N) = \frac{1}{k(k - 1) \sum_{i=1}^{N} d(i) m_i} \sum_{i=1}^{k-1} 2 i (k - i) F(i, k - i).

Proof: The probability that two particles in a cluster of size k separate after a dissociation event is equal to the product of the probability that the cluster containing the particles dissociates, multiplied by the probability that this dissociation results in the effective separation of the particles. The first probability is obtained by considering the total dissociation rate d(n) of a cluster of size n (see eq. (60)): the probability that the cluster containing the two particles dissociates is proportional to d(k), normalized by the total dissociation rate of the cluster configuration (m_1, ..., m_N),

(77)  \frac{d(k)}{\sum_{i=1}^{N} d(i) m_i}.

The probability P_{sep} of an effective separation is the complement of the probability that the two particles stay together. When the two resulting clusters are of size i and k - i, this probability is equal to

(78)  P_{sep}(i) = 1 - \frac{i(i - 1) + (k - i)(k - i - 1)}{k(k - 1)} = \frac{2 i (k - i)}{k(k - 1)}.

The probability that the two particles initially in a cluster of size k become separated is thus equal to

(79)  p_S(k) = \frac{d(k)}{\sum_{i=1}^{N} d(i) m_i} \sum_{i=1}^{k-1} \frac{2 i (k - i)}{k(k - 1)} \frac{F(i, k - i)}{d(k)} = \frac{1}{k(k - 1) \sum_{i=1}^{N} d(i) m_i} \sum_{i=1}^{k-1} 2 i (k - i) F(i, k - i).
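The separation probability (76) can be cross-checked against a direct pair count: for a split of k into (i, k - i), exactly 2i(k - i) of the k(k - 1) ordered pairs end up in different fragments, which is the simplification used in (78). A sketch with the kernel F(i, j) = a_i a_j / a_{i+j} for a_i = 1, a hypothetical choice:

```python
def d_rate(n, a):
    """Total dissociation rate (60) for F(i, j) = a_i a_j / a_{i+j}."""
    return sum(a(i) * a(n - i) for i in range(1, n)) / a(n)

def p_sep_split(i, k):
    """Eq. (78): probability that a (i, k - i) split separates the pair."""
    return 1.0 - (i * (i - 1) + (k - i) * (k - i - 1)) / (k * (k - 1))

def p_S(k, counts, a):
    """Eq. (76); counts maps cluster size -> number of such clusters."""
    F = lambda i, j: a(i) * a(j) / a(i + j)
    total = sum(d_rate(n, a) * mn for n, mn in counts.items())
    num = sum(2 * i * (k - i) * F(i, k - i) for i in range(1, k))
    return num / (k * (k - 1) * total)

a = lambda i: 1.0
counts = {4: 1, 3: 1, 1: 2}    # hypothetical configuration, N = 9
val = p_S(4, counts, a)
# consistency of (78) with the closed form 2i(k-i)/(k(k-1))
checks = [abs(p_sep_split(i, k) - 2 * i * (k - i) / (k * (k - 1)))
          for k in range(2, 8) for i in range(1, k)]
```

For this configuration, val equals 1/3: the tagged cluster of size 4 dissociates with probability 3/5, and the average separation probability over its three splits is 5/9.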

The transition probability from the state (m_1, ..., m_N) to the separated one equals the sum over all cluster sizes k of the probabilities p_S(k) of being separated:

(80)  P_S(m_1, ..., m_N) = \sum_{k=1}^{N} p_S(k) m_k.

To derive the MTS, we first write the transition matrix of the Markov chain representing the transitions between the configurations of separated and non-separated states, and second, we determine the mean transition times between the states. Ordering the configurations (m_1, ..., m_N) by arbitrary indices I, we define the transition matrix T of size (q(N) + 1) \times (q(N) + 1), where q(N) is the total number of different partitions of the integer N. The elements of T in the first q(N) rows and columns are the transition probabilities between states where the two particles are together, while the index q(N) + 1 represents the separated state. For I, J \in (1, ..., q(N)), the matrix entries [T]_{I,J} are given by the coagulation or dissociation rates above (see Fig. 1), while the transitions [T]_{I,q(N)+1} are equal to the rates to the separated state. Because the separated state is absorbing, we finally set [T]_{q(N)+1,q(N)+1} = 1. Additionally, the matrix is normalized into a stochastic matrix such that

(81)  \sum_{J=1}^{q(N)+1} [T]_{I,J} = 1 for all I.

We now estimate the mean time the two particles stay together. The mean time \tau(m_1, ..., m_N) from a configuration (m_1, ..., m_N) to any configuration accessible by a single coagulation or fragmentation event is the reciprocal of the sum of all transition rates:

(82)  \tau(m_1, ..., m_N) = \left( \sum_{k=1}^{N} d(k) m_k + \frac{K(K - 1)}{2} \right)^{-1},

where K = \sum_{i=1}^{N} m_i and the coagulation kernel is constant, C(i, j) = 1. We collect the transition times in a vector

(83)  \tau = (\tau(N, 0, ..., 0), \tau(N - 2, 1, 0, ..., 0), ..., \tau(0, ..., 0, 1))^T.
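The mean-time computation over the transient states amounts to solving a small linear system of the form (I - T) t = \tau. A toy sketch with a hypothetical two-state transient chain (the jump probabilities and holding times below are arbitrary illustrations, not values from the model):

```python
def solve(Ain, b):
    """Gauss-Jordan elimination for A t = b (small dense systems)."""
    n = len(Ain)
    A = [row[:] + [bv] for row, bv in zip(Ain, b)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(A[r][c]))
        A[c], A[piv] = A[piv], A[c]
        for r in range(n):
            if r != c and A[r][c]:
                f = A[r][c] / A[c][c]
                A[r] = [x - f * y for x, y in zip(A[r], A[c])]
    return [A[r][n] / A[r][r] for r in range(n)]

# Jump probabilities among the two transient states; the remaining
# probability mass leads to the absorbing (separated) state.
T = [[0.0, 0.5],      # from state 1: 0.5 -> state 2, 0.5 -> absorbed
     [0.4, 0.0]]      # from state 2: 0.4 -> state 1, 0.6 -> absorbed
tau = [1.0, 2.0]      # mean holding times in the transient states
I_minus_T = [[(1.0 if i == j else 0.0) - T[i][j] for j in range(2)]
             for i in range(2)]
t = solve(I_minus_T, tau)
```

Here t[0] = 2.5 and t[1] = 3.0, which can be confirmed by hand from t_1 = tau_1 + 0.5 t_2 and t_2 = tau_2 + 0.4 t_1.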

We shall now compute the vector
$$t=\begin{pmatrix}t(N,0,\dots,0)\\ t(N-2,1,0,\dots,0)\\ \vdots\\ t(0,\dots,0,1)\end{pmatrix},\tag{84}$$
which collects the MTS for each cluster configuration $(m_1,\dots,m_N)$; it is related to the vector $\tau$ by [25]
$$t=\sum_{n=0}^{\infty}\big([T]_{q(N)+1}\big)^{n}\,\tau,\tag{85}$$
where $[T]_{q(N)+1}$ denotes the matrix $T$ from which the $(q(N)+1)$-th row and column have been removed. The vector $t$ is computed using the matrices
$$A=I_{q(N)+1}-T\tag{86}$$
and
$$\tilde A=[A]_{q(N)+1}.\tag{87}$$
Equation (85) is equivalent to
$$t=\tilde A^{-1}\tau.\tag{88}$$
The MTS averaged over the equilibrium configuration distribution is
$$T_S=\tilde p^{T}t,\tag{89}$$
where $\tilde p^{T}=[\tilde p(m_1,\dots,m_N)]_{(m_1,\dots,m_N)}$ and $\tilde p(m_1,\dots,m_N)$ is the probability distribution of the configuration $(m_1,\dots,m_N)$ when the two particles are in the same cluster. Using Bayes' theorem, the probabilities $\tilde p(m_1,\dots,m_N)$ are given by
$$\tilde p(m_1,\dots,m_N)=\frac{P_2(m_1,\dots,m_N)\,p(m_1,\dots,m_N)}{P_2},\tag{90}$$
where we recall that $p(m_1,\dots,m_N)$ is the probability of the configuration $(m_1,\dots,m_N)$, $P_2$ is the probability that two specific particles are in the same cluster (see the computation in eq. (75)), and $P_2(m_1,\dots,m_N)$ is the probability that the two particles are in the same cluster given the configuration $(m_1,\dots,m_N)$. The conditional probability $\tilde p(m_1,\dots,m_N)$ of selecting two particles in the same cluster is larger for configurations with larger clusters. We conclude this section with the following

Remark 4. The mean time $T_R$ that two particles spend separated can be determined similarly; the only change is that the absorbing state of the transition matrix is now the state where the two particles are in the same cluster. Interestingly, the probability that two particles are together is the fraction of time they spend together, given by the ratio
$$P_2=\frac{T_S}{T_S+T_R}.\tag{91}$$
In the rest of the manuscript, we apply the previous analysis to three examples of coagulation-fragmentation with a finite number of particles.

4. Example 1: the case $a_i=a$. We consider the case of a constant kernel $a_i=a$. To compute the separation and formation rates $s_K$ and $f_K$, we use $F(i,j)=a$ and $C(i,j)=1$. This fragmentation kernel corresponds to the following model: a cluster of size $n$ dissociates at a rate $\sum_{i=1}^{n-1}F(i,n-i)=(n-1)a$, and the sizes of the two resulting clusters are uniformly distributed. When there are $N$ particles and a total number of $K$ clusters, the partition into clusters is denoted $(n_1,\dots,n_K)\in P_{N,K}$. The total transition rate from a configuration of $K$ to $K+1$ clusters is the sum of all possible dissociation rates,
$$s_K=\sum_{i=1}^{K}(n_i-1)a=(N-K)a.\tag{92}$$
The formation rate is proportional to the number of pairs,
$$f_K=\sum_{i=1}^{K-1}\sum_{j=i+1}^{K}C(n_i,n_j)=\frac{K(K-1)}{2}.\tag{93}$$
Following relation (53), the steady-state probability $\Pi_K$ of the number of clusters $K$ satisfies the time-independent master equation
$$s_1\Pi_1=f_2\Pi_2,\qquad (f_K+s_K)\Pi_K=f_{K+1}\Pi_{K+1}+s_{K-1}\Pi_{K-1},\qquad f_N\Pi_N=s_{N-1}\Pi_{N-1},\tag{94}$$
which leads to the relation
$$\Pi_{K+1}=(2a)^{K}\frac{(N-1)!}{K!\,(K+1)!\,(N-K-1)!}\,\Pi_1.\tag{95}$$
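The detailed-balance relation above can be iterated directly, since $\Pi_{K+1}/\Pi_K=s_K/f_{K+1}=2a(N-K)/(K(K+1))$, followed by normalization. A minimal sketch (assuming only the constant-kernel rates above):

```python
from math import factorial

def cluster_number_distribution(N, a):
    """Steady-state probabilities (Pi_1, ..., Pi_N) of the number of clusters
    for the constant kernel, built from Pi_{K+1}/Pi_K = s_K / f_{K+1}."""
    pi = [1.0]                                    # unnormalized, Pi_1 := 1
    for K in range(1, N):
        pi.append(pi[-1] * 2 * a * (N - K) / (K * (K + 1)))
    Z = sum(pi)
    return [p / Z for p in pi]

N, a = 6, 0.7
Pi = cluster_number_distribution(N, a)           # Pi[K-1] is Pi_K
# Cross-check against the closed form (95) for one value of K.
K = 3
ratio = (2 * a)**K * factorial(N - 1) / (factorial(K) * factorial(K + 1)
                                         * factorial(N - K - 1))
print(abs(Pi[K] / Pi[0] - ratio) < 1e-12)   # -> True
```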

Using the normalization condition $\sum_{K=1}^{N}\Pi_K=1$, the probability $\Pi_1$ can be expressed through a hypergeometric series,
$$\Pi_1=\frac{1}{{}_1F_1(1-N;2;-2a)},\tag{96}$$
where
$$_1F_1(a;b;z)=\sum_{n=0}^{\infty}\frac{(a)_n}{(b)_n}\frac{z^n}{n!}\tag{97}$$
is Kummer's confluent hypergeometric function [ ] and
$$(x)_n=x(x+1)\cdots(x+n-1)\tag{98}$$
is the Pochhammer symbol. The average number of clusters at steady state is
$$\mu(a)=\sum_{K=1}^{N}K\Pi_K=\Pi_1\left(\frac{d}{dz}\Big[z\,{}_1F_1(1-N;2;z)\Big]\right)_{z=-2a}.\tag{99}$$
The derivative of Kummer's function is
$$\frac{d}{dz}\,{}_1F_1(a;b;z)=\frac{a}{b}\,{}_1F_1(a+1;b+1;z).\tag{100}$$
Finally the mean number of clusters is
$$\mu(a)=1+a(N-1)\frac{{}_1F_1(2-N;3;-2a)}{{}_1F_1(1-N;2;-2a)}=1+a(N-1)G_1,\tag{101}$$
where we denote by $G_1$ the function
$$G_1=\frac{{}_1F_1(2-N;3;-2a)}{{}_1F_1(1-N;2;-2a)}.\tag{102}$$
More generally, we introduce the functions $G_i$ defined by
$$G_i=\frac{{}_1F_1(1-N+i;2+i;-2a)}{{}_1F_1(1-N;2;-2a)}.\tag{103}$$
Following the procedure presented above, all moments of the probability distribution $\Pi_K$ can be computed: the $n$-th order moment $\mu_n$ is expressed using the operator $H$ defined by
$$H(f)(z)=\frac{d}{dz}\big[zf(z)\big],\tag{104}$$

by
$$\mu_n=\sum_{K=1}^{N}K^{n}\Pi_K=\frac{H^{(n)}\big({}_1F_1(1-N;2;z)\big)_{z=-2a}}{{}_1F_1(1-N;2;-2a)}.\tag{105}$$
Using the differentiation formula (100) for the hypergeometric function, the moments $\mu_n$ can be written as
$$\mu_n=\sum_{k=0}^{n}\alpha_k^n\,\frac{(N-1)!}{k!\,(k+1)!\,(N-k-1)!}\,(2a)^{k}\,G_k\tag{106}$$
$$=\sum_{k=0}^{n}\alpha_k^n\,\frac{\Pi_{k+1}}{\Pi_1}\,G_k,\tag{107}$$
where the coefficients $\alpha_k^n$ are the finite differences
$$\alpha_k^n=\sum_{j=0}^{k}(-1)^j\binom{k}{j}(k+1-j)^n,$$
so that in particular $\alpha_0^n=1$ and $\alpha_n^n=n!$. We can thus obtain the variance of the number of clusters,
$$V(a)=\mu_2-\mu_1^2=a(N-1)G_1(a,N)+\frac{2}{3}a^2(N-1)(N-2)G_2(a,N)-a^2(N-1)^2G_1^2(a,N).\tag{108}$$

4.1. Asymptotic formulas for the mean and variance of the cluster number. We provide here approximations of the functions $G_n$ defined in eq. (103). Kummer's function can be expressed in terms of generalized Laguerre polynomials,
$$_1F_1(-n;b;z)=\frac{n!}{(b)_n}L_n^{(b-1)}(z),\tag{109}$$
where $n$ is an integer. The asymptotic behavior of the Laguerre polynomials for large $n$, fixed $x>0$ and fixed $\alpha$, is given by [30]
$$L_n^{(\alpha)}(-x)=\frac{n^{\alpha/2-1/4}}{2\sqrt{\pi}}\,\frac{e^{-x/2}}{x^{\alpha/2+1/4}}\,\exp\Big(2\sqrt{x\big(n+\tfrac{\alpha+1}{2}\big)}\Big)\Big(1+\sum_{\nu=1}^{m-2}C_\nu(x)\,n^{-\nu/2}+O(n^{-m/2})\Big),$$

where $m$ is a positive integer and the $C_\nu(x)$ are regular and bounded functions of $x>0$, independent of $N$. Using the leading-order term of the expansion, for $m=2$ we get
$$L_n^{(\alpha)}(-x)=\frac{n^{\alpha/2-1/4}}{2\sqrt{\pi}}\,\frac{e^{-x/2}}{x^{\alpha/2+1/4}}\,\exp\Big(2\sqrt{x\big(n+\tfrac{\alpha+1}{2}\big)}\Big)\Big(1+C_1(x)\,n^{-1/2}+O(n^{-1})\Big).\tag{110}$$
We can now evaluate $G_1$ using the asymptotic expansions of ${}_1F_1(1-N;2;-2a)$ and ${}_1F_1(2-N;3;-2a)$. For large $N$, we have
$$_1F_1(1-N;2;-2a)=\frac{(N-1)^{1/4}}{2N\sqrt{\pi}\,(2a)^{3/4}}\,e^{-a}\exp\big(2\sqrt{2aN}\big)\big(1+O(N^{-1/2})\big)\tag{111}$$
and
$$_1F_1(2-N;3;-2a)=\frac{(N-2)^{3/4}}{N(N-1)\sqrt{\pi}\,(2a)^{5/4}}\,e^{-a}\exp\big(2\sqrt{2a(N-1/2)}\big)\big(1+O(N^{-1/2})\big).\tag{112}$$
Finally,
$$G_1(a,N)=\sqrt{\frac{2}{aN}}\,\exp\Big(2\sqrt{2a(N-\tfrac12)}-2\sqrt{2aN}\Big)\big(1+O(N^{-1/2})\big)\approx\sqrt{\frac{2}{aN}}\,\exp\Big(-\sqrt{\frac{a}{2N}}\Big)=:\tilde G_1(a,N).\tag{113}$$
Similarly, the present computation can be generalized to obtain an asymptotic approximation $\tilde G_n$ of the function $G_n$, valid for large $N$ and fixed $n$,
$$\tilde G_n(a,N)=\frac{(n+1)!}{(2aN)^{n/2}}\,\exp\Big(-n\sqrt{\frac{a}{2N}}\Big).\tag{114}$$
In Figure 4, we compare the exact value of $G_1$ (defined in (102)) with the approximation $\tilde G_1$ of expression (113). The approximation is more accurate for intermediate values of $a$ (see Fig. 4B). In addition, the relative error $(\tilde G_1-G_1)/G_1$ has a discontinuous derivative at $a=0$: at $a=0$ the functions $G_1$ and $\tilde G_1$ cross, so the error has a singular derivative there, visible as a cusp-type behavior in Fig. 4B. Moreover, the function $G_1$ is always decreasing with $N$, whereas its approximation $\tilde G_1$

is non-monotonic (Fig. 4A). Finally, using the asymptotic expression (113), we find the approximation of the number of clusters for large $N$,
$$\mu(a)\approx 1+\sqrt{2aN}\,\exp\Big(-\sqrt{\frac{a}{2N}}\Big).\tag{115}$$
To obtain a numerical approximation of $G_1$ for small $a$, we use the finite continued fraction decomposition of the ratio of hypergeometric functions ${}_1F_1$ [19], and obtain
$$G_1(a,N)=\frac{{}_1F_1(2-N;3;-2a)}{{}_1F_1(1-N;2;-2a)}=\cfrac{1}{1+\cfrac{(N+1)a/3}{1+\cfrac{(N-2)a/6}{1+\cfrac{(N+2)a/10}{1+\cfrac{(N-3)a/15}{\ddots+\cfrac{a/\big((N-1)(2N-3)\big)}{1+a/(N-1)}}}}}}.\tag{116}$$
From the continued fraction we obtain the Taylor expansion of $G_1$ for small $a$,
$$G_1(a,N)=1-\frac{N+1}{3}a+\left(\frac{(N+1)(N-2)}{18}+\frac{(N+1)^2}{9}\right)a^2-\left(\frac{(N+1)(N-2)(N+2)}{180}+\frac{(N+1)(N-2)^2}{108}+\frac{(N+1)^2(N-2)}{27}+\frac{(N+1)^3}{27}\right)a^3+o(a^3).\tag{117}$$
This expansion is computed from the continued fraction for $G_1$ written as
$$G_1(a)=\frac{1}{1+F_1(a)a},\qquad F_1(a)=\frac{f_1}{1+F_2(a)a},\tag{118}$$
where the general term of the sequence $F_n$ is
$$F_n(a)=\frac{f_n}{1+F_{n+1}(a)a}\tag{119}$$

Fig 4. Approximation of $G_1$. (A) Plot of $G_1$ (black) and the approximation $\tilde G_1$ (red, eq. (113)) vs $N$, for $a=1$, $10$, $100$ and $1{,}000$. (B) Comparison of the values of $G_1$ and $\tilde G_1$, measured as $(\tilde G_1-G_1)/G_1$, vs $N$.

with $F_{n+1}(a)=O(1)$. $G_1$ can also be written as
$$G_1(a)=\cfrac{1}{1+\cfrac{af_1}{1+\cfrac{af_2}{\ddots\;1+af_n}}}.\tag{120}$$
To obtain a third-order Taylor expansion of $G_1$, we simply use $F_1$, $F_2$, $F_3$ and truncate the rest of the continued fraction, which yields
$$G_1(a)=1-F_1(a)a+F_1^2(a)a^2-F_1^3(a)a^3+o(a^3).\tag{121}$$
The expansion of $F_1$ is
$$F_1(a)=\frac{f_1}{1+F_2(a)a}=f_1\big(1-F_2(a)a+F_2^2(a)a^2\big)+o(a^2)=f_1\big(1-f_2(1-f_3a)a+f_2^2a^2\big)+o(a^2)=f_1-f_1f_2a+(f_1f_2f_3+f_1f_2^2)a^2+o(a^2).\tag{122}$$
Inserting the expansion of $F_1$ into $G_1$, we finally get
$$G_1(a)=1-f_1a+(f_1^2+f_1f_2)a^2-(f_1f_2f_3+f_1f_2^2+2f_1^2f_2+f_1^3)a^3+o(a^3).\tag{123}$$
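The finite continued fraction above can be evaluated backwards and checked against the terminating Kummer series. A sketch, in which the general pattern of the partial numerators, $c_{2k-1}=2a(N+k)/((2k)(2k+1))$ and $c_{2k}=2a(N-1-k)/((2k+1)(2k+2))$, is inferred from the displayed terms and should be treated as an assumption:

```python
def kummer(aa, b, z, nmax):
    """Terminating series 1F1(aa; b; z) for aa a non-positive integer."""
    s, t = 1.0, 1.0
    for n in range(nmax):
        t *= (aa + n) * z / ((b + n) * (n + 1))
        s += t
    return s

def G1_direct(N, a):
    return kummer(2 - N, 3, -2 * a, N - 2) / kummer(1 - N, 2, -2 * a, N - 1)

def G1_cfrac(N, a):
    """Continued fraction for G1 with the inferred partial numerators."""
    cs = []
    for k in range(1, N):              # c_{2(N-1)} vanishes, so stop there
        cs.append(2 * a * (N + k) / ((2 * k) * (2 * k + 1)))
        cs.append(2 * a * (N - 1 - k) / ((2 * k + 1) * (2 * k + 2)))
    val = 0.0
    for c in reversed(cs):             # evaluate from the tail inwards
        val = c / (1 + val)
    return 1 / (1 + val)

N, a = 7, 0.4
print(abs(G1_direct(N, a) - G1_cfrac(N, a)))   # agreement to rounding error
```

For $N=3$ the fraction terminates after three levels and reproduces the exact ratio $(1+2a/3)/(1+2a+2a^2/3)$.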

4.2. Statistics for the number of clusters of a given size. We now compute several moments of the cluster sizes when there are $N$ particles. Using relation (38), we first obtain the expression for the mean number of clusters of size $n$ when there are $K$ clusters,
$$M_n^{N,K}=\langle m_n\rangle=\sum_{(m_i)\in P_{N,K}}m_n\,p(m_i\mid K)=a\,\frac{C_{N-n,K-1}}{C_{N,K}}.\tag{124}$$
Determining this relation requires computing the normalizing constant given in eq. (20). The normalizing constant $C_{N,K}$ is the $N$-th order coefficient of $S^K/K!$, where $S$ is the generating function (21)
$$S(x)=\sum_{i\geq 1}a_i x^i=\frac{ax}{1-x}.\tag{125}$$
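The coefficient extraction can be cross-checked by brute force, summing $1/(m_1!\cdots m_N!)$ over the partitions in $P_{N,K}$. A sketch with $a=1$ (assuming the $S^K/K!$ exponential-generating-function convention stated above):

```python
from math import factorial

def C_brute(N, K):
    """Sum of 1/(m_1! ... m_N!) over partitions of N into exactly K parts,
    i.e. the x^N coefficient of S(x)^K / K! with a = 1."""
    total = 0.0
    def rec(remaining, parts_left, max_part, counts):
        nonlocal total
        if parts_left == 0:
            if remaining == 0:
                prod = 1
                for c in counts.values():
                    prod *= factorial(c)
                total += 1.0 / prod
            return
        for p in range(min(remaining, max_part), 0, -1):
            if p * parts_left < remaining:
                break                  # parts are nonincreasing: cannot reach N
            counts[p] = counts.get(p, 0) + 1
            rec(remaining - p, parts_left - 1, p, counts)
            counts[p] -= 1
            if counts[p] == 0:
                del counts[p]
    rec(N, K, N, {})
    return total

def C_closed(N, K):
    """Closed form: (N-1)! / (K!(K-1)!(N-K)!) for a = 1."""
    return factorial(N - 1) / (factorial(K) * factorial(K - 1) * factorial(N - K))

print(abs(C_brute(7, 3) - C_closed(7, 3)) < 1e-12)   # -> True
```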

The coefficient is thus equal to the $(N-K)$-th order coefficient of $a^K(1-x)^{-K}$, divided by $K!$ (Remark 1). We obtain this coefficient by differentiating $\frac{1}{K!}(1-x)^{-K}$, $(N-K)$ times, and evaluating the derivative at $x=0$. We obtain
$$C_{N,K}=\frac{a^K}{K!}\,\frac{K(K+1)\cdots(N-1)}{(N-K)!}=\frac{a^K\,(N-1)!}{K!\,(K-1)!\,(N-K)!}.\tag{126}$$
Thus, by combining (124) and (126), we obtain the mean number of clusters of size $n$ when the $N$ particles are distributed into $K$ clusters,
$$M_n^{N,K}=\frac{(N-n-1)!\,K!\,(N-K)!}{(N-1)!\,(K-2)!\,(N-n-K+1)!},\tag{127}$$
which we remark is independent of $a$. The mean number of clusters of size $n$ is obtained by summing over all possible configurations with $K$ clusters,
$$\langle M_n\rangle=\sum_{K=1}^{N}M_n^{N,K}\Pi_K=\frac{(N-n-1)!}{(N-1)!}\sum_{K}\frac{K(K-1)(N-K)!}{(N-n-K+1)!}\,\Pi_K.\tag{128}$$
Using expression (95) for $\Pi_K$, we obtain
$$\langle M_n\rangle=2a\,\frac{{}_1F_1(1-N+n;2;-2a)}{{}_1F_1(1-N;2;-2a)}\qquad\text{if }n<N,\tag{129}$$
and
$$\langle M_N\rangle=\frac{1}{{}_1F_1(1-N;2;-2a)}.\tag{130}$$
The mean number of clusters of size $N$ is exactly the probability $\Pi_1(N)$ of having one cluster when there are $N$ particles (see eq. (96)); indeed, this is the only configuration in which a cluster of size $N$ can appear. For $n<N$ the mean number of clusters of size $n$ is $2a\,\Pi_1(N)/\Pi_1(N-n)$, i.e. it is given by the ratio of the probability of having one cluster when there are $N$ particles to the probability of having one cluster when there are $N-n$ particles. The mean number of clusters can also be written using the functions $G_k$ defined in (103),
$$\langle M_n\rangle=\sum_{k=0}^{n}(-1)^k(2a)^{k+1}\frac{n!}{(k+1)!\,(n-k)!\,k!}\,G_k\tag{131}$$
$$=2a\sum_{k=0}^{n}(-1)^k\frac{\Pi_{k+1}(n+1)}{\Pi_1(n+1)}\,G_k,\tag{132}$$
where $\Pi_k(n)$ denotes the probability of having $k$ clusters in a system of $n$ particles. To summarize this analysis, we plot in Fig. 5 the mean number of clusters of size $n$ for $N=5$ particles.

Fig 5. (A) Mean number of clusters of size $n$ as a function of the parameter $a$ for $N=5$ particles (equation (130)), and total number of clusters $\mu$. (B) Mean number of clusters $\langle M_n\rangle$ as a function of the cluster size $n$, for $a=0.05$, $a=0.5$ and $a=5$.

4.3. Probability to find two particles in the same cluster. We now evaluate the probability to find two particles in the same cluster for a constant kernel. When there are $N$ particles, we first prove the following

Lemma 1.
$$\sum_{j=1}^{N}j^2M_j^{N,K}=N+\frac{2N(N-K)}{K+1},\tag{133}$$
where $M_j^{N,K}$ is given by relation (127).

Proof: Using formula (127), we have
$$\sum_{j=1}^{N}j^2M_j^{N,K}=\sum_{j=1}^{N-K+1}j^2\frac{(N-j-1)!\,K!\,(N-K)!}{(N-1)!\,(K-2)!\,(N-j-K+1)!}=\frac{K(K-1)(N-K)!}{(N-1)!}\sum_{j=1}^{N-K+1}j^2\frac{(N-j-1)!}{(N-j-K+1)!}.\tag{135}$$

To determine the sum in eq. (135), we introduce
$$S_{N,K}=\sum_{j=1}^{N-K+1}j^2\frac{(N-j-1)!}{(N-j-K+1)!},\tag{136}$$
and prove now that
$$S_{N,K}=\frac{N!}{(N-K)!}\,\frac{2N-K+1}{(K-1)K(K+1)}.\tag{137}$$
We first obtain a recurrence relation between $S_{N+1,K}$ and $S_{N,K}$. Shifting the summation index,
$$S_{N+1,K}=\sum_{j=1}^{N-K+2}j^2\frac{(N-j)!}{(N-j-K+2)!}=\frac{(N-1)!}{(N-K+1)!}+\sum_{j=1}^{N-K+1}(j+1)^2\frac{(N-j-1)!}{(N-j-K+1)!}=\frac{(N-1)!}{(N-K+1)!}+S_{N,K}+A+2B,\tag{138}$$
where
$$A=\sum_{j=1}^{N-K+1}\frac{(N-j-1)!}{(N-j-K+1)!}\tag{139}$$
and
$$B=\sum_{j=1}^{N-K+1}j\,\frac{(N-j-1)!}{(N-j-K+1)!}.\tag{140}$$
We first use the change of variable $j\to N-j-1$ for $A$ and find
$$A=\sum_{j=K-2}^{N-2}\frac{j!}{(j-K+2)!}=(K-2)!\sum_{j=K-2}^{N-2}\binom{j}{K-2}.\tag{141}$$

Using the binomial relation
$$\sum_{j=k}^{n}\binom{j}{k}=\binom{n+1}{k+1},\tag{142}$$
we obtain
$$A=(K-2)!\binom{N-1}{K-1}=\frac{(N-1)!}{(N-K)!\,(K-1)}.\tag{143}$$
Similarly, with the same change of variable and the identity $(N-j-1)\,j!=N\,j!-(j+1)!$, we obtain for $B$
$$B=\sum_{j=K-2}^{N-2}(N-j-1)\frac{j!}{(j-K+2)!}=NA-(K-1)!\sum_{j=K-1}^{N-1}\binom{j}{K-1}=NA-(K-1)!\binom{N}{K}=\frac{N!}{(N-K)!\,(K-1)}-\frac{N!}{(N-K)!\,K}=\frac{N!}{(N-K)!\,(K-1)K}.\tag{144}$$
Finally we obtain the induction relation for $S_{N,K}$,
$$S_{N+1,K}=S_{N,K}+\frac{2N!}{(N-K)!\,(K-1)K}+\frac{(N-1)!}{(N-K)!\,(K-1)}+\frac{(N-1)!}{(N-K+1)!}=S_{N,K}+\frac{N!\,(2N-K+2)}{(N-K+1)!\,(K-1)K}.\tag{145}$$

Using that $S_{K,K}=(K-2)!$, we finally evaluate the sum:
$$S_{N,K}=(K-2)!+\frac{1}{(K-1)K}\sum_{n=K}^{N-1}\frac{n!\,(2n-K+2)}{(n-K+1)!}=(K-2)!+\frac{1}{(K-1)K}\Big(2K!\sum_{m=K+1}^{N}\binom{m}{K}-K!\sum_{n=K}^{N-1}\binom{n}{K-1}\Big)$$
$$=(K-2)!+(K-2)!\Big(2\binom{N+1}{K+1}-2-\binom{N}{K}+1\Big)=(K-2)!\Big(\frac{2(N+1)!}{(N-K)!\,(K+1)!}-\frac{N!}{(N-K)!\,K!}\Big)=\frac{N!}{(N-K)!}\,\frac{2N-K+1}{(K-1)K(K+1)},\tag{146}$$
which is formula (137).

Theorem 4. The probability to find two particles in the same cluster is
$$P_2=G_1,\tag{147}$$
where $G_1$ is defined in (102).

Proof: Using eq. (75) and Lemma 1, we can now compute the probability to find two particles in the same cluster:
$$P_2=\frac{1}{N(N-1)}\sum_{K=1}^{N}\Pi_K\sum_{j=1}^{N}j(j-1)M_j^{N,K}=\frac{1}{N(N-1)}\sum_{K=1}^{N}\Pi_K\Big(N+\frac{2N(N-K)}{K+1}-N\Big).\tag{148}$$
Thus,
$$P_2=\frac{2}{N-1}\sum_{K=1}^{N}\frac{N-K}{K+1}\,\Pi_K=-\frac{2}{N-1}+\frac{2(N+1)}{N-1}\sum_{K=1}^{N}\frac{\Pi_K}{K+1}.\tag{149}$$

Following formula (99), the sum in eq. (149) can be expressed by integrating Kummer's function,
$$\sum_{K=1}^{N}\frac{\Pi_K}{K+1}=\frac{\Pi_1}{z^2}\int_0^{z}\zeta\,{}_1F_1(1-N;2;\zeta)\,d\zeta\,\Big|_{z=-2a}.\tag{150}$$
Integrating the hypergeometric series gives
$$\int_0^{z}\zeta\,{}_1F_1(1-N;2;\zeta)\,d\zeta=\frac{z^2}{2}\,{}_2F_2(1-N,2;2,3;z),\tag{151}$$
which leads to
$$\sum_{K=1}^{N}\frac{\Pi_K}{K+1}=\frac{{}_2F_2(1-N,2;2,3;-2a)}{2\,{}_1F_1(1-N;2;-2a)}=\frac{{}_1F_1(1-N;3;-2a)}{2\,{}_1F_1(1-N;2;-2a)}.\tag{152}$$
By combining eq. (149) and (152) we obtain
$$P_2=-\frac{2}{N-1}+\frac{N+1}{N-1}\,\frac{{}_1F_1(1-N;3;-2a)}{{}_1F_1(1-N;2;-2a)}.\tag{153}$$
The three-term recurrence relation for Kummer's function [ ] gives
$$_1F_1(1-N;3;-2a)=\frac{N-1}{N+1}\,{}_1F_1(2-N;3;-2a)+\frac{2}{N+1}\,{}_1F_1(1-N;2;-2a).\tag{154}$$
Finally, using eq. (102), we obtain
$$P_2=G_1.\tag{155}$$
To finish this section, we note that the large-$N$ asymptotic of the probability that two particles are in the same cluster is
$$P_2\approx\sqrt{\frac{2}{aN}}.\tag{156}$$
Many results presented in this section can be used to study the distribution of clusters in biological systems such as telomere organization in yeast. We have provided here explicit derivations of the exact and asymptotic formulas that can be used to analyze experimental and simulation results [4].
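The identity $P_2=G_1$ of Theorem 4 can be checked numerically by evaluating the Lemma-based sum over the cluster-number distribution and comparing with the direct ratio of terminating Kummer series; a sketch:

```python
def kummer(aa, b, z, nmax):
    """Terminating Kummer series 1F1(aa; b; z), aa a non-positive integer."""
    s, t = 1.0, 1.0
    for n in range(nmax):
        t *= (aa + n) * z / ((b + n) * (n + 1))
        s += t
    return s

def P2_from_lemma(N, a):
    """P2 = 2/(N-1) * sum_K (N-K)/(K+1) Pi_K, with Pi_K from the recursion."""
    pi = [1.0]
    for K in range(1, N):
        pi.append(pi[-1] * 2 * a * (N - K) / (K * (K + 1)))
    Z = sum(pi)
    return 2.0 / (N - 1) * sum((N - K) / (K + 1) * pi[K - 1] / Z
                               for K in range(1, N + 1))

def G1(N, a):
    return kummer(2 - N, 3, -2 * a, N - 2) / kummer(1 - N, 2, -2 * a, N - 1)

N, a = 8, 0.6
print(abs(P2_from_lemma(N, a) - G1(N, a)))   # agreement to rounding error
```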

5. Example 2: the case $a_i=a$ for $i\leq M$ and $a_i=0$ for $i>M$. In this section, we consider $N$ particles that associate and dissociate at a constant rate, but cannot form clusters of more than $M$ particles. The configuration space of distributions of the $N$ particles into $K$ clusters of size at most $M$ is
$$P_{N,K,M}=\Big\{(m_i)_{1\leq i\leq M}\;;\;\sum_{i=1}^{M}i\,m_i=N,\;\sum_{i=1}^{M}m_i=K\Big\}.\tag{157}$$
First, the number of clusters is necessarily bounded below by $K\geq N/M$, since the opposite would imply a cluster of at least $M+1$ particles. The probability of a configuration $(m_1,\dots,m_M)\in P_{N,K,M}$ is equal to
$$\Pr\{(m_1,\dots,m_M)\in P_{N,K,M}\}=\frac{a^K}{C_{N,K,M}}\,\frac{1}{m_1!\cdots m_M!},\tag{158}$$
where the normalization constant $C_{N,K,M}$ is $1/K!$ times the $N$-th order coefficient of
$$(ax+ax^2+\cdots+ax^M)^K=a^K X^K\Big(\frac{1-X^M}{1-X}\Big)^K=\frac{a^K}{(1-X)^K}\sum_{n=0}^{K}\binom{K}{n}(-1)^n X^{nM+K}.\tag{159}$$
The $N$-th order coefficient of this polynomial is obtained by extracting the $(N-(nM+K))$-th order coefficient of $(1-X)^{-K}$,
$$C_{N,K,M}=\frac{a^K}{K!}\sum_{n=0}^{K}\binom{K}{n}\frac{(-1)^n}{(N-(nM+K))!}\,D^{(N-(nM+K))}\big((1-X)^{-K}\big)\Big|_{X=0},\tag{160}$$
where we write $D^{(n)}(f)$ for the $n$-th order derivative of a function $f$. Thus, setting $K_0=\lfloor\frac{N-K}{M}\rfloor$, where $\lfloor\cdot\rfloor$ is the floor function, we have
$$C_{N,K,M}=\frac{a^K}{K!}\sum_{n=0}^{K_0}(-1)^n\binom{K}{n}\frac{(N-nM-1)!}{(K-1)!\,(N-nM-K)!}.\tag{161}$$
For $M\geq N$ we find $K_0=0$ and the normalization constant
$$C_{N,K,N}=\frac{a^K\,(N-1)!}{K!\,(K-1)!\,(N-K)!},\tag{162}$$

is equal to the normalization constant obtained for the constant kernel in Section 4. The mean number of clusters of size $i\leq M$, conditioned on the number of clusters $K$, is
$$M_i^{K,M}=\sum_{(m_i)\in P_{N,K,M}}m_i\,p(m_1,\dots,m_M)=a\,\frac{C_{N-i,K-1,M}}{C_{N,K,M}}.\tag{163}$$
To find the probability of having $K$ clusters, we now have to redefine the formation rate. In Section 4, the formation rate was proportional to the number of pairs of clusters, since any pair could coagulate into a new cluster. In the present case, two clusters of sizes $i$ and $j$ can coagulate only if $i+j\leq M$. The formation rate when there are $K$ clusters is thus
$$f_K=\sum_{(m_i)\in P_{N,K,M}}p(m_1,\dots,m_M)\left(\sum_{i=1}^{\lfloor M/2\rfloor}\frac{m_i(m_i-1)}{2}+\sum_{\substack{i<j\\ i+j\leq M}}m_i m_j\right).\tag{164}$$
For $K=2$, the only two clusters have total size $N$, so
$$f_2=\mathbf 1_{\{N\leq M\}},\tag{165}$$
and for $K>2$ the formation rate can be written as a function of the coefficients $C_{N,K,M}$ as
$$f_K=\frac{a^2}{C_{N,K,M}}\left(\frac12\sum_{i=1}^{\min(\lfloor M/2\rfloor,\lfloor(N-K+2)/2\rfloor)}C_{N-2i,K-2,M}+\sum_{\substack{i<j\\ i+j\leq\min(M,\,N-K+2)}}C_{N-i-j,K-2,M}\right).\tag{166}$$
The separation rate remains unchanged, $s_K=(N-K)a$, and the probabilities at steady state are given by
$$\Pi_K=\frac{f_{K+1}}{s_K}\,\Pi_{K+1}.\tag{167}$$
A simple closed-form expression is certainly hopeless, but the limit $a\to 0$ is informative: contrary to the previous case with a constant kernel ($M=N$), where the

particles form a single cluster of size $N$, the present system possesses multiple limiting configurations. The clusters grow independently until they either reach the maximal size $M$ or are configured such that the sum of the sizes of each pair of clusters exceeds $M$. All such configurations contain exactly $\lceil N/M\rceil$ clusters, where $\lceil\cdot\rceil$ is the ceiling function. We illustrate the limit case $a\to 0$ for $N=9$, $M=4$ (Fig. 6). For $a>0$, all partitions are accessible, but as $a\to 0$ the steady state is dominated by the configurations with the largest possible clusters, $(4,4,1)$, $(4,3,2)$ and $(3,3,3)$. Applying formulas (161) and (163), we obtain the limit cluster configuration probabilities
$$p(4,4,1)=\frac{3}{10},\qquad p(4,3,2)=\frac{6}{10},\qquad p(3,3,3)=\frac{1}{10}.\tag{168}$$
These steady-state probabilities do not depend on the initial particle configuration as long as $a\to 0$. For $a=0$, there are three possible final configurations, $(4,4,1)$, $(4,3,2)$ and $(3,3,3)$: once such a configuration is attained, the clusters remain unchanged, and the probability of reaching each of them depends on the initial configuration and on the order of the clustering events. When there is no limitation on cluster formation ($M=N=9$), a single cluster containing all particles is formed (Fig. 6, left panel). For large values of $a$, most clusters are very small, and the distributions are similar for $M=4$ and $M=9$ (Fig. 6, right panel). The probability for two particles to be in the same cluster provides a good characterization of the cluster distribution for various values of the parameter $a$ (Fig. 7). When $a$ is large, most particles are contained in very small clusters and the probability $P_2$ is similar for $M=4$ and $M=9$. When $a\to 0$, particles tend to form larger clusters: a single cluster containing all particles is formed and $P_2\to 1$ when $M=9$, but the maximal value of $P_2$ is less than $1$ when the maximal cluster size is limited. We can explicitly compute $P_2$ in the limit case $a\to 0$. For example, for $M=4$, using eq. (69) and summing over all
possible configurations (168), we obtain
$$P_2=p(4,4,1)P_2(4,4,1)+p(4,3,2)P_2(4,3,2)+p(3,3,3)P_2(3,3,3).$$
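With $P_2(n_1,\dots,n_K)=\sum_i n_i(n_i-1)/(N(N-1))$, the probability that two uniformly chosen particles share a cluster in a fixed configuration, this limit value can be evaluated explicitly; a sketch, in which the weights above are recovered from the equilibrium measure (158) restricted to the three admissible configurations:

```python
from fractions import Fraction
from math import factorial

def P2_config(sizes):
    """Probability that two uniformly chosen particles share a cluster."""
    N = sum(sizes)
    return Fraction(sum(n * (n - 1) for n in sizes), N * (N - 1))

# Limit a -> 0 with N = 9, M = 4: the three admissible configurations,
# weighted by 1/(m_1! ... m_M!) as in the equilibrium measure.
configs = [(4, 4, 1), (4, 3, 2), (3, 3, 3)]

def weight(sizes):
    mult = {s: sizes.count(s) for s in set(sizes)}
    w = Fraction(1)
    for c in mult.values():
        w /= factorial(c)
    return w

Z = sum(weight(c) for c in configs)
p = {c: weight(c) / Z for c in configs}
print(p[(4, 4, 1)], p[(4, 3, 2)], p[(3, 3, 3)])   # -> 3/10 3/5 1/10

P2 = sum(p[c] * P2_config(c) for c in configs)
print(P2)                                          # -> 7/24
```

The limiting value $P_2=7/24\approx 0.29$ is indeed strictly below the value $P_2=1$ reached when $M=9$.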


More information

Week 9-10: Recurrence Relations and Generating Functions

Week 9-10: Recurrence Relations and Generating Functions Week 9-10: Recurrence Relations and Generating Functions April 3, 2017 1 Some number sequences An infinite sequence (or just a sequence for short is an ordered array a 0, a 1, a 2,..., a n,... of countably

More information

Chapter 1: Systems of linear equations and matrices. Section 1.1: Introduction to systems of linear equations

Chapter 1: Systems of linear equations and matrices. Section 1.1: Introduction to systems of linear equations Chapter 1: Systems of linear equations and matrices Section 1.1: Introduction to systems of linear equations Definition: A linear equation in n variables can be expressed in the form a 1 x 1 + a 2 x 2

More information

COMPOSITIONS WITH A FIXED NUMBER OF INVERSIONS

COMPOSITIONS WITH A FIXED NUMBER OF INVERSIONS COMPOSITIONS WITH A FIXED NUMBER OF INVERSIONS A. KNOPFMACHER, M. E. MAYS, AND S. WAGNER Abstract. A composition of the positive integer n is a representation of n as an ordered sum of positive integers

More information

An idea how to solve some of the problems. diverges the same must hold for the original series. T 1 p T 1 p + 1 p 1 = 1. dt = lim

An idea how to solve some of the problems. diverges the same must hold for the original series. T 1 p T 1 p + 1 p 1 = 1. dt = lim An idea how to solve some of the problems 5.2-2. (a) Does not converge: By multiplying across we get Hence 2k 2k 2 /2 k 2k2 k 2 /2 k 2 /2 2k 2k 2 /2 k. As the series diverges the same must hold for the

More information

1 Multiply Eq. E i by λ 0: (λe i ) (E i ) 2 Multiply Eq. E j by λ and add to Eq. E i : (E i + λe j ) (E i )

1 Multiply Eq. E i by λ 0: (λe i ) (E i ) 2 Multiply Eq. E j by λ and add to Eq. E i : (E i + λe j ) (E i ) Direct Methods for Linear Systems Chapter Direct Methods for Solving Linear Systems Per-Olof Persson persson@berkeleyedu Department of Mathematics University of California, Berkeley Math 18A Numerical

More information

Alternative Characterization of Ergodicity for Doubly Stochastic Chains

Alternative Characterization of Ergodicity for Doubly Stochastic Chains Alternative Characterization of Ergodicity for Doubly Stochastic Chains Behrouz Touri and Angelia Nedić Abstract In this paper we discuss the ergodicity of stochastic and doubly stochastic chains. We define

More information

Operations On Networks Of Discrete And Generalized Conductors

Operations On Networks Of Discrete And Generalized Conductors Operations On Networks Of Discrete And Generalized Conductors Kevin Rosema e-mail: bigyouth@hardy.u.washington.edu 8/18/92 1 Introduction The most basic unit of transaction will be the discrete conductor.

More information

Intrinsic products and factorizations of matrices

Intrinsic products and factorizations of matrices Available online at www.sciencedirect.com Linear Algebra and its Applications 428 (2008) 5 3 www.elsevier.com/locate/laa Intrinsic products and factorizations of matrices Miroslav Fiedler Academy of Sciences

More information

RUSSELL JAY HENDEL. F i+1.

RUSSELL JAY HENDEL. F i+1. RUSSELL JAY HENDEL Abstract. Kimberling defines the function κ(n) = n 2 α n nα, and presents conjectures and open problems. We present three main theorems. The theorems provide quick, effectively computable,

More information

Series Solutions. 8.1 Taylor Polynomials

Series Solutions. 8.1 Taylor Polynomials 8 Series Solutions 8.1 Taylor Polynomials Polynomial functions, as we have seen, are well behaved. They are continuous everywhere, and have continuous derivatives of all orders everywhere. It also turns

More information

A Recursive Relation for Weighted Motzkin Sequences

A Recursive Relation for Weighted Motzkin Sequences 1 3 47 6 3 11 Journal of Integer Sequences, Vol. 8 (005), Article 05.1.6 A Recursive Relation for Weighted Motzkin Sequences Wen-jin Woan Department of Mathematics Howard University Washington, D.C. 0059

More information

Lecture 9 Classification of States

Lecture 9 Classification of States Lecture 9: Classification of States of 27 Course: M32K Intro to Stochastic Processes Term: Fall 204 Instructor: Gordan Zitkovic Lecture 9 Classification of States There will be a lot of definitions and

More information

Gibbs distributions for random partitions generated by a fragmentation process

Gibbs distributions for random partitions generated by a fragmentation process Gibbs distributions for random partitions generated by a fragmentation process Nathanaël Berestycki and Jim Pitman University of British Columbia, and Department of Statistics, U.C. Berkeley July 24, 2006

More information

AN ELEMENTARY PROOF OF THE SPECTRAL RADIUS FORMULA FOR MATRICES

AN ELEMENTARY PROOF OF THE SPECTRAL RADIUS FORMULA FOR MATRICES AN ELEMENTARY PROOF OF THE SPECTRAL RADIUS FORMULA FOR MATRICES JOEL A. TROPP Abstract. We present an elementary proof that the spectral radius of a matrix A may be obtained using the formula ρ(a) lim

More information

Matrix Arithmetic. j=1

Matrix Arithmetic. j=1 An m n matrix is an array A = Matrix Arithmetic a 11 a 12 a 1n a 21 a 22 a 2n a m1 a m2 a mn of real numbers a ij An m n matrix has m rows and n columns a ij is the entry in the i-th row and j-th column

More information

A6523 Modeling, Inference, and Mining Jim Cordes, Cornell University

A6523 Modeling, Inference, and Mining Jim Cordes, Cornell University A6523 Modeling, Inference, and Mining Jim Cordes, Cornell University Lecture 19 Modeling Topics plan: Modeling (linear/non- linear least squares) Bayesian inference Bayesian approaches to spectral esbmabon;

More information

arxiv: v2 [math.pr] 4 Sep 2017

arxiv: v2 [math.pr] 4 Sep 2017 arxiv:1708.08576v2 [math.pr] 4 Sep 2017 On the Speed of an Excited Asymmetric Random Walk Mike Cinkoske, Joe Jackson, Claire Plunkett September 5, 2017 Abstract An excited random walk is a non-markovian

More information

Increments of Random Partitions

Increments of Random Partitions Increments of Random Partitions Şerban Nacu January 2, 2004 Abstract For any partition of {1, 2,...,n} we define its increments X i, 1 i n by X i =1ifi is the smallest element in the partition block that

More information

Ergodic Theorems. Samy Tindel. Purdue University. Probability Theory 2 - MA 539. Taken from Probability: Theory and examples by R.

Ergodic Theorems. Samy Tindel. Purdue University. Probability Theory 2 - MA 539. Taken from Probability: Theory and examples by R. Ergodic Theorems Samy Tindel Purdue University Probability Theory 2 - MA 539 Taken from Probability: Theory and examples by R. Durrett Samy T. Ergodic theorems Probability Theory 1 / 92 Outline 1 Definitions

More information

ELEMENTARY LINEAR ALGEBRA

ELEMENTARY LINEAR ALGEBRA ELEMENTARY LINEAR ALGEBRA K. R. MATTHEWS DEPARTMENT OF MATHEMATICS UNIVERSITY OF QUEENSLAND Corrected Version, 7th April 013 Comments to the author at keithmatt@gmail.com Chapter 1 LINEAR EQUATIONS 1.1

More information

Data analysis and stochastic modeling

Data analysis and stochastic modeling Data analysis and stochastic modeling Lecture 7 An introduction to queueing theory Guillaume Gravier guillaume.gravier@irisa.fr with a lot of help from Paul Jensen s course http://www.me.utexas.edu/ jensen/ormm/instruction/powerpoint/or_models_09/14_queuing.ppt

More information

Lecture 20 : Markov Chains

Lecture 20 : Markov Chains CSCI 3560 Probability and Computing Instructor: Bogdan Chlebus Lecture 0 : Markov Chains We consider stochastic processes. A process represents a system that evolves through incremental changes called

More information

THE STRUCTURE OF A RING OF FORMAL SERIES AMS Subject Classification : 13J05, 13J10.

THE STRUCTURE OF A RING OF FORMAL SERIES AMS Subject Classification : 13J05, 13J10. THE STRUCTURE OF A RING OF FORMAL SERIES GHIOCEL GROZA 1, AZEEM HAIDER 2 AND S. M. ALI KHAN 3 If K is a field, by means of a sequence S of elements of K is defined a K-algebra K S [[X]] of formal series

More information

Journal of Number Theory

Journal of Number Theory Journal of Number Theory 132 2012) 324 331 Contents lists available at SciVerse ScienceDirect Journal of Number Theory www.elsevier.com/locate/jnt Digit sums of binomial sums Arnold Knopfmacher a,florianluca

More information

MATHEMATICS. IMPORTANT FORMULAE AND CONCEPTS for. Final Revision CLASS XII CHAPTER WISE CONCEPTS, FORMULAS FOR QUICK REVISION.

MATHEMATICS. IMPORTANT FORMULAE AND CONCEPTS for. Final Revision CLASS XII CHAPTER WISE CONCEPTS, FORMULAS FOR QUICK REVISION. MATHEMATICS IMPORTANT FORMULAE AND CONCEPTS for Final Revision CLASS XII 2016 17 CHAPTER WISE CONCEPTS, FORMULAS FOR QUICK REVISION Prepared by M. S. KUMARSWAMY, TGT(MATHS) M. Sc. Gold Medallist (Elect.),

More information

BALANCING GAUSSIAN VECTORS. 1. Introduction

BALANCING GAUSSIAN VECTORS. 1. Introduction BALANCING GAUSSIAN VECTORS KEVIN P. COSTELLO Abstract. Let x 1,... x n be independent normally distributed vectors on R d. We determine the distribution function of the minimum norm of the 2 n vectors

More information

RESEARCH ARTICLE. An extension of the polytope of doubly stochastic matrices

RESEARCH ARTICLE. An extension of the polytope of doubly stochastic matrices Linear and Multilinear Algebra Vol. 00, No. 00, Month 200x, 1 15 RESEARCH ARTICLE An extension of the polytope of doubly stochastic matrices Richard A. Brualdi a and Geir Dahl b a Department of Mathematics,

More information

Classification of Countable State Markov Chains

Classification of Countable State Markov Chains Classification of Countable State Markov Chains Friday, March 21, 2014 2:01 PM How can we determine whether a communication class in a countable state Markov chain is: transient null recurrent positive

More information

Determinants of Partition Matrices

Determinants of Partition Matrices journal of number theory 56, 283297 (1996) article no. 0018 Determinants of Partition Matrices Georg Martin Reinhart Wellesley College Communicated by A. Hildebrand Received February 14, 1994; revised

More information

ELEMENTARY LINEAR ALGEBRA

ELEMENTARY LINEAR ALGEBRA ELEMENTARY LINEAR ALGEBRA K. R. MATTHEWS DEPARTMENT OF MATHEMATICS UNIVERSITY OF QUEENSLAND Second Online Version, December 1998 Comments to the author at krm@maths.uq.edu.au Contents 1 LINEAR EQUATIONS

More information

EXERCISES. a b = a + b l aq b = ab - (a + b) + 2. a b = a + b + 1 n0i) = oii + ii + fi. A. Examples of Rings. C. Ring of 2 x 2 Matrices

EXERCISES. a b = a + b l aq b = ab - (a + b) + 2. a b = a + b + 1 n0i) = oii + ii + fi. A. Examples of Rings. C. Ring of 2 x 2 Matrices / rings definitions and elementary properties 171 EXERCISES A. Examples of Rings In each of the following, a set A with operations of addition and multiplication is given. Prove that A satisfies all the

More information

CHAPTER 3 Further properties of splines and B-splines

CHAPTER 3 Further properties of splines and B-splines CHAPTER 3 Further properties of splines and B-splines In Chapter 2 we established some of the most elementary properties of B-splines. In this chapter our focus is on the question What kind of functions

More information

TOPOLOGICAL COMPLEXITY OF 2-TORSION LENS SPACES AND ku-(co)homology

TOPOLOGICAL COMPLEXITY OF 2-TORSION LENS SPACES AND ku-(co)homology TOPOLOGICAL COMPLEXITY OF 2-TORSION LENS SPACES AND ku-(co)homology DONALD M. DAVIS Abstract. We use ku-cohomology to determine lower bounds for the topological complexity of mod-2 e lens spaces. In the

More information

1 Determinants. 1.1 Determinant

1 Determinants. 1.1 Determinant 1 Determinants [SB], Chapter 9, p.188-196. [SB], Chapter 26, p.719-739. Bellow w ll study the central question: which additional conditions must satisfy a quadratic matrix A to be invertible, that is to

More information

SMSTC (2007/08) Probability.

SMSTC (2007/08) Probability. SMSTC (27/8) Probability www.smstc.ac.uk Contents 12 Markov chains in continuous time 12 1 12.1 Markov property and the Kolmogorov equations.................... 12 2 12.1.1 Finite state space.................................

More information

Markov Chains Handout for Stat 110

Markov Chains Handout for Stat 110 Markov Chains Handout for Stat 0 Prof. Joe Blitzstein (Harvard Statistics Department) Introduction Markov chains were first introduced in 906 by Andrey Markov, with the goal of showing that the Law of

More information

ACI-matrices all of whose completions have the same rank

ACI-matrices all of whose completions have the same rank ACI-matrices all of whose completions have the same rank Zejun Huang, Xingzhi Zhan Department of Mathematics East China Normal University Shanghai 200241, China Abstract We characterize the ACI-matrices

More information

Primes, partitions and permutations. Paul-Olivier Dehaye ETH Zürich, October 31 st

Primes, partitions and permutations. Paul-Olivier Dehaye ETH Zürich, October 31 st Primes, Paul-Olivier Dehaye pdehaye@math.ethz.ch ETH Zürich, October 31 st Outline Review of Bump & Gamburd s method A theorem of Moments of derivatives of characteristic polynomials Hypergeometric functions

More information

REVERSIBLE MARKOV STRUCTURES ON DIVISIBLE SET PAR- TITIONS

REVERSIBLE MARKOV STRUCTURES ON DIVISIBLE SET PAR- TITIONS Applied Probability Trust (29 September 2014) REVERSIBLE MARKOV STRUCTURES ON DIVISIBLE SET PAR- TITIONS HARRY CRANE, Rutgers University PETER MCCULLAGH, University of Chicago Abstract We study k-divisible

More information

A diametric theorem for edges. R. Ahlswede and L.H. Khachatrian

A diametric theorem for edges. R. Ahlswede and L.H. Khachatrian A diametric theorem for edges R. Ahlswede and L.H. Khachatrian 1 1 Introduction Whereas there are vertex and edge isoperimetric theorems it went unsaid that diametric theorems are vertex diametric theorems.

More information

Ancestor Problem for Branching Trees

Ancestor Problem for Branching Trees Mathematics Newsletter: Special Issue Commemorating ICM in India Vol. 9, Sp. No., August, pp. Ancestor Problem for Branching Trees K. B. Athreya Abstract Let T be a branching tree generated by a probability

More information

Isomorphism of free G-subflows (preliminary draft)

Isomorphism of free G-subflows (preliminary draft) Isomorphism of free G-subflows (preliminary draft) John D. Clemens August 3, 21 Abstract We show that for G a countable group which is not locally finite, the isomorphism of free G-subflows is bi-reducible

More information

Reinforcement Learning

Reinforcement Learning Reinforcement Learning March May, 2013 Schedule Update Introduction 03/13/2015 (10:15-12:15) Sala conferenze MDPs 03/18/2015 (10:15-12:15) Sala conferenze Solving MDPs 03/20/2015 (10:15-12:15) Aula Alpha

More information

Permutations with Inversions

Permutations with Inversions 1 2 3 47 6 23 11 Journal of Integer Sequences, Vol. 4 2001, Article 01.2.4 Permutations with Inversions Barbara H. Margolius Cleveland State University Cleveland, Ohio 44115 Email address: b.margolius@csuohio.edu

More information

DETERMINANTS. , x 2 = a 11b 2 a 21 b 1

DETERMINANTS. , x 2 = a 11b 2 a 21 b 1 DETERMINANTS 1 Solving linear equations The simplest type of equations are linear The equation (1) ax = b is a linear equation, in the sense that the function f(x) = ax is linear 1 and it is equated to

More information

Determinants - Uniqueness and Properties

Determinants - Uniqueness and Properties Determinants - Uniqueness and Properties 2-2-2008 In order to show that there s only one determinant function on M(n, R), I m going to derive another formula for the determinant It involves permutations

More information

Partition of Integers into Distinct Summands with Upper Bounds. Partition of Integers into Even Summands. An Example

Partition of Integers into Distinct Summands with Upper Bounds. Partition of Integers into Even Summands. An Example Partition of Integers into Even Summands We ask for the number of partitions of m Z + into positive even integers The desired number is the coefficient of x m in + x + x 4 + ) + x 4 + x 8 + ) + x 6 + x

More information

Summary: A Random Walks View of Spectral Segmentation, by Marina Meila (University of Washington) and Jianbo Shi (Carnegie Mellon University)

Summary: A Random Walks View of Spectral Segmentation, by Marina Meila (University of Washington) and Jianbo Shi (Carnegie Mellon University) Summary: A Random Walks View of Spectral Segmentation, by Marina Meila (University of Washington) and Jianbo Shi (Carnegie Mellon University) The authors explain how the NCut algorithm for graph bisection

More information

Universal examples. Chapter The Bernoulli process

Universal examples. Chapter The Bernoulli process Chapter 1 Universal examples 1.1 The Bernoulli process First description: Bernoulli random variables Y i for i = 1, 2, 3,... independent with P [Y i = 1] = p and P [Y i = ] = 1 p. Second description: Binomial

More information

NOTES ON LINEAR ALGEBRA. 1. Determinants

NOTES ON LINEAR ALGEBRA. 1. Determinants NOTES ON LINEAR ALGEBRA 1 Determinants In this section, we study determinants of matrices We study their properties, methods of computation and some applications We are familiar with the following formulas

More information

04. Random Variables: Concepts

04. Random Variables: Concepts University of Rhode Island DigitalCommons@URI Nonequilibrium Statistical Physics Physics Course Materials 215 4. Random Variables: Concepts Gerhard Müller University of Rhode Island, gmuller@uri.edu Creative

More information

Exact results for deterministic cellular automata traffic models

Exact results for deterministic cellular automata traffic models June 28, 2017 arxiv:comp-gas/9902001v2 2 Nov 1999 Exact results for deterministic cellular automata traffic models Henryk Fukś The Fields Institute for Research in Mathematical Sciences Toronto, Ontario

More information