COALESCENT RANDOM WALKS ON GRAPHS


JIANJUN PAUL TIAN AND ZHENQIU LIU

Abstract. Inspired by coalescent theory in biology, we introduce a stochastic model called multi-person simple random walks, or coalescent random walks, on a graph G. Any finite number of persons are distributed randomly at the vertices of G. In each step of this discrete time Markov chain, we randomly pick a person and move it to a random adjacent vertex. To study this model, we introduce the tensor powers of graphs and the tensor products of Markov processes. The coalescent random walk on the graph G then becomes the simple random walk on a tensor power of G. We give estimates of the expected number of steps for these persons to all meet at a specific vertex. For regular graphs, our estimates are exact.

1. Introduction

Inspired by coalescent theory in population genetics, we consider in the present paper a class of models, called coalescent random walks on graphs, which is actually a generalization of coalescent theory. Let us first recall the basic idea of coalescent theory. Taking a sample of n individuals from a population, we label them 1, 2, ..., n and ask how long ago the most recent common ancestor of the sample lived. Coalescent theory answers this question by running a continuous time Markov chain over the collection of partitions {A_1, A_2, ...} of {1, 2, ..., n}, where each A_i consists of one subset of individuals that have coalesced and hence are identical by descent. To explain this theory, we look at an example in which the sample consists of five individuals 1, 2, ..., 5. For the purpose of illustration, we choose a partition at random each time a coalescent event happens. Working backwards in time, the partitions chosen are the following:

time 0:  {1} {2} {3} {4} {5}
T_4:     {1,2} {3} {4} {5}
T_3:     {1,2} {3} {4,5}
T_2:     {1,2,3} {4,5}
T_1:     {1,2,3,4,5}

Initially, the partition consists of five singletons since there has been no coalescence. After 1 and 2 coalesce at time T_4, they appear in the same set. Then 4 and 5 coalesce at time T_3, etc. Finally, at time T_1 we end up with all the labels in one set. After figuring out the probabilities of coalescent events, Kingman [2, 3] obtained a continuous time Markov chain asymptotically. That is coalescent theory.

If we construct a graph by taking the set of vertices to be the set of all partitions of the labeled individuals of a sample, and the set of edges to be the set of coalescent relations, we can see that coalescent theory is a class of continuous time Markov chains over a special class of graphs. The graphs are special because they carry a kind of partial direction, from n distinct vertices to one other vertex, representing the genealogical relations. From a different viewpoint, coalescent theory models how n particles come together under certain conditions. We therefore consider the following more general model, which we call the k-coalescent random walk, or multi-person simple random walk. Given a graph G with n vertices and m edges, suppose that k persons are distributed on the n vertices of G. We allow several persons to stand together on one vertex, and k may be bigger or smaller than n. At each time step, one person moves at random to one of its neighboring vertices, with equal probability of moving to any of its neighbors (it cannot stay at the vertex it currently stands on). An interesting question then arises: when do these k persons first meet together at a specific vertex?

To solve this problem, in the rest of the article we generalize the concept of the tensor powers of a graph, which were introduced in paper [1], and we recall the tensor products of Markov processes. By using continuous time Markov chains over the k-th tensor power of the given graph, we turn the k-coalescent random walk on the ground graph G into the simple random walk on its k-th tensor power. In this way we obtain an estimate of the expected number of time steps for k persons, starting from any distribution on the graph, to come together at a specific vertex.

For simplicity, we only consider connected simple graphs, that is, connected graphs without multiple edges and loops. We adopt the following notation and terminology for a graph G. The sets of vertices and edges of G are V(G) and E(G), respectively. The order n of G is the number of vertices of G, and the size m of G is the number of edges of G; thus n = |V(G)| and m = |E(G)|. For a vertex x ∈ V(G), Γ(x) is the set of vertices connected to x by an edge in E(G). The degree of a vertex x is d(x) = |Γ(x)|, and we have

$$\sum_{x \in V(G)} d(x) = 2m.$$

The adjacency matrix of G is denoted by A(G), and the diagonal matrix D(G) has the degrees of the vertices as its diagonal entries. Finally, we denote d_m = min{d(x) : x ∈ V(G)} and d_M = max{d(x) : x ∈ V(G)}. We refer the reader to [8, 4, 1, 6] for basic notions and results in the study of simple random walks on graphs.

2. Coalescent random walks and simple random walks on graphs

2.1. The tensor powers of graphs. For a graph G with order n and size m, the n-th tensor power of G was introduced in paper [1]. It is not necessary to constrain the order of the tensor power to equal the order of the graph. Although the motivation for our generalization of the tensor powers of a graph is the coalescent random walk of an arbitrary number of persons on the graph, the general tensor powers have their own interest and applications.

Let I_k and I_n be finite sets of cardinalities k and n respectively; for example, we may take I_k = {1, 2, ..., k} and I_n = {1, 2, ..., n}. We denote the set of all maps from I_k to I_n by M_{k,n}. When k = n, the symmetric group S_n sits inside M_{n,n}. A map x ∈ M_{k,n} is called a generalized permutation of deficiency j if |x(I_k)| = k − j, and we write def(x) = j. M_{k,n} is a semigroup under composition when k ≥ n; the symmetric group S_n is a subgroup of M_{k,n} only when k = n. The deficiency determines a grading on M_{k,n} which is compatible with the semigroup product on M_{k,n}:

$$\mathrm{def}(x) + \mathrm{def}(y) \leq 2\,\mathrm{def}(x \circ y).$$

Denote by M_{k,n}^{(j)} the set of all maps of deficiency j; then M_{k,n} decomposes according to the deficiency as

$$M_{k,n} = M_{k,n}^{(k-n)} \cup M_{k,n}^{(k-n+1)} \cup M_{k,n}^{(k-n+2)} \cup \cdots \cup M_{k,n}^{(k-1)},$$

where M_{k,n}^{(k-n)} is the set of all surjective maps. M_{k,n} has a similar decomposition when k ≤ n, namely

$$M_{k,n} = M_{k,n}^{(0)} \cup M_{k,n}^{(1)} \cup M_{k,n}^{(2)} \cup \cdots \cup M_{k,n}^{(k-1)},$$

where M_{k,n}^{(0)} is the set of all injective maps.

We identify the set of vertices V(G) of the graph G with I_n, and we call G the ground graph. Let the adjacency matrix be A(G) = (a_{ij})_{n×n} with entries given by

$$a_{ij} = \begin{cases} 1 & \text{if } ij \in E(G), \\ 0 & \text{otherwise.} \end{cases}$$

For any positive integer k, we define a new graph M_{k,n}(G), which we call the k-th tensor power of G, as follows. The vertex set is V(M_{k,n}(G)) = M_{k,n}, that is, the set of vertices of M_{k,n}(G) is the set of all maps from I_k to I_n. For x, y ∈ M_{k,n}, there is an edge xy in M_{k,n}(G) only when |{i : x(i) ≠ y(i)}| = 1, and if x(i) ≠ y(i), then A_{xy} = a_{x(i)y(i)} = 1, where the A_{xy} are the entries of the adjacency matrix of M_{k,n}(G).
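In computational terms, the definition above translates directly into a construction of the adjacency matrix of M_{k,n}(G) from A(G). The following is a minimal sketch of such a construction (ours, not part of the paper), with the vertices of G labeled 0, ..., n−1 and maps encoded as k-tuples; the helper name tensor_power_adjacency is our own. For the triangle and k = 3 its output agrees with the order 27 and size 81 of the example below.

import itertools
import numpy as np

def tensor_power_adjacency(A, k):
    """Adjacency matrix of M_{k,n}(G) for a ground graph with adjacency matrix A."""
    n = A.shape[0]
    maps = list(itertools.product(range(n), repeat=k))    # all n**k maps I_k -> I_n
    index = {x: i for i, x in enumerate(maps)}
    M = np.zeros((n**k, n**k), dtype=int)
    for x in maps:
        for i in range(k):                                # change exactly one coordinate
            for v in range(n):
                if A[x[i], v] == 1:                       # x(i)y(i) must be an edge of G
                    y = x[:i] + (v,) + x[i + 1:]
                    M[index[x], index[y]] = 1
    return M

# Ground graph: the triangle on vertices {0, 1, 2} (Figure 1).
A = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]])
M = tensor_power_adjacency(A, k=3)
print(M.shape[0], M.sum() // 2)    # order and size; the example below gives 27 and 81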

It is easy to see that the order of M_{k,n}(G) is n^k. However, the degrees of the vertices and the size of M_{k,n}(G) are not easy to compute directly; they will follow as corollaries once we connect the coalescent random walks on G with the simple random walks on M_{k,n}(G).

We give an example of how to construct a tensor power of a ground graph. The ground graph G is a triangle, a regular graph of degree 2, with vertices denoted by 1, 2 and 3.

Figure 1. The ground graph G: the triangle on the vertices 1, 2, 3.

We look at the 3rd tensor power, which has order 27 and size 81. The number sequence labeling a vertex represents a map from I_3 to I_3; for instance, 232 represents the map that sends 1 to 2, 2 to 3 and 3 to 2.

Figure 2. The 3rd tensor power M_{3,3}(G) of G, with the 27 vertices labeled 111, 112, ..., 333.

The bipartite property is important when considering random walks on graphs. We therefore generalize a lemma from paper [1]; its proof is similar, and for the reader's convenience we present it here.

Theorem 2.1. G is bipartite if and only if M_{k,n}(G) is bipartite.

Proof. We use the classical result of König that a graph is bipartite if and only if all its cycles are even. If G is not bipartite, then there is a cycle C = v_1 v_2 ⋯ v_m v_1 of odd length in G. Consider the cycle in M_{k,n}(G) given by

$$(v_1, v_1, \ldots, v_1)(v_1, \ldots, v_1, v_2)(v_1, \ldots, v_1, v_3) \cdots (v_1, \ldots, v_1, v_m)(v_1, \ldots, v_1, v_1).$$

Its length is also odd, so M_{k,n}(G) is not bipartite.

If G is bipartite, then the vertex set V = V(G) can be written as V_1 ∪ V_2 with V_1 ∩ V_2 = ∅, and there is no edge between two vertices both in V_1 or both in V_2. We now bipartition the vertex set of M_{k,n}(G). We know that V(M_{k,n}(G)) = V^k. Writing V = V_1 + V_2, we have

$$V^k = (V_1 \cup V_2)^k = V_1^k \cup V_1^{k-1}V_2 \cup V_1^{k-2}V_2^2 \cup \cdots \cup V_1 V_2^{k-1} \cup V_2^k,$$

where V_1^{k-r} V_2^r means that we take (k − r) coordinates from V_1 and r coordinates from V_2, regardless of order, to form a vertex of M_{k,n}(G). Let

$$\mathcal{V}_1 = V_1^{k} \cup V_1^{k-2}V_2^{2} \cup V_1^{k-4}V_2^{4} \cup \cdots, \qquad \mathcal{V}_2 = V_1^{k-1}V_2 \cup V_1^{k-3}V_2^{3} \cup V_1^{k-5}V_2^{5} \cup \cdots,$$

so that the vertices in $\mathcal{V}_1$ have an even number of coordinates in V_2 and the vertices in $\mathcal{V}_2$ have an odd number. Then $\mathcal{V}_1 \cup \mathcal{V}_2$ = V(M_{k,n}(G)) and $\mathcal{V}_1 \cap \mathcal{V}_2 = \emptyset$. Since each edge of M_{k,n}(G) changes exactly one coordinate, and every edge of G joins V_1 to V_2, there is no edge between any two vertices which are both in $\mathcal{V}_1$ or both in $\mathcal{V}_2$.

Remark 2.1. Motivated by the tensor products of Markov processes in the next subsection, one could define the tensor product of any finite number of different graphs. We leave this as a topic for further study.

2.2. The tensor products of Markov chains. Tensor products can be defined for stochastic processes in general. For our present purpose, we define the tensor product only for Markov chains, in both the continuous time and the discrete time cases. Let X^{(1)}, X^{(2)}, ..., X^{(k)} be Markov processes (here, continuous time Markov chains with homogeneous transition probabilities) on the state spaces S^{(1)}, S^{(2)}, ..., S^{(k)}, respectively. We define a new process Y on the state space S^{(1)} × S^{(2)} × ⋯ × S^{(k)} with transition probability given by

$$\Pr\{Y_{t+h} = (s_2) \mid Y_h = (s_1)\} = \prod_{i=1}^{k} \Pr\{X^{(i)}_{t+h} = s_2^{(i)} \mid X^{(i)}_h = s_1^{(i)}\} = \prod_{i=1}^{k} p^{(i)}_{s_1^{(i)} s_2^{(i)}}(t),$$

where (s_j) = (s_j^{(1)}, s_j^{(2)}, ..., s_j^{(k)}), j = 1, 2, and p^{(i)}(t) is the transition probability function of the Markov process X^{(i)}, i = 1, 2, ..., k. We call Y the tensor product of the Markov processes X^{(i)}, i = 1, 2, ..., k. The next two statements are basic facts about tensor products of Markov processes, and we therefore give their proofs.

Lemma 2.1. Y is a continuous time Markov chain.

Proof. (1) By the definition of the tensor product Y, its transition probability function satisfies

$$P_{(s_1)(s_2)}(t) = \prod_{i=1}^{k} p^{(i)}_{s_1^{(i)} s_2^{(i)}}(t) \geq 0,$$

since each factor of the product is nonnegative.

(2) Summing over the second state,

$$\sum_{(s_2)} P_{(s_1)(s_2)}(t) = \prod_{i=1}^{k} \sum_{s_2^{(i)} \in S^{(i)}} p^{(i)}_{s_1^{(i)} s_2^{(i)}}(t) = 1,$$

since each inner summation equals one.

(3) The Chapman-Kolmogorov equations

$$P_{(s_1)(s_2)}(h + t) = \prod_{i=1}^{k} p^{(i)}_{s_1^{(i)} s_2^{(i)}}(h + t) = \prod_{i=1}^{k} \sum_{s_3^{(i)} \in S^{(i)}} p^{(i)}_{s_1^{(i)} s_3^{(i)}}(h)\, p^{(i)}_{s_3^{(i)} s_2^{(i)}}(t) = \sum_{(s_3)} P_{(s_1)(s_3)}(h)\, P_{(s_3)(s_2)}(t)$$

are satisfied.

(4) We have the regularity condition

$$\lim_{t \to 0^+} P_{(s_1)(s_2)}(t) = \prod_{i=1}^{k} \lim_{t \to 0^+} p^{(i)}_{s_1^{(i)} s_2^{(i)}}(t) = \begin{cases} 1 & \text{if } (s_1) = (s_2), \\ 0 & \text{otherwise:} \end{cases}$$

when (s_1) = (s_2), each factor of the product goes to one; when (s_1) ≠ (s_2), there is at least one index r with s_1^{(r)} ≠ s_2^{(r)}, and the factor p^{(r)}_{s_1^{(r)} s_2^{(r)}}(t) goes to zero.

Therefore, the tensor product of Markov processes is well defined.

Theorem 2.2. Let q_{(s_1)(s_2)} be the infinitesimal generator of the continuous time Markov chain Y. Then

$$q_{(s_1)(s_2)} = \begin{cases} q^{(i)}_{s_1^{(i)} s_2^{(i)}} & \text{if there is exactly one index } i \text{ such that } s_1^{(i)} \neq s_2^{(i)}, \\ \sum_{i=1}^{k} q^{(i)}_{s_1^{(i)} s_1^{(i)}} & \text{if } (s_1) = (s_2), \text{ namely } s_1^{(i)} = s_2^{(i)} \text{ for all } i, \\ 0 & \text{otherwise,} \end{cases}$$

where q^{(i)} is the infinitesimal generator of the Markov process X^{(i)}, i = 1, 2, ..., k.

Proof. (1) If there exists exactly one index r such that s_1^{(r)} ≠ s_2^{(r)}, then

$$\lim_{t \to 0^+} \frac{1}{t}\prod_{i=1}^{k} p^{(i)}_{s_1^{(i)} s_2^{(i)}}(t) = \lim_{t \to 0^+} \Big(\prod_{i \neq r} p^{(i)}_{s_1^{(i)} s_1^{(i)}}(t)\Big)\,\frac{p^{(r)}_{s_1^{(r)} s_2^{(r)}}(t)}{t} = \Big(\prod_{i \neq r} p^{(i)}_{s_1^{(i)} s_1^{(i)}}(0)\Big)\, q^{(r)}_{s_1^{(r)} s_2^{(r)}} = q^{(r)}_{s_1^{(r)} s_2^{(r)}}.$$

(2) If (s_1) = (s_2), namely s_1^{(i)} = s_2^{(i)} for all i, then

$$\lim_{t \to 0^+} \frac{1}{t}\Big(\prod_{i=1}^{k} p^{(i)}_{s_1^{(i)} s_1^{(i)}}(t) - 1\Big) = \lim_{t \to 0^+} \frac{p^{(1)}_{s_1^{(1)} s_1^{(1)}}(t) - 1}{t}\prod_{i=2}^{k} p^{(i)}_{s_1^{(i)} s_1^{(i)}}(t) + \lim_{t \to 0^+} \frac{1}{t}\Big(\prod_{i=2}^{k} p^{(i)}_{s_1^{(i)} s_1^{(i)}}(t) - 1\Big)$$
$$= q^{(1)}_{s_1^{(1)} s_1^{(1)}} + \lim_{t \to 0^+} \frac{p^{(2)}_{s_1^{(2)} s_1^{(2)}}(t) - 1}{t}\prod_{i=3}^{k} p^{(i)}_{s_1^{(i)} s_1^{(i)}}(t) + \lim_{t \to 0^+} \frac{1}{t}\Big(\prod_{i=3}^{k} p^{(i)}_{s_1^{(i)} s_1^{(i)}}(t) - 1\Big)$$
$$= q^{(1)}_{s_1^{(1)} s_1^{(1)}} + q^{(2)}_{s_1^{(2)} s_1^{(2)}} + \cdots + \lim_{t \to 0^+} \frac{p^{(k)}_{s_1^{(k)} s_1^{(k)}}(t) - 1}{t} = \sum_{i=1}^{k} q^{(i)}_{s_1^{(i)} s_1^{(i)}}.$$

(3) In the remaining cases there are at least two indices, say j and r, such that s_1^{(j)} ≠ s_2^{(j)} and s_1^{(r)} ≠ s_2^{(r)}, and

$$\lim_{t \to 0^+} \frac{1}{t}\prod_{i=1}^{k} p^{(i)}_{s_1^{(i)} s_2^{(i)}}(t) = \Big(\prod_{i \neq r,\, i \neq j} p^{(i)}_{s_1^{(i)} s_2^{(i)}}(0)\Big)\, p^{(j)}_{s_1^{(j)} s_2^{(j)}}(0)\, \lim_{t \to 0^+}\frac{p^{(r)}_{s_1^{(r)} s_2^{(r)}}(t)}{t} = 0,$$

since p^{(j)}_{s_1^{(j)} s_2^{(j)}}(0) = 0. Thus, we obtain the infinitesimal generator of the Markov process Y.

Now, if we order the elements of the state space S^{(1)} × S^{(2)} × ⋯ × S^{(k)} lexicographically and let P^{(i)}(t) be the probability transition matrix of X^{(i)}, i = 1, 2, ..., k, it is easy to see that the probability transition matrix P(t) of Y is given by the tensor (Kronecker) product

$$P(t) = P^{(1)}(t) \otimes P^{(2)}(t) \otimes \cdots \otimes P^{(k)}(t).$$

This is why we call Y a tensor product of the Markov processes X^{(i)}, i = 1, 2, ..., k. By Theorem 2.2 above, we can also see that the infinitesimal generator matrix of Y is given by

$$Q^{(1)} \otimes I \otimes \cdots \otimes I + I \otimes Q^{(2)} \otimes I \otimes \cdots \otimes I + \cdots + I \otimes \cdots \otimes I \otimes Q^{(k)},$$

where Q^{(i)} is the infinitesimal generator matrix of X^{(i)}, i = 1, 2, ..., k. For convenience, we write Y = X^{(1)} ⊗ X^{(2)} ⊗ ⋯ ⊗ X^{(k)}. With this notation we also have the following statement.

Corollary 2.1.

$$P(t) = P^{(1)}(t) \otimes P^{(2)}(t) \otimes \cdots \otimes P^{(k)}(t) = \exp\big(\big(Q^{(1)} \otimes I \otimes \cdots \otimes I + I \otimes Q^{(2)} \otimes I \otimes \cdots \otimes I + \cdots + I \otimes \cdots \otimes I \otimes Q^{(k)}\big)\,t\big).$$

Proof. For simplicity, we take the number of factors k to be 2 and denote Q^{(1)} by Q and Q^{(2)} by Q'. We compute directly to verify the conceptual derivation:

$$P(t) = P^{(1)}(t) \otimes P^{(2)}(t) = \exp(Qt) \otimes \exp(Q't)$$
$$= \Big(I + Qt + \tfrac{1}{2!}Q^2 t^2 + \cdots\Big) \otimes \Big(I + Q't + \tfrac{1}{2!}Q'^2 t^2 + \cdots\Big)$$
$$= I \otimes I + (Q \otimes I + I \otimes Q')\,t + \Big(\tfrac{1}{2}Q^2 \otimes I + \tfrac{1}{2}I \otimes Q'^2 + Q \otimes Q'\Big)t^2 + \Big(\tfrac{1}{3!}Q^3 \otimes I + \tfrac{1}{3!}I \otimes Q'^3 + \tfrac{1}{2!}Q \otimes Q'^2 + \tfrac{1}{2!}Q^2 \otimes Q'\Big)t^3 + \cdots$$
$$= I + (Q \otimes I + I \otimes Q')\,t + \tfrac{1}{2!}(Q \otimes I + I \otimes Q')^2 t^2 + \tfrac{1}{3!}(Q \otimes I + I \otimes Q')^3 t^3 + \cdots$$
$$= \exp\big((Q \otimes I + I \otimes Q')\,t\big).$$

Notice that if X^{(1)} and X^{(2)} have different numbers of states, the identity matrices in the above expressions have different sizes.

2.3. Continuous time simple random walks on M_{k,n}(G). In the paper [1] we introduced continuous time simple random walks (CTSRW) on graphs. For a given graph G, the CTSRW on G is defined by taking its infinitesimal generator to be the negative Laplacian of G. The jump chain of the CTSRW on a graph is the simple random walk (SRW) on that graph, so we can estimate interesting quantities about the SRW by studying the CTSRW. Surprisingly, by using the tensor powers of a graph, the k-coalescent random walk on the graph G turns out to be the jump chain of the CTSRW on the k-th tensor power of G.
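As an aside, the identity in Corollary 2.1 is easy to check numerically for small generators. The sketch below is ours, not the paper's; it uses NumPy and SciPy, the two generators are the negative Laplacians of the triangle and of a single edge, and t = 0.7 is an arbitrary test value.

import numpy as np
from scipy.linalg import expm

Q1 = np.array([[-2.0, 1.0, 1.0],
               [1.0, -2.0, 1.0],
               [1.0, 1.0, -2.0]])     # negative Laplacian of the triangle
Q2 = np.array([[-1.0, 1.0],
               [1.0, -1.0]])          # negative Laplacian of a single edge

t = 0.7                               # arbitrary test value
lhs = np.kron(expm(Q1 * t), expm(Q2 * t))                          # P^(1)(t) (x) P^(2)(t)
rhs = expm((np.kron(Q1, np.eye(2)) + np.kron(np.eye(3), Q2)) * t)  # exp of the Kronecker sum
print(np.allclose(lhs, rhs))          # expected: True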

Let us denote the CTSRW on M_{k,n}(G) by Y and the CTSRW on G by X. We have the following basic statement.

Theorem 2.3. The Markov process Y on the graph M_{k,n}(G) is the k-th tensor power of the Markov process X on the ground graph G. The jump chain of Y is the k-coalescent random walk on G.

Proof. Recall that

$$Q = -L(G) = -D(G) + A(G) = (q_{ij})_{n \times n}, \qquad q_{ij} = \begin{cases} 1 & \text{if } ij \in E(G), \\ -d(i) & \text{if } i = j, \\ 0 & \text{otherwise,} \end{cases}$$

where d(i) is the degree of the vertex i. Let X ⊗ X ⊗ ⋯ ⊗ X = Z be the k-th tensor power of the process X. Take two states (s_1) and (s_2) of Z; each is actually a sequence of vertices of G, and we write (s_1) = (i_1, i_2, ..., i_k) and (s_2) = (j_1, j_2, ..., j_k). By Theorem 2.2 above, we have

$$p'_{(s_1)(s_2)}(0) = \begin{cases} q^{(r)}_{s_1^{(r)} s_2^{(r)}} & \text{if there is exactly one index } r \text{ such that } s_1^{(r)} \neq s_2^{(r)}, \\ \sum_{r=1}^{k} q^{(r)}_{s_1^{(r)} s_1^{(r)}} & \text{if } (s_1) = (s_2), \text{ namely } s_1^{(r)} = s_2^{(r)} \text{ for all } r, \\ 0 & \text{otherwise} \end{cases}$$
$$= \begin{cases} 1 & \text{if there is exactly one index } r \text{ with } i_r \neq j_r, \text{ and } i_r j_r \in E(G), \\ -\sum_{r=1}^{k} d(i_r) & \text{if } i_r = j_r \text{ for all } r, \\ 0 & \text{otherwise.} \end{cases}$$

Now we may identify (s_1) and (s_2) with the images of certain maps x and y, respectively. Then, by the definition of M_{k,n}(G), xy ∈ E(M_{k,n}(G)) if and only if p'_{(s_1)(s_2)}(0) = 1. As to the degree of x, the neighbors of x = (s_1) can only be (r_1, i_2, ..., i_k), where r_1 must be a neighbor of i_1 in the ground graph G; (i_1, r_2, ..., i_k), where r_2 must be a neighbor of i_2 in the ground graph G; and so on, up to the last one, (i_1, i_2, ..., i_{k−1}, r_k), where r_k must be a neighbor of i_k in the ground graph G. Thus

$$d(x) = d(s_1) = \sum_{j=1}^{k} d(i_j),$$

which agrees with p'_{(s_1)(s_1)}(0) = −Σ_{r=1}^{k} d(i_r). So Y = X ⊗ X ⊗ ⋯ ⊗ X = Z. The second conclusion is now easy to see.

Because of this theorem, we call the graph M_{k,n}(G) the k-th tensor power of the graph G. We also note that the transition probability from x to y for Y is given by

$$P_{xy}(t) = \prod_{i=1}^{k} p_{x(i)y(i)}(t).$$
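Theorem 2.3 can likewise be checked numerically on a small example. The sketch below is ours, not part of the paper: it builds M_{2,3}(G) for the triangle, with vertices ordered lexicographically, and compares its negative Laplacian with the Kronecker sum Q ⊗ I + I ⊗ Q of the ground generator; both k = 2 and the triangle are illustrative choices.

import itertools
import numpy as np

A = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]])        # triangle, vertices {0, 1, 2}
n, k = 3, 2
Q = A - np.diag(A.sum(axis=1))                         # Q = -L(G) = -D(G) + A(G)

maps = list(itertools.product(range(n), repeat=k))     # lexicographic order of maps
M = np.zeros((n**k, n**k), dtype=int)
for a, x in enumerate(maps):
    for b, y in enumerate(maps):
        diff = [i for i in range(k) if x[i] != y[i]]
        if len(diff) == 1 and A[x[diff[0]], y[diff[0]]] == 1:
            M[a, b] = 1

Q_M = M - np.diag(M.sum(axis=1))                       # generator of the CTSRW on M_{2,3}(G)
kron_sum = np.kron(Q, np.eye(n)) + np.kron(np.eye(n), Q)
print(np.allclose(Q_M, kron_sum))                      # expected: True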

From the proof of Theorem 2.3, we know that for x ∈ M_{k,n} = V(M_{k,n}(G)), its degree in M_{k,n}(G) is given by

$$d(x) = \sum_{r=1}^{k} d(x(r)).$$

We can compute the size of M_{k,n}(G), i.e. the number of edges of M_{k,n}(G), as

$$2\,|E(M_{k,n}(G))| = \sum_{x \in M_{k,n}} d(x) = \sum_{x \in M_{k,n}} \sum_{r=1}^{k} d(x(r)) = \sum_{r=1}^{k} \sum_{x \in M_{k,n}} d(x(r)) = \sum_{r=1}^{k} \big(d(1) + d(2) + \cdots + d(n)\big)\, n^{k-1} = 2mkn^{k-1}.$$

Thus, the size of M_{k,n}(G) is mkn^{k−1}. It is clear that the order of M_{k,n}(G) is n^k. Therefore, the frequency of M_{k,n}(G) is 2mk/n (the definition of the frequency is in paper [1]). We record these results as a corollary.

Corollary 2.2. The k-th tensor power of a graph G with order n and size m has order and size, respectively,

$$|V(M_{k,n}(G))| = n^k, \qquad |E(M_{k,n}(G))| = mkn^{k-1}.$$

If G is not bipartite with order n and size m, then M_{k,n}(G) is not bipartite.

We can also write down the stationary distribution for the k-coalescent random walk on G. In terms of the SRW on the graph M_{k,n}(G), for each vertex x in M_{k,n}, the x-component of the stationary distribution is given by

$$\pi_x = \frac{d(x)}{2mkn^{k-1}} = \frac{\sum_{i=1}^{k} d(x(i))}{2mkn^{k-1}} = \frac{1}{kn^{k-1}} \sum_{i=1}^{k} \frac{d(x(i))}{2m} = \frac{1}{kn^{k-1}} \sum_{i=1}^{k} \pi^G_{x(i)},$$

where π^G_{x(i)} = d(x(i))/(2m) is the x(i)-component of the stationary distribution vector of the SRW on G.

For a given map x in M_{k,n}, the expected first return time of the CTSRW on M_{k,n}(G) is given by

$$h(x, x) = \frac{n^k}{d(x)} = \frac{n^k}{\sum_{i=1}^{k} d(x(i))}.$$

In particular, for the identity map Id (with k = n), d(Id) = d(1) + d(2) + ⋯ + d(n) = 2m, so h(Id, Id) = n^k/(2m).
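As a sanity check (ours, not the paper's), the stationary distribution formula above can be verified numerically for the triangle with k = 3: the component formula, the normalization, and stationarity under the SRW transition matrix on M_{3,3}(G) can all be tested directly.

import itertools
import numpy as np

A = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]])        # triangle
n, k, m = 3, 3, 3
pi_G = A.sum(axis=1) / (2 * m)                         # stationary distribution on G

maps = list(itertools.product(range(n), repeat=k))
M = np.zeros((n**k, n**k), dtype=int)
for a, x in enumerate(maps):
    for i in range(k):
        for v in range(n):
            if A[x[i], v]:
                M[a, maps.index(x[:i] + (v,) + x[i + 1:])] = 1

deg_M = M.sum(axis=1)
pi = deg_M / (2 * m * k * n**(k - 1))                  # pi_x = d(x) / (2 m k n^(k-1))
formula = np.array([sum(pi_G[v] for v in x) for x in maps]) / (k * n**(k - 1))
P = M / deg_M[:, None]                                 # SRW transition matrix on M_{3,3}(G)
print(np.allclose(pi, formula), np.isclose(pi.sum(), 1.0), np.allclose(pi @ P, pi))
# expected: True True True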

For the jump chain SRW on M_{k,n}(G), i.e. the k-coalescent random walk on G, we have

$$H(x, x) = f_{M_{k,n}(G)}\, h(x, x) = \frac{2mkn^{k-1}}{\sum_{i=1}^{k} d(x(i))},$$

where f_{M_{k,n}(G)} = 2mk/n is the frequency of M_{k,n}(G). In particular, H(Id, Id) = n^k.

Remark 2.2. When we speak of the identity map above, we require that the number of walkers equals the number of vertices of the graph, i.e. k = n.

2.4. Coalescent times for the k-coalescent random walk on G. For the CTSRW on M_{k,n}(G), we are interested in calculating h(x, c_i), the hitting time from x to c_i, where x represents the initial configuration of the k persons and c_i is the constant map onto a vertex i of G. Once we know h(x, c_i), we can use it to estimate H(x, c_i) for the SRW on M_{k,n}(G), or equivalently, the expected number of steps the k-coalescent random walk on G takes until all persons first meet at the vertex i. These are the mean coalescent times of the k-coalescent random walk on G.

To compute these quantities, we recall a formula from paper [1]. The mean hitting time for the CTSRW on a graph G with order n and size m from vertex i to vertex j is given by the integral

$$h(i, j) = n \int_0^{\infty} \big(p_{jj}(t) - p_{ij}(t)\big)\, dt.$$

For continuous time Markov chains, and in particular the CTSRW on graphs, the time-step inequality relates a Markov time T and the number of steps N_T of the jump chain:

$$d_m\, E(T) \leq E(N_T) \leq d_M\, E(T).$$

Now, if the minimum and maximum degrees of the graph G are d_m and d_M respectively, then the minimum and maximum degrees of the graph M_{k,n}(G) are kd_m and kd_M respectively. Therefore, we have

$$kd_m\, h(x, c_i) \leq H(x, c_i) \leq kd_M\, h(x, c_i).$$

Notice that when the graph G is regular of degree d, H(x, c_i) is determined completely by h(x, c_i):

$$H(x, c_i) = kd\, h(x, c_i).$$
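Before deriving a closed-form expression for h(x, c_i), we note that for a small graph these quantities can also be computed directly by solving linear systems: the CTSRW hitting times satisfy Qh = −1 off the target state, and the jump-chain expected step counts satisfy (I − P)H = 1 off the target. The sketch below is ours, not part of the paper; it treats the triangle with k = 3 walkers. Since the triangle is regular of degree 2, the relation H = kd h with kd = 6 should hold exactly, and H(Id, c_1) should agree with the worked example at the end of this section, which gives 31.

import itertools
import numpy as np

A = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]])        # triangle, vertices {0, 1, 2}
n, k = 3, 3
maps = list(itertools.product(range(n), repeat=k))
idx = {x: i for i, x in enumerate(maps)}
M = np.zeros((n**k, n**k))
for x in maps:
    for i in range(k):
        for v in range(n):
            if A[x[i], v]:
                M[idx[x], idx[x[:i] + (v,) + x[i + 1:]]] = 1

deg = M.sum(axis=1)
Q = M - np.diag(deg)                                   # CTSRW generator on M_{3,3}(G)
P = M / deg[:, None]                                   # jump chain: the 3-coalescent walk

target = idx[(0, 0, 0)]                                # c_1: all walkers at vertex 1
keep = [i for i in range(n**k) if i != target]
h = np.zeros(n**k)
H = np.zeros(n**k)
h[keep] = np.linalg.solve(Q[np.ix_(keep, keep)], -np.ones(len(keep)))
H[keep] = np.linalg.solve(np.eye(len(keep)) - P[np.ix_(keep, keep)], np.ones(len(keep)))

start = idx[(0, 1, 2)]                                 # Id: one walker on each vertex
print(h[start], H[start], np.allclose(H, k * 2 * h))   # expected: 31/6, 31, True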

Let x and y be two distinct vertices in M_{k,n}(G), and consider the hitting time h(x, y) of y from x for the CTSRW. We have

$$h(x, y) = n^k \int_0^{\infty} \big(P_{yy}(t) - P_{xy}(t)\big)\, dt = n^k \int_0^{\infty} \Big(\prod_{i=1}^{k} p_{y(i)y(i)}(t) - \prod_{i=1}^{k} p_{x(i)y(i)}(t)\Big)\, dt.$$

For y = c_i, we have

$$h(x, c_i) = n^k \int_0^{\infty} \Big(p_{ii}(t)^k - \prod_{r=1}^{k} p_{x(r)i}(t)\Big)\, dt.$$

Thus, we obtain an estimate of the mean coalescent time H(x, c_i) through h(x, c_i), i.e. through the above integral.

It is interesting to look at the special case in which, at the beginning of the coalescent random walk, there are r persons standing on each vertex. Let x_0 be the map representing the initial configuration of these rn persons. Then the integral becomes

$$h(x_0, c_i) = n^{rn} \int_0^{\infty} \Big(p_{ii}(t)^{rn} - \prod_{j=1}^{n} p_{ji}(t)^r\Big)\, dt.$$

When r is one, that is, when each vertex carries one person at the beginning, we have

$$h(x_0, c_i) = n^{n} \int_0^{\infty} \Big(p_{ii}(t)^{n} - \prod_{j=1}^{n} p_{ji}(t)\Big)\, dt.$$

To calculate this integral, we diagonalize the Laplacian of G. Write

$$U^T Q U = \mathrm{diag}\,[-\lambda_1, -\lambda_2, \ldots, -\lambda_n],$$

where U = (u_{ij})_{n×n} is an orthogonal matrix. Then

$$p_{ij}(t) = \sum_{k=1}^{n} u_{ik} u_{jk}\, e^{-\lambda_k t}.$$

Since

$$p_{ii}(t)^n = \Big(\sum_{k=1}^{n} u_{ik}^2 e^{-\lambda_k t}\Big)^n = \sum_{1 \leq k_1, k_2, \ldots, k_n \leq n} u_{ik_1}^2 u_{ik_2}^2 \cdots u_{ik_n}^2\, e^{-(\lambda_{k_1} + \lambda_{k_2} + \cdots + \lambda_{k_n}) t}$$

and

$$\prod_{j=1}^{n} p_{ji}(t) = \sum_{1 \leq k_1, k_2, \ldots, k_n \leq n} u_{1k_1} u_{ik_1}\, u_{2k_2} u_{ik_2} \cdots u_{nk_n} u_{ik_n}\, e^{-(\lambda_{k_1} + \lambda_{k_2} + \cdots + \lambda_{k_n}) t},$$

we get

$$h(\mathrm{Id}, c_i) = n^{n} \sum_{1 \leq k_1, k_2, \ldots, k_n \leq n} \frac{u_{ik_1}^2 u_{ik_2}^2 \cdots u_{ik_n}^2 - u_{1k_1} u_{ik_1}\, u_{2k_2} u_{ik_2} \cdots u_{nk_n} u_{ik_n}}{\lambda_{k_1} + \lambda_{k_2} + \cdots + \lambda_{k_n}}.$$

Example. Let G be the triangle graph. It is regular with n = 3 and d = 2. The matrices Q and U are given below:

$$Q = \begin{pmatrix} -2 & 1 & 1 \\ 1 & -2 & 1 \\ 1 & 1 & -2 \end{pmatrix}, \qquad U = \begin{pmatrix} \frac{1}{\sqrt{2}} & \frac{1}{\sqrt{6}} & \frac{1}{\sqrt{3}} \\ 0 & -\frac{2}{\sqrt{6}} & \frac{1}{\sqrt{3}} \\ -\frac{1}{\sqrt{2}} & \frac{1}{\sqrt{6}} & \frac{1}{\sqrt{3}} \end{pmatrix}.$$

We have U^T Q U = diag[−3, −3, 0]. Since λ_3 = 0, when calculating h(Id, c_i) we should drop the term (k_1, k_2, k_3) = (3, 3, 3); notice that the numerator of this term in h(Id, c_i) is also zero, so it is fine to drop it. We obtain

$$h(\mathrm{Id}, c_1) = 3^3 \cdot \frac{31}{162} = \frac{31}{6}, \qquad H(\mathrm{Id}, c_1) = 3 \cdot 2 \cdot \frac{31}{6} = 31.$$

Thus, for our coalescent random walk on the triangle graph, it takes 31 steps on average for 3 persons to meet at any specified vertex, given that they all start at different vertices.

For the square graph with n = 4 and d = 2, a similar calculation shows

$$h(\mathrm{Id}, c_1) = 4^4 \cdot \frac{167}{1120} = \frac{1336}{35}, \qquad H(\mathrm{Id}, c_1) = 8 \cdot \frac{1336}{35} \approx 305.371.$$

Thus, for our coalescent random walk on the square graph, it takes about 305 steps on average for 4 persons to meet at any specified vertex, given that they all start at different vertices.

Acknowledgement. The first author would like to thank Xiao-Song Lin for his suggestions, and acknowledges the support of the National Science Foundation under agreement No. 0112050.

References

[1] J. Tian and X.-S. Lin, Continuous time Markov processes on graphs, submitted.
[2] J. F. C. Kingman, The coalescent, Stochastic Processes and Their Applications, 13 (1982), pp. 235-248.
[3] J. F. C. Kingman, On the genealogy of large populations, J. Appl. Prob., 19A (1982), pp. 27-43.
[4] B. Bollobás, Modern Graph Theory, Springer, New York, 1998.

[5] F. Buckley and F. Harary, Distance in Graphs, Addison-Wesley, 1990.
[6] K.-L. Chung, Markov Chains with Stationary Transition Probabilities, Springer-Verlag, Berlin, 1967.
[7] C. Godsil and G. Royle, Algebraic Graph Theory, Springer, New York, 2001.
[8] L. Lovász, Random walks on graphs: a survey, Bolyai Society Mathematical Studies, Combinatorics, 2 (1993), pp. 1-46.
[9] R. Syski, Passage Times for Markov Chains, IOS Press, Amsterdam, 1992.

Mathematical Biosciences Institute, The Ohio State University, Columbus, OH 43210, USA
E-mail address: tianjj@mbi.ohio-state.edu (J. Paul Tian)

Department of Statistics, The Ohio State University, Columbus, OH 43210, USA
E-mail address: liu@stat.ohio-state.edu (Z. Liu)