
The cover time of sparse random graphs

Colin Cooper*    Alan Frieze**

May 29, 2005

Abstract

We study the cover time of a random walk on graphs $G \in G_{n,p}$ when $p = \frac{c\log n}{n}$, $c > 1$. We prove that whp the cover time is asymptotic to $c\log\left(\frac{c}{c-1}\right) n\log n$.

1 Introduction

Let $G = (V, E)$ be a connected graph, let $|V| = n$ and $|E| = m$. For $v \in V$ let $C_v$ be the expected time taken for a simple random walk $\mathcal{W}$ on $G$, starting at $v$, to visit every vertex of $G$. The cover time $C_G$ of $G$ is defined as $C_G = \max_{v\in V} C_v$. The cover time of connected graphs has been extensively studied. It is a classic result of Aleliunas, Karp, Lipton, Lovász and Rackoff [1] that $C_G \le 2m(n-1)$. It is also known (see Feige [7], [8]) that for any connected graph $G$,
$$(1 - o(1))\, n\log n \;\le\; C_G \;\le\; (1 + o(1))\,\tfrac{4}{27}\, n^3 .$$

In this paper we study the cover time of the random graph $G \in G_{n,p}$. It was shown by Jonasson [11] that whp(1)

(a) $C_G = (1 + o(1))\, n\log n$ if $np/\log n \to \infty$;

(b) if $c > 1$ is constant and $np = c\log n$ then $C_G > (1 + \alpha)\, n\log n$ for some constant $\alpha = \alpha(c)$.

Thus Jonasson has shown that when the expected average degree $(n-1)p$ grows faster than $\log n$, a random graph has the same cover time whp as the complete graph $K_n$, whose cover time is determined by the Coupon Collector problem. Whereas, when $np = \Theta(\log n)$ this is not the case. In this paper we sharpen Jonasson's results for the case $np = c\log n$ where $\omega = (c-1)\log n \to \infty$. This condition on $\omega$ ensures that whp $G_{n,p}$ is connected, see Erdős and Rényi [6].

* Department of Mathematical and Computing Sciences, Goldsmiths College, London SW14 6NW, UK.
** Department of Mathematical Sciences, Carnegie Mellon University, Pittsburgh PA 15213. Supported in part by NSF grant CCR-9818411.
(1) A sequence of events $\mathcal{E}_n$ is said to occur with high probability (whp) if $\lim_{n\to\infty}\Pr(\mathcal{E}_n) = 1$.
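Before turning to the main theorem, here is a small illustrative simulation (our own sketch, not part of the paper; all function and variable names below are ours). It samples one graph from $G_{n,p}$ with $p = c\log n/n$, runs a few simple random walks to completion, and compares the observed cover times with the asymptotic formula $c\log\left(\frac{c}{c-1}\right)n\log n$ above. A single run only gives a rough point estimate of $C_G$, and the asymptotics only become accurate for large $n$.

    # Illustrative only: Monte Carlo estimate of the cover time of G_{n,p}
    # with p = c*log(n)/n, compared against c*log(c/(c-1))*n*log(n).
    import math, random

    def sample_gnp(n, p, rng):
        """Return adjacency lists of one sample of G_{n,p}."""
        adj = [[] for _ in range(n)]
        for u in range(n):
            for v in range(u + 1, n):
                if rng.random() < p:
                    adj[u].append(v)
                    adj[v].append(u)
        return adj

    def cover_time_from(adj, start, rng):
        """Steps taken by a simple random walk started at `start` to visit
        every vertex.  Assumes the sampled graph is connected (true whp in
        the regime of the paper)."""
        n = len(adj)
        seen = [False] * n
        seen[start] = True
        unvisited, v, t = n - 1, start, 0
        while unvisited > 0:
            v = rng.choice(adj[v])
            t += 1
            if not seen[v]:
                seen[v] = True
                unvisited -= 1
        return t

    if __name__ == "__main__":
        rng = random.Random(1)
        n, c = 2000, 2.0
        p = c * math.log(n) / n
        adj = sample_gnp(n, p, rng)
        trials = [cover_time_from(adj, rng.randrange(n), rng) for _ in range(5)]
        predicted = c * math.log(c / (c - 1)) * n * math.log(n)
        print("empirical max over trials:", max(trials))
        print("asymptotic formula       :", round(predicted))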

Theorem 1. Suppose that $np = c\log n = \log n + \omega$ where $\omega = (c-1)\log n \to \infty$ and $c = O(1)$. If $G \in G_{n,p}$, then whp
$$C_G \sim c\log\left(\frac{c}{c-1}\right) n\log n .$$

It follows from the results of Broder and Karlin [4] that if $c - 1 = \Omega(1)$ then whp $C_G = O(n\log n)$. This is because $G_{n,p}$ is then an expander whp. Thus it was known that whp $C_G = \Theta(n\log n)$ for this value of $p$. Our paper improves on previous results by giving asymptotically exact values for the cover time. When $c - 1 = \Omega(1)$ we give the exact constant of multiplication in $C_G = \Theta(n\log n)$. Also, when $c - 1 = \omega/\log n$ where $\omega = o(\log n)$, we establish the novel result that $C_G \sim n\log n\,\log\frac{\log n}{\omega}$.

The challenge is to find a technique which can make an accurate average case analysis of the random walk. In the next section we give some properties that hold whp in $G_{n,p}$. In Section 4 we show that a graph with these properties has a cover time described by Theorem 1.

2 Outline of proof

In Section 3 we describe the properties of $G_{n,p}$ that will be needed for our proof. We call graphs with these properties typical and estimate the cover time of typical graphs. For a walk starting at vertex $u$, a vertex $v \ne u$ and a time $s$, we define $A_s(v)$ to be the event that $\mathcal{W}_u$ has not visited $v$ by time $s$. We then argue that
$$C_u \le t + \sum_{v\in V}\sum_{s>t}\Pr(A_s(v)) \qquad (1)$$
for any $t > 0$, see equation (14). We then write
$$\Pr(A_s(v)) = \prod_{i=1}^{s}\Pr(A_i(v)\mid A_{i-1}(v)) \qquad (2)$$
and spend much of the time estimating $\Pr(A_i(v)\mid A_{i-1}(v))$ for $i$ not too small.

Heuristically, a random walk of length $s$ under the conditioning $A_s(v)$ should be much like a random walk on $H = G[V\setminus\{v\}]$. Now the probability of visiting $v$ at step $s$, when we forbid a visit to $v$ in the first $s-1$ steps, is close to
$$\sum_{w\in N_G(v)}\frac{d_H(w)}{2|E(H)|}\cdot\frac{1}{d_H(w)} .$$
Here $d_H(w)$ is the degree of $w$ in $H$ and $\frac{d_H(w)}{2|E(H)|}$ is the steady state probability of a random walk on $H$; $N_G(v)$ is the set of neighbours of $v$ in $G$. If $H$ is an expander and $s$ is much larger than the mixing time, then the sum of these probabilities is the probability that $\mathcal{W}_u$ is at a neighbour of $v$ at time $s-1$; the factor $\frac{1}{d_H(w)}$ is then the probability that we move to $v$ at the next step. The main part of the analysis (Lemma 3) goes to show that
$$\Pr(A_i(v)\mid A_{i-1}(v)) \approx 1 - \frac{d(v)}{2m}$$
for large $i$.
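The following back-of-the-envelope calculation is our own sketch (not part of the proof) of where the constant in Theorem 1 comes from, using only the approximation above and the binomial degree distribution of $G_{n,p}$:
$$\Pr(A_s(v)) \approx \Bigl(1 - \frac{d(v)}{2m}\Bigr)^{s} \approx e^{-s\,d(v)/2m}, \qquad 2m \approx cn\log n .$$
For $k = x\log n$ the expected number of vertices of degree $k$ is roughly $n\binom{n-1}{k}p^k(1-p)^{n-1-k} \approx n^{\,1-c+x\log(ce/x)}$, so the expected number of degree-$k$ vertices still unvisited at time $t = \theta\,cn\log n$ is about $n^{\,1-c+x\log(ce/x)-\theta x}$. Maximising the exponent over $x$ (the maximum is at $x = ce^{-\theta}$), it first becomes negative when $\theta = \log\frac{c}{c-1}$, that is at
$$t \approx cn\log n\,\log\frac{c}{c-1} \approx 2m\log\frac{c}{c-1},$$
which is the threshold $t_0$ used in Section 4; the last vertices to be covered have degree about $(c-1)\log n$.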

This approximation is then used in (2) to estimate $\Pr(A_s(v))$ from above and below. The upper bound follows from (1) after some calculation. The lower bound uses the Chebyshev inequality and requires the estimate $\Pr(A_s(v_1)\cap A_s(v_2)) \approx \Pr(A_s(v_1))\Pr(A_s(v_2))$ for suitably chosen pairs $v_1, v_2$.

3 Properties of $G_{n,p}$

Let $d(v)$ be the degree of vertex $v\in V$ and let $\delta, \Delta$ denote the minimum and maximum degree. Let $\delta_v \ge 2$ be the minimum degree of a neighbour of $v$, excluding neighbours of degree one. Let $\mathrm{dist}(u,v)$ denote the distance between the vertices $u, v$ of the graph $G$. Let $c = c(n)$ where $c > 1$. We say that a graph $G$ is typical if it has the structural properties P0–P8 given below. The proof of the following lemma is given in the Appendix.

Lemma 1. Let $p = \frac{c\log n}{n}$ where $\omega = (c-1)\log n \to \infty$ and $c = O(1)$. Then whp $G \in G_{n,p}$ is typical.

P0: $G$ is connected.

P1: $\Delta(G) \le \Delta_0 = (c+10)\log n$ and
$$\delta(G) \ge \begin{cases} 1 & c \le 1 + e^{-500} \\ \alpha^*\log n & c > 1 + e^{-500} \end{cases}$$
where $\alpha^* = \alpha/2$ and $\alpha > e^{-600}$ satisfies $c - 1 = \alpha\log(ce/\alpha)$.

P2: Call a vertex small if its degree is at most $\log n/20$. There are at most $n^{1/3}$ small vertices, and no two small vertices are within distance $\frac{\log n}{(\log\log n)^2}$ of each other.

P3: For $L \subseteq V$, $|L| \le 4$, let $H = G[V\setminus L]$ be the subgraph of $G$ induced by $V\setminus L$. For $S \subseteq V\setminus L$ let $e_H(S,\bar S)$ be the number of edges of $H$ with one end in $S$ and the other in $\bar S = V\setminus(L\cup S)$. For all $H \subseteq G$ such that $\delta(H) \ge 1$, and for all $S \subseteq V\setminus L$ with $|S| \le n/2$,
$$\frac{e_H(S,\bar S)}{d_H(S)} \ge \frac16 .$$

P4: Let $D(k)$ be the number of vertices of degree $k$ in $G$. Let
$$\bar D(k) = n\binom{n-1}{k}p^k(1-p)^{n-1-k}$$
denote the expected size of $D(k)$ in $G_{n,p}$. Define
$$K_0 = \{k\in[1,\Delta_0] : \bar D(k) \le (\log n)^{-2}\},$$
$$K_1 = \{1 \le k \le \tfrac{\log n}{15} : (\log n)^{-2} \le \bar D(k) \le \log^2 n\},$$
$$K_2 = \{k\in[\tfrac{\log n}{16},\Delta_0] : (\log n)^{-2} \le \bar D(k) \le \log^2 n\},$$
$$K_3 = [1,\Delta_0]\setminus(K_0\cup K_1\cup K_2).$$
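As a concrete illustration (our own, with arbitrarily chosen $n$ and $c$; the function name expected_degree_count is not from the paper), the sketch below evaluates $\bar D(k)$, so one can see for which degrees $k$ the expected count is negligible, moderate, or large — the distinction on which the classes $K_0,\dots,K_3$ are based.

    # Illustration (ours, not from the paper): tabulate the expected number
    # \bar D(k) of vertices of degree k in G_{n,p} with p = c*log(n)/n,
    # for one arbitrary choice of n and c.
    import math

    def expected_degree_count(n, p, k):
        """\bar D(k) = n * C(n-1, k) * p^k * (1-p)^(n-1-k), computed in log space."""
        log_val = (math.log(n)
                   + math.lgamma(n) - math.lgamma(k + 1) - math.lgamma(n - k)
                   + k * math.log(p) + (n - 1 - k) * math.log(1.0 - p))
        return math.exp(log_val)

    if __name__ == "__main__":
        n, c = 10**6, 1.5
        p = c * math.log(n) / n
        for k in range(1, 61, 5):
            print(k, f"{expected_degree_count(n, p, k):.3g}")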

P4a: If $k\in K_3$ then $\tfrac12\bar D(k) \le D(k) \le 2\bar D(k)$, and
$$\sum_{k\in K_0}D(k) = 0, \qquad D(k) \le \log^2 n\ \ (k\in K_1), \qquad D(k) \le \log^4 n\ \ (k\in K_2).$$

P4b: If $\omega \ge \log^{2/3} n$ then $K_1 = \emptyset$, $\min\{k\in K_2\} \ge \log^{1/2} n$ and $|K_2| = O(\log\log n)$.

P5: The number of edges $m = m(G)$ of $G$ satisfies $\left|m - \tfrac12 cn\log n\right| \le n^{1/2}\log n$.

P6: Let $k_* = \lceil(c-1)\log n\rceil$, $V_* = \{v : d(v) = k_*\}$, and let
$$B = \Bigl\{v\in V_* : \mathrm{dist}(v,w) \le \tfrac{10\log n}{(\log\log n)^2}\ \text{for some } w\in V_*,\ w\ne v\Bigr\}.$$
Let $X = \{v : \delta_v < \alpha^*\log n\}$. Then
$$|V_*| \ge \tfrac12\bar D(k_*), \qquad |B| \le \tfrac1{10}\bar D(k_*), \qquad |V_*\cap X| \le \tfrac1{10}\bar D(k_*).$$

P7: Call a cycle short if its length is at most $10\log\log n$. The minimum distance between two short cycles is at least $\frac{\log n}{(\log\log n)^2}$, and the minimum distance between a small vertex and a short cycle is at least $\frac{\log n}{10\log\log n}$.

P8: $G$ has at least one triangle and at least one 5-cycle.

4 The cover time of a typical graph

In this section $G$ denotes a fixed graph with vertex set $[n]$ which satisfies P0–P8, and $u$ is some arbitrary vertex from which a walk is started. For a subgraph $H$ of $G$ let $\mathcal{W}_{u,H}$ denote a random walk on $H$ which starts at vertex $u$ and let $\mathcal{W}_{u,H}(t)$ denote the walk generated by the first $t$ steps. Let $X_{u,H}(t)$ be the vertex reached at step $t$ and let $P^t_{u,H}(v) = \Pr(X_{u,H}(t) = v)$. Let $\pi_{u,H}(v)$ be the steady state probability of the random walk $\mathcal{W}_{u,H}$. Note that properties P1 and P8 guarantee that the Markov chain associated with the walk is irreducible and aperiodic and therefore it has a steady state. For an unbiased random walk on a connected graph $H$ with $m_H$ edges,
$$\pi_H(v) = \pi_{u,H}(v) = \frac{d_H(v)}{2m_H},$$
where $d_H(v)$ denotes degree in $H$. Let $H = H(v) = G[V\setminus\{v\}]$ if $v$ is not a neighbour of a vertex $w$ of degree 1, and let $H(v) = G[V\setminus\{v,w\}]$ if $v$ has a neighbour $w$ of degree 1. Note that P2 rules out a vertex having two neighbours of degree 1. For a subgraph $H$ let $N_H(v)$ be the neighbourhood of $v$ in $H$, i.e. $N_H(v) = N_G(v)\cap V(H)$. When $H = G$ we drop the $H$ from the above notation, and often drop the $u$ as well.

Lemma 2. Let $G$ be typical. Then there exists a sufficiently large constant $K > 0$ such that if $\tau_0 = K\log n$ then for all $v\in V$, for all $u, x \in H = H(v)$, and after $t \ge \tau_0$ steps,
$$\left|P^t_{u,H}(x) - \pi_{u,H}(x)\right| = O(n^{-10}). \qquad (3)$$
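To illustrate the kind of estimate in Lemma 2 (a numerical sketch of ours, not the paper's method; the graph size and constants are arbitrary, and the sampled graph is assumed connected), the code below builds the lazy walk on a small random graph and prints $\max_x|P^t_u(x)-\pi(x)|$ for increasing $t$. The decay is geometric, so $O(\log n)$ steps give an inverse-polynomial error.

    # Numerical illustration (ours) of geometric convergence of a lazy walk.
    import numpy as np

    def lazy_walk_matrix(adj_matrix):
        """Transition matrix of the lazy walk: stay put w.p. 1/2, else move to a
        uniformly random neighbour."""
        deg = adj_matrix.sum(axis=1)
        step = adj_matrix / deg[:, None]
        return 0.5 * (np.eye(len(deg)) + step)

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        n, p = 200, 0.05
        A = np.triu(rng.random((n, n)) < p, k=1).astype(float)
        A = A + A.T                      # symmetric 0/1 adjacency matrix
        deg = A.sum(axis=1)
        assert deg.min() > 0, "resample: illustration assumes no isolated vertices"
        P = lazy_walk_matrix(A)
        pi = deg / deg.sum()             # stationary distribution d(v)/2m
        dist = np.zeros(n); dist[0] = 1.0
        for t in range(1, 61):
            dist = dist @ P
            if t % 10 == 0:
                print(t, np.abs(dist - pi).max())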

Proof. The conductance $\Phi$ of the walk $\mathcal{W}_{u,H}$ is defined by
$$\Phi(\mathcal{W}_{u,H}) = \min\left\{\frac{e_H(S,\bar S)}{d_H(S)} : S\subseteq V(H),\ \pi(S)\le\tfrac12\right\}.$$
It follows from P3 that the conductance $\Phi$ of the walk $\mathcal{W}_{u,H}$ satisfies $\Phi \ge \frac16$. Now it follows from Jerrum and Sinclair [10] that
$$\left|P^t_{u,H}(x) - \pi_{u,H}(x)\right| = O\!\left(n^{1/2}\Bigl(1-\frac{\Phi^2}{4}\Bigr)^{t}\right). \qquad (4)$$
For sufficiently large $K$, the RHS above will be $O(n^{-10})$ at $\tau_0$.

We remark that there is a technical point here. The result of [10] assumes that the walk is lazy, and only makes a move to a neighbour with probability $1/2$ at any step. This leaves the steady state distribution unchanged, halves the conductance, and (4) remains true. For us it is sufficient simply to keep the walk lazy for $2\tau_0$ steps until it is mixed. This is negligible compared to the cover time.

For $v\ne u \in V$, let $A_t(v)$ be the event that $\mathcal{W}_{u,G}(t)$ does not visit $v$.

Lemma 3.
(a) If $t > 2\tau_0$ and $\delta_v \ge 2$ then
$$\Pr(A_t(v)) \ge \left(1-\frac{1}{\delta_v}\right)^{2}\left(1 - \frac{d(v)}{2m}\Bigl(1+O\bigl(\tfrac1{\log n}\bigr)\Bigr)\right)^{t-2\tau_0}\Pr(A_{2\tau_0}(v))$$
and
$$\Pr(A_t(v)) \le \left(1-\frac{1}{\delta_v}\right)^{-2}\left(1 - \frac{d(v)}{2m}\Bigl(1-O\bigl(\tfrac1{\log n}\bigr)\Bigr)\right)^{t-2\tau_0}\Pr(A_{2\tau_0}(v)).$$
(b) Suppose that $v, v' \in V_*\setminus X$ (see P6) and that $\mathrm{dist}(v,v') > \frac{10\log n}{(\log\log n)^2}$. Then
$$\Pr(A_t(v)\cap A_t(v')) \le \left(1 - \frac{k_*\bigl(1+O(\tfrac1{\log n})\bigr)}{m}\right)^{t-2\tau_0}\Pr\bigl(A_{2\tau_0}(v)\cap A_{2\tau_0}(v')\bigr).$$

Proof. The main idea of the proof is to show that the variation distance between a random walk of length $\tau_0$ on $H$ and a random walk on $G$, conditioned not to visit $v$, is sufficiently small.

(a) Fix $w\ne v$ and $y\in N_H(v)$. Let $\mathcal{W}_k(y)$ denote the set of walks in $H(v)$ which start at $w$, finish at $y$, are of length $2\tau_0$, and which leave a vertex in the neighbourhood $N_H(v)$ exactly $k$ times. (Note that the walk can leave $y\in N_H(v)$ without necessarily leaving $N_H(v)$.) Let $\mathcal{W}_k = \bigcup_y\mathcal{W}_k(y)$ and let $W = (w_0, w_1, \dots, w_{2\tau_0}) \in \mathcal{W}_k(y)$. Let
$$\rho_W = \frac{\Pr(X_{w,G}(s) = w_s,\ s = 0,1,\dots,2\tau_0)}{\Pr(X_{w,H}(s) = w_s,\ s = 0,1,\dots,2\tau_0)}. \qquad (5)$$
Then
$$\left(\frac{\delta_v-1}{\delta_v}\right)^{k} \le \rho_W \le 1 .$$

This is because PrX w,h s = w s X w,h s 1 = w s 1 PrX w,g s = w s X w,g s 1 = w s 1 = { 1 ws 1 / N G v d G w s 1 d G w s 1 1 If E = {X w,g τ v, 0 τ 2τ 0 } then PrE = PrW w,g 2τ 0 = W k 0 W W k = ρ W PrW w,h 2τ 0 = W k 0 W W k where p k = We will show in Lemma 4, below, that k 0 p k δv 1 δ v k W W k PrW w,h 2τ 0 = W = PrW w,h 2τ 0 W k w s 1 N G v p 0 + p 1 + p 2 1 O 1 6 which immediately implies that PrE p 0 + p 1 1 1 + p 2 1 1 2 1 1 2 O 1 δ v δ v δ v Now fix y and write PrX w,g 2τ 0 = y E = k 0 = k 0 W W k y W W k y PrW w,g 2τ 0 = W PrE 1 ρ W PrW w,h 2τ 0 = W PrE 1 Now if p k,y = PrW w,h W k y PrX w,h 2τ 0 = y then = PrW w,h 2τ 0 leaves a vertex of N H v k times X w,h 2τ 0 = y k 0 We will show in Lemma 4, below, that δv 1 k p k,y PrX w,g2τ 0 = y E PrE 1 δ v PrX w,h 2τ 0 = y p 0,y + p 1,y + p 2,y 1 O 1 7 and so δv 1 δ v 2 1 O PrX w,g 2τ 0 = y E PrX w,h 2τ 0 = y δv δ v 1 2 1 + O 6

Taking w as X u,g t 2τ 0 1, and conditioning on A t 2τ0 1v, we deduce that δv 1 δ v 2 1 O PrX u,g t 1 = y A t 1 v PrX w,h 2τ 0 = y δv δ v 1 2 1 + O Therefore 2 δv 1 PrA t v A t 1 v 1 + O P 2τ 0 δ v 1 w,hv y 1 dy y N H v 2 δv 1 dy 1 1 1 = 1 + O δ v 1 2m 2dv + O n 10 dy y N H v 2 δv 1 1 1 + O dv 1 δ v 1 2m 2dv dy y N H v 2 δv 1 dv = 1 + O δ v 1 2m Here we use P2 to see that y N H v Similarly, 1 dy 40dv PrA t v A t 1 v 1 δv 1 δ v 2 1 dv O 2m and the lemma follows immediately Lemma 4 Equations 6,7 are valid Proof Clearly, we only need to prove 7 and so fix y N H v The main idea is to show that a random walk of length 2τ 0 from w to y is close in distribution to a random walk of length τ 0 from w to x followed by the reversal of a random walk W 3 of length τ 0 from y to x, where x is chosen from the the steady state of a walk Each of W 1, W 3 is basically a random walk of length τ 0 and it easy to estimate the number of returns to N H v for such walks Let Wa, b, t denote the set of walks in H from a to b of length t and for W Wa, b, t let PrW = PrW a,h t = W Then for x V H we have PrX w,h τ 0 = x X w,h 2τ 0 = y = and with W 3 equal to the reversal of W 2, W 1 Ww,x,τ 0 W 2 Wx,y,τ 0 = π 1 x,h W 1 Ww,x,τ 0 W 2 Wx,y,τ 0 PrW 1 PrW 2 PrWw, y, 2τ 0 PrW 1 π x,h PrW 2 PrWw, y, 2τ 0 7

$$= \pi^{-1}_{x,H}\pi_{y,H}\,\frac{\displaystyle\sum_{W_1\in\mathcal{W}(w,x,\tau_0)}\ \sum_{W_3\in\mathcal{W}(y,x,\tau_0)}\Pr(W_1)\Pr(W_3)}{\Pr(\mathcal{W}(w,y,2\tau_0))} = \pi^{-1}_{x,H}\pi_{y,H}\,\frac{\Pr(\mathcal{W}(w,x,\tau_0))\Pr(\mathcal{W}(y,x,\tau_0))}{\Pr(\mathcal{W}(w,y,2\tau_0))}$$
$$= \pi^{-1}_{x,H}\pi_{y,H}\,\frac{\bigl(\pi_{x,H}+O(n^{-10})\bigr)^{2}}{\Pr(\mathcal{W}(w,y,2\tau_0))} = \pi_{x,H} + O(n^{-9}).$$
It follows that the variation distance between $X_{w,H}(\tau_0)$ and a vertex chosen from the steady state distribution $\pi_H$ is $O(n^{-8})$.

Now given $x = X_{w,H}(\tau_0)$, $\mathcal{W}_{w,H}(\tau_0)$ is a random walk of length $\tau_0$ from $w$ to $x$, and $W_2 = (x = X_{w,H}(\tau_0), X_{w,H}(\tau_0+1), \dots, y = X_{w,H}(2\tau_0))$ is a random walk of length $\tau_0$ from $x$ to $y$. For $W\in\mathcal{W}(y,x,\tau_0)$ let $Q(W)$ be the probability that $(y, X_{w,H}(2\tau_0-1), \dots, X_{w,H}(\tau_0)) = W$. Then we have
$$Q(W) = (1+O(n^{-8}))\,\frac{\pi_{x,H}\Pr(W^{\mathrm{reverse}})}{\Pr(\mathcal{W}(x,y,\tau_0))} = (1+O(n^{-8}))\,\frac{\pi_{y,H}\Pr(W)}{\Pr(\mathcal{W}(x,y,\tau_0))} = (1+O(n^{-8}))\,\frac{\pi_{x,H}\Pr(W)}{\Pr(\mathcal{W}(y,x,\tau_0))},$$
using reversibility, $\pi_{x,H}\Pr(W^{\mathrm{reverse}}) = \pi_{y,H}\Pr(W)$ and $\pi_{x,H}\Pr(\mathcal{W}(x,y,\tau_0)) = \pi_{y,H}\Pr(\mathcal{W}(y,x,\tau_0))$. Thus if $W = (w_1, w_2, \dots, w_{\tau_0} = x)$ then
$$Q(W\mid X_{w,H}(\tau_0) = x) = (1+O(n^{-8}))\,\frac{\Pr(W)}{\Pr(\mathcal{W}(y,x,\tau_0))},$$
and so the distribution of $W_2^{\mathrm{reverse}}$ is within variation distance $O(n^{-8})$ of that of a random walk of length $\tau_0$ from $y$ to a vertex $x$ chosen with distribution $\pi_H$. Thus the variation distance between the distribution of a random walk of length $2\tau_0$ from $w$ to $y$ and that of $(W_1, W_3\ \text{reversed})$ is $O(n^{-8})$, where $W_1, W_3$ are obtained by (i) choosing $x$ from the steady state distribution and then (ii) choosing a random walk $W_1$ from $w$ to $x$ and a random walk $W_3$ from $y$ to $x$. Furthermore, the variation distance between the distribution of $W_1$ and a random walk of length $\tau_0$ from $w$ is $O(n^{-9})$. Similarly, the variation distance between the distribution of $W_3$ and a random walk of length $\tau_0$ from $y$ is $O(n^{-9})$.

Now consider $W_1$ and let $Z_t$ be the distance of $X_{w,H}(t)$ from $v$. We observe from P2 and P7 that, except for at most one value $\bar a\in J = [1, \frac{\log n}{2(\log\log n)^2}]$, we have
$$\Pr(Z_{t+1} = a+1\mid Z_t = a) \ge 1 - \frac{20}{\log n}, \qquad a\in J\setminus\{\bar a\},$$
and this will enable us to prove
$$\Pr\bigl(W_1\ \text{or}\ W_3\ \text{make a return to}\ N_H(v)\bigr) = O(1/\log n), \qquad (8)$$
which implies (7). Note that a move from $N_H(v)$ to $N_H(v)$ has to be counted as a return here.

To prove (8), let $t'$ be the first time that $W_1$ visits $N_H(v)$. We have to estimate the probability that $W_1$ returns to $N_H(v)$ later on, and so we can assume w.l.o.g. that $w\in N_H(v)$, i.e. $Z_0 = 1$. It follows from P2 and P7 that
$$\Pr(Z_i = i+1,\ i = 1,\dots,6\mid Z_0 = 1) \ge \left(1 - \frac{40}{\log n}\right)^{6}. \qquad (9)$$
To check this, consider two possibilities. Let $N_7(v)$ denote the set of vertices within distance 7 or less of $v$ in $G$.

(a) $N_7(v)$ does not contain a small vertex. Since there is at most one edge joining two vertices in $N_7(v)$, we see that $\Pr(Z_{i+1} > Z_i) \ge 1 - \frac{40}{\log n}$ for $i = 1,\dots,6$, and (9) follows.

(b) On the other hand, if there is a small vertex $x$ in $N_7(v)$ then with probability at least $1 - \frac{20}{\log n}$ the first move from $w$ takes us further away from $x$, and (9) follows as before.

If $Z_3 = 4$ and there is a return to $N_H(v)$ then there exists $\tau\le\tau_0$ such that $Z_\tau = 4$, $Z_{\tau+1} = 3$ and $Z_{\tau+2}\le3$. If there is no small vertex within distance 4 of $v$ then P2 and P7 imply
$$\Pr\bigl(\exists\,\tau\le\tau_0 : Z_\tau = 4,\ Z_{\tau+1} = 3,\ Z_{\tau+2}\le3\bigr) = O\!\left(\frac{\tau_0}{\log^2 n}\right). \qquad (10)$$
If there is a unique small vertex within distance 4 of $v$, and $Z_6 = 7$ and there is a return to $N_H(v)$, then there exists $\tau\le\tau_0$ such that $Z_\tau = 7$, $Z_{\tau+1} = 6$ and $Z_{\tau+2} = 5$ (no short cycles close to $v$ now). We can then argue as in (10) that the probability of this is $O\!\left(\frac{\tau_0}{\log^2 n}\right)$. This completes the proof of part (a) of the lemma.

(b) We simply run through the proof as in (a), replacing $v$ by $v, v'$: $H = H(v,v') = G - \{v,v'\}$, $N_H(v,v') = N_G(v)\cup N_G(v')$. The proof of (7) remains valid because $v, v'$ are far apart.

4.1 The upper bound on cover time

From here on, $A_1, A_2, \dots$ are a sequence of unspecified positive constants. Let
$$t_0 = 2m\log\frac{c}{c-1}.$$
We now prove, for typical graphs, that for any vertex $u\in V$,
$$C_u \le t_0 + o(m). \qquad (11)$$
Let $T_G(u)$ be the time taken to visit every vertex of $G$ by the random walk $\mathcal{W}_u$. Let $U_t$ be the number of vertices of $G$ which have not been visited by $\mathcal{W}_u$ at step $t$. We note the following:
$$\Pr(T_G(u) > t) = \Pr(U_t > 0) \le \min\{1, \mathbf{E}U_t\}, \qquad (12)$$
$$C_u = \mathbf{E}T_G(u) = \sum_{t>0}\Pr(T_G(u) > t). \qquad (13)$$
It follows from (12), (13) that for all $t$,
$$C_u \le t + \sum_{s>t}\mathbf{E}U_s = t + \sum_{v\in V}\sum_{s>t}\Pr(A_s(v)). \qquad (14)$$

Now, by Lemma 3, for $s > 2\tau_0$,
$$\Pr(A_s(v)) \le \left(\frac{\delta_v}{\delta_v-1}\right)^{2}\left(1-\frac{d(v)}{2m}\Bigl(1-\frac{A_1}{\log n}\Bigr)\right)^{s-2\tau_0}\Pr(A_{2\tau_0}(v)) \le 2\exp\!\left(-\frac{s\,d(v)}{2m}\Bigl(1-\frac{A_2}{\log n}\Bigr)\right), \quad\text{if } \delta_v\ge\alpha^*\log n,$$
where $\alpha^*$ is as in P1. Then from P4,
$$\mathbf{E}U_s \le \sum_{i=1}^{3}\sum_{k\in K_i}\ \sum_{\substack{d(v)=k\\ v\notin X}}\Pr(A_s(v)) + \sum_{v\in X}\Pr(A_s(v)) \le \sum_{i=1}^{3}\sum_{k\in K_i}D(k)\exp\!\left(-\frac{sk}{2m}\Bigl(1-\frac{A_2}{\log n}\Bigr)\right) + \sum_{v\in X}\Pr(A_s(v))$$
$$\le T_3(s) + T_1(s) + T_2(s) + T_X(s), \qquad (15)$$
where
$$T_3(s) = 2\sum_{k=1}^{n-1}n\binom{n-1}{k}p^k(1-p)^{n-1-k}\,e^{-\frac{sk}{2m}\left(1-\frac{A_2}{\log n}\right)}, \qquad T_i(s) = \sum_{k\in K_i}D(k)\,e^{-\frac{sk}{2m}\left(1-\frac{A_2}{\log n}\right)},\ \ i = 1,2,$$
and
$$T_X(s) = \sum_{v\in X}\left(\frac{\delta_v}{\delta_v-1}\right)^{2}\left(1-\frac{d(v)}{2m}\Bigl(1-O\bigl(\tfrac1{\log n}\bigr)\Bigr)\right)^{s-2\tau_0} \le \sum_{v\in X}\left(\frac{\delta_v}{\delta_v-1}\right)^{2}\exp\!\left(-\frac{A_3\,s\,d(v)}{2m}\right).$$

Now for $\gamma > 0$,
$$\sum_{s=t_0+1}^{\infty}e^{-\gamma s} = \frac{e^{-\gamma(t_0+1)}}{1-e^{-\gamma}} \le \gamma^{-1}e^{-\gamma t_0}. \qquad (16)$$
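For completeness we record the elementary estimate behind (16) (our remark, not in the original): summing the geometric series and using $1 - e^{-\gamma} \ge \gamma e^{-\gamma}$, i.e. $e^{\gamma} - 1 \ge \gamma$,
$$\sum_{s=t_0+1}^{\infty}e^{-\gamma s} = \frac{e^{-\gamma(t_0+1)}}{1-e^{-\gamma}} \le \frac{e^{-\gamma(t_0+1)}}{\gamma e^{-\gamma}} = \gamma^{-1}e^{-\gamma t_0}.$$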

Let $\lambda = \frac{t_0}{2m}\left(1-\frac{A_2}{\log n}\right)$. Applying (16) we get
$$\sum_{s=t_0+1}^{\infty}T_3(s) \le 6m\sum_{k=1}^{n-1}\frac{n}{k}\binom{n-1}{k}p^k(1-p)^{n-1-k}e^{-k\lambda} \le \frac{7m\,e^{\lambda}}{np}\sum_{k=1}^{n-1}n\binom{n-1}{k+1}p^{k+1}(1-p)^{n-k-2}e^{-(k+1)\lambda}$$
$$\le \frac{7mn}{(c-1)\log n}\,\bigl(1-p+pe^{-\lambda}\bigr)^{n} \le \frac{7mn}{(c-1)\log n}\,e^{-np+np\,e^{-\lambda}} \le \frac{8\,m\,e^{2A_2}}{(c-1)\log n} = o(m). \qquad (17)$$
We have used the estimation
$$np\,e^{-\lambda} = c\log n\left(\frac{c-1}{c}\right)^{1-\frac{A_2}{\log n}} \le (c-1)\log n + 2A_2 .$$
Note that we have used $(c-1)\log\frac{c}{c-1}\le 1$ to get the second line. Continuing we get
$$\sum_{s=t_0+1}^{\infty}T_1(s) \le A_4\,m\log^2 n\sum_{k\in K_1}\frac{1}{k}\left(1+\frac{2A_2}{(c-1)\log n}\right)e^{-k\lambda} = o(m), \qquad (18)$$
since either (i) $\omega\ge\log^{2/3}n$ and $K_1 = \emptyset$, or (ii) $\omega<\log^{2/3}n$ and $e^{-\lambda} \le \log^{-(1-o(1))/3}n$;
$$\sum_{s=t_0+1}^{\infty}T_2(s) \le A_5\,m\log^4 n\sum_{k\in K_2}\frac{1}{k}\,e^{-k\lambda} = o(m), \qquad (19)$$
since either (i) $\omega\ge\log^{2/3}n$, $\min\{k\in K_2\}\ge\log^{1/2}n$ and $|K_2| = O(\log\log n)$, or (ii) $\omega<\log^{2/3}n$ and $e^{-\lambda} \le \log^{-(1-o(1))/3}n$.

Note now that $\delta_v\ge2$, and if $v\in X$ (see P6) then from P2, $d(v)\ge\log n/20$. Thus
$$\sum_{s=t_0+1}^{\infty}T_X(s) \le \sum_{v\in X}\frac{10m}{d(v)}\exp\!\left(-\frac{t_0\,d(v)}{10m}\right) \le \sum_{v\in X}\frac{200m}{\log n}\exp\!\left(-\frac{t_0\log n}{200m}\right) \le \frac{200mn}{\log n}\left(\frac{c-1}{c}\right)^{\log n/100} = o(m), \qquad (20)$$
since either (i) $c\ge1+e^{-500}$ and $X = \emptyset$, or (ii) $c<1+e^{-500}$, in which case we use $\frac{c-1}{c}\le e^{-500}$.

As $C_G = \max_{u\in V}C_u$, the upper bound on $C_G$ now follows from (11), (15), (17), (18), (19), (20) and (14) with $t = t_0$.

4.2 The lower bound on cover time

For any vertex $u$, we can find a set of vertices $S$ such that at time $t_1 = t_0(1-\epsilon)$, $\epsilon\to0$, the probability that the set $S$ is covered by the walk $\mathcal{W}_u$ tends to zero. Hence $T_G(u) > t_1$ whp, which implies that $C_G \ge (1-o(1))\,t_0$.

We construct $S$ as follows. Let $k_*, V_*, B$ be as defined in Property P6. Let $S = V_*\setminus(B\cup X)$, and let
$$\epsilon = \frac{10\log\log n}{(c-1)\log n\,\log\frac{c}{c-1}} = o(1) \qquad\text{and}\qquad \delta = |S|^{-1/3}.$$
Note that
$$\bar D(k_*) = \Omega\!\left(\frac{n^{(c-1)\log\frac{c}{c-1}}}{(c-1)\log n}\right) = \Omega(\log^a n) \qquad (21)$$
for any constant $a>0$. Then P6 implies that $|S| = \Omega(\log^a n)$ for any constant $a>0$.

Now for $v, w\ne u$ let $A_t(v,w)$ be the event that $\mathcal{W}$ has not visited $v$ or $w$ by step $t$. Let $Q\subseteq S$ be given by
$$Q = \{v\in S : \Pr(A_{2\tau_0}(v)) < 1-\delta,\ \text{or}\ \Pr(A_{2\tau_0}(v,w)) < 1-\delta^2\ \text{for some } w\in S\}.$$
Now in time $2\tau_0$, $\mathcal{W}$ can visit at most $2\tau_0+1$ vertices, and so
$$\sum_{v\in V}\Pr(\bar A_{2\tau_0}(v)) \le 2\tau_0+1 \qquad\text{and}\qquad \sum_{v,w\in V}\Pr(\bar A_{2\tau_0}(v,w)) \le \binom{2\tau_0+1}{2}.$$
Thus
$$|Q| \le \frac{2\tau_0+1}{\delta} + \frac{2\tau_0(2\tau_0+1)}{\delta^2} = o(|S|).$$

Therefore, if $S' = S\setminus Q$, then $|S'| \ge \bar D(k_*)/3$. Let $S(t)$ denote the subset of $S'$ which has not been visited by $\mathcal{W}$ by time $t$. Now
$$\mathbf{E}|S(t)| \ge \sum_{v\in S'}\left(1-\Bigl(1+\frac{A_6}{\log n}\Bigr)\frac{k_*}{2m}\right)^{t-2\tau_0}\Pr(A_{2\tau_0}(v)).$$
Setting $t = t_1$ we have
$$\mathbf{E}|S(t_1)| = \Omega\!\left(\frac{n^{(c-1)\log\frac{c}{c-1}}}{(c-1)\log n}\exp\!\left(-\frac{k_*\,t_1}{2m}\right)\right) = \Omega\!\left(\frac{n^{\epsilon(c-1)\log\frac{c}{c-1}}}{(c-1)\log n}\right) = \Omega(\log^9 n). \qquad (22)$$
Let $Y_{v,t}$ be the indicator for the event that $\mathcal{W}_u(t)$ has not visited vertex $v$ at time $t$. As $v, w\in S'$ are not adjacent, and have no common neighbours, when we delete $v, w$ the total degree of $H(v,w)$ is $2m-2d(v)-2d(w)$, and $d(v) = d(w) = k_*$. It follows from Lemma 3(b) that for $v, w\in S'$,
$$\mathbf{E}(Y_{v,t_1}Y_{w,t_1}) = \left(1-\Bigl(1+O\bigl(\tfrac1{\log n}\bigr)\Bigr)\frac{k_*}{m}\right)^{t_1-2\tau_0} \le (1+o(1))\,\mathbf{E}Y_{v,t_1}\,\mathbf{E}Y_{w,t_1}. \qquad (23)$$
It follows therefore from (22) and (23) that
$$\Pr(S(t_1) = \emptyset) \le \frac{\mathbf{E}\bigl(|S(t_1)|\,(|S(t_1)|-1)\bigr)}{(\mathbf{E}|S(t_1)|)^2} + \frac{1}{\mathbf{E}|S(t_1)|} - 1 = o(1).$$

Acknowledgement. We thank Johan Jonasson for pointing out a significant error in an earlier draft.

References

[1] R. Aleliunas, R.M. Karp, R.J. Lipton, L. Lovász and C. Rackoff, Random walks, universal traversal sequences, and the complexity of maze problems, Proceedings of the 20th Annual IEEE Symposium on Foundations of Computer Science (1979) 218-223.

[2] B. Bollobás, Random Graphs, Second edition, Cambridge University Press, 2001.

[3] B. Bollobás, T. Fenner and A.M. Frieze, An algorithm for finding Hamilton paths and cycles in random graphs, Combinatorica 7 (1987) 327-341.

[4] A.Z. Broder and A. Karlin, Bounds on the cover time, Journal of Theoretical Probability 2 (1989) 101-120.

[5] C. Cooper and A.M. Frieze, Crawling on web graphs, to appear in Proceedings of STOC 2002.

[6] P. Erdős and A. Rényi, On random graphs I, Publ. Math. Debrecen 6 (1959) 290-297.

[7] U. Feige, A tight upper bound for the cover time of random walks on graphs, Random Structures and Algorithms 6 (1995) 51-54.

[8] U. Feige, A tight lower bound for the cover time of random walks on graphs, Random Structures and Algorithms 6 (1995) 433-438.

[9] S. Janson, T. Łuczak and A. Ruciński, Random Graphs, Wiley, 2000.

[10] M. Jerrum and A. Sinclair, The Markov chain Monte Carlo method: an approach to approximate counting and integration, in Approximation Algorithms for NP-hard Problems (D. Hochbaum, ed.), PWS, 1996, 482-520.

[11] J. Jonasson, On the cover time of random walks on random graphs, Combinatorics, Probability and Computing 7 (1998) 265-279.

5 Appendix: Typical graph properties

A proof of P0, P1 can be found in Bollobás [2] or Janson, Łuczak and Ruciński [9]. A proof of P2 can be found in [3].

P3: Case of $1 \le s = |S| \le \frac{n}{c\log n}$. We first prove that whp $e_G(S) \le \frac{s\log n}{\log\log n}$, where $e_G(S)$ denotes the number of edges of $G$ inside $S$. Now
$$\Pr\left(\exists S : e_G(S) \ge \frac{s\log n}{\log\log n}\right) \le \sum_{s}\binom{n}{s}\binom{s^2/2}{\frac{s\log n}{\log\log n}}p^{\frac{s\log n}{\log\log n}} \le \sum_s\left[\frac{ne}{s}\left(\frac{cse\log\log n}{2n}\right)^{\frac{\log n}{\log\log n}}\right]^{s} = o(n^{-2}),$$
since $s\le\frac{n}{c\log n}$ gives $\frac{cse\log\log n}{2n}\le\frac{e\log\log n}{2\log n}$.

By property P0 and the definition of $H$, both $G$ and $H$ contain no isolated vertices, and hence $d(S) > 0$. We write $\frac{e_H(S,\bar S)}{d_H(S)} = 1 - \frac{2e_H(S)}{d_H(S)}$. Partition $S$ into sets $S_1$ and $S_2$, where $S_1$ are the vertices of $S$ of degree at most $\log n/20$. Let $T_1$ be the neighbour set of $S_1$ in $S_2$ and let $T_2$ be the neighbour set of $S_1$ in $\bar S$. By properties P1 and P2 the set $S_1$ induces no edges, and the neighbours of vertices of $S_1$ are distinct. Thus
$$\frac{2e_H(S)}{d_H(S)} \le \frac{2\bigl(|T_1| + |S_2|\,\log n/\log\log n\bigr)}{|T_1| + |T_2| + |S_2|\,\log n/20} = o(1).$$
Now use $d_H(S) = e_H(S,\bar S) + 2e_H(S)$.

Case of $\frac{n}{c\log n} \le s \le n/2$.

The expected value of $e_H(S,\bar S)$ is at least $\mu = s(n-s-4)p$. Thus from Chernoff bounds, for fixed $s$,
$$\Pr\bigl(\exists S : e_H(S,\bar S) \le \mu/2\bigr) \le \binom{n}{s}\exp\!\left(-\frac{s(n-s-4)\,c\log n}{9\,n}\right) \le \left(\frac{ne}{s}\right)^{s}\exp\!\left(-\frac{c\,s\log n}{18}\right) = o(n^{-2}).$$
We note that $\mathbf{E}\,d_H(S) = 2\binom{s}{2}p + s(n-s-|L|)p$. Thus
$$\Pr\left(\exists S : d_H(S) \ge \tfrac32\,\mathbf{E}\,d_H(S)\right) \le \binom{n}{s}\exp\!\left(-\frac{c\,s\log n}{18}\right) \le \left(\frac{ne}{s}\,e^{-\frac{c\log n}{20}}\right)^{s} = o(n^{-2}),$$
since $e^{\frac{c\log n}{20}} \ge \frac{ne\log n}{s}$ for $s\ge\frac{n}{c\log n}$. Thus
$$\frac{e_H(S,\bar S)}{d_H(S)} \ge \frac{\tfrac12\,s(n-s-4)p}{\tfrac32\bigl(2\binom{s}{2}p + s(n-s)p\bigr)} \ge \frac16 .$$

P4a: First observe that
$$\Pr\bigl(\exists\,k\in K_0 : D(k) > 0\bigr) \le \sum_{k\in K_0}\bar D(k) \le \frac{|K_0|}{\log^2 n} = O\!\left(\frac1{\log n}\right).$$
Then
$$\Pr\bigl(\exists\,k\in K_1 : D(k) > \log^2 n\bigr) \le \sum_{k\in K_1}\frac{\bar D(k)}{\log^2 n} = O\!\left(\frac1{\log n}\right).$$
Similarly,
$$\Pr\bigl(\exists\,k\in K_2 : D(k) > \log^4 n\bigr) \le \sum_{k\in K_2}\frac{\bar D(k)}{\log^4 n} = O\!\left(\frac1{\log n}\right).$$
A simple calculation gives that for our range of values of $p$,
$$\mathbf{E}\bigl(D(k)(D(k)-1)\bigr) = \bar D(k)^2\left(1 + O\!\left(\frac{\log^2 n}{n}\right)\right).$$
Thus
$$\mathrm{Var}\,D(k) = \bar D(k)\left(1 + O\!\left(\frac{\bar D(k)\log^2 n}{n}\right)\right).$$
Applying the Chebyshev inequality we see that
$$\Pr\Bigl(D(k) \le \tfrac12\bar D(k)\ \text{or}\ D(k) \ge 2\bar D(k)\Bigr) \le \frac{4\,\mathrm{Var}\,D(k)}{\bar D(k)^2} = \frac{4}{\bar D(k)} + O\!\left(\frac{\log^2 n}{n}\right).$$
So, as $|K_3| = O(\log n)$,
$$\Pr\Bigl(\exists\,k\in K_3 : D(k) \le \tfrac12\bar D(k)\ \text{or}\ D(k) \ge 2\bar D(k)\Bigr) = O\!\left(\frac1{\log n}\right).$$

P4b: The sequence $\bar D(k)$, $k\ge0$, is unimodal and
$$\frac{\bar D(k+1)}{\bar D(k)} = \bigl(1+o(1)\bigr)\frac{c\log n}{k+1} \qquad\text{when } k = O(\log n). \qquad (24)$$
Moreover, for $k\le\Delta_0$ there is a positive constant $A = A(k)$, bounded above and below, such that
$$\bar D(k) = A\,n^{1-c}\left(\frac{ce\log n}{k}\right)^{k}k^{-1/2}. \qquad (25)$$
Suppose first that there exists $k\in K_1\cup K_2$ such that $k < \log^{1/2}n$. It follows from (25) that $c-1 < \log^{-1/3}n$; for if $c-1 \ge \log^{-1/3}n$ and $k < \log^{1/2}n$ then $\bar D(k) = o(\log^{-2}n)$. Now suppose that $k\in K_2$ implies $k \ge \log^{1/2}n$. Observe from (25) that both $\bar D\bigl(\lceil(c-\tfrac c3)\log n\rceil\bigr)$ and $\bar D\bigl(\lceil(c+\tfrac c3)\log n\rceil\bigr)$ are much greater than $\log^2 n$. Thus either $k \le (c-\tfrac c3)\log n$ or $k \ge (c+\tfrac c3)\log n$ for every $k\in K_2$. In either case, we see by iterating (24) that $|K_2| = O(\log\log n)$.

P5: This follows immediately from Chernoff bounds.

P6: From (21) we see that $k_* = \lceil(c-1)\log n\rceil\in K_3$ for $c$ constant. That $|V_*| \ge \tfrac12\bar D(k_*)$ now follows from P4a. Now
$$B \subseteq \Bigl\{(v,w)\in V_*^2 : \mathrm{dist}(v,w) \le d_* = \tfrac{10\log n}{(\log\log n)^2}\Bigr\}.$$
Therefore
$$\mathbf{E}|B| \le \bar D(k_*)^2\sum_{k=1}^{d_*}n^{k}p^{k+1} = o\bigl(\bar D(k_*)\bigr),$$
and the second part of P6 follows from the Markov inequality. The third part is a similar first moment calculation.

P7: A proof of similar results can be found in [3].

P8: The expected number of triangles tends to infinity and we can use the Chebyshev inequality to show that one exists whp. The same argument will work for 5-cycles.
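For the reader's convenience, the second moment (Chebyshev) step used here and in P4a, spelled out (our addition): for a non-negative random variable $Z$,
$$\Pr(Z = 0) \le \Pr\bigl(|Z - \mathbf{E}Z| \ge \mathbf{E}Z\bigr) \le \frac{\mathrm{Var}\,Z}{(\mathbf{E}Z)^2},$$
so if $\mathbf{E}Z\to\infty$ and $\mathrm{Var}\,Z = o\bigl((\mathbf{E}Z)^2\bigr)$ then $Z > 0$ whp; this is the form in which the inequality is applied above to the triangle and 5-cycle counts.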