Convergence Rates for Generalized Descents


John Pike
Department of Mathematics
University of Southern California

Submitted: March 13, 2011; Accepted: December 10, 2011; Published: XX
Mathematics Subject Classification: 60F05

Abstract

d-descents are permutation statistics that generalize the notions of descents and inversions. It is known that the distribution of d-descents of permutations of length n satisfies a central limit theorem as n goes to infinity. We provide an explicit formula for the mean and variance of these statistics and obtain bounds on the rate of convergence using Stein's method.

1 Introduction

For π ∈ S_n, the symmetric group on [n] = {1, 2, ..., n}, and 1 ≤ d < n, we say that a pair (i, j) with i < j ≤ i + d and π(i) > π(j) is a d-descent of π, and we write Des_d(π) = |{(i, j) ∈ [n]²: i < j ≤ i + d, π(i) > π(j)}| for the number of d-descents of π. (Note that Des_1(π) = Des(π) is the number of descents in π and Des_{n−1}(π) = Inv(π) is the number of inversions.) These permutation statistics first appeared in [2], where they were related to the Betti numbers of Hessenberg varieties. In 2008, Miklós Bóna used Janson's criterion to show that the distribution of the number of d-descents of permutations of length n converges to a normal distribution as n goes to infinity [1]. In 2004, Jason Fulman provided convergence rates for the cases d = 1 and d = n − 1 using Stein's method techniques [6, 10]. In this paper, we carry out analogous computations to get convergence rates for general d-descents, both for arbitrary fixed values of d and when d grows with n (excluding a certain exceptional regime). Because of a recent theorem due to Adrian Röllin regarding the necessity of exchangeability conditions in Stein's method [9], we are able to avoid some of the technical arguments used in Fulman's proof. We also improve upon Bóna's formula for the variance of d-descents.
Essentially, this paper serves to compile and generalize the results of [1] and [6] and to clarify the underlying arguments for future reference. It also illustrates the utility and applications of Röllin's theorem.

the electronic journal of combinatorics 18 (2011), #P236
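To make the definitions above concrete, here is a small Python sketch (our illustration, not part of the original paper; the function names are our own) that computes Des_d by direct enumeration and checks the two boundary cases noted in the introduction, Des_1 = Des and Des_{n−1} = Inv.

```python
from itertools import permutations

def des_d(pi, d):
    # Number of d-descents of pi: pairs (i, j) with i < j <= i + d
    # and pi[i] > pi[j].  Positions are 0-indexed here.
    n = len(pi)
    return sum(1 for i in range(n)
                 for j in range(i + 1, min(i + d, n - 1) + 1)
                 if pi[i] > pi[j])

def descents(pi):
    # Ordinary descents: positions i with pi[i] > pi[i + 1].
    return sum(1 for i in range(len(pi) - 1) if pi[i] > pi[i + 1])

def inversions(pi):
    # Inversions: pairs i < j with pi[i] > pi[j].
    n = len(pi)
    return sum(1 for i in range(n) for j in range(i + 1, n) if pi[i] > pi[j])

# Des_1 = Des and Des_{n-1} = Inv, as noted in the introduction.
for pi in permutations(range(5)):
    assert des_d(pi, 1) == descents(pi)
    assert des_d(pi, 4) == inversions(pi)
```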

2 Mean and Variance of d-descents

To begin, we define Y_{n,d} to be the random variable on S_n (equipped with the uniform probability measure P) given by Y_{n,d}(π) = Des_d(π). Observe that for n ≥ 2 and 1 ≤ d < n, the number of pairs (i, j) ∈ [n]² with i < j ≤ i + d is

N_{n,d} = d(n − d) + (d − 1) + (d − 2) + ⋯ + 1 = nd − (d² + d)/2.

Ordering such pairs lexicographically - that is, (i, j) < (r, s) if i < r, or i = r and j < s - and indexing them by {(i_k, j_k)}, k = 1, ..., N_{n,d}, we can write Des_d(π) = Σ_{k=1}^{N_{n,d}} 1(π(i_k) > π(j_k)), where 1(π(i) > π(j)) is the indicator of the event {π(i) > π(j)}. Letting X^k_{n,d} be the random variable defined by X^k_{n,d}(π) = 1(π(i_k) > π(j_k)) gives Y_{n,d} = Σ_{k=1}^{N_{n,d}} X^k_{n,d}. Since P(π(i_k) > π(j_k)) = 1/2 for all pairs, the X^k_{n,d} are Bernoulli(1/2), thus E[X^k_{n,d}] = 1/2 and Var(X^k_{n,d}) = 1/4. Accordingly,

E[Y_{n,d}] = Σ_{k=1}^{N_{n,d}} E[X^k_{n,d}] = N_{n,d}/2 = (2nd − d² − d)/4

and

Var(Y_{n,d}) = Σ_{k=1}^{N_{n,d}} Var(X^k_{n,d}) + 2 Σ_{1≤k<l≤N_{n,d}} Cov(X^k_{n,d}, X^l_{n,d}) = N_{n,d}/4 + 2 Σ_{1≤k<l≤N_{n,d}} Cov(X^k_{n,d}, X^l_{n,d}).

Now when {i_k, j_k} ∩ {i_l, j_l} = ∅, X^k_{n,d} and X^l_{n,d} are independent, so Cov(X^k_{n,d}, X^l_{n,d}) = 0. As such, our assumptions on the ordering of the indices imply that the only nonzero summands correspond to the cases i_k = i_l, j_k = j_l, and j_k = i_l. Because Cov(X^k_{n,d}, X^l_{n,d}) = E[X^k_{n,d} X^l_{n,d}] − E[X^k_{n,d}] E[X^l_{n,d}] = E[X^k_{n,d} X^l_{n,d}] − 1/4, we just have to compute E[X^k_{n,d} X^l_{n,d}] for each of these cases and count the number of ways each case can occur.

For the case i_k = i = i_l, we have E[X^k_{n,d} X^l_{n,d}] = P(π(i) > π(j_k), π(i) > π(j_l)) = 1/3, and the number of triples (i, j_k, j_l) ∈ [n]³ with i < j_k < j_l ≤ i + d is

C(d, 2)(n − d) + Σ_{t=2}^{d−1} C(t, 2) = (d² − d)(3n − 2d − 2)/6.

If j_k = j = j_l, then E[X^k_{n,d} X^l_{n,d}] = P(π(j) < π(i_k), π(j) < π(i_l)) = 1/3, and the number of triples (i_k, i_l, j) ∈ [n]³ with i_k < i_l < j ≤ i_k + d is the same as in the i_k = i = i_l case. If i_k < j_k = m = i_l < j_l, then E[X^k_{n,d} X^l_{n,d}] = P(π(i_k) > π(m) > π(j_l)) = 1/6.
Let M denote the number of triples (i_k, m, j_l) ∈ [n]³ with i_k < m ≤ i_k + d and m < j_l ≤ m + d.

If d ≤ n/2, then there are d² choices for (i_k, j_l) when d < m ≤ n − d, there are (m − 1)d choices for (i_k, j_l) when m ≤ d, and there are d(n − m) choices for (i_k, j_l) when m > n − d. Thus if d ≤ n/2, then

M = d²(n − 2d) + 2d Σ_{t=1}^{d−1} t = nd² − d³ − d².

For d > n/2, there are (m − 1)d choices for (i_k, j_l) when m ≤ n − d, there are d(n − m) choices for (i_k, j_l) when m > d, and there are (m − 1)(n − m) choices for (i_k, j_l) when n − d < m ≤ d. Thus if d > n/2, then

M = Σ_{m=n−d+1}^{d} (m − 1)(n − m) + 2d Σ_{t=1}^{n−d−1} t = (6n²d − 6nd² − 12nd − n³ + 3n² − 2n + 2d³ + 6d² + 4d)/6.

The variance is thus

Var(Y_{n,d}) = N_{n,d}/4 + 2 Σ_{1≤k<l≤N_{n,d}} Cov(X^k_{n,d}, X^l_{n,d})
            = N_{n,d}/4 + 2 [(1/12)·2·(d² − d)(3n − 2d − 2)/6 − (1/12)·M]
            = (12nd² + 6nd − 8d³ − 9d² − d − 12M)/72.

Substituting the values of M established in the preceding paragraph gives an explicit formula for Var(Y_{n,d}). To summarize, we have

Theorem 1. If Y_{n,d} is the number of d-descents in a random permutation of length n, then

E[Y_{n,d}] = (2nd − d² − d)/4

and

Var(Y_{n,d}) = (6nd + 4d³ + 3d² − d)/72,  d ≤ n/2;
Var(Y_{n,d}) = (2n³ − 12n²d − 6n² + 24nd² + 30nd + 4n − 12d³ − 21d² − 9d)/72,  d > n/2.
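Theorem 1 can be confirmed for small n by exhaustive enumeration. The following Python sketch (ours, not the paper's) computes the exact mean and variance of Des_d over all of S_n with rational arithmetic and compares them against the formulas above.

```python
from fractions import Fraction
from itertools import permutations

def des_d(pi, d):
    # Pairs (i, j) with i < j <= i + d and pi[i] > pi[j] (0-indexed).
    n = len(pi)
    return sum(1 for i in range(n)
                 for j in range(i + 1, min(i + d, n - 1) + 1)
                 if pi[i] > pi[j])

def moments(n, d):
    # Exact mean and variance of Des_d over all of S_n.
    vals = [des_d(pi, d) for pi in permutations(range(n))]
    m = Fraction(sum(vals), len(vals))
    v = Fraction(sum(x * x for x in vals), len(vals)) - m * m
    return m, v

def theorem1(n, d):
    # The closed forms from Theorem 1.
    mean = Fraction(2 * n * d - d * d - d, 4)
    if 2 * d <= n:
        var = Fraction(6 * n * d + 4 * d**3 + 3 * d**2 - d, 72)
    else:
        var = Fraction(2 * n**3 - 12 * n**2 * d - 6 * n**2 + 24 * n * d**2
                       + 30 * n * d + 4 * n - 12 * d**3 - 21 * d**2 - 9 * d, 72)
    return mean, var

for n in range(2, 7):
    for d in range(1, n):
        assert moments(n, d) == theorem1(n, d)
```

In particular, d = 1 recovers Var(Des) = (n + 1)/12 and d = n − 1 recovers Var(Inv) = n(n − 1)(2n + 5)/72.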

Observe that d = 1 yields Var(Des) = (n + 1)/12 and d = n − 1 gives Var(Inv) = n(n − 1)(2n + 5)/72. The value of the variance in the d ≤ n/2 case is slightly different from that reported in [1] because a term was omitted there when recording the number of pairs. This minor error does not affect the validity of the subsequent results. The author has been unable to find another instance of a general formula for the variance of d-descents in the literature, though as mentioned above, Miklós Bóna worked out the d ≤ n/2 case completely, and the preceding calculations follow the same basic reasoning. The general idea for calculating the variance can also be found in Lemma 2.3.1 in [6], but the calculations there are only carried out for d = 1 and d = n − 1.

3 Convergence Rates

This section is basically a generalization of Fulman's derivation of the rate of convergence for descents and inversions [6]. The crux of his proof relies upon the following theorem due to Rinott and Rotar [8].

Theorem 2 (Rinott and Rotar). Let (W, W′) be an exchangeable pair of real-valued random variables such that E[W′ | W] = (1 − λ)W for some λ ∈ (0, 1). If there exists a constant A such that |W − W′| ≤ A almost surely, then for all x ∈ R,

|P(W ≤ x) − Φ(x)| ≤ (12/λ) √(Var(E[(W′ − W)² | W])) + 48 A³/λ + 8 A²/√λ,

where Φ(x) = (2π)^{−1/2} ∫_{−∞}^{x} e^{−t²/2} dt is the standard normal c.d.f.

A substantial portion of Fulman's argument involves proving that the pair (W, W′) he constructed is indeed exchangeable. The derivation in this paper uses essentially the same pair of random variables, but we are spared the onus of establishing exchangeability thanks to the following adaptation of Rinott and Rotar's result discovered by Adrian Röllin [9].

Theorem 3 (Röllin). Suppose that W and W′ are a pair of real-valued random variables having common law such that E[W] = 0, Var(W) = 1, and E[W′ | W] = (1 − λ)W for some λ ∈ (0, 1). Suppose moreover that there is a constant A such that |W − W′| ≤ A almost surely. Then for all x ∈ R,

|P(W ≤ x) − Φ(x)| ≤ (12/λ) √(Var(E[(W′ − W)² | W])) + 32 A³/λ + 6 A²/√λ,

where Φ is the standard normal c.d.f.

The original version of Röllin's theorem is a little more general. The above is an immediate corollary which is sufficient for our purposes.

To facilitate the ensuing arguments, we define a modified version of Y_{n,d} as follows: For each n ≥ 2, 1 ≤ d < n, consider the real, skew-symmetric n × n matrix M(n, d) = [M_{i,j}(n, d)] given by

M_{i,j}(n, d) = −1 if i < j ≤ i + d;  1 if j < i ≤ j + d;  0 otherwise,

and define the random variable Z_{n,d} by Z_{n,d}(π) = Σ_{i<j} M_{π(i),π(j)}(n, d). (For notational convenience, we will often abbreviate M_{i,j}(n, d) as M_{i,j} when there is no danger of confusion.) Also, define Asc_d(π) = |{(i, j) ∈ [n]²: i < j ≤ i + d, π(i) < π(j)}| to be the number of d-ascents of π, so that N_{n,d} = Des_d(π) + Asc_d(π). Then

Z_{n,d}(π⁻¹) = Σ_{i<j} M_{π⁻¹(i),π⁻¹(j)}(n, d)
            = |{(i, j): i < j, π⁻¹(j) < π⁻¹(i) ≤ π⁻¹(j) + d}| − |{(i, j): i < j, π⁻¹(i) < π⁻¹(j) ≤ π⁻¹(i) + d}|.

Taking r = π⁻¹(i), s = π⁻¹(j) in the first term and s = π⁻¹(i), r = π⁻¹(j) in the second term, we see that

Z_{n,d}(π⁻¹) = |{(r, s): π(s) > π(r), s < r ≤ s + d}| − |{(r, s): π(s) < π(r), s < r ≤ s + d}| = Des_d(π) − Asc_d(π) = 2 Des_d(π) − N_{n,d},

hence Z_{n,d}(π) = 2 Des_d(π⁻¹) − nd + (d² + d)/2.

Writing Ỹ_{n,d}(π) = Y_{n,d}(π⁻¹), we have Z_{n,d} = 2Ỹ_{n,d} − nd + (d² + d)/2. Now Ỹ_{n,d} and Y_{n,d} have the same distribution because replacing π with π⁻¹ merely amounts to relabeling the sample space. As such, E[Ỹ_{n,d}] = E[Y_{n,d}], so it follows from Theorem 1 and the linearity of expectation that E[Z_{n,d}] = 2E[Y_{n,d}] − nd + (d² + d)/2 = 0. Thus if we let

W_{n,d} = Z_{n,d}/√Var(Z_{n,d}) = (Ỹ_{n,d} − (2nd − d² − d)/4)/√Var(Ỹ_{n,d}),

then W_{n,d} has mean zero and variance one. Moreover, since Ỹ_{n,d} and Y_{n,d} have the same distribution, W_{n,d} is distributed as (Y_{n,d} − (2nd − d² − d)/4)/√Var(Y_{n,d}).
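The identity Z_{n,d}(π) = 2 Des_d(π⁻¹) − N_{n,d} can be checked mechanically. The Python sketch below (an illustration we add here, with our own helper names) builds M(n, d) and verifies the identity over all of S_5.

```python
from itertools import permutations

def build_M(n, d):
    # Skew-symmetric matrix: M[i][j] = -1 if i < j <= i + d,
    # +1 if j < i <= j + d, 0 otherwise (0-indexed values).
    M = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i < j <= i + d:
                M[i][j] = -1
            elif j < i <= j + d:
                M[i][j] = 1
    return M

def Z(pi, M):
    # Z_{n,d}(pi) = sum over positions i < j of M[pi(i)][pi(j)].
    n = len(pi)
    return sum(M[pi[i]][pi[j]] for i in range(n) for j in range(i + 1, n))

def des_d(pi, d):
    n = len(pi)
    return sum(1 for i in range(n)
                 for j in range(i + 1, min(i + d, n - 1) + 1)
                 if pi[i] > pi[j])

def inverse(pi):
    inv = [0] * len(pi)
    for pos, val in enumerate(pi):
        inv[val] = pos
    return tuple(inv)

n, d = 5, 2
N = n * d - (d * d + d) // 2          # N_{n,d}, the number of admissible pairs
M = build_M(n, d)
for pi in permutations(range(n)):
    assert Z(pi, M) == 2 * des_d(inverse(pi), d) - N
```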

At this point, we need to construct a complementary random variable W′ in order to apply Theorem 3. To this end, for I ∈ [n] let σ_I ∈ S_n be the cycle σ_I = (I, I + 1, ..., n), and define W′_{n,d} by choosing I uniformly from [n] = {1, 2, ..., n} and setting W′_{n,d}(π) = W_{n,d}(πσ_I), where permutation multiplication is performed from right to left. Concretely, if π is written in one-line notation as π_1, π_2, ..., π_n, then πσ_I = π_1, ..., π_{I−1}, π_{I+1}, ..., π_n, π_I; that is, the I-th entry of π is moved to the end. Notice that W_{n,d} and W′_{n,d} have the same distribution, since right multiplication by σ_I with I chosen uniformly from [n] corresponds to a random-to-end shuffle of π, and successive shuffles define a Markov chain with uniform stationary distribution [3]. Thus choosing π uniformly from S_n, choosing I uniformly from [n], and taking the composition πσ_I is equivalent to choosing a permutation uniformly from S_n. In addition, we have the following lemma.

Lemma 1. E[W′_{n,d} | W_{n,d}] = (1 − 2/n) W_{n,d}.

Proof.

E[W′_{n,d} − W_{n,d} | π] = (1/n) Σ_{i=1}^{n} [W_{n,d}(πσ_i) − W_{n,d}(π)]
                        = (1/(n √Var(Z_{n,d}))) Σ_{i=1}^{n−1} Σ_{j<k} [M_{π(σ_i(j)),π(σ_i(k))} − M_{π(j),π(k)}]

(because σ_n = id). Now (σ_i(j), σ_i(k)) = (j, k) when j, k < i; (σ_i(j), σ_i(k)) = (j, k + 1) when j < i ≤ k < n; (σ_i(j), σ_i(k)) = (j, i) when j < i, k = n; (σ_i(j), σ_i(k)) = (j + 1, k + 1) when i ≤ j < k < n; and (σ_i(j), σ_i(k)) = (j + 1, i) when i ≤ j < n, k = n. Thus, after some careful bookkeeping, we see that for each 1 ≤ i < n,

{(π(σ_i(j)), π(σ_i(k))): j < k} \ {(π(j), π(k)): j < k} = {(π(j + 1), π(i)): j ≥ i} = {(π(j), π(i)): j > i}

and

{(π(j), π(k)): j < k} \ {(π(σ_i(j)), π(σ_i(k))): j < k} = {(π(i), π(k)): k > i},

hence

E[W′_{n,d} − W_{n,d} | π] = (1/(n √Var(Z_{n,d}))) Σ_{i=1}^{n−1} Σ_{j<k} [M_{π(σ_i(j)),π(σ_i(k))} − M_{π(j),π(k)}]
                        = (1/(n √Var(Z_{n,d}))) Σ_{i=1}^{n−1} [Σ_{i<j} M_{π(j),π(i)} − Σ_{i<k} M_{π(i),π(k)}]
                        = −(2/(n √Var(Z_{n,d}))) Σ_{i=1}^{n−1} Σ_{i<j} M_{π(i),π(j)}
                        = −(2/n) · (1/√Var(Z_{n,d})) Σ_{i<j} M_{π(i),π(j)} = −(2/n) W_{n,d},

using the skew-symmetry M_{π(j),π(i)} = −M_{π(i),π(j)}. As W_{n,d} is a function of π and thus is σ(π)-measurable, we have

E[W′_{n,d} − W_{n,d} | W_{n,d}] = E[E[W′_{n,d} − W_{n,d} | π] | W_{n,d}] = E[−(2/n) W_{n,d} | W_{n,d}] = −(2/n) W_{n,d},

hence E[W′_{n,d} | W_{n,d}] = E[W′_{n,d} − W_{n,d} | W_{n,d}] + W_{n,d} = (1 − 2/n) W_{n,d}.

It is worth remarking that this construction of a complementary random variable by applying some shuffling scheme to the input might be useful in analyzing other permutation statistics. Indeed, the preceding arguments show that for any random variable X = X(g) defined on a finite group G with uniform probability measure, if μ is a probability measure on G that is not concentrated on a coset of a subgroup of G (see [4]), W = (X − E[X])/√Var(X), and W′ is defined by W′(g) = W(hg) where h is drawn from μ, then the assumptions of Theorem 3 are satisfied whenever

E[W′ − W | g] = Σ_{h∈G} [W(hg) − W(g)] μ(h) = −λW

for some λ ∈ (0, 1). More generally, since every Markov chain with finite state space Ω and transition kernel K(x, y) = P(X_{n+1} = y | X_n = x) has a random mapping representation - that is, a Λ-valued random variable Z (with distribution μ) and a mapping f: Ω × Λ → Ω such that K(x, y) = P(f(x, Z) = y) - this procedure for finding a complementary random variable applies to all temporally homogeneous, finite state space, ergodic Markov chains by setting W′(ω) = W(f(ω, Z)) where Z is drawn from μ. (See [7] for a discussion of random mapping representations.) If W and W′ are distributed as consecutive steps in a reversible Markov chain in equilibrium, then, in obvious notation, P(W = x, W′ = y) = π(x)K(x, y) = π(y)K(y, x) = P(W = y, W′ = x), hence (W, W′) is an exchangeable pair.
The point here is that Röllin's observations allow one to apply the machinery of Stein's method using a pair of similarly constructed random variables without requiring that the underlying chain be reversible, provided that the above conditions are satisfied.

Of course, one still needs to verify that E[W′ | ω] = (1 − λ)W in order to conclude that E[W′ | W] = E[E[W′ | ω] | W] = E[(1 − λ)W | W] = (1 − λ)W. Because

E[W′ | ω] = E[W(f(ω, Z))] = Σ_{y∈Ω} W(y) P(f(ω, Z) = y) = Σ_{y∈Ω} K(ω, y) W(y),

the condition E[W′ | ω] = (1 − λ)W implies that W is a (right) eigenfunction of K corresponding to the eigenvalue 1 − λ. Thus this method is applicable precisely when the statistics in question are eigenfunctions of a Markov chain for which the underlying distribution is stationary. Lemma 1 and the above observation immediately yield the following corollary.

Corollary 1. For n > 1 and d = 1, ..., n − 1, the functions W_{n,d}(π) are eigenfunctions for the random-to-end shuffle corresponding to the eigenvalue 1 − 2/n.

Lemma 1 says we can take λ = 2/n in the statement of Theorem 3. A sharp value for the A term in Theorem 3 is given by the following lemma, and a corollary gives a simplified version that suffices for the purposes of this paper.

Lemma 2. |W_{n,d} − W′_{n,d}| ≤ d/√Var(Y_{n,d}); that is,

|W_{n,d} − W′_{n,d}| ≤ 6√2 d/√(6nd + 4d³ + 3d² − d),  d ≤ n/2;
|W_{n,d} − W′_{n,d}| ≤ 6√2 d/√(2n³ − 12n²d − 6n² + 24nd² + 30nd + 4n − 12d³ − 21d² − 9d),  d > n/2;

and this is the best bound possible.

Proof. For any permutation π = π_1, π_2, ..., π_n and any I ∈ [n],

|W_{n,d}(πσ_I) − W_{n,d}(π)| = |(Des_d(σ_I⁻¹π⁻¹) − (2nd − d² − d)/4) − (Des_d(π⁻¹) − (2nd − d² − d)/4)|/√Var(Y_{n,d})
                            = |Des_d(σ_I⁻¹π⁻¹) − Des_d(π⁻¹)|/√Var(Y_{n,d}).

Now the number of d-descents in π⁻¹ is equal to the number of pairs (j, k) ∈ [n]² with j < k ≤ j + d such that π⁻¹(j) > π⁻¹(k). Reindexing by m ↦ π(m), we see that this is equal to the number of pairs (j, k) with π_j < π_k ≤ π_j + d such that j > k - that is,

Des_d(π⁻¹) = Σ_{j=1}^{n} |{k: π_j < π_k ≤ π_j + d, k < j}|.

Similarly, the number of d-descents in the inverse of π′ = πσ_I = π_1, ..., π_{I−1}, π_{I+1}, ..., π_n, π_I is

Des_d((π′)⁻¹) = Σ_{j=1}^{n} |{k: π′_j < π′_k ≤ π′_j + d, k < j}|.

Since the relative ordering of the terms in the sequence π′ is the same as in π except that the I-th term of π has become the n-th term of π′, we have |{k: π′_j < π′_k ≤ π′_j + d, k < j}| = |{k: π_j < π_k ≤ π_j + d, k < j}| for every j whose count does not involve π_I; the count decreases by exactly 1 for those j > I with π_j < π_I ≤ π_j + d (of which there are at most d); and the summand corresponding to π_I itself satisfies

|{k < n: π_I < π′_k ≤ π_I + d}| ≤ |{k < I: π_I < π_k ≤ π_I + d}| + d.

Since the first effect only decreases the sum (by at most d in total) and the second effect only increases the summand corresponding to π_I (by at most d), it follows from the above representations of Des_d(π⁻¹) and Des_d((π′)⁻¹) that |Des_d((π′)⁻¹) − Des_d(π⁻¹)| ≤ d. Moreover, the same reasoning shows that when π = id and I = 1, we get Des_d((π′)⁻¹) − Des_d(π⁻¹) = d, so this is the best bound possible. Therefore, we have the tight bound

|W_{n,d}(πσ_I) − W_{n,d}(π)| = |Des_d(σ_I⁻¹π⁻¹) − Des_d(π⁻¹)|/√Var(Y_{n,d}) ≤ d/√Var(Y_{n,d}),

and the result follows from Theorem 1.

As the bound in Lemma 2 is pretty unwieldy, we record the following corollary.

Corollary 2.

|W_{n,d} − W′_{n,d}| ≤ C₁ d^{1/2} n^{−1/2},  d(n) ≤ √n;
|W_{n,d} − W′_{n,d}| ≤ C₂ d^{−1/2},  √n < d(n) ≤ n/2;
|W_{n,d} − W′_{n,d}| ≤ C₃ n^{−1/2},  d(n) > n/2;
|W_{n,d} − W′_{n,d}| ≤ C(d) n^{−1/2},  d fixed;

where C₁, C₂, C₃, and C(d) are universal constants independent of n. Moreover, these bounds are of the best possible order.

Proof. When d ≤ n/2, Theorem 1 gives Var(Y_{n,d}) = (6nd + 4d³ + 3d² − d)/72 = Θ(nd + d³), so d/√Var(Y_{n,d}) = Θ(d/√(nd + d³)). If d(n) ≤ √n, then nd + d³ = Θ(nd), so |W_{n,d} − W′_{n,d}| ≤ C₁ d^{1/2} n^{−1/2} for some constant C₁, and this is the best possible order. When √n < d ≤ n/2, nd + d³ = Θ(d³), so |W_{n,d} − W′_{n,d}| ≤ C₂ d^{−1/2} for some constant C₂, and this is the best possible order. (Note that A = Θ(d/√(nd + d³)) is maximized when d ≈ √n, in which case A = Θ(n^{−1/4}).) Now a little calculus shows that Var(Y_{n,d}) increases monotonically with d for n/2 ≤ d < n, so

n³/144 < (2n³ + 15n² − 2n)/288 = Var(Y_{n,n/2}) ≤ Var(Y_{n,d}) ≤ Var(Y_{n,n−1}) = (2n³ + 3n² − 5n)/72 < n³/24.

Accordingly, for n/2 < d < n, we have √6 n^{−1/2} < d/√Var(Y_{n,d}) < 12 n^{−1/2}. Therefore, when n/2 < d < n, |W_{n,d} − W′_{n,d}| ≤ C₃ n^{−1/2} for some constant C₃, and this is the best possible order.
Finally, for fixed d, the preceding analysis shows that d/√Var(Y_{n,d}) = Θ(n^{−1/2}), so the best possible bound is |W_{n,d} − W′_{n,d}| ≤ C(d) n^{−1/2} for some constant C(d) that does not depend on n.
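The tightness claim in Lemma 2 can also be checked by brute force: for every π ∈ S_6 and every I, a single random-to-end move changes Des_d of the inverse by at most d in absolute value, and the value d is attained. A sketch (ours, not the paper's):

```python
from itertools import permutations

def des_d(pi, d):
    n = len(pi)
    return sum(1 for i in range(n)
                 for j in range(i + 1, min(i + d, n - 1) + 1)
                 if pi[i] > pi[j])

def inverse(pi):
    inv = [0] * len(pi)
    for pos, val in enumerate(pi):
        inv[val] = pos
    return tuple(inv)

def move_to_end(pi, I):
    # pi * sigma_I with 1-indexed I.
    return pi[:I - 1] + pi[I:] + (pi[I - 1],)

n = 6
for d in range(1, n):
    best = 0
    for pi in permutations(range(n)):
        base = des_d(inverse(pi), d)
        for I in range(1, n + 1):
            diff = abs(des_d(inverse(move_to_end(pi, I)), d) - base)
            assert diff <= d      # the bound of Lemma 2
            best = max(best, diff)
    assert best == d              # attained, e.g. by pi = id and I = 1
```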

In order to apply Theorem 3, it remains only to bound Var(E[(W_{n,d} − W′_{n,d})² | W_{n,d}]). To accomplish this task, we first establish a simplifying lemma which can be found in [6], but whose proof is included for the sake of completeness.

Lemma 3. Var(E[(W_{n,d} − W′_{n,d})² | W_{n,d}]) ≤ Var(E[(W_{n,d} − W′_{n,d})² | π]).

Proof. The conditional version of Jensen's inequality states that if φ is convex and E[|X|], E[|φ(X)|] are finite, then φ(E[X | F]) ≤ E[φ(X) | F] (see Section 4.1 in [5]). Taking expectations gives E[φ(E[X | F])] ≤ E[E[φ(X) | F]] = E[φ(X)]. Letting X = E[(W_{n,d} − W′_{n,d})² | π], φ(x) = x², and F = σ(W_{n,d}) ⊆ σ(π), we see that

E[E[(W_{n,d} − W′_{n,d})² | W_{n,d}]²] = E[E[E[(W_{n,d} − W′_{n,d})² | π] | W_{n,d}]²] ≤ E[E[(W_{n,d} − W′_{n,d})² | π]²].

It follows that

Var(E[(W_{n,d} − W′_{n,d})² | W_{n,d}]) = E[E[(W_{n,d} − W′_{n,d})² | W_{n,d}]²] − E[(W_{n,d} − W′_{n,d})²]²
                                      ≤ E[E[(W_{n,d} − W′_{n,d})² | π]²] − E[(W_{n,d} − W′_{n,d})²]²
                                      = Var(E[(W_{n,d} − W′_{n,d})² | π]).

The above lemma and some simplifying observations imply

Lemma 4.

Var(E[(W_{n,d} − W′_{n,d})² | W_{n,d}]) ≤ K₁ d² n^{−3},  d(n) ≤ √n;
Var(E[(W_{n,d} − W′_{n,d})² | W_{n,d}]) ≤ K₂ d^{−2} n^{−1},  √n < d(n) ≤ n/2;
Var(E[(W_{n,d} − W′_{n,d})² | W_{n,d}]) ≤ K₃ n^{−3},  d(n) > n/2;
Var(E[(W_{n,d} − W′_{n,d})² | W_{n,d}]) ≤ K(d) n^{−3},  d fixed;

where K₁, K₂, K₃, and K(d) are universal constants which do not depend on n.

Proof. We see from the proof of Lemma 1 that

E[(W_{n,d} − W′_{n,d})² | π] = (1/n) Σ_{i=1}^{n} [W_{n,d}(πσ_i) − W_{n,d}(π)]²
                            = (4/(n Var(Z_{n,d}))) Σ_{i=1}^{n−1} (Σ_{i<j} M_{π(i),π(j)})²
                            = (1/(n Var(Y_{n,d}))) Σ_{i=1}^{n−1} [Σ_{i<j} M²_{π(i),π(j)} + 2 Σ_{i<j<k} M_{π(i),π(j)} M_{π(i),π(k)}].

The reasoning used in establishing the distribution of Z_{n,d} shows that

Σ_{i<j} M²_{π(i),π(j)} = |{(i, j) ∈ [n]²: i < j, |π(i) − π(j)| ≤ d}| = N_{n,d}

for all π ∈ S_n, so, writing σ²_{n,d} = Var(Y_{n,d}), it follows from Lemma 3 that

Var(E[(W_{n,d} − W′_{n,d})² | W_{n,d}]) ≤ Var(E[(W_{n,d} − W′_{n,d})² | π])
  = (1/(n² σ⁴_{n,d})) Var(N_{n,d} + 2 Σ_{i<j<k} M_{π(i),π(j)} M_{π(i),π(k)})
  = (4/(n² σ⁴_{n,d})) Var(Σ_{i<j<k} M_{π(i),π(j)} M_{π(i),π(k)})
  = (4/(n² σ⁴_{n,d})) Σ_{i₁<j₁<k₁} Σ_{i₂<j₂<k₂} Cov(M_{π(i₁),π(j₁)} M_{π(i₁),π(k₁)}, M_{π(i₂),π(j₂)} M_{π(i₂),π(k₂)}).

Now for all triples (i₁, j₁, k₁), (i₂, j₂, k₂), the products M_{π(i₁),π(j₁)} M_{π(i₁),π(k₁)} and M_{π(i₂),π(j₂)} M_{π(i₂),π(k₂)} have common law, so E[M_{π(i₁),π(j₁)} M_{π(i₁),π(k₁)}] E[M_{π(i₂),π(j₂)} M_{π(i₂),π(k₂)}] ≥ 0, hence

Cov(M_{π(i₁),π(j₁)} M_{π(i₁),π(k₁)}, M_{π(i₂),π(j₂)} M_{π(i₂),π(k₂)}) ≤ E[M_{π(i₁),π(j₁)} M_{π(i₁),π(k₁)} M_{π(i₂),π(j₂)} M_{π(i₂),π(k₂)}] ≤ P(M_{π(i₁),π(j₁)} M_{π(i₁),π(k₁)} M_{π(i₂),π(j₂)} M_{π(i₂),π(k₂)} ≠ 0).

Since M_{π(i₁),π(j₁)} M_{π(i₁),π(k₁)} and M_{π(i₂),π(j₂)} M_{π(i₂),π(k₂)} are independent when {i₁, j₁, k₁} ∩ {i₂, j₂, k₂} = ∅, it follows that Var(E[(W_{n,d} − W′_{n,d})² | W_{n,d}]) is bounded above by

(4/(n² σ⁴_{n,d})) Σ P(M_{π(i₁),π(j₁)} M_{π(i₁),π(k₁)} M_{π(i₂),π(j₂)} M_{π(i₂),π(k₂)} ≠ 0),

the sum running over i₁ < j₁ < k₁ and i₂ < j₂ < k₂ with {i₁, j₁, k₁} ∩ {i₂, j₂, k₂} ≠ ∅. We first observe that when d > n/2, the proof of Corollary 2 shows that n³/144 < σ²_{n,d}, so, since the above sum contains O(n⁵) terms, all of which are at most 1, Var(E[(W_{n,d} − W′_{n,d})² | W_{n,d}]) ≤ K₃ n^{−3} for some constant K₃. As such, we need only worry about the summands for the d ≤ n/2 case. Here we note that the sum can be broken up according to the nature of the intersection {i₁, j₁, k₁} ∩ {i₂, j₂, k₂}, so that P(M_{π(i₁),π(j₁)} M_{π(i₁),π(k₁)} M_{π(i₂),π(j₂)} M_{π(i₂),π(k₂)} ≠ 0) is constant on each of these sets. For example, there are O(n⁵) terms with i₁ = i₂ and {j₁, k₁} ∩ {j₂, k₂} = ∅, and for each such term, the event that the product of the corresponding matrix entries is nonzero is equal to the event that within a random row of M = M(n, d), four off-diagonal entries chosen uniformly without replacement are all nonzero. Letting A_i be the number of nonzero off-diagonal entries in row i of M, we have A_i = min{i − 1, d} + min{d, n − i}, so

P(M_{π(i₁),π(j₁)} M_{π(i₁),π(k₁)} M_{π(i₂),π(j₂)} M_{π(i₂),π(k₂)} ≠ 0)
  = (1/n) Σ_{i=1}^{n} A_i(A_i − 1)(A_i − 2)(A_i − 3)/[(n − 1)(n − 2)(n − 3)(n − 4)]
  = [(n − 2d)·2d(2d − 1)(2d − 2)(2d − 3) + 2 Σ_{i=1}^{d} (i + d − 1)(i + d − 2)(i + d − 3)(i + d − 4)]/[n(n − 1)(n − 2)(n − 3)(n − 4)]
  = (80nd⁴ − 240nd³ + 220nd² − 60nd − 98d⁵ + 180d⁴ + 50d³ − 180d² + 48d)/(5n(n − 1)(n − 2)(n − 3)(n − 4))
  = O(n^{−4} d⁴).

The total contribution of such terms is thus O(nd⁴). The other cases may be handled similarly, and the summands are still seen to contribute no more than O(nd⁴). Now for d(n) ≤ √n, σ²_{n,d} = Θ(nd), so Var(E[(W_{n,d} − W′_{n,d})² | W_{n,d}]) ≤ O(nd⁴)/(n² σ⁴_{n,d}) ≤ K₁ d² n^{−3} for some constant K₁. When √n < d(n) ≤ n/2, σ²_{n,d} = Θ(d³), so that Var(E[(W_{n,d} − W′_{n,d})² | W_{n,d}]) ≤ K₂ d^{−2} n^{−1} for some constant K₂.
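The row counts A_i = min{i − 1, d} + min{d, n − i} used above can be confirmed directly from the definition of M(n, d): row i has a nonzero entry in column j exactly when 0 < |i − j| ≤ d. A quick check (our own code, not from the paper):

```python
def nonzero_row_counts(n, d):
    # Count, for each row i (1-indexed), the columns j != i with |i - j| <= d,
    # i.e. the nonzero off-diagonal entries of M(n, d).
    counts = []
    for i in range(1, n + 1):
        counts.append(sum(1 for j in range(1, n + 1)
                          if j != i and abs(i - j) <= d))
    return counts

for n in range(2, 12):
    for d in range(1, n):
        expected = [min(i - 1, d) + min(d, n - i) for i in range(1, n + 1)]
        assert nonzero_row_counts(n, d) == expected
```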
Finally, since σ²_{n,d} = Θ(nd) for fixed d, we have Var(E[(W_{n,d} − W′_{n,d})² | W_{n,d}]) ≤ O(nd⁴)/(n² σ⁴_{n,d}) = O(d² n^{−3}) when d ≤ √n, and Var(E[(W_{n,d} − W′_{n,d})² | W_{n,d}]) ≤ K₃ n^{−3} when d > n/2; in particular, for all fixed values of d, we have Var(E[(W_{n,d} − W′_{n,d})² | W_{n,d}]) ≤ K(d) n^{−3} for some constant K(d) that does not depend on n.

We are now in a position to bound the rate of convergence of d-descents with

13 Theorem. The number of d-descents in a random permutation of length n satisfies ( P Desd µ n,d σ n,d ) x M 1 d 3 n 1, Φ(x) M nd 3, M 3 n 1, M(d)n 1, d(n) (n) (n) < d(n) n d(n) > n d fixed where Φ is the standard normal c.d.f., M 1, M, M 3, and M(d) are constants which do not depend on n, µ n,d nd d d, and σ n,d Var(Y n,d ) is given by Theorem 1. Proof. Apply Theorem 3 to the pair (W n,d, W n,d ) - recalling that W n,d is distributed as Y n,d nd d d Var(Yn,d ) Des d µ n,d σ n,d Lemma 1, Corollary 1, and Lemma, respectively. - where λ, A, and Var(E[(W n,d W n,d) W n,d ]) are given by Examination of the d(n) n cases shows that this method does not yield useful rates when d(n) Ω(n 1 3 ) O(n 3 ) : In order that the statement is not completely vacuous, the rates must be o(1). Bóna s paper shows that a central limit theorem holds in these cases, but the rates remain unknown. In each case, the 1 Var(E[(W λ n,d W n,d) W n,d ]) term and the A λ term are of the same order, so, since the bounds on A were tight, the bounds from Lemma are sufficient for getting the best possible order using the pair (W n,d, W n,d ). The largest term in each case comes from A3. λ Conclusion For the most part, we were able to successfully extend the work of Fulman in [6] to the more general setting of d-descents. In particular, Theorem shows that the distribution of d-descents in a random permutation converges to the normal distribution on the order of 1 n when d is fixed or d(n) Θ(n). When d(n) o(n), we still get a central limit theorem with an error term unless d(n) Ω(n 1 3 ) O(n 3 ). In this case, we need to appeal to Bóna s work using Janson s criterion to get asymptotic normality and the rate is unknown. The d(n) problem in this regime is that has a spike around d(n) n. 
The only way around this obstacle using the methods of this paper would be to find another pair of random variables related to Des_d that satisfies the assumptions of Theorem 3 and yields a larger value of λ and/or a smaller value of A. The author has been unable to find such a pair, and at present it is not clear whether the rates for the d(n) ≈ √n cases derived in this paper are artifacts of the methodology or a true reflection of the actual dependence of the

rates on the growth of d(n). The bounds in the d(n) = Ω(n^{1/3}) ∩ O(n^{2/3}) cases are certainly not optimal, and it would not be surprising to learn that a convergence rate of order O(n^{−1/2}) holds for all choices of d(n), though it seems that another approach is needed to determine whether this is the case.

Beyond broadening the known results on the distribution of d-descents of a random permutation and establishing a relationship between these statistics and certain eigenfunctions of the random-to-end shuffle, this paper has demonstrated a method that may be useful for getting rates in other cases involving real-valued statistics defined on groups. For example, the construction of the pair (W, W′) outlined in the remark following Lemma 1 may be relevant to some statistics involving metrics on groups. Indeed, Des_{n−1}(π) = Inv(π) = τ(π), the Kendall tau distance from π to the identity (see [3], Ch. 6). It might also yield central limit theorems for statistics concerning group representations. In fact, as mentioned in the discussion following Lemma 1, the method applies more generally to the study of eigenfunctions (corresponding to real eigenvalues) of ergodic Markov chains, so there are many potential applications of the technique. Finally, the preceding analysis shows that Röllin's theorem can greatly simplify many arguments using Stein's method by removing the condition of exchangeability, and thus extends the reach of this powerful tool.

Acknowledgement

The author would like to thank Jason Fulman for suggesting the problem and proposing the general plan of attack. This paper would not have been written if not for his advice and encouragement. Thanks also to the anonymous referee for their helpful comments and suggestions. In particular, the calculation relating Z_{n,d} to Des_d is much nicer because of their input.

References

[1] M. Bóna, Generalized descents and normality. Electronic Journal of Combinatorics 15 (2008), no. 1.

[2] F. De Mari and M.A. Shayman, Generalized Eulerian numbers and the topology of the Hessenberg variety of a matrix. Acta Appl. Math. 12 (1988), no. 3.

[3] P. Diaconis, Group Representations in Probability and Statistics, Institute of Mathematical Statistics Lecture Notes - Monograph Series, Vol. 11, 1988.

[4] P. Diaconis, Random walks on groups: characters and geometry, Groups St Andrews 2001 in Oxford, Vol. 1, Cambridge University Press, 2003.

[5] R. Durrett, Probability: Theory and Examples, 3rd ed., Brooks/Cole Publishing Company, 2005.

[6] J. Fulman, Stein's method and non-reversible Markov chains. Stein's Method: Expository Lectures and Applications, IMS Lecture Notes - Monograph Series, Vol. 46, 2004.

[7] D. Levin, Y. Peres, and E. Wilmer, Markov Chains and Mixing Times, American Mathematical Society, 2009.

[8] Y. Rinott and V. Rotar, On coupling constructions with rates in the CLT for dependent summands with applications to the antivoter model and weighted U-statistics, Ann. Appl. Probab. 7 (1997).

[9] A. Röllin, A note on the exchangeability condition in Stein's method, Statistics and Probability Letters 78 (2008).

[10] C. Stein, Approximate Computation of Expectations, Institute of Mathematical Statistics Lecture Notes - Monograph Series, Vol. 7, 1986.


Introduction to Probability

Introduction to Probability LECTURE NOTES Course 6.041-6.431 M.I.T. FALL 2000 Introduction to Probability Dimitri P. Bertsekas and John N. Tsitsiklis Professors of Electrical Engineering and Computer Science Massachusetts Institute

More information

The expansion of random regular graphs

The expansion of random regular graphs The expansion of random regular graphs David Ellis Introduction Our aim is now to show that for any d 3, almost all d-regular graphs on {1, 2,..., n} have edge-expansion ratio at least c d d (if nd is

More information

Inversions within restricted fillings of Young tableaux

Inversions within restricted fillings of Young tableaux Inversions within restricted fillings of Young tableaux Sarah Iveson Department of Mathematics University of California, Berkeley, 970 Evans Hall Berkeley, CA 94720-3840 siveson@math.berkeley.edu Submitted:

More information

arxiv: v1 [math.co] 22 May 2014

arxiv: v1 [math.co] 22 May 2014 Using recurrence relations to count certain elements in symmetric groups arxiv:1405.5620v1 [math.co] 22 May 2014 S.P. GLASBY Abstract. We use the fact that certain cosets of the stabilizer of points are

More information

The initial involution patterns of permutations

The initial involution patterns of permutations The initial involution patterns of permutations Dongsu Kim Department of Mathematics Korea Advanced Institute of Science and Technology Daejeon 305-701, Korea dskim@math.kaist.ac.kr and Jang Soo Kim Department

More information

A simple branching process approach to the phase transition in G n,p

A simple branching process approach to the phase transition in G n,p A simple branching process approach to the phase transition in G n,p Béla Bollobás Department of Pure Mathematics and Mathematical Statistics Wilberforce Road, Cambridge CB3 0WB, UK b.bollobas@dpmms.cam.ac.uk

More information

Asymptotically optimal induced universal graphs

Asymptotically optimal induced universal graphs Asymptotically optimal induced universal graphs Noga Alon Abstract We prove that the minimum number of vertices of a graph that contains every graph on vertices as an induced subgraph is (1 + o(1))2 (

More information

Notes 6 : First and second moment methods

Notes 6 : First and second moment methods Notes 6 : First and second moment methods Math 733-734: Theory of Probability Lecturer: Sebastien Roch References: [Roc, Sections 2.1-2.3]. Recall: THM 6.1 (Markov s inequality) Let X be a non-negative

More information

Introduction to Markov Chains and Riffle Shuffling

Introduction to Markov Chains and Riffle Shuffling Introduction to Markov Chains and Riffle Shuffling Nina Kuklisova Math REU 202 University of Chicago September 27, 202 Abstract In this paper, we introduce Markov Chains and their basic properties, and

More information

215 Problem 1. (a) Define the total variation distance µ ν tv for probability distributions µ, ν on a finite set S. Show that

215 Problem 1. (a) Define the total variation distance µ ν tv for probability distributions µ, ν on a finite set S. Show that 15 Problem 1. (a) Define the total variation distance µ ν tv for probability distributions µ, ν on a finite set S. Show that µ ν tv = (1/) x S µ(x) ν(x) = x S(µ(x) ν(x)) + where a + = max(a, 0). Show that

More information

3 Integration and Expectation

3 Integration and Expectation 3 Integration and Expectation 3.1 Construction of the Lebesgue Integral Let (, F, µ) be a measure space (not necessarily a probability space). Our objective will be to define the Lebesgue integral R fdµ

More information

THE LECTURE HALL PARALLELEPIPED

THE LECTURE HALL PARALLELEPIPED THE LECTURE HALL PARALLELEPIPED FU LIU AND RICHARD P. STANLEY Abstract. The s-lecture hall polytopes P s are a class of integer polytopes defined by Savage and Schuster which are closely related to the

More information

Zeros of Generalized Eulerian Polynomials

Zeros of Generalized Eulerian Polynomials Zeros of Generalized Eulerian Polynomials Carla D. Savage 1 Mirkó Visontai 2 1 Department of Computer Science North Carolina State University 2 Google Inc (work done while at UPenn & KTH). Geometric and

More information

Notes 15 : UI Martingales

Notes 15 : UI Martingales Notes 15 : UI Martingales Math 733 - Fall 2013 Lecturer: Sebastien Roch References: [Wil91, Chapter 13, 14], [Dur10, Section 5.5, 5.6, 5.7]. 1 Uniform Integrability We give a characterization of L 1 convergence.

More information

Bipartite decomposition of random graphs

Bipartite decomposition of random graphs Bipartite decomposition of random graphs Noga Alon Abstract For a graph G = (V, E, let τ(g denote the minimum number of pairwise edge disjoint complete bipartite subgraphs of G so that each edge of G belongs

More information

G(t) := i. G(t) = 1 + e λut (1) u=2

G(t) := i. G(t) = 1 + e λut (1) u=2 Note: a conjectured compactification of some finite reversible MCs There are two established theories which concern different aspects of the behavior of finite state Markov chains as the size of the state

More information

Stein s Method: Distributional Approximation and Concentration of Measure

Stein s Method: Distributional Approximation and Concentration of Measure Stein s Method: Distributional Approximation and Concentration of Measure Larry Goldstein University of Southern California 36 th Midwest Probability Colloquium, 2014 Stein s method for Distributional

More information

Notes 18 : Optional Sampling Theorem

Notes 18 : Optional Sampling Theorem Notes 18 : Optional Sampling Theorem Math 733-734: Theory of Probability Lecturer: Sebastien Roch References: [Wil91, Chapter 14], [Dur10, Section 5.7]. Recall: DEF 18.1 (Uniform Integrability) A collection

More information

MARKOV CHAINS AND HIDDEN MARKOV MODELS

MARKOV CHAINS AND HIDDEN MARKOV MODELS MARKOV CHAINS AND HIDDEN MARKOV MODELS MERYL SEAH Abstract. This is an expository paper outlining the basics of Markov chains. We start the paper by explaining what a finite Markov chain is. Then we describe

More information

ACI-matrices all of whose completions have the same rank

ACI-matrices all of whose completions have the same rank ACI-matrices all of whose completions have the same rank Zejun Huang, Xingzhi Zhan Department of Mathematics East China Normal University Shanghai 200241, China Abstract We characterize the ACI-matrices

More information

Small subgraphs of random regular graphs

Small subgraphs of random regular graphs Discrete Mathematics 307 (2007 1961 1967 Note Small subgraphs of random regular graphs Jeong Han Kim a,b, Benny Sudakov c,1,vanvu d,2 a Theory Group, Microsoft Research, Redmond, WA 98052, USA b Department

More information

EIGENVECTORS FOR A RANDOM WALK ON A LEFT-REGULAR BAND

EIGENVECTORS FOR A RANDOM WALK ON A LEFT-REGULAR BAND EIGENVECTORS FOR A RANDOM WALK ON A LEFT-REGULAR BAND FRANCO SALIOLA Abstract. We present a simple construction of the eigenvectors for the transition matrices of random walks on a class of semigroups

More information

Notes on Poisson Approximation

Notes on Poisson Approximation Notes on Poisson Approximation A. D. Barbour* Universität Zürich Progress in Stein s Method, Singapore, January 2009 These notes are a supplement to the article Topics in Poisson Approximation, which appeared

More information

INTRODUCTION TO MARKOV CHAIN MONTE CARLO

INTRODUCTION TO MARKOV CHAIN MONTE CARLO INTRODUCTION TO MARKOV CHAIN MONTE CARLO 1. Introduction: MCMC In its simplest incarnation, the Monte Carlo method is nothing more than a computerbased exploitation of the Law of Large Numbers to estimate

More information

March 1, Florida State University. Concentration Inequalities: Martingale. Approach and Entropy Method. Lizhe Sun and Boning Yang.

March 1, Florida State University. Concentration Inequalities: Martingale. Approach and Entropy Method. Lizhe Sun and Boning Yang. Florida State University March 1, 2018 Framework 1. (Lizhe) Basic inequalities Chernoff bounding Review for STA 6448 2. (Lizhe) Discrete-time martingales inequalities via martingale approach 3. (Boning)

More information

Optimisation Problems for the Determinant of a Sum of 3 3 Matrices

Optimisation Problems for the Determinant of a Sum of 3 3 Matrices Irish Math. Soc. Bulletin 62 (2008), 31 36 31 Optimisation Problems for the Determinant of a Sum of 3 3 Matrices FINBARR HOLLAND Abstract. Given a pair of positive definite 3 3 matrices A, B, the maximum

More information

THE MEASURE-THEORETIC ENTROPY OF LINEAR CELLULAR AUTOMATA WITH RESPECT TO A MARKOV MEASURE. 1. Introduction

THE MEASURE-THEORETIC ENTROPY OF LINEAR CELLULAR AUTOMATA WITH RESPECT TO A MARKOV MEASURE. 1. Introduction THE MEASURE-THEORETIC ENTROPY OF LINEAR CELLULAR AUTOMATA WITH RESPECT TO A MARKOV MEASURE HASAN AKIN Abstract. The purpose of this short paper is to compute the measure-theoretic entropy of the onedimensional

More information

Packing Ferrers Shapes

Packing Ferrers Shapes Packing Ferrers Shapes Noga Alon Miklós Bóna Joel Spencer February 22, 2002 Abstract Answering a question of Wilf, we show that if n is sufficiently large, then one cannot cover an n p(n) rectangle using

More information

THE LINDEBERG-FELLER CENTRAL LIMIT THEOREM VIA ZERO BIAS TRANSFORMATION

THE LINDEBERG-FELLER CENTRAL LIMIT THEOREM VIA ZERO BIAS TRANSFORMATION THE LINDEBERG-FELLER CENTRAL LIMIT THEOREM VIA ZERO BIAS TRANSFORMATION JAINUL VAGHASIA Contents. Introduction. Notations 3. Background in Probability Theory 3.. Expectation and Variance 3.. Convergence

More information

Sampling Contingency Tables

Sampling Contingency Tables Sampling Contingency Tables Martin Dyer Ravi Kannan John Mount February 3, 995 Introduction Given positive integers and, let be the set of arrays with nonnegative integer entries and row sums respectively

More information

Section 9.1. Expected Values of Sums

Section 9.1. Expected Values of Sums Section 9.1 Expected Values of Sums Theorem 9.1 For any set of random variables X 1,..., X n, the sum W n = X 1 + + X n has expected value E [W n ] = E [X 1 ] + E [X 2 ] + + E [X n ]. Proof: Theorem 9.1

More information

Rectangular Young tableaux and the Jacobi ensemble

Rectangular Young tableaux and the Jacobi ensemble Rectangular Young tableaux and the Jacobi ensemble Philippe Marchal October 20, 2015 Abstract It has been shown by Pittel and Romik that the random surface associated with a large rectangular Young tableau

More information

Spring 2014 Advanced Probability Overview. Lecture Notes Set 1: Course Overview, σ-fields, and Measures

Spring 2014 Advanced Probability Overview. Lecture Notes Set 1: Course Overview, σ-fields, and Measures 36-752 Spring 2014 Advanced Probability Overview Lecture Notes Set 1: Course Overview, σ-fields, and Measures Instructor: Jing Lei Associated reading: Sec 1.1-1.4 of Ash and Doléans-Dade; Sec 1.1 and A.1

More information

Mod-φ convergence II: dependency graphs

Mod-φ convergence II: dependency graphs Mod-φ convergence II: dependency graphs Valentin Féray (joint work with Pierre-Loïc Méliot and Ashkan Nikeghbali) Institut für Mathematik, Universität Zürich Summer school in Villa Volpi, Lago Maggiore,

More information

Compression in the Space of Permutations

Compression in the Space of Permutations Compression in the Space of Permutations Da Wang Arya Mazumdar Gregory Wornell EECS Dept. ECE Dept. Massachusetts Inst. of Technology Cambridge, MA 02139, USA {dawang,gww}@mit.edu University of Minnesota

More information

SZEMERÉDI S REGULARITY LEMMA FOR MATRICES AND SPARSE GRAPHS

SZEMERÉDI S REGULARITY LEMMA FOR MATRICES AND SPARSE GRAPHS SZEMERÉDI S REGULARITY LEMMA FOR MATRICES AND SPARSE GRAPHS ALEXANDER SCOTT Abstract. Szemerédi s Regularity Lemma is an important tool for analyzing the structure of dense graphs. There are versions of

More information

Oberwolfach workshop on Combinatorics and Probability

Oberwolfach workshop on Combinatorics and Probability Apr 2009 Oberwolfach workshop on Combinatorics and Probability 1 Describes a sharp transition in the convergence of finite ergodic Markov chains to stationarity. Steady convergence it takes a while to

More information

The Game of Normal Numbers

The Game of Normal Numbers The Game of Normal Numbers Ehud Lehrer September 4, 2003 Abstract We introduce a two-player game where at each period one player, say, Player 2, chooses a distribution and the other player, Player 1, a

More information

Off-diagonal hypergraph Ramsey numbers

Off-diagonal hypergraph Ramsey numbers Off-diagonal hypergraph Ramsey numbers Dhruv Mubayi Andrew Suk Abstract The Ramsey number r k (s, n) is the minimum such that every red-blue coloring of the k- subsets of {1,..., } contains a red set of

More information

MASSACHUSETTS INSTITUTE OF TECHNOLOGY 6.265/15.070J Fall 2013 Lecture 9 10/2/2013. Conditional expectations, filtration and martingales

MASSACHUSETTS INSTITUTE OF TECHNOLOGY 6.265/15.070J Fall 2013 Lecture 9 10/2/2013. Conditional expectations, filtration and martingales MASSACHUSETTS INSTITUTE OF TECHNOLOGY 6.265/15.070J Fall 2013 Lecture 9 10/2/2013 Conditional expectations, filtration and martingales Content. 1. Conditional expectations 2. Martingales, sub-martingales

More information

The Free Central Limit Theorem: A Combinatorial Approach

The Free Central Limit Theorem: A Combinatorial Approach The Free Central Limit Theorem: A Combinatorial Approach by Dennis Stauffer A project submitted to the Department of Mathematical Sciences in conformity with the requirements for Math 4301 (Honour s Seminar)

More information

Exchangeable Bernoulli Random Variables And Bayes Postulate

Exchangeable Bernoulli Random Variables And Bayes Postulate Exchangeable Bernoulli Random Variables And Bayes Postulate Moulinath Banerjee and Thomas Richardson September 30, 20 Abstract We discuss the implications of Bayes postulate in the setting of exchangeable

More information

A = A U. U [n] P(A U ). n 1. 2 k(n k). k. k=1

A = A U. U [n] P(A U ). n 1. 2 k(n k). k. k=1 Lecture I jacques@ucsd.edu Notation: Throughout, P denotes probability and E denotes expectation. Denote (X) (r) = X(X 1)... (X r + 1) and let G n,p denote the Erdős-Rényi model of random graphs. 10 Random

More information

A NOTE ON THE COMPLETE MOMENT CONVERGENCE FOR ARRAYS OF B-VALUED RANDOM VARIABLES

A NOTE ON THE COMPLETE MOMENT CONVERGENCE FOR ARRAYS OF B-VALUED RANDOM VARIABLES Bull. Korean Math. Soc. 52 (205), No. 3, pp. 825 836 http://dx.doi.org/0.434/bkms.205.52.3.825 A NOTE ON THE COMPLETE MOMENT CONVERGENCE FOR ARRAYS OF B-VALUED RANDOM VARIABLES Yongfeng Wu and Mingzhu

More information

Lecture 9. d N(0, 1). Now we fix n and think of a SRW on [0,1]. We take the k th step at time k n. and our increments are ± 1

Lecture 9. d N(0, 1). Now we fix n and think of a SRW on [0,1]. We take the k th step at time k n. and our increments are ± 1 Random Walks and Brownian Motion Tel Aviv University Spring 011 Lecture date: May 0, 011 Lecture 9 Instructor: Ron Peled Scribe: Jonathan Hermon In today s lecture we present the Brownian motion (BM).

More information

4 Sums of Independent Random Variables

4 Sums of Independent Random Variables 4 Sums of Independent Random Variables Standing Assumptions: Assume throughout this section that (,F,P) is a fixed probability space and that X 1, X 2, X 3,... are independent real-valued random variables

More information

Vector, Matrix, and Tensor Derivatives

Vector, Matrix, and Tensor Derivatives Vector, Matrix, and Tensor Derivatives Erik Learned-Miller The purpose of this document is to help you learn to take derivatives of vectors, matrices, and higher order tensors (arrays with three dimensions

More information

µ n 1 (v )z n P (v, )

µ n 1 (v )z n P (v, ) Plan More Examples (Countable-state case). Questions 1. Extended Examples 2. Ideas and Results Next Time: General-state Markov Chains Homework 4 typo Unless otherwise noted, let X be an irreducible, aperiodic

More information

MODERATE DEVIATIONS IN POISSON APPROXIMATION: A FIRST ATTEMPT

MODERATE DEVIATIONS IN POISSON APPROXIMATION: A FIRST ATTEMPT Statistica Sinica 23 (2013), 1523-1540 doi:http://dx.doi.org/10.5705/ss.2012.203s MODERATE DEVIATIONS IN POISSON APPROXIMATION: A FIRST ATTEMPT Louis H. Y. Chen 1, Xiao Fang 1,2 and Qi-Man Shao 3 1 National

More information

DISTRIBUTION OF EIGENVALUES OF REAL SYMMETRIC PALINDROMIC TOEPLITZ MATRICES AND CIRCULANT MATRICES

DISTRIBUTION OF EIGENVALUES OF REAL SYMMETRIC PALINDROMIC TOEPLITZ MATRICES AND CIRCULANT MATRICES DISTRIBUTION OF EIGENVALUES OF REAL SYMMETRIC PALINDROMIC TOEPLITZ MATRICES AND CIRCULANT MATRICES ADAM MASSEY, STEVEN J. MILLER, AND JOHN SINSHEIMER Abstract. Consider the ensemble of real symmetric Toeplitz

More information

25.1 Markov Chain Monte Carlo (MCMC)

25.1 Markov Chain Monte Carlo (MCMC) CS880: Approximations Algorithms Scribe: Dave Andrzejewski Lecturer: Shuchi Chawla Topic: Approx counting/sampling, MCMC methods Date: 4/4/07 The previous lecture showed that, for self-reducible problems,

More information

YOUNG TABLEAUX AND THE REPRESENTATIONS OF THE SYMMETRIC GROUP

YOUNG TABLEAUX AND THE REPRESENTATIONS OF THE SYMMETRIC GROUP YOUNG TABLEAUX AND THE REPRESENTATIONS OF THE SYMMETRIC GROUP YUFEI ZHAO ABSTRACT We explore an intimate connection between Young tableaux and representations of the symmetric group We describe the construction

More information

Lecture notes for Analysis of Algorithms : Markov decision processes

Lecture notes for Analysis of Algorithms : Markov decision processes Lecture notes for Analysis of Algorithms : Markov decision processes Lecturer: Thomas Dueholm Hansen June 6, 013 Abstract We give an introduction to infinite-horizon Markov decision processes (MDPs) with

More information

Stochastic Convergence, Delta Method & Moment Estimators

Stochastic Convergence, Delta Method & Moment Estimators Stochastic Convergence, Delta Method & Moment Estimators Seminar on Asymptotic Statistics Daniel Hoffmann University of Kaiserslautern Department of Mathematics February 13, 2015 Daniel Hoffmann (TU KL)

More information

Existence and Uniqueness

Existence and Uniqueness Chapter 3 Existence and Uniqueness An intellect which at a certain moment would know all forces that set nature in motion, and all positions of all items of which nature is composed, if this intellect

More information

Detailed Proof of The PerronFrobenius Theorem

Detailed Proof of The PerronFrobenius Theorem Detailed Proof of The PerronFrobenius Theorem Arseny M Shur Ural Federal University October 30, 2016 1 Introduction This famous theorem has numerous applications, but to apply it you should understand

More information

Optimal Sequential Selection of a Unimodal Subsequence of a Random Sequence

Optimal Sequential Selection of a Unimodal Subsequence of a Random Sequence University of Pennsylvania ScholarlyCommons Operations, Information and Decisions Papers Wharton Faculty Research 11-2011 Optimal Sequential Selection of a Unimodal Subsequence of a Random Sequence Alessandro

More information

Gaussian vectors and central limit theorem

Gaussian vectors and central limit theorem Gaussian vectors and central limit theorem Samy Tindel Purdue University Probability Theory 2 - MA 539 Samy T. Gaussian vectors & CLT Probability Theory 1 / 86 Outline 1 Real Gaussian random variables

More information

Carries, Shuffling, and An Amazing Matrix

Carries, Shuffling, and An Amazing Matrix Carries, Shuffling, and An Amazing Matrix Persi Diaconis Jason Fulman Submitted on 6/7/08; minor revisions on 9/3/08 Abstract The number of carries when n random integers are added forms a Markov chain

More information

An improved tableau criterion for Bruhat order

An improved tableau criterion for Bruhat order An improved tableau criterion for Bruhat order Anders Björner and Francesco Brenti Matematiska Institutionen Kungl. Tekniska Högskolan S-100 44 Stockholm, Sweden bjorner@math.kth.se Dipartimento di Matematica

More information

Lecture 1 Measure concentration

Lecture 1 Measure concentration CSE 29: Learning Theory Fall 2006 Lecture Measure concentration Lecturer: Sanjoy Dasgupta Scribe: Nakul Verma, Aaron Arvey, and Paul Ruvolo. Concentration of measure: examples We start with some examples

More information

Lecture 7: February 6

Lecture 7: February 6 CS271 Randomness & Computation Spring 2018 Instructor: Alistair Sinclair Lecture 7: February 6 Disclaimer: These notes have not been subjected to the usual scrutiny accorded to formal publications. They

More information

arxiv:math/ v5 [math.ac] 17 Sep 2009

arxiv:math/ v5 [math.ac] 17 Sep 2009 On the elementary symmetric functions of a sum of matrices R. S. Costas-Santos arxiv:math/0612464v5 [math.ac] 17 Sep 2009 September 17, 2009 Abstract Often in mathematics it is useful to summarize a multivariate

More information

AN ASYMPTOTIC BEHAVIOR OF QR DECOMPOSITION

AN ASYMPTOTIC BEHAVIOR OF QR DECOMPOSITION Unspecified Journal Volume 00, Number 0, Pages 000 000 S????-????(XX)0000-0 AN ASYMPTOTIC BEHAVIOR OF QR DECOMPOSITION HUAJUN HUANG AND TIN-YAU TAM Abstract. The m-th root of the diagonal of the upper

More information

P(X 0 = j 0,... X nk = j k )

P(X 0 = j 0,... X nk = j k ) Introduction to Probability Example Sheet 3 - Michaelmas 2006 Michael Tehranchi Problem. Let (X n ) n 0 be a homogeneous Markov chain on S with transition matrix P. Given a k N, let Z n = X kn. Prove that

More information

Entropy and Ergodic Theory Lecture 15: A first look at concentration

Entropy and Ergodic Theory Lecture 15: A first look at concentration Entropy and Ergodic Theory Lecture 15: A first look at concentration 1 Introduction to concentration Let X 1, X 2,... be i.i.d. R-valued RVs with common distribution µ, and suppose for simplicity that

More information

Increments of Random Partitions

Increments of Random Partitions Increments of Random Partitions Şerban Nacu January 2, 2004 Abstract For any partition of {1, 2,...,n} we define its increments X i, 1 i n by X i =1ifi is the smallest element in the partition block that

More information

REVERSALS ON SFT S. 1. Introduction and preliminaries

REVERSALS ON SFT S. 1. Introduction and preliminaries Trends in Mathematics Information Center for Mathematical Sciences Volume 7, Number 2, December, 2004, Pages 119 125 REVERSALS ON SFT S JUNGSEOB LEE Abstract. Reversals of topological dynamical systems

More information

arxiv: v1 [math.co] 3 Nov 2014

arxiv: v1 [math.co] 3 Nov 2014 SPARSE MATRICES DESCRIBING ITERATIONS OF INTEGER-VALUED FUNCTIONS BERND C. KELLNER arxiv:1411.0590v1 [math.co] 3 Nov 014 Abstract. We consider iterations of integer-valued functions φ, which have no fixed

More information

On the mean connected induced subgraph order of cographs

On the mean connected induced subgraph order of cographs AUSTRALASIAN JOURNAL OF COMBINATORICS Volume 71(1) (018), Pages 161 183 On the mean connected induced subgraph order of cographs Matthew E Kroeker Lucas Mol Ortrud R Oellermann University of Winnipeg Winnipeg,

More information

Approximating sparse binary matrices in the cut-norm

Approximating sparse binary matrices in the cut-norm Approximating sparse binary matrices in the cut-norm Noga Alon Abstract The cut-norm A C of a real matrix A = (a ij ) i R,j S is the maximum, over all I R, J S of the quantity i I,j J a ij. We show that

More information

Asymptotic efficiency of simple decisions for the compound decision problem

Asymptotic efficiency of simple decisions for the compound decision problem Asymptotic efficiency of simple decisions for the compound decision problem Eitan Greenshtein and Ya acov Ritov Department of Statistical Sciences Duke University Durham, NC 27708-0251, USA e-mail: eitan.greenshtein@gmail.com

More information

CUTOFF FOR THE STAR TRANSPOSITION RANDOM WALK

CUTOFF FOR THE STAR TRANSPOSITION RANDOM WALK CUTOFF FOR THE STAR TRANSPOSITION RANDOM WALK JONATHAN NOVAK AND ARIEL SCHVARTZMAN Abstract. In this note, we give a complete proof of the cutoff phenomenon for the star transposition random walk on the

More information

Linear Dependence of Stationary Distributions in Ergodic Markov Decision Processes

Linear Dependence of Stationary Distributions in Ergodic Markov Decision Processes Linear ependence of Stationary istributions in Ergodic Markov ecision Processes Ronald Ortner epartment Mathematik und Informationstechnologie, Montanuniversität Leoben Abstract In ergodic MPs we consider

More information

University of Regina. Lecture Notes. Michael Kozdron

University of Regina. Lecture Notes. Michael Kozdron University of Regina Statistics 252 Mathematical Statistics Lecture Notes Winter 2005 Michael Kozdron kozdron@math.uregina.ca www.math.uregina.ca/ kozdron Contents 1 The Basic Idea of Statistics: Estimating

More information

Functions of Several Variables

Functions of Several Variables Functions of Several Variables The Unconstrained Minimization Problem where In n dimensions the unconstrained problem is stated as f() x variables. minimize f()x x, is a scalar objective function of vector

More information

Random Walks Conditioned to Stay Positive

Random Walks Conditioned to Stay Positive 1 Random Walks Conditioned to Stay Positive Bob Keener Let S n be a random walk formed by summing i.i.d. integer valued random variables X i, i 1: S n = X 1 + + X n. If the drift EX i is negative, then

More information

Linear Dimensionality Reduction

Linear Dimensionality Reduction Outline Hong Chang Institute of Computing Technology, Chinese Academy of Sciences Machine Learning Methods (Fall 2012) Outline Outline I 1 Introduction 2 Principal Component Analysis 3 Factor Analysis

More information

Combinatorial Method in the Coset Enumeration. of Symmetrically Generated Groups II: Monomial Modular Representations

Combinatorial Method in the Coset Enumeration. of Symmetrically Generated Groups II: Monomial Modular Representations International Journal of Algebra, Vol. 1, 2007, no. 11, 505-518 Combinatorial Method in the Coset Enumeration of Symmetrically Generated Groups II: Monomial Modular Representations Mohamed Sayed Department

More information

Lecture Notes 5 Convergence and Limit Theorems. Convergence with Probability 1. Convergence in Mean Square. Convergence in Probability, WLLN

Lecture Notes 5 Convergence and Limit Theorems. Convergence with Probability 1. Convergence in Mean Square. Convergence in Probability, WLLN Lecture Notes 5 Convergence and Limit Theorems Motivation Convergence with Probability Convergence in Mean Square Convergence in Probability, WLLN Convergence in Distribution, CLT EE 278: Convergence and

More information