AMERICAN MATHEMATICAL SOCIETY, Volume 73, Number 1, January 1979

SLICING THE CUBE IN $R^n$ AND PROBABILITY
(BOUNDS FOR THE MEASURE OF A CENTRAL CUBE SLICE IN $R^n$ BY PROBABILITY METHODS)

DOUGLAS HENSLEY

Abstract. A cube of dimension $n$ and side 1 is cut by a hyperplane of dimension $n - 1$ through its center. The usual $(n-1)$-measure of the intersection is bounded between 1 and $M$, independent of $n$. The proof uses an inequality for sums of independent random variables.

1. Introduction. If a square or a cube of side 1 is cut by a line (resp. plane) through its center, then the length (resp. area) of the intersection is $\ge 1$. But even for the cube this is not obvious. Motivated perhaps by these examples, Dr. Anton Good conjectured that if a cube of dimension $n$ is sliced by an $(n-1)$-dimensional hyperplane through its center, the usual $(n-1)$-measure of the set of intersection is always $\ge 1$. We prove this and an analogous upper bound. A curious feature of the solution to this straightforward problem of geometry is that it is largely based on probabilistic considerations.

Let us then restate the problem as a question in probability. The original version is this:

(1) Let $C = [-\frac{1}{2}, \frac{1}{2}]^n \subseteq R^n$. Let $B$ be an $(n-1)$-dimensional subspace of $R^n$, and let $m$ denote the usual measure on $B$. Then is $m(B \cap C) \ge 1$?

If some component of the vector normal to $B$ is zero, the problem reduces to one in a lower dimension. So suppose that no component of that normal vector $v$ is zero. Then after reindexing and redirecting the axes we can write

(2) $v = [v_1, v_2, \dots, v_n]$ with $v_1 \ge v_2 \ge \dots \ge v_n = 1$.

Then $B \cap C = \{(x_1, x_2, \dots, x_n) : x_1 v_1 + x_2 v_2 + \dots + x_n v_n = 0$ and $|x_k| \le \frac{1}{2},\ 1 \le k \le n\}$. For $1 \le k \le n-1$ let $X_k$ be a random variable, independent of the others, having constant density $v_k^{-1}$ on $[-\frac{1}{2}v_k, \frac{1}{2}v_k]$. The projection of $B \cap C$ onto $x_n = 0$ is $\{(x_1, x_2, \dots, x_{n-1}) : |x_k| \le \frac{1}{2},\ |\sum_{k=1}^{n-1} v_k x_k| \le \frac{1}{2}\}$, and its measure equals $\mathrm{Prob}(|\sum_{k=1}^{n-1} X_k| \le \frac{1}{2})$. Since $v_n = 1$, projection onto $x_n = 0$ contracts the measure on $B$ by the factor $v_n/|v| = |v|^{-1}$. Thus $m(B \cap C) = |v| \, \mathrm{Prob}(|\sum_{k=1}^{n-1} X_k| \le \frac{1}{2})$.
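This identity is easy to probe numerically. The sketch below (a Monte Carlo estimate; the test vector $v$ is an arbitrary illustration, not taken from the paper) computes $|v|\,\mathrm{Prob}(|X_1 + \dots + X_{n-1}| \le \frac{1}{2})$:

```python
import math
import random

def slice_measure_mc(v, trials=200_000, seed=1):
    """Monte Carlo estimate of m(B ∩ C) = |v| Prob(|X_1 + ... + X_{n-1}| <= 1/2),
    where v = [v_1, ..., v_n] (with v_n = 1) is normal to B and X_k is uniform
    on [-v_k/2, v_k/2]."""
    rng = random.Random(seed)
    norm = math.sqrt(sum(c * c for c in v))
    hits = sum(
        1
        for _ in range(trials)
        if abs(sum((rng.random() - 0.5) * c for c in v[:-1])) <= 0.5
    )
    return norm * hits / trials

# For v = [1, 1] the slice is the diagonal of the unit square: measure sqrt(2),
# and the event |X_1| <= 1/2 always occurs, so the estimate is exact.
print(slice_measure_mc([1.0, 1.0]))
# An arbitrary central slice of the 3-cube; the estimate should come out >= 1.
print(slice_measure_mc([2.0, 1.5, 1.0]))
```

For $v = [2, 1.5, 1]$ the probability can also be computed exactly by convolving two uniform densities, giving $m(B \cap C) = \sqrt{7.25} \cdot \frac{23}{48} \approx 1.29$, consistent with the lower bound proved below.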
This problem is therefore equivalent to:

(3) With $v$, $X_k$ as defined above, is $|v| \, \mathrm{Prob}(|\sum_{k=1}^{n-1} X_k| \le \frac{1}{2}) \ge 1$?

Received by the editors November 18, 1977 and, in revised form, April 27, 1978.
AMS (MOS) subject classifications (1970). Primary 52A20, 52A40, 60E05, 60D05.
Key words and phrases. Thickness of a random variable, hyperplane slicing a cube.
© 1979 American Mathematical Society

Remark. In a discussion of this problem at the Institute for Advanced Study I opined that taking $v = [1, 1, \dots, 1]$ and $n$ large gave a counterexample. Professor Selberg put the problem in its probabilistic form and used
the central limit theorem to show that, for my purported counterexample, $m(B \cap C) \sim (6/\pi)^{1/2}$ as $n \to \infty$. This was a surprise to both of us, as neither had expected a value close to 1.

2. An inequality in probability. Suppose $f$ is the probability density function of a random variable $X$. If $f$ is nondecreasing on $(-\infty, 0]$ and symmetric about 0, we say $X$ is a good random variable.

Definition. If $X$ and $Y$ are independent good random variables and if for all $a > 0$, $\mathrm{Prob}(|X| \le a) \ge \mathrm{Prob}(|Y| \le a)$, then $X$ is more peaked than $Y$ and we write $X \prec Y$.

Theorem 1 (Z. W. Birnbaum [1]). Suppose $X$, $Y$, $Z$, and $Z'$ are independent good random variables, $X \prec Y$, and $Z$ and $Z'$ are identically distributed. Then $X + Z$ and $Y + Z'$ are good, are independent, and $X + Z \prec Y + Z'$.

The immediate corollaries A and B below are the ones we shall use later.

Corollary A. If $X_1$, $X_2$, $Y_1$ and $Y_2$ are independent and good, $X_1 \prec Y_1$, $X_2 \prec Y_2$, then $X_1 + X_2 \prec Y_1 + Y_2$; the sums are independent and both sums are good.

Corollary B. If $X_1, X_2, \dots, X_{n-1}, Y_1, Y_2, \dots, Y_{n-1}$ are independent, all are good, and $X_k \prec Y_k$ for $1 \le k \le n-1$, then with $X_0 = \sum_{k=1}^{n-1} X_k$ and $Y_0 = \sum_{k=1}^{n-1} Y_k$, $X_0$ and $Y_0$ are good and $X_0 \prec Y_0$.

Theorem 1 has since been generalized [4].¹

3. Application to cube slicing.

Theorem 2. Suppose $v_1 \ge v_2 \ge \dots \ge v_n = 1$ and suppose $X_k$, $1 \le k \le n-1$, are independent random variables uniformly distributed on $[-\frac{1}{2}v_k, \frac{1}{2}v_k]$. Let $v = [v_1, v_2, \dots, v_n]$ and let $X_0 = \sum_{k=1}^{n-1} X_k$. Then

(8) $|v| \, \mathrm{Prob}(|X_0| \le \tfrac{1}{2}) \ge 1$.

Proof. Let $Y_k$, $1 \le k \le n-1$, be such that $\{X_k\} \cup \{Y_k\}$ are independent and each $Y_k$ has distribution $\mathcal{N}(0, \sigma_k)$ where $\sigma_k = (2\pi)^{-1/2} v_k$. Let $Y_0 = \sum_{k=1}^{n-1} Y_k$. Then all $X_k$ and $Y_k$ are good, and for each $k$, $X_k \prec Y_k$. Thus by Corollary B, $X_0 \prec Y_0$. Now $Y_0$ is $\mathcal{N}(0, \sigma_0)$ where $\sigma_0^2 = \sum_{k=1}^{n-1} \sigma_k^2$. Thus

(9) $\mathrm{Prob}(|Y_0| \le \tfrac{1}{2}) = (2\pi)^{-1/2} \sigma_0^{-1} \int_{-1/2}^{1/2} \exp(-x^2/2\sigma_0^2)\, dx$.

Let $u = [v_1, v_2, \dots, v_{n-1}]$. Then

(10) $|v| \, \mathrm{Prob}(|Y_0| \le \tfrac{1}{2}) = (|v|/|u|) \int_{-1/2}^{1/2} \exp(-\pi x^2/|u|^2)\, dx$.

¹Also M. Kanter, Unimodality and dominance for symmetric random vectors, Trans. Amer. Math.
Soc. 229 (1977), 65-85.
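The comparison $X_k \prec Y_k$ invoked in the proof above, that a uniform variable on $[-\frac{1}{2}v, \frac{1}{2}v]$ is more peaked than a $\mathcal{N}(0, \sigma)$ variable with $\sigma = (2\pi)^{-1/2} v$ (both densities equal $v^{-1}$ at the origin), can be checked numerically. A minimal sketch (the grid and the value of $v$ are arbitrary choices):

```python
import math

def peak_uniform(a, v):
    """Prob(|X| <= a) for X uniform on [-v/2, v/2]."""
    return min(1.0, 2.0 * a / v)

def peak_normal(a, sigma):
    """Prob(|Y| <= a) for Y normal with mean 0 and standard deviation sigma."""
    return math.erf(a / (sigma * math.sqrt(2.0)))

v = 1.7                                # arbitrary scale for the illustration
sigma = v / math.sqrt(2.0 * math.pi)   # the sigma_k of the proof
for a in (k / 100.0 for k in range(1, 401)):
    # peakedness: the uniform variable wins at every threshold a > 0
    assert peak_uniform(a, v) >= peak_normal(a, sigma)
```

The inequality holds with strict margin for $a > 0$ because $\mathrm{erf}(x) < 2x/\sqrt{\pi}$, which is exactly the comparison of the two distribution functions near 0.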
Since the case $n = 2$ is trivial, we assume here that $n \ge 3$, so that $|u|^2 \ge 2$. Let $1 + s = |v|^2/|u|^2$. Then $s = |u|^{-2} \le \frac{1}{2}$ and the expression in (10) equals

(11) $(1+s)^{1/2} \int_{-1/2}^{1/2} \exp(-\pi s x^2)\, dx = H(s)$, say.

Then $H(0) = 1$, so we can complete the proof by showing $dH/ds > 0$ for $0 < s \le \frac{1}{2}$. We have

(12) $dH/ds = \tfrac{1}{2}(1+s)^{-1/2} \int_{-1/2}^{1/2} \big(1 - 2(1+s)\pi x^2\big) \exp(-\pi s x^2)\, dx$.

By parts, the integral in (12) equals

(13) $2\big(x - \tfrac{2}{3}(1+s)\pi x^3\big)\exp(-\pi s x^2)\Big|_0^{1/2} + 2\int_0^{1/2} 2\pi s x \big(x - \tfrac{2}{3}(1+s)\pi x^3\big)\exp(-\pi s x^2)\, dx$.

Both terms are greater than zero by inspection, since $\frac{2}{3}(1+s)\pi x^2 \le \frac{\pi}{4} < 1$ for $0 \le x \le \frac{1}{2}$ and $0 < s \le \frac{1}{2}$. □

In terms of the original problem (1), Theorem 2 reads as follows.

Theorem 2′. The usual measure of $B \cap C$ is $\ge 1$, and equality occurs only if $B$ is parallel to some face of $C$.

These results do not generalize in any way that is obvious to me to the more general:

Conjecture (A. Good). If $C = [-\frac{1}{2}, \frac{1}{2}]^n$ and $B$ is a vector subspace of $R^n$ with dimension $d < n$, then the usual measure $m$ of $B \cap C$ is $\ge 1$.²

4. Upper bounds. A hyperplane not through the origin may miss the cube $C$ altogether. Thus for nontrivial lower bounds we had to restrict ourselves to central slices. We now drop that restriction.

Theorem 3. If $B'$ is a hyperplane parallel to $B$, the hyperplane through 0, then $m(B' \cap C) \le m(B \cap C)$.

Proof. Let $v = (v_1, v_2, \dots, v_n)$ be a vector normal to $B$. Without loss of generality we may take $v_n = 1$ and all $v_k > 0$ for $1 \le k \le n$. Now let $X_k$ be independent random variables, each uniformly distributed on $[-\frac{1}{2}v_k, \frac{1}{2}v_k]$. If $B' = \{x : \sum x_k v_k = a\}$, then as in §1, $m(B' \cap C) = |v| \, \mathrm{Prob}(|\sum_{k=1}^{n-1} X_k - a| \le \frac{1}{2})$. Now $\sum_{k=1}^{n-1} X_k$ is a good random variable, so for all $a$,

$\mathrm{Prob}\Big(\Big|\sum_{k=1}^{n-1} X_k - a\Big| \le \tfrac{1}{2}\Big) \le \mathrm{Prob}\Big(\Big|\sum_{k=1}^{n-1} X_k\Big| \le \tfrac{1}{2}\Big)$.

On multiplying both sides of the above by $|v|$ we have the desired inequality. □

Remark. Thus we need only find upper bounds for $m(B \cap C)$ where $B$ is an $(n-1)$-dimensional hyperplane through the origin.

²But J. Vaaler has recently proved this conjecture.
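Returning to the proof of Theorem 2: the monotonicity of $H$ asserted there is easy to confirm numerically, since for $s > 0$ the integral in (11) evaluates in closed form to $H(s) = \sqrt{(1+s)/s}\,\operatorname{erf}(\frac{1}{2}\sqrt{\pi s})$. A sketch (the grid is an arbitrary choice):

```python
import math

def H(s):
    """H(s) = (1+s)^(1/2) * integral_{-1/2}^{1/2} exp(-pi*s*x^2) dx, via erf."""
    if s == 0.0:
        return 1.0
    return math.sqrt((1.0 + s) / s) * math.erf(0.5 * math.sqrt(math.pi * s))

values = [H(k / 100.0) for k in range(0, 51)]          # s ranging over [0, 1/2]
assert all(b > a for a, b in zip(values, values[1:]))  # dH/ds > 0 on (0, 1/2]
assert all(x >= 1.0 for x in values)                   # hence H(s) >= H(0) = 1
```

The design point is that $H(s) \ge 1$ for all admissible $s$ is exactly the inequality (8), so the numerical check exercises the whole chain (10)-(13).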
Theorem 4. There exists $M > 0$ such that for any dimension $n$ and any $B$ of dimension $n - 1$, $m(B \cap C) \le M$. $M$ may be taken to be 5.

Proof. Suppose $v = [v_1, v_2, \dots, v_n]$ is a vector normal to $B$, and this time normalize, reindex, and redirect so that $v_1 = 1 \ge v_2 \ge \dots \ge v_n > 0$. Then with $X_k$, $2 \le k \le n$, as usual, $m(B \cap C) = |v| \, \mathrm{Prob}(|\sum_{k=2}^n X_k| \le \frac{1}{2})$. Let $X = \sum_{k=2}^n X_k$. Before proceeding let us agree to take $M \ge 5$. Now if $|v| \le M$ then we are done, as the probability in question is no greater than one. Thus we can assume $|v| > M$, and consequently $n > M^2$ (since each $v_k \le 1$ gives $n \ge |v|^2$). Now

$\mathrm{Prob}(|X| \le \tfrac{1}{2}) = \int_{-1/2}^{1/2} f(s)\, ds \le f(0)$,

where $f$ is the density of $X$ ($X$ is good, so $f$ is maximal at 0). Let $L$ be the Fourier transform operator $Lg(t) = (2\pi)^{-1/2} \int_{-\infty}^{\infty} g(x) e^{-ixt}\, dx$. Let $f_2, f_3, \dots, f_n$ be the density functions of $X_2, X_3, \dots, X_n$ respectively. Then for $2 \le k \le n$,

(14) $Lf_k(t) = (2\pi)^{-1/2} \dfrac{\sin(\frac{1}{2}v_k t)}{\frac{1}{2}v_k t}$.

Let “$*$” denote convolution, so that

$g * h(x) = (2\pi)^{-1/2} \int_{-\infty}^{\infty} g(x-y)\, h(y)\, dy$.

Then the density of a sum of two independent random variables is $(2\pi)^{1/2}$ times the convolution of their densities, and $L(g * h) = Lg \cdot Lh$. Thus by Fourier theory and, in particular, the inversion formula

(15) $f(x) = (2\pi)^{-1/2} \int_{-\infty}^{\infty} Lf(t) e^{ixt}\, dt$,

we have

(16) $f(0) = (2\pi)^{-1} \int_{-\infty}^{\infty} \prod_{k=2}^n \dfrac{\sin(\frac{1}{2}v_k t)}{\frac{1}{2}v_k t}\, dt$.

Therefore we can prove Theorem 4 by showing that

(17) $I = |v| \int_{-\infty}^{\infty} \prod_{k=2}^n \Big| \dfrac{\sin(\frac{1}{2}v_k t)}{\frac{1}{2}v_k t} \Big|\, dt = 2|v| \int_0^{\infty} \prod_{k=2}^n \Big| \dfrac{\sin(\frac{1}{2}v_k t)}{\frac{1}{2}v_k t} \Big|\, dt$

is less than $10\pi$, for then $m(B \cap C) \le |v| f(0) \le I/(2\pi) < 5$. To do this we divide the range of integration into several pieces and bound the contribution of each piece. The first interval is $[0, 4/v_2]$; subsequent intervals are of the form $[4/v_k, 4/v_{k+1}]$, and the last interval is $[4/v_{k^*}, \infty)$, where $2 \le k^* \le n$ is still to be determined. Let $K$ be a constant such that $(\sin u)/u \le \exp(-Ku^2)$ for $0 \le u \le 2$ (one may take $K = 1/6$), and of course $|(\sin u)/u| \le \frac{1}{2}$ for $u \ge 2$. On $[0, 4/v_2]$ every factor satisfies $\frac{1}{2}v_k t \le 2$, so the integrand in (17) is less than $\exp(-\frac{1}{4}Kt^2(|v|^2 - 1))$, and the
first piece contributes to (17) less than

$|v| \int_0^{\infty} \exp(-\tfrac{1}{4}Kt^2(|v|^2-1))\, dt = (\pi/K)^{1/2}\, |v|\, (|v|^2-1)^{-1/2} = (6\pi)^{1/2}\, |v|\, (|v|^2-1)^{-1/2} < 5,$

since $|v| > 5$. On the intervals $[4/v_k, 4/v_{k+1}]$ we have $\frac{1}{2}v_j t \ge 2$ for $j \le k$ and $\frac{1}{2}v_j t \le 2$ for $j > k$, so the integrand of (17) is less than

(18) $\prod_{j=2}^{k} \dfrac{2}{v_j t}\, \exp\Big(-\tfrac{1}{4}K t^2 \sum_{j=k+1}^{n} v_j^2\Big)$.

Let $r_k^2 = \sum_{j=k+1}^{n} v_j^2$. Since $2/(v_j t) \le \frac{1}{2}$ for $t \ge 4/v_k$ and $j \le k$, the bound (18) is at most $p_k(t) = (\frac{1}{2})^{k-1} \exp(-\frac{1}{4}K r_k^2 t^2)$. For $t \ge 4/v_k$, $t^2 \ge 16/v_k^2 + 8(t - 4/v_k)/v_k$ (the tangent line to $t^2$ at $t = 4/v_k$). Thus

$\int_{4/v_k}^{\infty} p_k(t)\, dt \le (\tfrac{1}{2})^{k-1} \exp(-4K r_k^2/v_k^2)\, \dfrac{v_k}{2K r_k^2}.$

Let $k^*$ be the least $k$ such that $r_k^2 \le \frac{1}{4} v_k^2$. Then for $k < k^*$, $r_k^2 > \frac{1}{4}v_k^2$, while $r_{k^*}^2 \le \frac{1}{4}v_{k^*}^2$. Thus for $k < k^*$, with $K = 1/6$,

(19) $\int_{4/v_k}^{\infty} p_k(t)\, dt \le (\tfrac{1}{2})^{k-1} \exp(-\tfrac{2}{3} r_k^2/v_k^2)\, \dfrac{3 v_k}{r_k^2}.$

The sum for $2 \le k < k^*$ of the bounds of (19) is less than $4 r_2^{-1} < 5|v|^{-1}$ (note $r_2^2 = |v|^2 - 1 - v_2^2$ and $|v| > 5$), so the middle intervals contribute less than 5, and the total contribution to (17) from $[-4/v_{k^*}, 4/v_{k^*}]$ is less than $2(5 + 5) = 20$.

The integrand of (17) on $[4/v_{k^*}, \infty)$ is less than $\prod_{j=2}^{k^*} \frac{2}{v_j t} = 2^{k^*-1} t^{-(k^*-1)} \prod_{j=2}^{k^*} v_j^{-1}$. Now by the definition of $k^*$, $v_{k^*-1}^2 < 4 r_{k^*-1}^2 = 4(v_{k^*}^2 + r_{k^*}^2) \le 5 v_{k^*}^2$. By induction we have similarly

(20) $v_{k^*-m}^2 < 5^m v_{k^*}^2$,

so that on summing, $|v|^2 < \frac{1}{4}\, 5^{k^*} v_{k^*}^2$ and $v_{k^*} > 2 \cdot 5^{-k^*/2}\, |v|$; in particular $k^* \ge 3$, since $v_{k^*} \le 1$ and $|v| > 5$. The contribution to (17) from $[4/v_{k^*}, \infty)$ is thus less than

$|v|\, (k^*-2)^{-1}\, 4^{2-k^*}\, 2^{k^*-1}\, v_{k^*}^{k^*-2} \prod_{j=2}^{k^*} v_j^{-1} < 1,$

the last inequality by (20) and the lower bound on $v_{k^*}$. Thus in all, the expression in (17) is less than $20 + 2 = 22$, which is less than $10\pi$. □

Remark. I conjecture that $M = \sqrt{2}$ is possible in Theorem 4. If so, it is clearly the best possible, since the central slice orthogonal to $[1, 1, 0, \dots, 0]$ has measure $\sqrt{2}$.
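Formula (16) also lends itself to direct numerical evaluation. The sketch below (the truncation point and step count are arbitrary choices) computes $f(0)$ for the hypothetical normal vector $v = [1, 1, 1, 1]$: then $X$ is a sum of three uniform variables on $[-\frac{1}{2}, \frac{1}{2}]$, whose density at 0 is $\frac{3}{4}$, and the resulting bound $m(B \cap C) \le |v| f(0) = 2 \cdot \frac{3}{4}$ is comfortably below 5.

```python
import math

def sinc(u):
    """sin(u)/u with the removable singularity at 0 filled in."""
    return 1.0 if u == 0.0 else math.sin(u) / u

def density_at_zero(v_tail, T=200.0, steps=200_000):
    """Trapezoid-rule evaluation of f(0) = (1/pi) * integral_0^T of
    prod_k sinc(v_k * t / 2) dt, i.e. (16) truncated at T (the integrand
    decays like t**-len(v_tail), so the tail beyond T is negligible here)."""
    h = T / steps
    total = 0.0
    for i in range(steps + 1):
        t = i * h
        p = 1.0
        for vk in v_tail:
            p *= sinc(0.5 * vk * t)
        total += p * h * (0.5 if i in (0, steps) else 1.0)
    return total / math.pi

f0 = density_at_zero([1.0, 1.0, 1.0])  # v_2 = v_3 = v_4 = 1; f(0) = 3/4 exactly
bound = 2.0 * f0                       # |v| = 2 for v = [1, 1, 1, 1]
```

For this $v$ the true slice measure is $|v|\,\mathrm{Prob}(|X| \le \frac{1}{2}) = \frac{4}{3}$, so the Fourier bound $|v| f(0) = \frac{3}{2}$ is not far off, and both are far below the worst case the theorem must cover.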
Bibliography

1. Z. W. Birnbaum, On random variables with comparable peakedness, Ann. Math. Statist. 19 (1948), 76-81.
2. L. Breiman, Probability, Addison-Wesley, Reading, Mass., 1968, Chapter 3.
3. W. Rudin, Real and complex analysis, McGraw-Hill, New York, 1966, p. 140.
4. S. Sherman, A theorem on convex sets with applications, Ann. Math. Statist. 26 (1955), 763-766.

Department of Mathematics, Texas A & M University, College Station, Texas 77843