Necessity of low effective dimension


Art B. Owen, Stanford University

October 2002, Orig: July 2002

Abstract

Practitioners have long noticed that quasi-Monte Carlo methods work very well on functions that are nearly superpositions of low dimensional functions. The reason is that the low dimensional coordinate projections of QMC rules can have very good equidistribution properties at sample sizes for which the original rule itself cannot have good equidistribution. This paper explores a converse proposal: that low effective dimension is necessary for QMC to be much better than MC in high dimensions with practical sample sizes.

1 Introduction

In some high dimensional applications quasi-Monte Carlo (QMC) methods of integration give very good results that are not easily explained by the Koksma-Hlawka inequality or other discrepancy bounds. By inspecting the proofs of upper bounds on the error, one might suspect that the asymptotics would set in at a number n of evaluations that grows at least exponentially with the dimension d. Indeed, Sloan and Wozniakowski (1998) show that in a worst case setting, QMC (using lattices) is no better than estimating the integrand by zero until n ≥ 2^d. Yet in numerical examples, particularly some arising in computational finance (Paskov and Traub 1995; Ninomiya and Tezuka 1996), it is sometimes observed that QMC methods in dimensions as high as 360 can provide very good accuracy at practically relevant sample sizes n. One explanation that has been offered is that the integrands may have effective dimension smaller than d. Specifically, the integrand may be nearly a sum of lower dimensional parts, as described in Caflisch, Morokoff, and Owen (1997). Then if the QMC rule used has good equidistribution in its low dimensional coordinate projections, an accurate result is not surprising.

As Tezuka (2002) notes, a second class of high dimensional integrands for which good results have been seen in QMC are certain isotropic integration problems of Capstick and Keister (1996). Such problems reduce to computing the expectation of a function of the norm of a d dimensional spherical Gaussian random vector. Papageorgiou and Traub (1997) report good results for QMC on such problems. Recently Owen (2001) has shown that polynomials in the squared norm have effective dimension no larger than their degree. Furthermore, numerical investigation of a 25 dimensional isotropic function published in Papageorgiou and Traub (1997) shows that it is very nearly a superposition of functions of 3 or fewer of the input variables. Because the isotropic integrands appeared to be also of low effective dimension, it is interesting to conjecture that low effective dimension is somehow a necessary condition for QMC to work well at values of n below those where the discrepancy bounds apply.

It is clear that |Î_n − I| can be small without f necessarily having low superposition dimension. For instance any f continuous on [0, 1]^d, of whatever effective dimension, always has at least one point x with f(x) = I, and so n = 1 is compatible with Î_n = I. We will frame the problem in a way that rules out such cases, because no general purpose integration algorithms are based on finding such an x.

In this paper we find that low effective dimension is necessary for scrambled (0, m, d)-nets to have a much smaller variance than ordinary Monte Carlo, in high dimensions and for practical sample sizes. We do however uncover a surprising free lunch phenomenon: it is possible to have a scrambled net variance of zero on certain nonzero functions of effective dimension m + 1 or m + 2, but this effect requires a sample size n of at least (d − 1)^(d−2).

Section 2 introduces some notation, including the ANOVA decomposition of L²[0, 1]^d.
Section 3 discusses effective dimension, and formula (5) there shows how low effective dimension can lead to upper bounds on quadrature error. We are interested in a converse, which we base on scrambled nets as outlined in Section 4. Theorem 1 there shows that if a scrambled (t, m, d)-net (for 1 ≤ m < d) has variance below 0.01 Γ times that of ordinary Monte Carlo sampling, then the function f is necessarily of superposition dimension at most m. The quantity Γ ≤ 1 is a minimum gain coefficient for the net, and Section 5 describes how far below unity Γ can be, for (0, m, d)-nets. Section 6 discusses the results and considers how generally necessity of low superposition dimension might hold. Section 7 proves some results on minimized gain coefficients.

2 Notation

We consider approximating

    I = ∫_{[0,1]^d} f(x) dx

by

    Î_n = (1/n) Σ_{i=1}^{n} f(x_i)    (1)

for points x_1, ..., x_n ∈ [0, 1]^d. Equation (1) includes the Monte Carlo (MC) and quasi-Monte Carlo (QMC) methods that dominate practice when d is large.

If f ∈ L²[0, 1]^d, then simple Monte Carlo sampling with independent x_i uniformly distributed on [0, 1]^d yields a random Î_n with mean I and variance σ²/n. The error Î_n − I is of order n^(−1/2) in probability, written O_p(n^(−1/2)). The law of the iterated logarithm establishes that the slightly larger error bound Î_n − I = O([n^(−1) log log(n)]^(1/2)) holds with probability 1.

If f ∈ BVHK[0, 1]^d, the space of functions of bounded variation in the sense of Hardy and Krause, then

    |Î_n − I| ≤ D*_n(x_1, ..., x_n) ||f||_HK,    (2)

where D*_n is the star discrepancy of the points. The first indications that QMC could be significantly better than MC arose from constructions of points x_1, ..., x_n for which D*_n = O(n^(−1+ε)). The sample size n at which the asymptotic error rates for QMC should set in is thought to grow exponentially with the dimension d. Sloan and Wozniakowski (1998) show that in a worst case sense QMC is no better than estimating the integrand by zero until n ≥ 2^d. Owen (1997b) finds that scrambled Faure sequences become much better than Monte Carlo at roughly n ≈ d^d, while Owen (1998b) estimates a threshold at roughly n ≈ 4^d for Niederreiter-Xing sequences.

For f ∈ L²[0, 1]^d there is an analysis of variance (ANOVA) decomposition with

    f(x) = Σ_{u ⊆ {1,...,d}} f_u(x).    (3)

The function f_u(x) depends on x = (x_1, ..., x_d) only through the components x_j with j ∈ u. It also satisfies ∫_0^1 f_u(x) dx_j = 0 whenever j ∈ u, for any values of the x_k, k ≠ j. The name ANOVA arises because the variance σ² = ∫ (f(x) − I)² dx of f is attributable to subsets u of the inputs via

    σ² = Σ_u σ²_u,    (4)

where σ²_∅ = 0 and σ²_u = ∫ f_u(x)² dx for u with cardinality |u| > 0. To rule out trivial cases we assume 0 < σ² < ∞.
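As a concrete illustration of the decomposition (3)-(4) (the product function and the weights below are ours, not the paper's), the ANOVA components of f(x) = Π_j (1 + c_j (x_j − 1/2)) with independent uniform inputs are available in closed form, σ²_u = Π_{j∈u} c_j²/12, so identity (4) can be checked directly:

```python
# Sketch (not from the paper): verify sigma^2 = sum_u sigma^2_u for a
# product function whose ANOVA components are known in closed form.
from itertools import combinations

d = 4
c = [1.0, 0.5, 0.25, 0.125]  # hypothetical importance weights

# sigma^2_u = prod_{j in u} c_j^2 / 12 for every nonempty subset u
var_u = {}
for size in range(1, d + 1):
    for u in combinations(range(d), size):
        prod = 1.0
        for j in u:
            prod *= c[j] ** 2 / 12.0
        var_u[u] = prod

total = sum(var_u.values())

# For this f the total variance also equals prod_j (1 + c_j^2/12) - 1.
closed_form = 1.0
for cj in c:
    closed_form *= 1.0 + cj ** 2 / 12.0
closed_form -= 1.0

print(abs(total - closed_form))  # agrees up to rounding error
```

The dictionary var_u holds one variance component per nonempty subset of inputs, which is exactly the bookkeeping that the effective dimension definitions of Section 3 operate on.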

3 Effective dimension

Caflisch, Morokoff, and Owen (1997) define the effective dimension of a function in two senses. The function f has effective dimension s in the superposition sense if

    Σ_{|u| ≤ s} σ²_u ≥ 0.99 σ²,

and it has effective dimension s in the truncation sense if

    Σ_{u ⊆ {1,...,s}} σ²_u ≥ 0.99 σ².

The idea of effective dimension appears in Paskov and Traub (1995), where they remark that certain integrands from finance are not essentially determined by just a small number of leading input variables. Sloan and Wozniakowski (1998) introduce classes of functions in which the importance of each successive variable x_j decays as j increases. Such functions can have small truncation dimension relative to their nominal dimension. The definitions above capture two notions in which f is almost s dimensional. As Caflisch, Morokoff, and Owen (1997) remark, the choice of the 99th percentile is arbitrary. This paper uses the 99th percentile for definiteness' sake. Hickernell (1998) makes the threshold percentile a parameter in the definition.

The quadrature error in a QMC rule x_1, ..., x_n satisfies the bound

    |Î_n − I| ≤ Σ_{|u| > 0} D_{n,|u|}(x_1^u, ..., x_n^u) ||f_u||,    (5)

where x_i^u is the coordinate projection of x_i onto the subset u of input variables, D_{n,|u|} is a discrepancy for n points in [0, 1]^|u|, and ||f_u|| is a compatible norm. A version of formula (5) appears in Caflisch, Morokoff, and Owen (1997) and several versions are given by Hickernell (1998).

The upper bound (5) shows how low superposition dimension can guarantee good performance in QMC. Suppose that the function f belongs to a class in which ||f_u|| is small whenever |u| > s, for 1 ≤ s < d. Then (5) provides for a small error when we use a method with D_{n,|u|} small for |u| ≤ s. Many widely studied discrepancies allow for tight versions of (5). For those discrepancies, given x_1, ..., x_n we may find f such that Î_n − I = Σ_{|u| > 0} D_{n,|u|}(x_1^u, ..., x_n^u) ||f_u|| and ||f|| = Σ_u ||f_u||. Then, given lower bounds on D_{n,|u|} for large |u|, low effective dimension is necessary for Î_n − I to be uniformly small over functions with ||f|| ≤ 1.
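The superposition-sense definition can be sketched in code (a hypothetical helper; the function name and the toy numbers are ours, not the paper's):

```python
# Sketch: smallest s with sum_{|u| <= s} sigma^2_u >= 0.99 sigma^2,
# i.e. the effective dimension in the superposition sense.
def superposition_dimension(var_u, threshold=0.99):
    """var_u maps frozenset subsets u to variance components sigma^2_u."""
    total = sum(var_u.values())
    top = max(len(u) for u in var_u)
    for s in range(1, top + 1):
        if sum(v for u, v in var_u.items() if len(u) <= s) >= threshold * total:
            return s
    return top

# Toy example: 99.5% of the variance sits in singleton subsets, so s = 1.
var_u = {frozenset({1}): 0.7, frozenset({2}): 0.295, frozenset({1, 2}): 0.005}
print(superposition_dimension(var_u))  # -> 1
```

The truncation-sense variant would instead restrict the sum to subsets u contained in {1, ..., s}.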

4 Scrambled nets

Digital nets are defined in Niederreiter (1992). A method of scrambling them was proposed in Owen (1995). The discrepancy of scrambled nets was studied in Hickernell and Yue (2000). The variance formulas for scrambled nets below are from Owen (1997a).

The variance of Î_n when the x_i are scrambled versions of a_i ∈ [0, 1]^d is

    (1/n) Σ_{|u| > 0} Σ_κ Γ_{u,κ} σ²_{u,κ}.

Here Γ_{u,κ} is the gain coefficient corresponding to the subset u ⊆ {1, ..., d} and the vector κ containing |u| nonnegative integers k_r, defined as

    Γ_{u,κ} = (1 / (n (b−1)^|u|)) Σ_{i=1}^{n} Σ_{j=1}^{n} Π_{r ∈ u} (b N_{i,j,r} − W_{i,j,r}),    (6)

where

    N_{i,j,r} = N_{i,j,r}(κ) = 1{ ⌊b^(k_r + 1) a_i^r⌋ = ⌊b^(k_r + 1) a_j^r⌋ }

and

    W_{i,j,r} = W_{i,j,r}(κ) = 1{ ⌊b^(k_r) a_i^r⌋ = ⌊b^(k_r) a_j^r⌋ }

are indicator variables designating narrow and wide matches, respectively, between the components a_i^r and a_j^r. This formula holds for any a_1, ..., a_n ∈ [0, 1]^d, but it can simplify for nets, especially nets for which the quality parameter t is 0. The notation Γ_{u,κ} suppresses the dependence of the gain coefficient on b and m. When this dependence must be made explicit, the notation Γ^{b,m}_{u,κ} will be used.

The variance of Î_n under scrambling is a sum of contributions from every nonempty u and every vector κ ∈ {0, 1, ...}^u. Because σ²_u = Σ_κ σ²_{u,κ}, a method with all gains Γ_{u,κ} = 1 has the same variance as Monte Carlo sampling.

Theorem 1  Let f ∈ L²[0, 1]^d and let Î_n be the quadrature rule (1). Denote the variance of Î_n by Var_snet(Î_n) or Var_mc(Î_n) according as the x_i are a scrambled (t, m, d)-net with 1 ≤ m < d in base b, or simple Monte Carlo, respectively. Suppose that Var_snet(Î_n) ≤ ε Var_mc(Î_n) for 0 < ε < 1. Then the function f satisfies

    Σ_{|u| > m} σ²_u / σ² ≤ ε ( min_{|u| > m, κ ∈ {0,1,...}^u} Γ_{u,κ} )^(−1).

Proof: Under the hypothesis of the theorem,

    ε ≥ Var_snet(Î_n) / Var_mc(Î_n) ≥ Σ_{|u| > m} Σ_κ Γ_{u,κ} σ²_{u,κ} / σ² ≥ ( min_{|u| > m, κ} Γ_{u,κ} ) Σ_{|u| > m} σ²_u / σ².  □

The following corollary is immediate:

Corollary 1  If scrambled (t, m, d)-net integration of f has variance smaller than 0.01 min_{|u| > m, κ} Γ_{u,κ} times that of ordinary Monte Carlo, then f has effective dimension at most m.

5 Lower bounds on gain coefficients

To draw any practical conclusions from Theorem 1 and the Corollary requires lower bounds on the gain coefficients Γ^{b,m}_{u,κ} for |u| > m. From the defining property of a (t, m, d)-net it follows (Owen 1997a) that Γ_{u,κ} = 0 when |u| + |κ| ≤ m − t. When t > 0 and |u| + |κ| > m − t the net property does not uniquely define Γ_{u,κ}. From here on, we restrict attention to the gain coefficients for (t, m, d)-nets with t = 0. For (0, m, d)-nets, it is known that

    Γ_{u,κ} = 1 + (1−b)^(−|u|) [ C(|u|−1, m−k) (−b)^(m−k) − Σ_{j=0}^{m−k} C(|u|, j) (−b)^j ],    (7)

where C(n, r) denotes the binomial coefficient. Here, and in the following, the subscripts u and κ are replaced by |u| and k in expressions for Γ. This reduces clutter and leads to no loss of generality, because for a (0, m, d)-net the gain depends on the subset u and the vector κ only through their cardinality |u| and component sum k = |κ|, respectively.

We suppose also that b is a prime power. Most digital net constructions use prime power bases. Niederreiter (1987) shows how to construct digital nets in more general bases b ≥ 2 from those with prime power bases, but the resulting nets have comparatively small dimension d.

It follows easily from (7) that Γ_{u,k} = 1 for k ≥ m. Also, (0, m, d)-nets in base b can only exist when d ≤ b + 1. Thus for any b and m, the interesting gain coefficients can be arranged into a (b + 1) by m table. As a consequence of Theorem 1 we are particularly interested in

    Γ = Γ^{b,m} = min_{|u| > m, k ≥ 0} Γ^{b,m}_{u,k}.
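A minimal numerical check of the pairwise formula (6), using the two point (0, 1, 2)-net {(0, 1/2), (1/2, 0)} in base b = 2 (the helper name is ours, not the paper's):

```python
# Sketch: evaluate the gain coefficients of formula (6) directly for a
# tiny digital net and confirm the zero / unity patterns described above.
from math import floor

def gain(points, b, u, kappa):
    """Gamma_{u,kappa} of formula (6) for points in [0,1)^d.

    u is a tuple of coordinate indices, kappa the matching tuple of
    nonnegative integers k_r.
    """
    n = len(points)
    total = 0.0
    for a_i in points:
        for a_j in points:
            prod = 1.0
            for r, k in zip(u, kappa):
                narrow = floor(b ** (k + 1) * a_i[r]) == floor(b ** (k + 1) * a_j[r])
                wide = floor(b ** k * a_i[r]) == floor(b ** k * a_j[r])
                prod *= b * narrow - wide
            total += prod
    return total / (n * (b - 1) ** len(u))

net = [(0.0, 0.5), (0.5, 0.0)]  # a (0, 1, 2)-net in base 2, n = b^m = 2

print(gain(net, 2, (0,), (0,)))      # -> 0.0, since |u| + |kappa| <= m
print(gain(net, 2, (0,), (1,)))      # -> 1.0, since |kappa| >= m
print(gain(net, 2, (0, 1), (0, 0)))  # -> 2.0, a gain above unity
```

For independent uniform points, each off-diagonal term of (6) has expectation zero, which is why every gain is 1 in expectation and scrambled-net variance reduces to the Monte Carlo variance when all gains equal 1.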

Table 1: Shown are the gain coefficients for randomized (0, 4, d)-nets in base 16. The value Γ_{|u|,|κ|} appears in the row with |u| on the left and in the column headed by |κ|. The largest relevant value of |u| is 17 because |u| ≤ d ≤ b + 1 = 17. The upper left corner of exact zeros is left blank. For |κ| = 4 the gain is exactly one. The other values have been rounded. Rows 11 through 16 look the same as rows 10 and 17.

For m < d, if Γ is near one then a variance reduction of slightly over 100-fold for scrambled (0, m, d)-nets implies that f has effective dimension at most m. Table 1 shows the gain coefficients for scrambled (0, 4, d)-nets in base 16. These nets have n = 16^4 = 65,536 points in [0, 1]^d. The smallest gain coefficient for |u| > 4 is Γ = Γ_{6,0} = 720896/759375 ≈ 0.9493. Theorem 1 shows that if scrambled net sampling has variance ε times as large as ordinary Monte Carlo sampling, then the fraction of the variance of f due to ANOVA components of dimension 5 or more is at most ε/0.9493 ≈ 1.053 ε. To conclude that f has effective dimension 5 or less we need to find that the scrambled net variance is no larger than 0.009493 times the Monte Carlo variance.

The values Γ^{16,4}_{u,k} in Table 1 below the row for |u| = 4 are alternately above and below unity as |u| or k increases. When |u| > m the gain Γ_{u,k} is at least 1 for odd |u| + k and at most 1 for even |u| + k. Moving from left to right in the table, the gains approach one. When |u| is m plus an odd positive number, the gains decrease to unity as k increases through even values beginning with 0. Similarly, when |u| − m > 0 is odd and k increases through odd values beginning with 1, the gains increase to unity.

When |u| is m plus an even positive number, the trends described above are reversed. Lemma 1 below shows that these monotone trends hold in generality. It follows that the search for Γ can be restricted to cases of the form Γ_{m+2r,0} for r ≥ 1 or Γ_{m+2r+1,1} for r ≥ 0. It also holds in Table 1 that Γ_{m+2r+1,1} ≥ Γ_{m+2r+2,0} when r ≥ 0 and m + 2r + 2 ≤ b + 1. Lemma 2 below shows that this pattern holds in generality, and so the search for the minimum Γ can ordinarily be restricted to Γ_{m+2r+2,0} for r ≥ 0. There is an exception when m = b, for then |u| = m + 2 = b + 2 is inapplicable and so the minimizer must be Γ_{b+1,1}. Finally, the terms Γ_{m+2r+2,0} in Table 1 are nondecreasing as r ≥ 0 increases. This too is a general phenomenon (Lemma 3 below), and so the minimizer is Γ = Γ_{m+2,0} when m ≤ b − 1.

Theorem 2  Let b ≥ 2 be an integer. For a non-negative integer m ≤ b − 1,

    Γ = min_{m < |u| ≤ b+1} min_{k ≥ 0} Γ_{u,k} = Γ_{m+2,0},

and for m = b,

    Γ = min_{m < |u| ≤ b+1} min_{k ≥ 0} Γ_{u,k} = Γ_{m+1,1}.

Proof: See Section 7.

The minimizing gain coefficients simplify. They can be surprisingly small, as the next proposition shows.

Proposition 1  Let b ≥ 2 and m ≥ 0 be integers. Then

    Γ^{b,m}_{m+2,0} = (b/(b−1))^m (1 − m/(b−1)), for m ≤ b − 1, and

    Γ^{b,m}_{m+1,1} = (b/(b−1))^m (1 − m/b), for m ≤ b.

In particular Γ^{b,b−1}_{b+1,0} = Γ^{b,b}_{b+1,1} = 0.

Proof: The result follows after a short manipulation of binomial coefficients. The alternating sum within the square brackets of (7) matches all but |u| + k − m terms of Σ_{j=0}^{|u|} C(|u|, j) (−b)^j, which equals (1−b)^|u| by the binomial theorem. For the cases here |u| + k − m = 2.  □

This proposition provides an astonishing example of an apparently free lunch. For a (0, m, d)-net certain hypercubical subsets of [0, 1]^d are guaranteed to contain a number of points equal to b^m times their volume. We

say these sub-cubes are balanced by the net. For m < d the balanced sub-cubes have at least d − m sides of length 1, and in that sense are less than fully d dimensional. It is a surprise to be able to integrate exactly some fully d dimensional integrands using nets that do not balance any fully d dimensional sub-cubes.

The phenomenon in Proposition 1 does not help with high dimensional problems. First, the smallest gain coefficients are for dimensions only one or two higher than m. Second, the number of points in such nets is n = b^m, which is either b^(b−1) or b^b. Because b ≥ d − 1 this phenomenon does not apply for n < (d − 1)^(d−2).

To further study Γ we suppose that m < b + 1, so that Γ^{b,m} = Γ^{b,m}_{m+2,0}. The smallest values of this gain arise with small b and large m:

Proposition 2  If m ≥ 1 and b ≥ m + 1 then Γ^{b,m}_{m+2,0} increases with b. If b ≥ 2 and 0 ≤ m ≤ b − 1 then Γ^{b,m}_{m+2,0} decreases with m.

Proof: Although Γ^{b,m}_{m+2,0} is defined for integers, we may interpolate b and m by real values. Then

    ∂/∂b log Γ^{b,m}_{m+2,0} = m/b − m/(b−1) + 1/(b−1−m) − 1/(b−1) = m(m+1) / (b(b−1)(b−1−m)) ≥ 0,

and

    ∂/∂m log Γ^{b,m}_{m+2,0} = log(b/(b−1)) − 1/(b−1−m) ≤ 1/(b−1) − 1/(b−1−m) ≤ 0.  □

Very small values of Γ are possible for n ≥ (d − 1)^(d−2). Here we investigate numerically how small Γ can be for large dimensions and practical sample sizes. We restrict attention to Γ^{b,m}_{m+2,0}, as this is the minimum gain when m ≤ b − 1. Consider d ≥ 20 and n ≤ 10^7. For d ≥ 20 the smallest prime power b ≥ d − 1 is b = 19. For b = 19 and n ≤ 10^7 the largest m we can have is m = ⌊log_19(10^7)⌋ = 5. Therefore for d ≥ 20 and n ≤ 10^7, the minimum gain must be at least Γ^{19,5}_{7,0} ≈ 0.946, which is not much below one. Table 2 records the results of similar calculations varying both the lower bound on d and the upper bound on n.
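The calculation above, and the closed form of Proposition 1, can be reproduced from formula (7); a sketch (the function name is ours, not the paper's):

```python
# Sketch: Gamma_{|u|,k} for scrambled (0, m, d)-nets via formula (7),
# checked against Proposition 1 and the d >= 20, n <= 10^7 calculation.
from math import comb, floor, log

def gain7(b, m, u_size, k):
    """Gamma^{b,m}_{|u|,k} of formula (7); u_size plays the role of |u|."""
    if k >= m:
        return 1.0
    bracket = comb(u_size - 1, m - k) * (-b) ** (m - k)
    bracket -= sum(comb(u_size, j) * (-b) ** j for j in range(m - k + 1))
    return 1.0 + bracket / (1 - b) ** u_size

# Proposition 1: Gamma_{m+2,0} = (b/(b-1))^m (1 - m/(b-1))
b, m = 16, 4
assert abs(gain7(b, m, m + 2, 0) - (b / (b - 1)) ** m * (1 - m / (b - 1))) < 1e-12

# For d >= 20 and n <= 10^7: b = 19 forces m <= 5, and the minimum gain
# Gamma^{19,5}_{7,0} is then not much below one.
m_max = floor(log(10 ** 7) / log(19))
print(m_max, round(gain7(19, m_max, m_max + 2, 0), 3))  # -> 5 0.946
```

The same function reproduces the free lunch entries, for example Γ^{9,8}_{10,0} = 0 for the (0, 8, 10)-nets in base 9 mentioned with Table 2.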

Table 2: This table shows minimum possible gain coefficients for scrambled (0, m, d)-nets. The rows are labelled with dimensions d and the columns are labelled with sample sizes n. Let b = b(d) be the smallest prime power no smaller than d − 1. Shown is the smallest value of Γ^{b,m}_{u,k} subject to b ≥ b(d), b^m ≤ n, m < |u| ≤ d, and k ≥ 0. Zeros for d = 10 correspond to (0, 8, 10)-nets in base 9. These have Γ^{9,8}_{10,0} = 0 and n = 9^8 = 43,046,721.

For large dimensions and practical sample sizes Γ^{b,m}_{u,k} cannot be appreciably smaller than 1. In those settings, a variance reduction of just over 100-fold implies that f has effective dimension at most m. When the sample size can reach b^(b−2), then Γ^{b,m}_{u,k} can become surprisingly small.

6 Discussion

The conclusion of this paper is that low effective dimension is necessary for scrambled (0, m, d)-nets to be much better than Monte Carlo for large d and practical n. A surprising free lunch phenomenon was found in which the scrambled net variance could be zero for some nonzero functions of effective dimension b + 1 when m = b or b − 1, but the free lunch was only seen for n ≥ (d − 1)^(d−2).

There are three important features to Theorem 1. The first is that the performance of scrambled nets is studied relative to Monte Carlo, not absolutely. The second is that it is not asymptotic. The sample sizes used are of the form b^m for m < d, including values below those at which QMC asymptotics are thought to take effect. The third is that it provides a conclusion about the function f itself, without reference to a containing function class. These features are important because they capture what surprised many observers: QMC can be much better than MC for specific functions with large d at surprisingly small n. A non-asymptotic analysis is essential because Var_snet(Î_n)/Var_mc(Î_n) → 0, for any f ∈ L²[0, 1]^d, regardless of effective

dimension. Simple counterexamples with spiky integrands show that low effective dimension cannot be sufficient for good performance of QMC, either absolutely or relatively. For example, let f(x) = ε^(−1) 1{x_1 ≤ ε}. Then f has truncation and superposition dimension both equal to 1, but cannot be integrated well by QMC for n ≪ 1/ε.

It is also easy to see that small truncation dimension is not necessary for QMC to be much better than MC. The linear function f(x) = Σ_{j=1}^d x_j is easy for QMC methods, but has truncation dimension at least 0.99 d, for any re-ordering of the variables. Truncation dimension is an important aspect of infinite dimensional problems. Owen (1998a) shows by a martingale argument that any square integrable function on [0, 1]^∞ necessarily has finite effective dimension in the truncation sense, for any threshold less than 100 percent.

Recent work of Sloan (2002) on function classes with successively less important dimensions shows that a small quadrature error can be obtained for some functions having high superposition dimension. The importance of input j decays rapidly, as quantified by a series of constants γ_j for j ≥ 1. Let f be a purely 1000 dimensional function in that class involving only dimensions 1,000,001 through 1,001,000. Then f can be integrated with a small error despite having superposition dimension 1000. These tractability results are not at odds with the thesis of this paper. The former study performance relative to the zero rule Î = 0, while this paper considers performance relative to Monte Carlo methods. The function f would have a small norm in order to fit in the space defined by the γ_j sequence. A small norm for f would also lead to a small Monte Carlo variance, and it is not clear that QMC would beat MC for this f.

Low superposition dimension is necessary for scrambled (0, m, d)-nets to beat MC by a wide margin at modest sample sizes in high dimensions. But this does not show that low superposition dimension is universally necessary for this phenomenon.
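The spike counterexample above can be made concrete (the parameter choices are ours, not the paper's): with ε = 10^(−6) and n = 1000 ≪ 1/ε, the chance that plain Monte Carlo places any point in the spike is about nε = 0.1%, so the estimate is typically 0 even though I = 1.

```python
# Sketch: the spiky integrand f(x) = eps^{-1} 1{x_1 <= eps} has effective
# dimension 1 but defeats any rule with n much smaller than 1/eps.
import random

eps = 1e-6   # spike width (illustrative choice)
n = 1000     # sample size, far below 1/eps

def f(x1):
    return (1.0 / eps) if x1 <= eps else 0.0

# Exact integral is 1, but the chance that plain MC sees the spike at all
# is 1 - (1 - eps)^n, roughly n * eps.
p_hit = 1.0 - (1.0 - eps) ** n
print(p_hit)  # roughly 1e-3

rng = random.Random(1)
estimate = sum(f(rng.random()) for _ in range(n)) / n
print(estimate)  # almost surely 0.0 here, although I = 1
```

A QMC rule fares no better: unless some point lands in [0, ε], which requires n on the order of 1/ε for an equidistributed point set, the estimate is exactly 0.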
Perhaps other general purpose QMC methods can beat MC by a wide margin for modest n and integrands of high superposition dimension. To show similar results for scrambled (t, m, d)-nets with t > 0 requires lower bounds on the gain coefficients Γ_{u,κ} for t > 0. The definition of a (t, m, d)-net is not sharp enough to determine the gain coefficients when t > 0. Upper bounds on gain coefficients appear in Owen (1998b), Niederreiter and Pirsic (2001), and Yue and Hickernell (2002). A quadrature rule may be described by a great many nonnegative t-values, one for each of a class of subintervals of [0, 1]^d. The net nomenclature specifies only the largest of these. When

the largest is zero then they are all zero. But in general a net with t > 0 can have smaller t, even t = 0, when projected into a set u of components. Schmid (2001) describes this phenomenon. Yue (1999) provides gain coefficients for some leading subsequences of (0, d)-sequences in base b that, like (λ, 0, m, d)-nets, are not necessarily nets.

It is interesting to speculate on whether necessity of low superposition dimension might hold outside of scrambled nets. Heinrich, Hickernell, and Yue (2001) show that scrambled nets are asymptotically optimal quadrature rules in various settings. The function classes are defined by the decay of certain Haar wavelet coefficients, and three approximation senses are considered: worst case, random case, and average case, in their terminology. The random case setting is closest to the one considered here, but their results are not relative to Monte Carlo and are asymptotic.

7 Proofs

We begin by recalling that C(n, r) = 0 if r < 0 or r > n, and that the binomial identity C(n, r) = C(n−1, r) + C(n−1, r−1) holds even when r > n − 1 or r < 0. We will also use

    C(n, r) = ((n − r + 1) / r) C(n, r−1),    (8)

but only for 0 < r ≤ n + 1.

This first lemma establishes that certain monotone alternations seen in gain tables hold generally.

Lemma 1  For non-negative integers b, m, r, and s, with b ≥ 2, m ≤ b and m + 2r + 1 ≤ b + 1,

    Γ_{m+2r+1,2s} ≥ Γ_{m+2r+1,2s+2} ≥ 1,    (9)
    Γ_{m+2r+1,2s+1} ≤ Γ_{m+2r+1,2s+3} ≤ 1.    (10)

For non-negative integers b, m, r, and s, with b ≥ 2, m ≤ b and m + 2r ≤ b + 1,

    Γ_{m+2r,2s} ≤ Γ_{m+2r,2s+2} ≤ 1,    (11)
    Γ_{m+2r,2s+1} ≥ Γ_{m+2r,2s+3} ≥ 1.    (12)

Proof: We prove (11). The proofs of the other three propositions use the same sequence of techniques. If 2s ≥ m then Γ_{m+2r,2s} = Γ_{m+2r,2s+2} = 1, so

without loss of generality we suppose 2s < m. Use u = m + 2r to shorten some intermediate expressions. Then

    Γ_{m+2r,2s+2} − Γ_{m+2r,2s}

      = (1−b)^(−u) [ C(u−1, m−2s−2) (−b)^(m−2s−2) − Σ_{j=0}^{m−2s−2} C(u, j) (−b)^j
                      − C(u−1, m−2s) (−b)^(m−2s) + Σ_{j=0}^{m−2s} C(u, j) (−b)^j ]

      = (1−b)^(−u) [ C(u−1, m−2s−2) (−b)^(m−2s−2) − C(u−1, m−2s) (−b)^(m−2s)
                      + C(u, m−2s−1) (−b)^(m−2s−1) + C(u, m−2s) (−b)^(m−2s) ]

      = (1−b)^(−u) (−b)^(m−2s−2) [ C(u−1, m−2s−2) − b C(u, m−2s−1)
                      + b² ( C(u, m−2s) − C(u−1, m−2s) ) ]

      = (b−1)^(−u) b^(m−2s−2) [ (1−b) C(u−1, m−2s−2) + (b² − b) C(u−1, m−2s−1) ].

If 2s = m − 1 then the first term in square brackets vanishes and so the entire factor in square brackets is nonnegative. If 2s < m − 1 then we apply (8)

within the square brackets, obtaining

    (1−b) C(u−1, m−2s−2) + (b² − b) C(u−1, m−2s−1)
      = (b−1) [ b C(u−1, m−2s−1) − C(u−1, m−2s−2) ]
      = (b−1) C(u−1, m−2s−2) [ b (2r + 2s + 1) / (m − 2s − 1) − 1 ]
      ≥ (b−1) C(u−1, m−2s−2) [ b/m − 1 ]
      ≥ 0,

where we have used m − 2s − 1 > 0 and m ≤ b. It follows that Γ_{m+2r,2s+2} − Γ_{m+2r,2s} ≥ 0, establishing (11).  □

To simplify some manipulations, we introduce the term

    S^u_r = Σ_{j=0}^{r} C(u, j) (−b)^j.

Differences of the S^u_r reduce to sums of a few terms. Also, the binomial identity yields S^{u+1}_r = S^u_r − b S^u_{r−1} and S^{u+2}_r = S^u_r − 2b S^u_{r−1} + b² S^u_{r−2}.

The next lemma shows that for any small gain coefficient with k = 1 there is ordinarily one with k = 0 that is no larger.

Lemma 2  For non-negative integers b, m, and r, with b ≥ 2, m ≤ b and m + 2r + 2 ≤ b + 1,

    Γ_{m+2r+1,1} ≥ Γ_{m+2r+2,0}.    (13)

Proof:

    Γ_{m+2r+1,1} − Γ_{m+2r+2,0}

      = (1−b)^(−m−2r−1) [ C(m+2r, m−1) (−b)^(m−1) − S^{m+2r+1}_{m−1} ]
        − (1−b)^(−m−2r−2) [ C(m+2r+1, m) (−b)^m − S^{m+2r+2}_{m} ]

      = (1−b)^(−m−2r−2) [ (1−b) C(m+2r, m−1) (−b)^(m−1) − (1−b) S^{m+2r+1}_{m−1}
          − C(m+2r+1, m) (−b)^m + (1−b) S^{m+2r+1}_{m−1} + C(m+2r+1, m) (−b)^m ]

      = (1−b)^(−m−2r−1) (−b)^(m−1) C(m+2r, m−1)

      = (b−1)^(−m−2r−1) b^(m−1) C(m+2r, m−1)

      ≥ 0,

using S^{m+2r+2}_{m} = S^{m+2r+1}_{m} − b S^{m+2r+1}_{m−1} = (1−b) S^{m+2r+1}_{m−1} + C(m+2r+1, m) (−b)^m.  □

The final lemma shows that the elements Γ_{m+2r,0} in the k = 0 column of the gain table increase with r.

Lemma 3  For nonnegative integers b, m, and r, with b ≥ 2, 1 ≤ m ≤ b and m + 2r + 2 ≤ b + 1,

    Γ_{m+2r+2,0} ≥ Γ_{m+2r,0}.    (14)

Proof: Using S^{m+2r+2}_{m} = S^{m+2r}_{m} − 2b S^{m+2r}_{m−1} + b² S^{m+2r}_{m−2},

    Γ_{m+2r+2,0} − Γ_{m+2r,0}

      = (1−b)^(−m−2r−2) [ C(m+2r+1, m) (−b)^m − S^{m+2r+2}_{m} ]
        − (1−b)^(−m−2r) [ C(m+2r−1, m) (−b)^m − S^{m+2r}_{m} ]

      = (1−b)^(−m−2r−2) [ C(m+2r+1, m) (−b)^m − (1−b)² C(m+2r−1, m) (−b)^m
          − S^{m+2r+2}_{m} + (1−b)² S^{m+2r}_{m} ]

      = (1−b)^(−m−2r−2) [ ( C(m+2r+1, m) − (1−b)² C(m+2r−1, m) ) (−b)^m
          + b² ( S^{m+2r}_{m} − S^{m+2r}_{m−2} ) − 2b ( S^{m+2r}_{m} − S^{m+2r}_{m−1} ) ]

      = (1−b)^(−m−2r−2) (−b)^m [ C(m+2r+1, m) − (1−b)² C(m+2r−1, m)
          − b C(m+2r, m−1) + (b² − 2b) C(m+2r, m) ].

The product outside the square brackets is positive. The factor within square brackets simplifies to

    (b−1) [ (b−1) C(m+2r−1, m−1) − C(m+2r, m−1) ]
      = (b−1) C(m+2r−1, m−1) [ b − 1 − (m + 2r)/(2r + 1) ]
      ≥ 0,

because m + 2r ≤ b − 1.  □

Proof of Theorem 2: Suppose m ≤ b − 1. For |u| = m + 2r ≤ b + 1 with r ≥ 1, the minimum value of Γ_{u,k} for k ≥ 0 is Γ_{u,0}, by the alternating gain Lemma 1. Similarly, for |u| = m + 2r + 1 ≤ b + 1 with r ≥ 0, the minimum value of Γ_{u,k} for k ≥ 0 is Γ_{u,1}. Therefore the minimizing Γ_{u,k} has the form

Γ_{m+2r,0} or Γ_{m+2r+1,1}. But Lemma 2 shows that Γ_{m+2r+1,1} ≥ Γ_{m+2r+2,0} in this case. Therefore the minimizing Γ_{u,k} has the form Γ_{m+2r,0}. Finally, Lemma 3 shows that the minimizing Γ_{u,k} is Γ_{m+2,0}.

When m = b then m + 2 > b + 1 and the only eligible Γ_{u,k} entries are Γ_{b+1,k}. Lemma 1 then shows that the minimizing value is Γ_{b+1,1}.  □

Acknowledgments

I thank Grzegorz Wasilkowski for helpful comments, and I also thank an anonymous reviewer for comments that motivated important changes in this paper. This work was supported by the NSF under grant DMS.

References

Caflisch, R. E., W. Morokoff, and A. B. Owen (1997). Valuation of mortgage backed securities using Brownian bridges to reduce effective dimension. Journal of Computational Finance 1.

Capstick, S. and B. D. Keister (1996). Journal of Computational Physics 123 (2).

Heinrich, S., F. J. Hickernell, and R. X. Yue (2001). Optimal quadrature for Haar wavelet spaces. Submitted to Math. Comp.

Hickernell, F. J. (1998). Lattice rules: how well do they measure up? In P. Hellekalek and G. Larcher (Eds.), Random and Quasi-Random Point Sets. New York: Springer.

Hickernell, F. J. and R. X. Yue (2000). The mean square discrepancy of scrambled (t, s)-sequences. SIAM Journal on Numerical Analysis 38.

Niederreiter, H. (1987). Point sets and sequences with small discrepancy. Monatshefte für Mathematik 104.

Niederreiter, H. (1992). Random Number Generation and Quasi-Monte Carlo Methods. Philadelphia, PA: SIAM.

Niederreiter, H. and G. Pirsic (2001, Dec.). The microstructure of (t, m, s)-nets. Journal of Complexity 17 (4).

Ninomiya, S. and S. Tezuka (1996). Toward real-time pricing of complex financial derivatives. Applied Mathematical Finance 3.

Owen, A. B. (1995). Randomly permuted (t, m, s)-nets and (t, s)-sequences. In H. Niederreiter and P. J.-S. Shiue (Eds.), Monte Carlo and Quasi-Monte Carlo Methods in Scientific Computing. New York: Springer-Verlag.

Owen, A. B. (1997a). Monte Carlo variance of scrambled equidistribution quadrature. SIAM Journal on Numerical Analysis 34 (5).

Owen, A. B. (1997b). Scrambled net variance for integrals of smooth functions. Annals of Statistics 25 (4).

Owen, A. B. (1998a). Latin supercube sampling for very high dimensional simulations. ACM Transactions on Modeling and Computer Simulation 8 (2).

Owen, A. B. (1998b, December). Scrambling Sobol' and Niederreiter-Xing points. Journal of Complexity 14 (4).

Owen, A. B. (2001). The dimension distribution and quadrature test functions. Technical report, Stanford University, Department of Statistics.

Papageorgiou, A. and J. Traub (1997). Faster evaluation of multidimensional integrals. Computers in Physics.

Paskov, S. and J. Traub (1995). Faster valuation of financial derivatives. The Journal of Portfolio Management 22.

Schmid, W. C. (2001). Projections of digital nets and sequences. Mathematics and Computers in Simulation 55.

Sloan, I. (2002). QMC integration - beating intractability by weighting the coordinate directions. In K. T. Fang, F. J. Hickernell, and H. Niederreiter (Eds.), Monte Carlo and Quasi-Monte Carlo Methods 2000. Berlin: Springer-Verlag.

Sloan, I. H. and H. Wozniakowski (1998). When are quasi-Monte Carlo algorithms efficient for high dimensional integration? Journal of Complexity 14.

Tezuka, S. (2002). Quasi-Monte Carlo: discrepancy between theory and practice. In K. T. Fang, F. J. Hickernell, and H. Niederreiter (Eds.), Monte Carlo and Quasi-Monte Carlo Methods 2000. Berlin: Springer-Verlag.

Yue, R.-X. (1999). Variance of quadrature over scrambled unions of nets. Statistica Sinica 9 (2).

Yue, R. X. and F. J. Hickernell (2002). The discrepancy and gain coefficients of scrambled digital nets. Journal of Complexity 18.


Non-Parametric Non-Line-of-Sight Identification 1

Non-Parametric Non-Line-of-Sight Identification 1 Non-Paraetric Non-Line-of-Sight Identification Sinan Gezici, Hisashi Kobayashi and H. Vincent Poor Departent of Electrical Engineering School of Engineering and Applied Science Princeton University, Princeton,

More information

Computable Shell Decomposition Bounds

Computable Shell Decomposition Bounds Coputable Shell Decoposition Bounds John Langford TTI-Chicago jcl@cs.cu.edu David McAllester TTI-Chicago dac@autoreason.co Editor: Leslie Pack Kaelbling and David Cohn Abstract Haussler, Kearns, Seung

More information

Ch 12: Variations on Backpropagation

Ch 12: Variations on Backpropagation Ch 2: Variations on Backpropagation The basic backpropagation algorith is too slow for ost practical applications. It ay take days or weeks of coputer tie. We deonstrate why the backpropagation algorith

More information

Course Notes for EE227C (Spring 2018): Convex Optimization and Approximation

Course Notes for EE227C (Spring 2018): Convex Optimization and Approximation Course Notes for EE227C (Spring 2018): Convex Optiization and Approxiation Instructor: Moritz Hardt Eail: hardt+ee227c@berkeley.edu Graduate Instructor: Max Sichowitz Eail: sichow+ee227c@berkeley.edu October

More information

lecture 36: Linear Multistep Mehods: Zero Stability

lecture 36: Linear Multistep Mehods: Zero Stability 95 lecture 36: Linear Multistep Mehods: Zero Stability 5.6 Linear ultistep ethods: zero stability Does consistency iply convergence for linear ultistep ethods? This is always the case for one-step ethods,

More information

A Better Algorithm For an Ancient Scheduling Problem. David R. Karger Steven J. Phillips Eric Torng. Department of Computer Science

A Better Algorithm For an Ancient Scheduling Problem. David R. Karger Steven J. Phillips Eric Torng. Department of Computer Science A Better Algorith For an Ancient Scheduling Proble David R. Karger Steven J. Phillips Eric Torng Departent of Coputer Science Stanford University Stanford, CA 9435-4 Abstract One of the oldest and siplest

More information

COS 424: Interacting with Data. Written Exercises

COS 424: Interacting with Data. Written Exercises COS 424: Interacting with Data Hoework #4 Spring 2007 Regression Due: Wednesday, April 18 Written Exercises See the course website for iportant inforation about collaboration and late policies, as well

More information

Low Discrepancy Sequences in High Dimensions: How Well Are Their Projections Distributed?

Low Discrepancy Sequences in High Dimensions: How Well Are Their Projections Distributed? Low Discrepancy Sequences in High Dimensions: How Well Are Their Projections Distributed? Xiaoqun Wang 1,2 and Ian H. Sloan 2 1 Department of Mathematical Sciences, Tsinghua University, Beijing 100084,

More information

A Note on the Applied Use of MDL Approximations

A Note on the Applied Use of MDL Approximations A Note on the Applied Use of MDL Approxiations Daniel J. Navarro Departent of Psychology Ohio State University Abstract An applied proble is discussed in which two nested psychological odels of retention

More information

On Conditions for Linearity of Optimal Estimation

On Conditions for Linearity of Optimal Estimation On Conditions for Linearity of Optial Estiation Erah Akyol, Kuar Viswanatha and Kenneth Rose {eakyol, kuar, rose}@ece.ucsb.edu Departent of Electrical and Coputer Engineering University of California at

More information

e-companion ONLY AVAILABLE IN ELECTRONIC FORM

e-companion ONLY AVAILABLE IN ELECTRONIC FORM OPERATIONS RESEARCH doi 10.1287/opre.1070.0427ec pp. ec1 ec5 e-copanion ONLY AVAILABLE IN ELECTRONIC FORM infors 07 INFORMS Electronic Copanion A Learning Approach for Interactive Marketing to a Custoer

More information

arxiv: v2 [math.co] 3 Dec 2008

arxiv: v2 [math.co] 3 Dec 2008 arxiv:0805.2814v2 [ath.co] 3 Dec 2008 Connectivity of the Unifor Rando Intersection Graph Sion R. Blacburn and Stefanie Gere Departent of Matheatics Royal Holloway, University of London Egha, Surrey TW20

More information

Computable Shell Decomposition Bounds

Computable Shell Decomposition Bounds Journal of Machine Learning Research 5 (2004) 529-547 Subitted 1/03; Revised 8/03; Published 5/04 Coputable Shell Decoposition Bounds John Langford David McAllester Toyota Technology Institute at Chicago

More information

Research Article Rapidly-Converging Series Representations of a Mutual-Information Integral

Research Article Rapidly-Converging Series Representations of a Mutual-Information Integral International Scholarly Research Network ISRN Counications and Networking Volue 11, Article ID 5465, 6 pages doi:1.54/11/5465 Research Article Rapidly-Converging Series Representations of a Mutual-Inforation

More information

Randomized Recovery for Boolean Compressed Sensing

Randomized Recovery for Boolean Compressed Sensing Randoized Recovery for Boolean Copressed Sensing Mitra Fatei and Martin Vetterli Laboratory of Audiovisual Counication École Polytechnique Fédéral de Lausanne (EPFL) Eail: {itra.fatei, artin.vetterli}@epfl.ch

More information

Quantum algorithms (CO 781, Winter 2008) Prof. Andrew Childs, University of Waterloo LECTURE 15: Unstructured search and spatial search

Quantum algorithms (CO 781, Winter 2008) Prof. Andrew Childs, University of Waterloo LECTURE 15: Unstructured search and spatial search Quantu algoriths (CO 781, Winter 2008) Prof Andrew Childs, University of Waterloo LECTURE 15: Unstructured search and spatial search ow we begin to discuss applications of quantu walks to search algoriths

More information

Fast Montgomery-like Square Root Computation over GF(2 m ) for All Trinomials

Fast Montgomery-like Square Root Computation over GF(2 m ) for All Trinomials Fast Montgoery-like Square Root Coputation over GF( ) for All Trinoials Yin Li a, Yu Zhang a, a Departent of Coputer Science and Technology, Xinyang Noral University, Henan, P.R.China Abstract This letter

More information

Homework 3 Solutions CSE 101 Summer 2017

Homework 3 Solutions CSE 101 Summer 2017 Hoework 3 Solutions CSE 0 Suer 207. Scheduling algoriths The following n = 2 jobs with given processing ties have to be scheduled on = 3 parallel and identical processors with the objective of iniizing

More information

Multi-Dimensional Hegselmann-Krause Dynamics

Multi-Dimensional Hegselmann-Krause Dynamics Multi-Diensional Hegselann-Krause Dynaics A. Nedić Industrial and Enterprise Systes Engineering Dept. University of Illinois Urbana, IL 680 angelia@illinois.edu B. Touri Coordinated Science Laboratory

More information

A1. Find all ordered pairs (a, b) of positive integers for which 1 a + 1 b = 3

A1. Find all ordered pairs (a, b) of positive integers for which 1 a + 1 b = 3 A. Find all ordered pairs a, b) of positive integers for which a + b = 3 08. Answer. The six ordered pairs are 009, 08), 08, 009), 009 337, 674) = 35043, 674), 009 346, 673) = 3584, 673), 674, 009 337)

More information

Lecture 21 Principle of Inclusion and Exclusion

Lecture 21 Principle of Inclusion and Exclusion Lecture 21 Principle of Inclusion and Exclusion Holden Lee and Yoni Miller 5/6/11 1 Introduction and first exaples We start off with an exaple Exaple 11: At Sunnydale High School there are 28 students

More information

3.8 Three Types of Convergence

3.8 Three Types of Convergence 3.8 Three Types of Convergence 3.8 Three Types of Convergence 93 Suppose that we are given a sequence functions {f k } k N on a set X and another function f on X. What does it ean for f k to converge to

More information

3.3 Variational Characterization of Singular Values

3.3 Variational Characterization of Singular Values 3.3. Variational Characterization of Singular Values 61 3.3 Variational Characterization of Singular Values Since the singular values are square roots of the eigenvalues of the Heritian atrices A A and

More information

1 Bounding the Margin

1 Bounding the Margin COS 511: Theoretical Machine Learning Lecturer: Rob Schapire Lecture #12 Scribe: Jian Min Si March 14, 2013 1 Bounding the Margin We are continuing the proof of a bound on the generalization error of AdaBoost

More information

Distributed Subgradient Methods for Multi-agent Optimization

Distributed Subgradient Methods for Multi-agent Optimization 1 Distributed Subgradient Methods for Multi-agent Optiization Angelia Nedić and Asuan Ozdaglar October 29, 2007 Abstract We study a distributed coputation odel for optiizing a su of convex objective functions

More information

Lower Bounds for Quantized Matrix Completion

Lower Bounds for Quantized Matrix Completion Lower Bounds for Quantized Matrix Copletion Mary Wootters and Yaniv Plan Departent of Matheatics University of Michigan Ann Arbor, MI Eail: wootters, yplan}@uich.edu Mark A. Davenport School of Elec. &

More information

4 = (0.02) 3 13, = 0.25 because = 25. Simi-

4 = (0.02) 3 13, = 0.25 because = 25. Simi- Theore. Let b and be integers greater than. If = (. a a 2 a i ) b,then for any t N, in base (b + t), the fraction has the digital representation = (. a a 2 a i ) b+t, where a i = a i + tk i with k i =

More information

arxiv: v1 [math.nt] 14 Sep 2014

arxiv: v1 [math.nt] 14 Sep 2014 ROTATION REMAINDERS P. JAMESON GRABER, WASHINGTON AND LEE UNIVERSITY 08 arxiv:1409.411v1 [ath.nt] 14 Sep 014 Abstract. We study properties of an array of nubers, called the triangle, in which each row

More information

THE AVERAGE NORM OF POLYNOMIALS OF FIXED HEIGHT

THE AVERAGE NORM OF POLYNOMIALS OF FIXED HEIGHT THE AVERAGE NORM OF POLYNOMIALS OF FIXED HEIGHT PETER BORWEIN AND KWOK-KWONG STEPHEN CHOI Abstract. Let n be any integer and ( n ) X F n : a i z i : a i, ± i be the set of all polynoials of height and

More information

Bipartite subgraphs and the smallest eigenvalue

Bipartite subgraphs and the smallest eigenvalue Bipartite subgraphs and the sallest eigenvalue Noga Alon Benny Sudaov Abstract Two results dealing with the relation between the sallest eigenvalue of a graph and its bipartite subgraphs are obtained.

More information

Hybrid System Identification: An SDP Approach

Hybrid System Identification: An SDP Approach 49th IEEE Conference on Decision and Control Deceber 15-17, 2010 Hilton Atlanta Hotel, Atlanta, GA, USA Hybrid Syste Identification: An SDP Approach C Feng, C M Lagoa, N Ozay and M Sznaier Abstract The

More information

Computational and Statistical Learning Theory

Computational and Statistical Learning Theory Coputational and Statistical Learning Theory TTIC 31120 Prof. Nati Srebro Lecture 2: PAC Learning and VC Theory I Fro Adversarial Online to Statistical Three reasons to ove fro worst-case deterinistic

More information

New upper bound for the B-spline basis condition number II. K. Scherer. Institut fur Angewandte Mathematik, Universitat Bonn, Bonn, Germany.

New upper bound for the B-spline basis condition number II. K. Scherer. Institut fur Angewandte Mathematik, Universitat Bonn, Bonn, Germany. New upper bound for the B-spline basis condition nuber II. A proof of de Boor's 2 -conjecture K. Scherer Institut fur Angewandte Matheati, Universitat Bonn, 535 Bonn, Gerany and A. Yu. Shadrin Coputing

More information

The Euler-Maclaurin Formula and Sums of Powers

The Euler-Maclaurin Formula and Sums of Powers DRAFT VOL 79, NO 1, FEBRUARY 26 1 The Euler-Maclaurin Forula and Sus of Powers Michael Z Spivey University of Puget Sound Tacoa, WA 98416 spivey@upsedu Matheaticians have long been intrigued by the su

More information

1 Proof of learning bounds

1 Proof of learning bounds COS 511: Theoretical Machine Learning Lecturer: Rob Schapire Lecture #4 Scribe: Akshay Mittal February 13, 2013 1 Proof of learning bounds For intuition of the following theore, suppose there exists a

More information

Keywords: Estimator, Bias, Mean-squared error, normality, generalized Pareto distribution

Keywords: Estimator, Bias, Mean-squared error, normality, generalized Pareto distribution Testing approxiate norality of an estiator using the estiated MSE and bias with an application to the shape paraeter of the generalized Pareto distribution J. Martin van Zyl Abstract In this work the norality

More information

OBJECTIVES INTRODUCTION

OBJECTIVES INTRODUCTION M7 Chapter 3 Section 1 OBJECTIVES Suarize data using easures of central tendency, such as the ean, edian, ode, and idrange. Describe data using the easures of variation, such as the range, variance, and

More information

Pattern Recognition and Machine Learning. Learning and Evaluation for Pattern Recognition

Pattern Recognition and Machine Learning. Learning and Evaluation for Pattern Recognition Pattern Recognition and Machine Learning Jaes L. Crowley ENSIMAG 3 - MMIS Fall Seester 2017 Lesson 1 4 October 2017 Outline Learning and Evaluation for Pattern Recognition Notation...2 1. The Pattern Recognition

More information

Learnability and Stability in the General Learning Setting

Learnability and Stability in the General Learning Setting Learnability and Stability in the General Learning Setting Shai Shalev-Shwartz TTI-Chicago shai@tti-c.org Ohad Shair The Hebrew University ohadsh@cs.huji.ac.il Nathan Srebro TTI-Chicago nati@uchicago.edu

More information

The degree of a typical vertex in generalized random intersection graph models

The degree of a typical vertex in generalized random intersection graph models Discrete Matheatics 306 006 15 165 www.elsevier.co/locate/disc The degree of a typical vertex in generalized rando intersection graph odels Jerzy Jaworski a, Michał Karoński a, Dudley Stark b a Departent

More information

Statistics and Probability Letters

Statistics and Probability Letters Statistics and Probability Letters 79 2009 223 233 Contents lists available at ScienceDirect Statistics and Probability Letters journal hoepage: www.elsevier.co/locate/stapro A CLT for a one-diensional

More information

a a a a a a a m a b a b

a a a a a a a m a b a b Algebra / Trig Final Exa Study Guide (Fall Seester) Moncada/Dunphy Inforation About the Final Exa The final exa is cuulative, covering Appendix A (A.1-A.5) and Chapter 1. All probles will be ultiple choice

More information

Probability Distributions

Probability Distributions Probability Distributions In Chapter, we ephasized the central role played by probability theory in the solution of pattern recognition probles. We turn now to an exploration of soe particular exaples

More information

SOLUTIONS. PROBLEM 1. The Hamiltonian of the particle in the gravitational field can be written as, x 0, + U(x), U(x) =

SOLUTIONS. PROBLEM 1. The Hamiltonian of the particle in the gravitational field can be written as, x 0, + U(x), U(x) = SOLUTIONS PROBLEM 1. The Hailtonian of the particle in the gravitational field can be written as { Ĥ = ˆp2, x 0, + U(x), U(x) = (1) 2 gx, x > 0. The siplest estiate coes fro the uncertainty relation. If

More information

Uniform Approximation and Bernstein Polynomials with Coefficients in the Unit Interval

Uniform Approximation and Bernstein Polynomials with Coefficients in the Unit Interval Unifor Approxiation and Bernstein Polynoials with Coefficients in the Unit Interval Weiang Qian and Marc D. Riedel Electrical and Coputer Engineering, University of Minnesota 200 Union St. S.E. Minneapolis,

More information

Testing equality of variances for multiple univariate normal populations

Testing equality of variances for multiple univariate normal populations University of Wollongong Research Online Centre for Statistical & Survey Methodology Working Paper Series Faculty of Engineering and Inforation Sciences 0 esting equality of variances for ultiple univariate

More information

1. INTRODUCTION AND RESULTS

1. INTRODUCTION AND RESULTS SOME IDENTITIES INVOLVING THE FIBONACCI NUMBERS AND LUCAS NUMBERS Wenpeng Zhang Research Center for Basic Science, Xi an Jiaotong University Xi an Shaanxi, People s Republic of China (Subitted August 00

More information

Fixed-to-Variable Length Distribution Matching

Fixed-to-Variable Length Distribution Matching Fixed-to-Variable Length Distribution Matching Rana Ali Ajad and Georg Böcherer Institute for Counications Engineering Technische Universität München, Gerany Eail: raa2463@gail.co,georg.boecherer@tu.de

More information

A Note on Online Scheduling for Jobs with Arbitrary Release Times

A Note on Online Scheduling for Jobs with Arbitrary Release Times A Note on Online Scheduling for Jobs with Arbitrary Release Ties Jihuan Ding, and Guochuan Zhang College of Operations Research and Manageent Science, Qufu Noral University, Rizhao 7686, China dingjihuan@hotail.co

More information

On the Inapproximability of Vertex Cover on k-partite k-uniform Hypergraphs

On the Inapproximability of Vertex Cover on k-partite k-uniform Hypergraphs On the Inapproxiability of Vertex Cover on k-partite k-unifor Hypergraphs Venkatesan Guruswai and Rishi Saket Coputer Science Departent Carnegie Mellon University Pittsburgh, PA 1513. Abstract. Coputing

More information

Tight Information-Theoretic Lower Bounds for Welfare Maximization in Combinatorial Auctions

Tight Information-Theoretic Lower Bounds for Welfare Maximization in Combinatorial Auctions Tight Inforation-Theoretic Lower Bounds for Welfare Maxiization in Cobinatorial Auctions Vahab Mirrokni Jan Vondrák Theory Group, Microsoft Dept of Matheatics Research Princeton University Redond, WA 9805

More information

DEPARTMENT OF ECONOMETRICS AND BUSINESS STATISTICS

DEPARTMENT OF ECONOMETRICS AND BUSINESS STATISTICS ISSN 1440-771X AUSTRALIA DEPARTMENT OF ECONOMETRICS AND BUSINESS STATISTICS An Iproved Method for Bandwidth Selection When Estiating ROC Curves Peter G Hall and Rob J Hyndan Working Paper 11/00 An iproved

More information

Curious Bounds for Floor Function Sums

Curious Bounds for Floor Function Sums 1 47 6 11 Journal of Integer Sequences, Vol. 1 (018), Article 18.1.8 Curious Bounds for Floor Function Sus Thotsaporn Thanatipanonda and Elaine Wong 1 Science Division Mahidol University International

More information

Algorithms for parallel processor scheduling with distinct due windows and unit-time jobs

Algorithms for parallel processor scheduling with distinct due windows and unit-time jobs BULLETIN OF THE POLISH ACADEMY OF SCIENCES TECHNICAL SCIENCES Vol. 57, No. 3, 2009 Algoriths for parallel processor scheduling with distinct due windows and unit-tie obs A. JANIAK 1, W.A. JANIAK 2, and

More information

Efficient Filter Banks And Interpolators

Efficient Filter Banks And Interpolators Efficient Filter Banks And Interpolators A. G. DEMPSTER AND N. P. MURPHY Departent of Electronic Systes University of Westinster 115 New Cavendish St, London W1M 8JS United Kingdo Abstract: - Graphical

More information

In this chapter, we consider several graph-theoretic and probabilistic models

In this chapter, we consider several graph-theoretic and probabilistic models THREE ONE GRAPH-THEORETIC AND STATISTICAL MODELS 3.1 INTRODUCTION In this chapter, we consider several graph-theoretic and probabilistic odels for a social network, which we do under different assuptions

More information

A Generalized Permanent Estimator and its Application in Computing Multi- Homogeneous Bézout Number

A Generalized Permanent Estimator and its Application in Computing Multi- Homogeneous Bézout Number Research Journal of Applied Sciences, Engineering and Technology 4(23): 5206-52, 202 ISSN: 2040-7467 Maxwell Scientific Organization, 202 Subitted: April 25, 202 Accepted: May 3, 202 Published: Deceber

More information

Explicit solution of the polynomial least-squares approximation problem on Chebyshev extrema nodes

Explicit solution of the polynomial least-squares approximation problem on Chebyshev extrema nodes Explicit solution of the polynoial least-squares approxiation proble on Chebyshev extrea nodes Alfredo Eisinberg, Giuseppe Fedele Dipartiento di Elettronica Inforatica e Sisteistica, Università degli Studi

More information

IN modern society that various systems have become more

IN modern society that various systems have become more Developent of Reliability Function in -Coponent Standby Redundant Syste with Priority Based on Maxiu Entropy Principle Ryosuke Hirata, Ikuo Arizono, Ryosuke Toohiro, Satoshi Oigawa, and Yasuhiko Takeoto

More information

The Hilbert Schmidt version of the commutator theorem for zero trace matrices

The Hilbert Schmidt version of the commutator theorem for zero trace matrices The Hilbert Schidt version of the coutator theore for zero trace atrices Oer Angel Gideon Schechtan March 205 Abstract Let A be a coplex atrix with zero trace. Then there are atrices B and C such that

More information

On Constant Power Water-filling

On Constant Power Water-filling On Constant Power Water-filling Wei Yu and John M. Cioffi Electrical Engineering Departent Stanford University, Stanford, CA94305, U.S.A. eails: {weiyu,cioffi}@stanford.edu Abstract This paper derives

More information

CSE525: Randomized Algorithms and Probabilistic Analysis May 16, Lecture 13

CSE525: Randomized Algorithms and Probabilistic Analysis May 16, Lecture 13 CSE55: Randoied Algoriths and obabilistic Analysis May 6, Lecture Lecturer: Anna Karlin Scribe: Noah Siegel, Jonathan Shi Rando walks and Markov chains This lecture discusses Markov chains, which capture

More information

Genetic Algorithms using Low-Discrepancy Sequences

Genetic Algorithms using Low-Discrepancy Sequences Genetic Algoriths using Low-Discrepancy Sequences ABSTRACT Shuhei Kiura Departent of Inforation and Knowledge Engineering, Faculty of Engineering, Tottori University 4-, Koyaa Minai, Tottori, JAPAN kiura@iketottori-uacjp

More information

Konrad-Zuse-Zentrum für Informationstechnik Berlin Heilbronner Str. 10, D Berlin - Wilmersdorf

Konrad-Zuse-Zentrum für Informationstechnik Berlin Heilbronner Str. 10, D Berlin - Wilmersdorf Konrad-Zuse-Zentru für Inforationstechnik Berlin Heilbronner Str. 10, D-10711 Berlin - Wilersdorf Folkar A. Borneann On the Convergence of Cascadic Iterations for Elliptic Probles SC 94-8 (Marz 1994) 1

More information

Lecture 21. Interior Point Methods Setup and Algorithm

Lecture 21. Interior Point Methods Setup and Algorithm Lecture 21 Interior Point Methods In 1984, Kararkar introduced a new weakly polynoial tie algorith for solving LPs [Kar84a], [Kar84b]. His algorith was theoretically faster than the ellipsoid ethod and

More information

A note on the realignment criterion

A note on the realignment criterion A note on the realignent criterion Chi-Kwong Li 1, Yiu-Tung Poon and Nung-Sing Sze 3 1 Departent of Matheatics, College of Willia & Mary, Williasburg, VA 3185, USA Departent of Matheatics, Iowa State University,

More information

RANDOM GRADIENT EXTRAPOLATION FOR DISTRIBUTED AND STOCHASTIC OPTIMIZATION

RANDOM GRADIENT EXTRAPOLATION FOR DISTRIBUTED AND STOCHASTIC OPTIMIZATION RANDOM GRADIENT EXTRAPOLATION FOR DISTRIBUTED AND STOCHASTIC OPTIMIZATION GUANGHUI LAN AND YI ZHOU Abstract. In this paper, we consider a class of finite-su convex optiization probles defined over a distributed

More information

M ath. Res. Lett. 15 (2008), no. 2, c International Press 2008 SUM-PRODUCT ESTIMATES VIA DIRECTED EXPANDERS. Van H. Vu. 1.

M ath. Res. Lett. 15 (2008), no. 2, c International Press 2008 SUM-PRODUCT ESTIMATES VIA DIRECTED EXPANDERS. Van H. Vu. 1. M ath. Res. Lett. 15 (2008), no. 2, 375 388 c International Press 2008 SUM-PRODUCT ESTIMATES VIA DIRECTED EXPANDERS Van H. Vu Abstract. Let F q be a finite field of order q and P be a polynoial in F q[x

More information

Exponential sums and the distribution of inversive congruential pseudorandom numbers with prime-power modulus

Exponential sums and the distribution of inversive congruential pseudorandom numbers with prime-power modulus ACTA ARITHMETICA XCII1 (2000) Exponential sus and the distribution of inversive congruential pseudorando nubers with prie-power odulus by Harald Niederreiter (Vienna) and Igor E Shparlinski (Sydney) 1

More information

Some Proofs: This section provides proofs of some theoretical results in section 3.

Some Proofs: This section provides proofs of some theoretical results in section 3. Testing Jups via False Discovery Rate Control Yu-Min Yen. Institute of Econoics, Acadeia Sinica, Taipei, Taiwan. E-ail: YMYEN@econ.sinica.edu.tw. SUPPLEMENTARY MATERIALS Suppleentary Materials contain

More information

Optimal Jamming Over Additive Noise: Vector Source-Channel Case

Optimal Jamming Over Additive Noise: Vector Source-Channel Case Fifty-first Annual Allerton Conference Allerton House, UIUC, Illinois, USA October 2-3, 2013 Optial Jaing Over Additive Noise: Vector Source-Channel Case Erah Akyol and Kenneth Rose Abstract This paper

More information

Polytopes and arrangements: Diameter and curvature

Polytopes and arrangements: Diameter and curvature Operations Research Letters 36 2008 2 222 Operations Research Letters wwwelsevierco/locate/orl Polytopes and arrangeents: Diaeter and curvature Antoine Deza, Taás Terlaky, Yuriy Zinchenko McMaster University,

More information

Characterization of the Line Complexity of Cellular Automata Generated by Polynomial Transition Rules. Bertrand Stone

Characterization of the Line Complexity of Cellular Automata Generated by Polynomial Transition Rules. Bertrand Stone Characterization of the Line Coplexity of Cellular Autoata Generated by Polynoial Transition Rules Bertrand Stone Abstract Cellular autoata are discrete dynaical systes which consist of changing patterns

More information

Descent polynomials. Mohamed Omar Department of Mathematics, Harvey Mudd College, 301 Platt Boulevard, Claremont, CA , USA,

Descent polynomials. Mohamed Omar Department of Mathematics, Harvey Mudd College, 301 Platt Boulevard, Claremont, CA , USA, Descent polynoials arxiv:1710.11033v2 [ath.co] 13 Nov 2017 Alexander Diaz-Lopez Departent of Matheatics and Statistics, Villanova University, 800 Lancaster Avenue, Villanova, PA 19085, USA, alexander.diaz-lopez@villanova.edu

More information

Understanding Machine Learning Solution Manual

Understanding Machine Learning Solution Manual Understanding Machine Learning Solution Manual Written by Alon Gonen Edited by Dana Rubinstein Noveber 17, 2014 2 Gentle Start 1. Given S = ((x i, y i )), define the ultivariate polynoial p S (x) = i []:y

More information

On the Use of A Priori Information for Sparse Signal Approximations

On the Use of A Priori Information for Sparse Signal Approximations ITS TECHNICAL REPORT NO. 3/4 On the Use of A Priori Inforation for Sparse Signal Approxiations Oscar Divorra Escoda, Lorenzo Granai and Pierre Vandergheynst Signal Processing Institute ITS) Ecole Polytechnique

More information

Bootstrapping Dependent Data

Bootstrapping Dependent Data Bootstrapping Dependent Data One of the key issues confronting bootstrap resapling approxiations is how to deal with dependent data. Consider a sequence fx t g n t= of dependent rando variables. Clearly

More information

Tail estimates for norms of sums of log-concave random vectors

Tail estimates for norms of sums of log-concave random vectors Tail estiates for nors of sus of log-concave rando vectors Rados law Adaczak Rafa l Lata la Alexander E. Litvak Alain Pajor Nicole Toczak-Jaegerann Abstract We establish new tail estiates for order statistics

More information

Iterative Decoding of LDPC Codes over the q-ary Partial Erasure Channel

Iterative Decoding of LDPC Codes over the q-ary Partial Erasure Channel 1 Iterative Decoding of LDPC Codes over the q-ary Partial Erasure Channel Rai Cohen, Graduate Student eber, IEEE, and Yuval Cassuto, Senior eber, IEEE arxiv:1510.05311v2 [cs.it] 24 ay 2016 Abstract In

More information

time time δ jobs jobs

time time δ jobs jobs Approxiating Total Flow Tie on Parallel Machines Stefano Leonardi Danny Raz y Abstract We consider the proble of optiizing the total ow tie of a strea of jobs that are released over tie in a ultiprocessor

More information

Figure 1: Equivalent electric (RC) circuit of a neurons membrane

Figure 1: Equivalent electric (RC) circuit of a neurons membrane Exercise: Leaky integrate and fire odel of neural spike generation This exercise investigates a siplified odel of how neurons spike in response to current inputs, one of the ost fundaental properties of

More information

arxiv: v1 [cs.ds] 17 Mar 2016

arxiv: v1 [cs.ds] 17 Mar 2016 Tight Bounds for Single-Pass Streaing Coplexity of the Set Cover Proble Sepehr Assadi Sanjeev Khanna Yang Li Abstract arxiv:1603.05715v1 [cs.ds] 17 Mar 2016 We resolve the space coplexity of single-pass

More information

Statistical properties of contact maps

Statistical properties of contact maps PHYSICAL REVIEW E VOLUME 59, NUMBER 1 JANUARY 1999 Statistical properties of contact aps Michele Vendruscolo, 1 Balakrishna Subraanian, 2 Ido Kanter, 3 Eytan Doany, 1 and Joel Lebowitz 2 1 Departent of

More information

This model assumes that the probability of a gap has size i is proportional to 1/i. i.e., i log m e. j=1. E[gap size] = i P r(i) = N f t.

This model assumes that the probability of a gap has size i is proportional to 1/i. i.e., i log m e. j=1. E[gap size] = i P r(i) = N f t. CS 493: Algoriths for Massive Data Sets Feb 2, 2002 Local Models, Bloo Filter Scribe: Qin Lv Local Models In global odels, every inverted file entry is copressed with the sae odel. This work wells when

More information