
This article appeared in a journal published by Elsevier. The attached copy is furnished to the author for internal non-commercial research and education use, including for instruction at the author's institution and sharing with colleagues. Other uses, including reproduction and distribution, or selling or licensing copies, or posting to personal, institutional or third party websites are prohibited. In most cases authors are permitted to post their version of the article (e.g. in Word or Tex form) to their personal website or institutional repository. Authors requiring further information regarding Elsevier's archiving and manuscript policies are encouraged to visit:

Statistics and Probability Letters 86 (2014) 91–98

Tight lower bound on the probability of a binomial exceeding its expectation

Spencer Greenberg (a,*), Mehryar Mohri (a,b)

(a) Courant Institute of Mathematical Sciences, 251 Mercer Street, New York, NY 10012, United States
(b) Google Research, 76 Ninth Avenue, New York, NY 10011, United States

Article history: Received 2 November 2013; received in revised form December 2013; accepted 2 December 2013; available online 22 December 2013.

Keywords: Binomial distribution; Lower bound; Expected value; Relative deviation; Machine learning

Abstract. We give the proof of a tight lower bound on the probability that a binomial random variable exceeds its expected value. The inequality plays an important role in a variety of contexts, including the analysis of relative deviation bounds in learning theory and generalization bounds for unbounded loss functions. © 2013 Elsevier B.V. All rights reserved.

1. Motivation

This paper presents a tight lower bound on the probability that a binomial random variable exceeds its expected value. If the binomial distribution were symmetric around its mean, such a bound would be trivially equal to 1/2. And indeed, when the number of trials for a binomial distribution is large, and the probability p of success on each trial is not too close to 0 or to 1, the binomial distribution is approximately symmetric. With p fixed, and m sufficiently large, the de Moivre–Laplace theorem tells us that we can approximate the binomial distribution with a normal distribution. But, when p is close to 0 or 1, or the number of trials m is small, substantial asymmetry around the mean can arise, as illustrated by Fig. 1, which shows the binomial distribution for different values of m and p.

The lower bound we prove has been invoked several times in the machine learning literature, starting with work on relative deviation bounds by Vapnik (1998), where it is stated without proof.
Relative deviation bounds are useful bounds in learning theory that provide more insight than the standard generalization bounds because the approximation error is scaled by the square root of the true error. In particular, they lead to sharper bounds for empirical risk minimization, and play a critical role in the analysis of the generalization bounds for unbounded loss functions (Cortes et al., 2010). This binomial inequality is mentioned and used again without proof or reference in Anthony and Shawe-Taylor (1993), where the authors improve the original work of Vapnik (1998) on relative deviation bounds by a constant factor. The same claim later appears in Vapnik (2006), and implicitly in other publications referring to the relative deviation bounds of Vapnik (1998).

To the best of our knowledge, there is no publication giving an actual proof of this inequality in the machine learning literature. Our search efforts for a proof in the statistics literature were also unsuccessful. Instead, some references suggest in fact that such a proof is indeed not available. In particular, we found one attempt to prove this result in the context

* Corresponding author. E-mail address: sgg247@nyu.edu (S. Greenberg).
– see front matter © 2013 Elsevier B.V. All rights reserved.

Fig. 1. Plots of the probability of getting different numbers of successes, for the binomial distribution B(m, p), shown for three different values of m, the number of trials, and p, the probability of a success on each trial. Note that in the second and third image, the distribution is clearly not symmetrical around its mean.

of the analysis of some generalization bounds (Jaeger, 2005), but the proof is not sufficient to show the general case needed for the proof of Vapnik (1998), and only pertains to cases where the number of Bernoulli trials is large enough. A concise proof of this inequality for the special case where p ≥ 1/2 (Rigollet and Tong, 2011) was also recently brought to our attention. However, that proof technique does not seem to readily extend to the general case. We also derived an alternative straightforward proof for the same special case using Slud's inequality, following a suggestion of Luc Devroye (private communication). However, the proof of the general case turned out to be more challenging. Our proof therefore seems to be the first rigorous justification of this inequality, which is needed, among other things, for the analysis of relative deviation bounds in machine learning.

In Section 2, we start with some preliminaries and then give the presentation of our main result. In Section 3, we give a detailed proof of the inequality.

2. Main result

The following is the standard definition of a binomial distribution.

Definition 1. A random variable X is said to be distributed according to the binomial distribution with parameters m (the number of trials) and p (the probability of success on each trial), if for k = 0, 1, …, m we have

P[X = k] = C(m, k) p^k (1 − p)^{m−k},    (1)

where C(m, k) denotes the binomial coefficient. The binomial distribution with parameters m and p is denoted by B(m, p). It has mean mp and variance mp(1 − p).

The following theorem is the main result of this paper.

Theorem 1.
For any positive integer m and any probability p such that p > 1/m, let X be a random variable distributed according to B(m, p). Then, the following inequality holds:

P[X ≥ E[X]] > 1/4,    (2)

where E[X] = mp is the expected value of X. The lower bound 1/4 is never reached but is approached asymptotically when m = 2 as p → 1/2 from the right. Note that when m = 2, the case p = 1/2 is excluded from consideration, due to our assumption p > 1/m.

In words, the theorem says that a coin that is flipped a fixed number of times always has a probability of more than 1/4 of getting at least as many heads as the expected value of the number of heads, as long as the coin's chance of getting a head on each flip is not so low that the expected value is less than or equal to 1. The inequality is tight, as illustrated by Fig. 2.

3. Proof

Our proof is based on the following series of lemmas and corollaries and makes use of the Camp–Paulson normal approximation to the binomial cumulative distribution function (Johnson et al., 1995, 2005; Lesch and Jeske, 2009). We start with a lower bound that reduces the problem to a simpler one.

Lemma 1. For all k = 1, 2, …, m − 1, and p ∈ (k/m, (k+1)/m], the following inequality holds:

P_{X∼B(m,p)}[X ≥ E[X]] > P_{X∼B(m,k/m)}[X ≥ k + 1].
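Before working through the proof, Theorem 1 lends itself to a direct numerical check. The following sketch is ours, not part of the original article (the function name is illustrative); it evaluates P[X ≥ E[X]] exactly with rational arithmetic over a grid of m and p:

```python
from fractions import Fraction
from math import ceil, comb

def prob_at_least_mean(m, p):
    """Exact P[X >= E[X]] for X ~ B(m, p), with p a Fraction."""
    k0 = ceil(m * p)                          # smallest integer k with k >= mp
    return sum(comb(m, k) * p**k * (1 - p)**(m - k) for k in range(k0, m + 1))

# Theorem 1: P[X >= E[X]] > 1/4 whenever p > 1/m.
for m in range(1, 21):
    for num in range(1, 4 * m):
        p = Fraction(num, 4 * m)              # grid of probabilities in (0, 1)
        if p > Fraction(1, m):
            assert prob_at_least_mean(m, p) > Fraction(1, 4)

# The bound is approached at m = 2 as p decreases toward 1/2:
print(float(prob_at_least_mean(2, Fraction(501, 1000))))  # prints 0.251001
```

Exact rationals avoid any floating-point doubt about the strict inequality near the boundary.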

Fig. 2. This plot depicts P[X ≥ E[X]], the probability that a binomially distributed random variable X exceeds its expectation, as a function of the trial success probability p. Each colored line corresponds to a different number of trials, m = 2, 3, …, 8. Each colored line is dotted in the region where p ≤ 1/m, and solid in the region that our proof pertains to, where p > 1/m. The dashed horizontal line at 1/4 represents the value of the lower bound. Our theorem is equivalent to saying that, for all positive integers m (not just the values of m shown in the plot), the solid portions of the colored lines never cross below the dashed horizontal line. As can be seen from the figure, the lower bound is nearly met for many values of p.

Proof. Let X be a random variable distributed according to B(m, p) and let F(m, p) denote P[X ≥ E[X]]. Since E[X] = mp, F(m, p) can be written as the following sum:

F(m, p) = Σ_{k'=⌈mp⌉}^{m} C(m, k') p^{k'} (1 − p)^{m−k'}.

We will consider the smallest value that F(m, p) can take for p ∈ (k/m, (k+1)/m] and k a positive integer. Observe that if we restrict p to be in the half-open interval I_k = (k/m, (k+1)/m], which represents a region between the discontinuities of F(m, p) that result from ⌈mp⌉, then we have ⌈mp⌉ = k + 1. Thus, we can write, for p ∈ I_k,

F(m, p) = Σ_{k'=k+1}^{m} C(m, k') p^{k'} (1 − p)^{m−k'}.

The function p ↦ F(m, p) is differentiable for all p ∈ I_k and its differential is

∂F(m, p)/∂p = Σ_{k'=k+1}^{m} C(m, k') [k' p^{k'−1} (1 − p)^{m−k'} − (m − k') p^{k'} (1 − p)^{m−k'−1}]
            = Σ_{k'=k+1}^{m} C(m, k') p^{k'−1} (1 − p)^{m−k'−1} (k' − mp).

Furthermore, for p ∈ I_k we have mp ≤ k + 1 ≤ k', therefore k' − mp ≥ 0 for every term in the sum, and so ∂F(m, p)/∂p ≥ 0. The inequality is in fact strict when p ≠ 0 and p ≠ 1, since at least one term of the sum is then positive. Thus, the function p ↦ F(m, p) is strictly increasing within each I_k. In view of that, the value of F(m, p) for p ∈ I_k is strictly lower bounded by lim_{p→(k/m)+} F(m, p), which is given by

lim_{p→(k/m)+} F(m, p) = Σ_{k'=k+1}^{m} C(m, k') (k/m)^{k'} (1 − k/m)^{m−k'} = P_{X∼B(m,k/m)}[X ≥ k + 1].

Therefore, for k = 1, 2, …, m − 1, whenever p ∈ (k/m, (k+1)/m], we have F(m, p) > P_{X∼B(m,k/m)}[X ≥ k + 1]. □

Corollary 1. For all p ∈ (1/m, 1), the following inequality holds:

P_{X∼B(m,p)}[X ≥ E[X]] > 1 − max_{k∈{1,…,m−1}} P_{X∼B(m,k/m)}[X ≤ k].
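Lemma 1's reduction can likewise be checked numerically before proceeding. The sketch below is ours (illustrative names); it samples each interval (k/m, (k+1)/m] and compares F(m, p) with the limiting tail probability:

```python
from fractions import Fraction
from math import ceil, comb

def upper_tail(m, p, k0):
    """Exact P[X >= k0] for X ~ B(m, p)."""
    return sum(comb(m, j) * p**j * (1 - p)**(m - j) for j in range(k0, m + 1))

def F(m, p):
    """F(m, p) = P[X >= E[X]] for X ~ B(m, p)."""
    return upper_tail(m, p, ceil(m * p))

# Lemma 1: for p in (k/m, (k+1)/m], F(m, p) is at least the limit value
# P_{B(m, k/m)}[X >= k+1] obtained as p approaches k/m from the right.
for m in range(2, 16):
    for k in range(1, m):
        limit = upper_tail(m, Fraction(k, m), k + 1)
        for t in range(1, 5):                 # 4 sample points per interval
            p = Fraction(k, m) + Fraction(t, 4 * m)
            assert F(m, p) >= limit
```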

Fig. 3. For m = 12, 22, …, 72 and k = 1, …, m − 1, this plot depicts the values of P_{X∼B(m,k/m)}[X ≤ k] as colored dots (with one color per choice of m), against the values of the bound from Lemma 2, which are shown as short horizontal lines of matching color. The upper bound that we need to demonstrate, 3/4, is shown as a blue horizontal line.

Proof. By Lemma 1, the following inequality holds:

P_{X∼B(m,p)}[X ≥ E[X]] > min_{k∈{1,…,m−1}} P_{X∼B(m,k/m)}[X ≥ k + 1].

The right-hand side is equivalent to

min_{k∈{1,…,m−1}} (1 − P_{X∼B(m,k/m)}[X ≤ k]) = 1 − max_{k∈{1,…,m−1}} P_{X∼B(m,k/m)}[X ≤ k],

which concludes the proof. □

In view of Corollary 1, in order to prove our main result, it suffices that we upper bound the expression

P_{X∼B(m,k/m)}[X ≤ k] = Σ_{k'=0}^{k} C(m, k') (k/m)^{k'} (1 − k/m)^{m−k'}

by 3/4 for all integers m ≥ 2 and 1 ≤ k ≤ m − 1. Note that the case m = 1 is irrelevant, since the inequality p > 1/m assumed for our main result cannot hold in that case, due to p being a probability. The case k = 0 can also be ignored, since it corresponds to p ≤ 1/m. Finally, the case k = m is irrelevant, since it corresponds to p > 1. We note, furthermore, that p = 1 immediately gives P_{X∼B(m,p)}[X ≥ E[X]] = 1 > 1/4.

Now, we introduce some lemmas which will be used to prove our main result.

Lemma 2. The following inequality holds for all k = 1, 2, …, m − 1:

P_{X∼B(m,k/m)}[X ≤ k] ≤ Φ[(β_k θ + γ_{k,m}) / (3 √(β_k + γ_{k,m}))] + 0.007 √(m/(m − 1)),

where Φ: x ↦ (1/√(2π)) ∫_{−∞}^{x} e^{−s²/2} ds is the cumulative distribution function of the standard normal distribution and β_k, γ_{k,m}, and θ are defined as follows:

β_k = 1/(k²(k + 1))^{1/3},    γ_{k,m} = 1/(m − k),    θ = (17 − 9·2^{2/3})/2^{1/3} ≈ 2.1536.

Proof. Our proof makes use of the Camp–Paulson normal approximation to the binomial cumulative distribution function (Johnson et al., 1995, 2005; Lesch and Jeske, 2009), which helps us reformulate the bound sought in terms of the normal distribution. The Camp–Paulson approximation improves on the classical normal approximation by using a non-linear transformation. This is useful for modeling the asymmetry that can occur in the binomial distribution. The Camp–Paulson bound can be stated as follows (Johnson et al., 1995, 2005):

| P_{X∼B(m,p)}[X ≤ k] − Φ[(c − µ)/σ] | ≤ 0.007 / √(mp(1 − p)),

where

c = (1 − b) r^{1/3},  µ = 1 − a,  σ = √(b r^{2/3} + a),  a = 1/(9m − 9k),  b = 1/(9k + 9),  r = (k + 1)(1 − p)/((m − k)p).

Plugging in the definitions for all of these variables yields

Φ[(c − µ)/σ] = Φ[ ( (1 − 1/(9k + 9)) ((k + 1)(1 − p)/((m − k)p))^{1/3} − 1 + 1/(9m − 9k) ) / √( (1/(9k + 9)) ((k + 1)(1 − p)/((m − k)p))^{2/3} + 1/(9m − 9k) ) ].

Applying this bound to the case of interest for us, where p = k/m and thus r = (k + 1)/k, yields

(c − µ)/σ = (α_k + γ_{k,m}) / (3 √(β_k + γ_{k,m})),

with α_k = 9(((k + 1)/k)^{1/3} − 1) − (1/k)(k/(k + 1))^{2/3}, and with β_k = 1/(k²(k + 1))^{1/3} and γ_{k,m} = 1/(m − k) as in the statement of the lemma. Thus, we can write

P_{X∼B(m,k/m)}[X ≤ k] ≤ Φ[(α_k + γ_{k,m}) / (3 √(β_k + γ_{k,m}))] + 0.007 √(m/(k(m − k))).    (3)

To simplify this expression, we will first upper bound α_k in terms of β_k. To do so, we consider the ratio α_k/β_k. Let λ = ((k + 1)/k)^{1/3}, so that 1/k = λ³ − 1 and λ ∈ (1, 2^{1/3}]. Then, the ratio can be rewritten as follows:

α_k/β_k = [9(λ − 1) − (λ³ − 1)/λ²] · λ/(λ³ − 1) = 9λ/(1 + λ + λ²) − 1/λ,

using λ³ − 1 = (λ − 1)(1 + λ + λ²). The expression is differentiable and its differential is given by

d(α_k/β_k)/dλ = 9[(1 + λ + λ²) − λ(2λ + 1)]/(1 + λ + λ²)² + 1/λ²
             = 9(1 − λ²)/(1 + λ + λ²)² + 1/λ²
             = [9 − 8(λ − 1)⁴ − 30(λ − 1)³ − 30(λ − 1)²] / [λ²(1 + λ + λ²)²].

For λ ∈ (1, 2^{1/3}], λ − 1 ≤ 2^{1/3} − 1; thus, the following inequality holds:

8(λ − 1)⁴ + 30(λ − 1)³ + 30(λ − 1)² ≤ 8(2^{1/3} − 1)⁴ + 30(2^{1/3} − 1)³ + 30(2^{1/3} − 1)² < 9.

Thus, the derivative is positive, so α_k/β_k is an increasing function of λ on the interval (1, 2^{1/3}], and its maximum is reached for λ = 2^{1/3}, that is, for k = 1. For that choice of λ, the ratio can be written as

9·2^{1/3}/(1 + 2^{1/3} + 2^{2/3}) − 2^{−1/3} = (17 − 9·2^{2/3})/2^{1/3} = θ,

which upper bounds α_k/β_k. Since Φ[x] is a strictly increasing function, using α_k ≤ θβ_k yields

Φ[(α_k + γ_{k,m}) / (3 √(β_k + γ_{k,m}))] ≤ Φ[(β_k θ + γ_{k,m}) / (3 √(β_k + γ_{k,m}))].    (4)
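The monotonicity of the ratio and the value of θ in this step can be verified numerically. The sketch below is ours; it assumes the closed forms as reconstructed here, namely ratio(λ) = 9λ/(1 + λ + λ²) − 1/λ and θ = (17 − 9·2^{2/3})/2^{1/3}:

```python
from math import isclose

CBRT2 = 2 ** (1 / 3)
THETA = (17 - 9 * 2 ** (2 / 3)) / CBRT2      # reconstructed constant, ~2.1536

def ratio(lam):
    """alpha_k / beta_k expressed through lambda = ((k+1)/k)**(1/3)."""
    return 9 * lam / (1 + lam + lam * lam) - 1 / lam

# The ratio is increasing on (1, 2^(1/3)] ...
grid = [1 + i * (CBRT2 - 1) / 1000 for i in range(1, 1001)]
assert all(ratio(a) < ratio(b) for a, b in zip(grid, grid[1:]))

# ... and attains its maximum theta at lambda = 2^(1/3), i.e. at k = 1.
assert isclose(ratio(CBRT2), THETA, rel_tol=1e-12)
for k in range(1, 100):
    lam = ((k + 1) / k) ** (1 / 3)
    assert ratio(lam) <= THETA + 1e-12
```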

We now bound the term 0.007 √(m/(k(m − k))). The quadratic function k ↦ k(m − k), for k = 1, 2, …, m − 1, achieves its minimum at k = 1 and k = m − 1, giving k(m − k) ≥ m − 1. Thus, 0.007 √(m/(k(m − k))) ≤ 0.007 √(m/(m − 1)). In view of (3) and (4), we can write

P_{X∼B(m,k/m)}[X ≤ k] ≤ Φ[(β_k θ + γ_{k,m}) / (3 √(β_k + γ_{k,m}))] + 0.007 √(m/(m − 1)).

This concludes the proof. □

As illustrated by Fig. 3, the upper bound provided by Lemma 2 closely approximates the true value.

Lemma 3. Let β_k = 1/(k²(k + 1))^{1/3} and γ_{k,m} = 1/(m − k) for m > k and k = 1, 2, …, m − 1. Then, the following inequality holds for θ = (17 − 9·2^{2/3})/2^{1/3}:

Φ[(β_k θ + γ_{k,m}) / (3 √(β_k + γ_{k,m}))] ≤ Φ[(β_k θ + 1) / (3 √(β_k + 1))].

Proof. Since for k = 1, …, m − 1 we have γ_{k,m} ≤ 1, the following inequality holds:

(β_k θ + γ_{k,m}) / (3 √(β_k + γ_{k,m})) ≤ max_{γ∈[0,1]} (β_k θ + γ) / (3 √(β_k + γ)).

Since β_k > 0, the function φ: γ ↦ (β_k θ + γ)/(3 √(β_k + γ)) is continuously differentiable for γ ∈ [0, 1]. Its derivative is given by

φ'(γ) = (γ + β_k(2 − θ)) / (6 (β_k + γ)^{3/2}).

Since 2 − θ ≈ −0.1536 < 0, φ'(γ) is non-negative if and only if γ ≥ β_k(θ − 2). Thus, φ(γ) is decreasing for γ < β_k(θ − 2) and increasing for values of γ larger than that threshold. That implies that the shape of the graph of φ(γ) is such that the function's value is maximized at the end points, so

max_{γ∈[0,1]} φ(γ) = max(φ(0), φ(1)) = max( θ √β_k / 3, (β_k θ + 1)/(3 √(β_k + 1)) ).

The inequality θ √β_k / 3 ≤ (β_k θ + 1)/(3 √(β_k + 1)) holds if and only if β_k(β_k + 1)θ² ≤ (β_k θ + 1)², that is, if β_k θ(θ − 2) ≤ 1, i.e., β_k ≤ 1/(θ(θ − 2)) ≈ 3.0227. But since β_k is a decreasing function of k, it has β_1 = 2^{−1/3} ≈ 0.7937 as its upper bound, and so this necessary requirement always holds. That means that the maximum value of φ(γ) for γ ∈ [0, 1] occurs at γ = 1, yielding the upper bound (β_k θ + 1)/(3 √(β_k + 1)), which concludes the proof. □

Corollary 2. The following inequality holds for all m ≥ 2 and 2 ≤ k ≤ m − 1:

P_{X∼B(m,k/m)}[X ≤ k] ≤ 3/4.

Proof. By Lemmas 2 and 3, we can write

P_{X∼B(m,k/m)}[X ≤ k] ≤ Φ[(β_k θ + 1) / (3 √(β_k + 1))] + 0.007 √(m/(m − 1)).

Furthermore, β_k = 1/(k²(k + 1))^{1/3} is a decreasing function of k. Therefore, for k ≥ 2, it must always be within the range β_k ∈ (lim_{k→∞} β_k, β_2] = (0, 1/(2^{2/3} 3^{1/3})], which implies

(β_k θ + 1) / (3 √(β_k + 1)) ≤ max_{β∈[0,β_2]} (βθ + 1)/(3 √(β + 1)).

The derivative of the differentiable function g: β ↦ (βθ + 1)/(3 √(β + 1)) is given by

g'(β) = ((β + 2)θ − 1) / (6 (β + 1)^{3/2}).

We have (β + 2)θ − 1 ≥ 2θ − 1 ≈ 3.31 > 0, thus g'(β) ≥ 0.
Hence, the maximum of g(β) over [0, β_2] occurs at β = β_2, where g(β_2) is slightly smaller than 0.53968. Thus, we can write

Φ[(β_k θ + 1) / (3 √(β_k + 1))] ≤ Φ[ max_{β∈[0,β_2]} (βθ + 1)/(3 √(β + 1)) ] = Φ[g(β_2)] < Φ[0.53968].
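As a numerical check of this step (our sketch; θ and β_2 are the reconstructed constants, hard-coded below as assumptions):

```python
from math import erf, sqrt

THETA = (17 - 9 * 2 ** (2 / 3)) / 2 ** (1 / 3)   # reconstructed, ~2.1536
BETA2 = 12 ** (-1 / 3)                           # beta_2 = 1/(2^(2/3) 3^(1/3))

def Phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def g(beta):
    return (beta * THETA + 1) / (3 * sqrt(beta + 1))

# g is increasing on [0, beta_2], so its maximum there is g(beta_2) ~ 0.53968 ...
grid = [i * BETA2 / 1000 for i in range(1001)]
assert all(g(a) < g(b) for a, b in zip(grid, grid[1:]))
assert 0.5396 < g(BETA2) < 0.5397

# ... and Phi[g(beta_2)] plus the Camp-Paulson error term stays below 3/4:
assert Phi(g(BETA2)) + 0.007 * sqrt(2) < 0.75
```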

Now, Φ[0.53968] + 0.007 √(m/(m − 1)) is a decreasing function of m; thus, for m ≥ 2, it is maximized at m = 2. Hence, the following holds:

P_{X∼B(m,k/m)}[X ≤ k] ≤ Φ[0.53968] + 0.007 √2 ≈ 0.7152 < 3/4,

as required. □

The case k = 1 is addressed by the following lemma.

Lemma 4. Let X be a random variable distributed according to B(m, 1/m). Then, the following inequality holds for any m ≥ 2:

P[X ≤ 1] ≤ 3/4.

Proof. For m ≥ 2, define the function ρ by

ρ(m) = P[X ≤ 1] = (1 − 1/m)^m + m (1/m) (1 − 1/m)^{m−1} = (2 − 1/m)(1 − 1/m)^{m−1}.

The value of the function for m = 2 is given by ρ(2) = (3/2)(1/2) = 3/4. Thus, to prove the result, it suffices to show that ρ is non-increasing for m ≥ 2. The derivative of ρ, viewing m as a continuous variable, is given for all m ≥ 2 by

ρ'(m) = (1 − 1/m)^{m−1} [ 2/m + (2 − 1/m) log(1 − 1/m) ].

Thus, for m ≥ 2, ρ'(m) ≤ 0 if and only if 2/m + (2 − 1/m) log(1 − 1/m) ≤ 0. Now, for m ≥ 2, using the first three terms of the expansion log(1 − x) = −Σ_{n=1}^{∞} x^n/n, we can write

(2 − 1/m) log(1 − 1/m) ≤ (2 − 1/m)(−1/m − 1/(2m²) − 1/(3m³)) = −2/m − 1/(6m³) + 1/(3m⁴) ≤ −2/m,

where the last inequality follows from 1/(6m³) − 1/(3m⁴) ≥ 0 for m ≥ 2. This shows that ρ'(m) ≤ 0 for all m ≥ 2 and concludes the proof. □

We now complete the proof of our main result, by combining the previous lemmas and corollaries.

Proof of Theorem 1. By Corollary 1, we can write

P_{X∼B(m,p)}[X ≥ E[X]] > 1 − max_{k∈{1,…,m−1}} P_{X∼B(m,k/m)}[X ≤ k]
= 1 − max( P_{X∼B(m,1/m)}[X ≤ 1], max_{k∈{2,…,m−1}} P_{X∼B(m,k/m)}[X ≤ k] )
≥ 1 − max(3/4, 3/4) = 1 − 3/4 = 1/4,

where the last inequality holds by Corollary 2 and Lemma 4. □

Corollary 3. For any positive integer m and any probability p such that p < 1 − 1/m, let X be a random variable distributed according to B(m, p). Then, the following inequality holds:

P[X ≤ E[X]] > 1/4,    (5)

where E[X] = mp is the expected value of X.

Proof. Let G(m, p) be defined as

G(m, p) = P_{X∼B(m,p)}[X ≤ E[X]] = Σ_{k=0}^{⌊mp⌋} C(m, k) p^k (1 − p)^{m−k},

and let F(m, p) be defined as before as

F(m, p) = P_{X∼B(m,p)}[X ≥ E[X]] = Σ_{k=⌈mp⌉}^{m} C(m, k) p^k (1 − p)^{m−k}.
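Both Lemma 4 and the symmetry relating G and F can be verified exactly. The sketch below is ours (illustrative names), using rational arithmetic:

```python
from fractions import Fraction
from math import ceil, comb, floor

def pmf(m, k, p):
    return comb(m, k) * p**k * (1 - p)**(m - k)

def F(m, p):
    """P[X >= E[X]] for X ~ B(m, p)."""
    return sum(pmf(m, k, p) for k in range(ceil(m * p), m + 1))

def G(m, p):
    """P[X <= E[X]] for X ~ B(m, p)."""
    return sum(pmf(m, k, p) for k in range(0, floor(m * p) + 1))

def rho(m):
    """Lemma 4: P[X <= 1] for X ~ B(m, 1/m), in closed form."""
    return (2 - Fraction(1, m)) * (1 - Fraction(1, m)) ** (m - 1)

# Lemma 4: rho(2) = 3/4 and rho is non-increasing, so P[X <= 1] <= 3/4.
vals = [rho(m) for m in range(2, 50)]
assert vals[0] == Fraction(3, 4)
assert all(a >= b for a, b in zip(vals, vals[1:]))

# The symmetry used for Corollary 3: G(m, p) = F(m, 1 - p) for all p.
for m in range(1, 12):
    for num in range(1, 8 * m):
        p = Fraction(num, 8 * m)
        assert G(m, p) == F(m, 1 - p)
```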

Then, we can write, for q = 1 − p,

G(m, p) = Σ_{k=0}^{⌊mp⌋} C(m, k) (1 − q)^k q^{m−k}
        = Σ_{t=m−⌊mp⌋}^{m} C(m, t) q^t (1 − q)^{m−t}    [substituting t = m − k]
        = Σ_{t=⌈mq⌉}^{m} C(m, t) q^t (1 − q)^{m−t}
        = F(m, q) = F(m, 1 − p) > 1/4,

using m − ⌊mp⌋ = ⌈mq⌉, with the inequality at the end being an application of Theorem 1, which holds so long as q > 1/m, or equivalently, so long as p < 1 − 1/m. □

4. Conclusion

We presented a rigorous justification of an inequality needed for the proof of relative deviation bounds in machine learning theory. A number of first attempts to find such a proof in the literature, or to prove this result ourselves, indicated that the inequality is not straightforward. Nevertheless, a simpler proof of this inequality is likely possible, and we may present such a simpler result in the future.

Acknowledgment

We thank Luc Devroye for discussions about the topic of this work.

References

Anthony, M., Shawe-Taylor, J., 1993. A result of Vapnik with applications. Discrete Appl. Math. 47.
Cortes, C., Mansour, Y., Mohri, M., 2010. Learning bounds for importance weighting. In: Advances in Neural Information Processing Systems (NIPS 2010). MIT Press, Vancouver, Canada.
Jaeger, S.A., 2005. Generalization bounds and complexities based on sparsity and clustering for convex combinations of functions from random classes. J. Mach. Learn. Res. 6.
Johnson, N., Kemp, A., Kotz, S., 2005. Univariate Discrete Distributions. In: Wiley Series in Probability and Statistics. Wiley & Sons.
Johnson, N., Kotz, S., Balakrishnan, N., 1995. Continuous Univariate Distributions. In: Wiley Series in Probability and Mathematical Statistics: Applied Probability and Statistics, vol. 2. Wiley & Sons.
Lesch, S.M., Jeske, D.R., 2009. Some suggestions for teaching about normal approximations to Poisson and binomial distribution functions. Amer. Statist. 63.
Rigollet, P., Tong, X., 2011. Neyman–Pearson classification, convexity and stochastic constraints. J. Mach. Learn. Res. 12.
Vapnik, V.N., 1998. Statistical Learning Theory. Wiley-Interscience.
Vapnik, V.N., 2006. Estimation of Dependences Based on Empirical Data. Springer-Verlag.


More information

Best Arm Identification: A Unified Approach to Fixed Budget and Fixed Confidence

Best Arm Identification: A Unified Approach to Fixed Budget and Fixed Confidence Best Ar Identification: A Unified Approach to Fixed Budget and Fixed Confidence Victor Gabillon Mohaad Ghavazadeh Alessandro Lazaric INRIA Lille - Nord Europe, Tea SequeL {victor.gabillon,ohaad.ghavazadeh,alessandro.lazaric}@inria.fr

More information

A Simple Regression Problem

A Simple Regression Problem A Siple Regression Proble R. M. Castro March 23, 2 In this brief note a siple regression proble will be introduced, illustrating clearly the bias-variance tradeoff. Let Y i f(x i ) + W i, i,..., n, where

More information

M ath. Res. Lett. 15 (2008), no. 2, c International Press 2008 SUM-PRODUCT ESTIMATES VIA DIRECTED EXPANDERS. Van H. Vu. 1.

M ath. Res. Lett. 15 (2008), no. 2, c International Press 2008 SUM-PRODUCT ESTIMATES VIA DIRECTED EXPANDERS. Van H. Vu. 1. M ath. Res. Lett. 15 (2008), no. 2, 375 388 c International Press 2008 SUM-PRODUCT ESTIMATES VIA DIRECTED EXPANDERS Van H. Vu Abstract. Let F q be a finite field of order q and P be a polynoial in F q[x

More information

Solutions of some selected problems of Homework 4

Solutions of some selected problems of Homework 4 Solutions of soe selected probles of Hoework 4 Sangchul Lee May 7, 2018 Proble 1 Let there be light A professor has two light bulbs in his garage. When both are burned out, they are replaced, and the next

More information

Support recovery in compressed sensing: An estimation theoretic approach

Support recovery in compressed sensing: An estimation theoretic approach Support recovery in copressed sensing: An estiation theoretic approach Ain Karbasi, Ali Horati, Soheil Mohajer, Martin Vetterli School of Coputer and Counication Sciences École Polytechnique Fédérale de

More information

Algebraic Montgomery-Yang problem: the log del Pezzo surface case

Algebraic Montgomery-Yang problem: the log del Pezzo surface case c 2014 The Matheatical Society of Japan J. Math. Soc. Japan Vol. 66, No. 4 (2014) pp. 1073 1089 doi: 10.2969/jsj/06641073 Algebraic Montgoery-Yang proble: the log del Pezzo surface case By DongSeon Hwang

More information

A new type of lower bound for the largest eigenvalue of a symmetric matrix

A new type of lower bound for the largest eigenvalue of a symmetric matrix Linear Algebra and its Applications 47 7 9 9 www.elsevier.co/locate/laa A new type of lower bound for the largest eigenvalue of a syetric atrix Piet Van Mieghe Delft University of Technology, P.O. Box

More information

Statistics and Probability Letters

Statistics and Probability Letters Statistics and Probability Letters 79 2009 223 233 Contents lists available at ScienceDirect Statistics and Probability Letters journal hoepage: www.elsevier.co/locate/stapro A CLT for a one-diensional

More information

On the Inapproximability of Vertex Cover on k-partite k-uniform Hypergraphs

On the Inapproximability of Vertex Cover on k-partite k-uniform Hypergraphs On the Inapproxiability of Vertex Cover on k-partite k-unifor Hypergraphs Venkatesan Guruswai and Rishi Saket Coputer Science Departent Carnegie Mellon University Pittsburgh, PA 1513. Abstract. Coputing

More information

arxiv: v2 [math.co] 3 Dec 2008

arxiv: v2 [math.co] 3 Dec 2008 arxiv:0805.2814v2 [ath.co] 3 Dec 2008 Connectivity of the Unifor Rando Intersection Graph Sion R. Blacburn and Stefanie Gere Departent of Matheatics Royal Holloway, University of London Egha, Surrey TW20

More information

ESE 523 Information Theory

ESE 523 Information Theory ESE 53 Inforation Theory Joseph A. O Sullivan Sauel C. Sachs Professor Electrical and Systes Engineering Washington University 11 Urbauer Hall 10E Green Hall 314-935-4173 (Lynda Marha Answers) jao@wustl.edu

More information

arxiv: v1 [math.nt] 14 Sep 2014

arxiv: v1 [math.nt] 14 Sep 2014 ROTATION REMAINDERS P. JAMESON GRABER, WASHINGTON AND LEE UNIVERSITY 08 arxiv:1409.411v1 [ath.nt] 14 Sep 014 Abstract. We study properties of an array of nubers, called the triangle, in which each row

More information

Learnability and Stability in the General Learning Setting

Learnability and Stability in the General Learning Setting Learnability and Stability in the General Learning Setting Shai Shalev-Shwartz TTI-Chicago shai@tti-c.org Ohad Shair The Hebrew University ohadsh@cs.huji.ac.il Nathan Srebro TTI-Chicago nati@uchicago.edu

More information

E0 370 Statistical Learning Theory Lecture 6 (Aug 30, 2011) Margin Analysis

E0 370 Statistical Learning Theory Lecture 6 (Aug 30, 2011) Margin Analysis E0 370 tatistical Learning Theory Lecture 6 (Aug 30, 20) Margin Analysis Lecturer: hivani Agarwal cribe: Narasihan R Introduction In the last few lectures we have seen how to obtain high confidence bounds

More information

Bipartite subgraphs and the smallest eigenvalue

Bipartite subgraphs and the smallest eigenvalue Bipartite subgraphs and the sallest eigenvalue Noga Alon Benny Sudaov Abstract Two results dealing with the relation between the sallest eigenvalue of a graph and its bipartite subgraphs are obtained.

More information

Support Vector Machines. Maximizing the Margin

Support Vector Machines. Maximizing the Margin Support Vector Machines Support vector achines (SVMs) learn a hypothesis: h(x) = b + Σ i= y i α i k(x, x i ) (x, y ),..., (x, y ) are the training exs., y i {, } b is the bias weight. α,..., α are the

More information

1 Generalization bounds based on Rademacher complexity

1 Generalization bounds based on Rademacher complexity COS 5: Theoretical Machine Learning Lecturer: Rob Schapire Lecture #0 Scribe: Suqi Liu March 07, 08 Last tie we started proving this very general result about how quickly the epirical average converges

More information

Computable Shell Decomposition Bounds

Computable Shell Decomposition Bounds Coputable Shell Decoposition Bounds John Langford TTI-Chicago jcl@cs.cu.edu David McAllester TTI-Chicago dac@autoreason.co Editor: Leslie Pack Kaelbling and David Cohn Abstract Haussler, Kearns, Seung

More information

Estimating Parameters for a Gaussian pdf

Estimating Parameters for a Gaussian pdf Pattern Recognition and achine Learning Jaes L. Crowley ENSIAG 3 IS First Seester 00/0 Lesson 5 7 Noveber 00 Contents Estiating Paraeters for a Gaussian pdf Notation... The Pattern Recognition Proble...3

More information

A Low-Complexity Congestion Control and Scheduling Algorithm for Multihop Wireless Networks with Order-Optimal Per-Flow Delay

A Low-Complexity Congestion Control and Scheduling Algorithm for Multihop Wireless Networks with Order-Optimal Per-Flow Delay A Low-Coplexity Congestion Control and Scheduling Algorith for Multihop Wireless Networks with Order-Optial Per-Flow Delay Po-Kai Huang, Xiaojun Lin, and Chih-Chun Wang School of Electrical and Coputer

More information

Collision-based Testers are Optimal for Uniformity and Closeness

Collision-based Testers are Optimal for Uniformity and Closeness Electronic Colloquiu on Coputational Coplexity, Report No. 178 (016) Collision-based Testers are Optial for Unifority and Closeness Ilias Diakonikolas Theis Gouleakis John Peebles Eric Price USC MIT MIT

More information

ADVANCES ON THE BESSIS- MOUSSA-VILLANI TRACE CONJECTURE

ADVANCES ON THE BESSIS- MOUSSA-VILLANI TRACE CONJECTURE ADVANCES ON THE BESSIS- MOUSSA-VILLANI TRACE CONJECTURE CHRISTOPHER J. HILLAR Abstract. A long-standing conjecture asserts that the polynoial p(t = Tr(A + tb ] has nonnegative coefficients whenever is

More information

A Note on the Applied Use of MDL Approximations

A Note on the Applied Use of MDL Approximations A Note on the Applied Use of MDL Approxiations Daniel J. Navarro Departent of Psychology Ohio State University Abstract An applied proble is discussed in which two nested psychological odels of retention

More information

Sampling How Big a Sample?

Sampling How Big a Sample? C. G. G. Aitken, 1 Ph.D. Sapling How Big a Saple? REFERENCE: Aitken CGG. Sapling how big a saple? J Forensic Sci 1999;44(4):750 760. ABSTRACT: It is thought that, in a consignent of discrete units, a certain

More information

Biostatistics Department Technical Report

Biostatistics Department Technical Report Biostatistics Departent Technical Report BST006-00 Estiation of Prevalence by Pool Screening With Equal Sized Pools and a egative Binoial Sapling Model Charles R. Katholi, Ph.D. Eeritus Professor Departent

More information

Testing equality of variances for multiple univariate normal populations

Testing equality of variances for multiple univariate normal populations University of Wollongong Research Online Centre for Statistical & Survey Methodology Working Paper Series Faculty of Engineering and Inforation Sciences 0 esting equality of variances for ultiple univariate

More information

1 Rademacher Complexity Bounds

1 Rademacher Complexity Bounds COS 511: Theoretical Machine Learning Lecturer: Rob Schapire Lecture #10 Scribe: Max Goer March 07, 2013 1 Radeacher Coplexity Bounds Recall the following theore fro last lecture: Theore 1. With probability

More information

Tight Information-Theoretic Lower Bounds for Welfare Maximization in Combinatorial Auctions

Tight Information-Theoretic Lower Bounds for Welfare Maximization in Combinatorial Auctions Tight Inforation-Theoretic Lower Bounds for Welfare Maxiization in Cobinatorial Auctions Vahab Mirrokni Jan Vondrák Theory Group, Microsoft Dept of Matheatics Research Princeton University Redond, WA 9805

More information

CSE525: Randomized Algorithms and Probabilistic Analysis May 16, Lecture 13

CSE525: Randomized Algorithms and Probabilistic Analysis May 16, Lecture 13 CSE55: Randoied Algoriths and obabilistic Analysis May 6, Lecture Lecturer: Anna Karlin Scribe: Noah Siegel, Jonathan Shi Rando walks and Markov chains This lecture discusses Markov chains, which capture

More information

AN EFFICIENT CLASS OF CHAIN ESTIMATORS OF POPULATION VARIANCE UNDER SUB-SAMPLING SCHEME

AN EFFICIENT CLASS OF CHAIN ESTIMATORS OF POPULATION VARIANCE UNDER SUB-SAMPLING SCHEME J. Japan Statist. Soc. Vol. 35 No. 005 73 86 AN EFFICIENT CLASS OF CHAIN ESTIMATORS OF POPULATION VARIANCE UNDER SUB-SAMPLING SCHEME H. S. Jhajj*, M. K. Shara* and Lovleen Kuar Grover** For estiating the

More information

Robustness and Regularization of Support Vector Machines

Robustness and Regularization of Support Vector Machines Robustness and Regularization of Support Vector Machines Huan Xu ECE, McGill University Montreal, QC, Canada xuhuan@ci.cgill.ca Constantine Caraanis ECE, The University of Texas at Austin Austin, TX, USA

More information

Non-Parametric Non-Line-of-Sight Identification 1

Non-Parametric Non-Line-of-Sight Identification 1 Non-Paraetric Non-Line-of-Sight Identification Sinan Gezici, Hisashi Kobayashi and H. Vincent Poor Departent of Electrical Engineering School of Engineering and Applied Science Princeton University, Princeton,

More information

Perturbation on Polynomials

Perturbation on Polynomials Perturbation on Polynoials Isaila Diouf 1, Babacar Diakhaté 1 & Abdoul O Watt 2 1 Départeent Maths-Infos, Université Cheikh Anta Diop, Dakar, Senegal Journal of Matheatics Research; Vol 5, No 3; 2013 ISSN

More information

Stability Bounds for Non-i.i.d. Processes

Stability Bounds for Non-i.i.d. Processes tability Bounds for Non-i.i.d. Processes Mehryar Mohri Courant Institute of Matheatical ciences and Google Research 25 Mercer treet New York, NY 002 ohri@cis.nyu.edu Afshin Rostaiadeh Departent of Coputer

More information

ORIGAMI CONSTRUCTIONS OF RINGS OF INTEGERS OF IMAGINARY QUADRATIC FIELDS

ORIGAMI CONSTRUCTIONS OF RINGS OF INTEGERS OF IMAGINARY QUADRATIC FIELDS #A34 INTEGERS 17 (017) ORIGAMI CONSTRUCTIONS OF RINGS OF INTEGERS OF IMAGINARY QUADRATIC FIELDS Jürgen Kritschgau Departent of Matheatics, Iowa State University, Aes, Iowa jkritsch@iastateedu Adriana Salerno

More information

Moments of the product and ratio of two correlated chi-square variables

Moments of the product and ratio of two correlated chi-square variables Stat Papers 009 50:581 59 DOI 10.1007/s0036-007-0105-0 REGULAR ARTICLE Moents of the product and ratio of two correlated chi-square variables Anwar H. Joarder Received: June 006 / Revised: 8 October 007

More information

arxiv: v1 [cs.ds] 29 Jan 2012

arxiv: v1 [cs.ds] 29 Jan 2012 A parallel approxiation algorith for ixed packing covering seidefinite progras arxiv:1201.6090v1 [cs.ds] 29 Jan 2012 Rahul Jain National U. Singapore January 28, 2012 Abstract Penghui Yao National U. Singapore

More information

DERIVING PROPER UNIFORM PRIORS FOR REGRESSION COEFFICIENTS

DERIVING PROPER UNIFORM PRIORS FOR REGRESSION COEFFICIENTS DERIVING PROPER UNIFORM PRIORS FOR REGRESSION COEFFICIENTS N. van Erp and P. van Gelder Structural Hydraulic and Probabilistic Design, TU Delft Delft, The Netherlands Abstract. In probles of odel coparison

More information

arxiv: v1 [math.co] 19 Apr 2017

arxiv: v1 [math.co] 19 Apr 2017 PROOF OF CHAPOTON S CONJECTURE ON NEWTON POLYTOPES OF q-ehrhart POLYNOMIALS arxiv:1704.0561v1 [ath.co] 19 Apr 017 JANG SOO KIM AND U-KEUN SONG Abstract. Recently, Chapoton found a q-analog of Ehrhart polynoials,

More information

OBJECTIVES INTRODUCTION

OBJECTIVES INTRODUCTION M7 Chapter 3 Section 1 OBJECTIVES Suarize data using easures of central tendency, such as the ean, edian, ode, and idrange. Describe data using the easures of variation, such as the range, variance, and

More information

Combining Classifiers

Combining Classifiers Cobining Classifiers Generic ethods of generating and cobining ultiple classifiers Bagging Boosting References: Duda, Hart & Stork, pg 475-480. Hastie, Tibsharini, Friedan, pg 246-256 and Chapter 10. http://www.boosting.org/

More information

VC Dimension and Sauer s Lemma

VC Dimension and Sauer s Lemma CMSC 35900 (Spring 2008) Learning Theory Lecture: VC Diension and Sauer s Lea Instructors: Sha Kakade and Abuj Tewari Radeacher Averages and Growth Function Theore Let F be a class of ±-valued functions

More information

STOPPING SIMULATED PATHS EARLY

STOPPING SIMULATED PATHS EARLY Proceedings of the 2 Winter Siulation Conference B.A.Peters,J.S.Sith,D.J.Medeiros,andM.W.Rohrer,eds. STOPPING SIMULATED PATHS EARLY Paul Glasseran Graduate School of Business Colubia University New Yor,

More information

On the Dirichlet Convolution of Completely Additive Functions

On the Dirichlet Convolution of Completely Additive Functions 1 3 47 6 3 11 Journal of Integer Sequences, Vol. 17 014, Article 14.8.7 On the Dirichlet Convolution of Copletely Additive Functions Isao Kiuchi and Makoto Minaide Departent of Matheatical Sciences Yaaguchi

More information

Lower Bounds for Quantized Matrix Completion

Lower Bounds for Quantized Matrix Completion Lower Bounds for Quantized Matrix Copletion Mary Wootters and Yaniv Plan Departent of Matheatics University of Michigan Ann Arbor, MI Eail: wootters, yplan}@uich.edu Mark A. Davenport School of Elec. &

More information

Will Monroe August 9, with materials by Mehran Sahami and Chris Piech. image: Arito. Parameter learning

Will Monroe August 9, with materials by Mehran Sahami and Chris Piech. image: Arito. Parameter learning Will Monroe August 9, 07 with aterials by Mehran Sahai and Chris Piech iage: Arito Paraeter learning Announceent: Proble Set #6 Goes out tonight. Due the last day of class, Wednesday, August 6 (before

More information

AN ESTIMATE FOR BOUNDED SOLUTIONS OF THE HERMITE HEAT EQUATION

AN ESTIMATE FOR BOUNDED SOLUTIONS OF THE HERMITE HEAT EQUATION Counications on Stochastic Analysis Vol. 6, No. 3 (1) 43-47 Serials Publications www.serialspublications.co AN ESTIMATE FOR BOUNDED SOLUTIONS OF THE HERMITE HEAT EQUATION BISHNU PRASAD DHUNGANA Abstract.

More information

This model assumes that the probability of a gap has size i is proportional to 1/i. i.e., i log m e. j=1. E[gap size] = i P r(i) = N f t.

This model assumes that the probability of a gap has size i is proportional to 1/i. i.e., i log m e. j=1. E[gap size] = i P r(i) = N f t. CS 493: Algoriths for Massive Data Sets Feb 2, 2002 Local Models, Bloo Filter Scribe: Qin Lv Local Models In global odels, every inverted file entry is copressed with the sae odel. This work wells when

More information

Fairness via priority scheduling

Fairness via priority scheduling Fairness via priority scheduling Veeraruna Kavitha, N Heachandra and Debayan Das IEOR, IIT Bobay, Mubai, 400076, India vavitha,nh,debayan}@iitbacin Abstract In the context of ulti-agent resource allocation

More information

Figure 1: Equivalent electric (RC) circuit of a neurons membrane

Figure 1: Equivalent electric (RC) circuit of a neurons membrane Exercise: Leaky integrate and fire odel of neural spike generation This exercise investigates a siplified odel of how neurons spike in response to current inputs, one of the ost fundaental properties of

More information

MASSACHUSETTS INSTITUTE OF TECHNOLOGY 6.436J/15.085J Fall 2008 Lecture 11 10/15/2008 ABSTRACT INTEGRATION I

MASSACHUSETTS INSTITUTE OF TECHNOLOGY 6.436J/15.085J Fall 2008 Lecture 11 10/15/2008 ABSTRACT INTEGRATION I MASSACHUSETTS INSTITUTE OF TECHNOLOGY 6.436J/15.085J Fall 2008 Lecture 11 10/15/2008 ABSTRACT INTEGRATION I Contents 1. Preliinaries 2. The ain result 3. The Rieann integral 4. The integral of a nonnegative

More information

An EGZ generalization for 5 colors

An EGZ generalization for 5 colors An EGZ generalization for 5 colors David Grynkiewicz and Andrew Schultz July 6, 00 Abstract Let g zs(, k) (g zs(, k + 1)) be the inial integer such that any coloring of the integers fro U U k 1,..., g

More information

2 Q 10. Likewise, in case of multiple particles, the corresponding density in 2 must be averaged over all

2 Q 10. Likewise, in case of multiple particles, the corresponding density in 2 must be averaged over all Lecture 6 Introduction to kinetic theory of plasa waves Introduction to kinetic theory So far we have been odeling plasa dynaics using fluid equations. The assuption has been that the pressure can be either

More information

Graphical Models in Local, Asymmetric Multi-Agent Markov Decision Processes

Graphical Models in Local, Asymmetric Multi-Agent Markov Decision Processes Graphical Models in Local, Asyetric Multi-Agent Markov Decision Processes Ditri Dolgov and Edund Durfee Departent of Electrical Engineering and Coputer Science University of Michigan Ann Arbor, MI 48109

More information