Discrete Moment Problem with the Given Shape of the Distribution


Rutcor Research Report

Discrete Moment Problem with the Given Shape of the Distribution

Ersoy Subasi^a, Mine Subasi^b, András Prékopa^c

RRR 41-2005, DECEMBER, 2005

RUTCOR, Rutgers Center for Operations Research, Rutgers University, 640 Bartholomew Road, Piscataway, New Jersey 08854-8003. Telephone: 732-445-3804. Telefax: 732-445-5472. Email: rrr@rutcor.rutgers.edu. http://rutcor.rutgers.edu/~rrr

a RUTCOR, Rutgers Center for Operations Research, 640 Bartholomew Road, Piscataway, NJ 08854-8003, USA. Email: esub@rutcor.rutgers.edu
b RUTCOR, Rutgers Center for Operations Research, 640 Bartholomew Road, Piscataway, NJ 08854-8003, USA. Email: msub@rutcor.rutgers.edu
c RUTCOR, Rutgers Center for Operations Research, 640 Bartholomew Road, Piscataway, NJ 08854-8003, USA. Email: prekopa@rutcor.rutgers.edu

Discrete Moment Problem with the Given Shape of the Distribution

Ersoy Subasi, Mine Subasi, András Prékopa

Abstract. Discrete moment problems with finite, preassigned supports and given shapes of the distributions are formulated and used to obtain sharp upper and lower bounds for expectations of convex functions of discrete random variables, as well as for probabilities that at least one out of n events occurs. The bounds are based on the knowledge of some of the power moments of the random variables involved, or the binomial moments of the number of events which occur. The bounding problems are formulated as LPs, and dual feasible basis structure theorems as well as the application of the dual method provide us with the results. Applications in PERT and reliability are presented.

Key words: Power Moment Problem, Binomial Moment Problem, Linear Programming, Logconcavity.

1 Introduction

Let ξ be a random variable, the possible values of which are known to be the nonnegative numbers z_0 < z_1 < ... < z_n. Let p_i = P(ξ = z_i), i = 0, 1, ..., n. Suppose that these probabilities are unknown, but either the power moments µ_k = E(ξ^k), k = 1, ..., m, or the binomial moments S_k = E[C(ξ, k)], k = 1, ..., m, where m < n, are known.

The starting points of our investigation are the following linear programming problems:

min(max) {f(z_0)p_0 + f(z_1)p_1 + ... + f(z_n)p_n}
subject to
p_0 + p_1 + ... + p_n = 1
z_0 p_0 + z_1 p_1 + ... + z_n p_n = µ_1
z_0^2 p_0 + z_1^2 p_1 + ... + z_n^2 p_n = µ_2   (1.1)
...
z_0^m p_0 + z_1^m p_1 + ... + z_n^m p_n = µ_m
p_0 ≥ 0, p_1 ≥ 0, ..., p_n ≥ 0,

and

min(max) {f(z_0)p_0 + f(z_1)p_1 + ... + f(z_n)p_n}
subject to
p_0 + p_1 + ... + p_n = 1
C(z_0, 1)p_0 + C(z_1, 1)p_1 + ... + C(z_n, 1)p_n = S_1
C(z_0, 2)p_0 + C(z_1, 2)p_1 + ... + C(z_n, 2)p_n = S_2   (1.2)
...
C(z_0, m)p_0 + C(z_1, m)p_1 + ... + C(z_n, m)p_n = S_m
p_0 ≥ 0, p_1 ≥ 0, ..., p_n ≥ 0.

Problems (1.1) and (1.2) are called the power and binomial moment problems, respectively, and have been studied extensively in Prékopa (1988; 1989a, b; 1990) and Boros and Prékopa (1989). The two problems can be transformed into each other by the use of a simple linear transformation (see Prékopa, 1995, Section 5.6). Let A; a_0, a_1, ..., a_n and b denote the matrix of the equality constraints in either problem, its columns and the vector of the right-hand side values. We will alternatively use the notation f_k instead of f(z_k). In this paper we specialize problems (1.1) and (1.2) in the following manner.

(1) In case of problem (1.1) we assume that the function f has positive divided differences of order m+1, where m is some fixed nonnegative integer satisfying 0 ≤ m ≤ n. The optimum values of problem (1.1) provide us with sharp lower and upper bounds for E[f(ξ)].

(2) In case of problem (1.2) we assume that z_i = i, i = 0, ..., n, and f_0 = 0, f_i = 1, i = 1, ..., n. The problem can be used in connection with arbitrary events A_1, ..., A_n, to obtain sharp lower and upper bounds for the probability of the union. In fact, if we define S_0 = 1 and

S_k = Σ_{1 ≤ i_1 < ... < i_k ≤ n} P(A_{i_1} ... A_{i_k}), k = 1, ..., n,

then by a well-known theorem (see, e.g., Prékopa, 1995) we have the equation

S_k = E[C(ξ, k)], k = 1, ..., n,   (1.3)

where ξ is the number of those events which occur. The equality constraints in (1.2) are just the same as S_0 = 1 and the equations in (1.3) for k = 1, ..., m, and the objective function is the probability of ξ ≥ 1 under the distribution p_0, ..., p_n. The distribution, however, is allowed to vary subject to the constraints, hence the optimum values of problem (1.2) provide us with the best possible bounds for the probability P(ξ ≥ 1), given S_1, ..., S_m.

For small m values (m ≤ 4) closed form bounds are presented in the literature. For power moment bounds see Prékopa (1990, 1995). Bounds for the probability of the union have been obtained by Fréchet (1940, 1943) when m = 1, Dawson and Sankoff (1967) when m = 2, Kwerel (1975) when m ≤ 3, and Boros and Prékopa (1989) when m ≤ 4. In the last two papers bounds for the probability that at least r events occur are also presented. For other closed form probability bounds see Prékopa (1995), Galambos and Simonelli (1996).

Prékopa (1988, 1989, 1990) discovered that the probability bounds based on the binomial and power moments of the number of events that occur, out of a given collection A_1, ..., A_n, can be obtained as optimum values of discrete moment problems (DMP), and showed that for arbitrary m values simple dual algorithms solve problems (1.1) and (1.2) if f is of type (1) or (2) (and if f_r = 1, f_k = 0, k ≠ r).

In this paper we formulate and use moment problems with finite, preassigned support and with given shape of the probability distribution to obtain sharp lower and upper bounds for unknown probabilities and expectations of higher order convex functions of discrete random variables. We assume that the probability distribution {p_i} is either decreasing (Type 1), or increasing (Type 2), or unimodal with a known modus (Type 3). The reasoning goes along the lines presented in the above cited papers by Prékopa. In Section 2 some basic notions and theorems are given. In Sections 3 and 4 we use the dual feasible basis structure theorems (Prékopa, 1988, 1989, 1990) to obtain sharp bounds for

E[f(ξ)] and P(ξ ≥ r) in case of problems where the first two moments are known. In Section 5 we give numerical examples to compare the sharp bounds obtained by the original binomial moment problem and the sharp bounds obtained by the transformed problems: Type 1, Type 2 and Type 3. In Section 6 we present three examples for the application of our bounding technique, where shape information about the unknown probability distribution can be used.

2 Basic Notions and Theorems

Let f be a function on the discrete set Z = {z_0, ..., z_n}, z_0 < z_1 < ... < z_n. The first order divided differences of f are defined by

[z_i, z_{i+1}]f = (f(z_{i+1}) − f(z_i)) / (z_{i+1} − z_i), i = 0, 1, ..., n−1.

The kth order divided differences are defined recursively by

[z_i, ..., z_{i+k}]f = ([z_{i+1}, ..., z_{i+k}]f − [z_i, ..., z_{i+k−1}]f) / (z_{i+k} − z_i), k ≥ 2.

The function f is said to be kth order convex if all of its kth divided differences are positive.

Theorem 1 (Prékopa, 1988, 1989a, 1989b, 1990). Suppose that all (m+1)st divided differences of the function f(z), z ∈ {z_0, z_1, ..., z_n}, are positive. Then, in problems (1.1) and (1.2), all bases are dual nondegenerate and the dual feasible bases have the following structures, presented in terms of the subscripts of the basic vectors:

              m + 1 even                       m + 1 odd
min problem   {j, j+1, ..., k, k+1}            {0, j, j+1, ..., k, k+1}
max problem   {0, j, j+1, ..., k, k+1, n}      {j, j+1, ..., k, k+1, n}

where in all parentheses the numbers are arranged in increasing order.

The following theorem is a conclusion of the dual feasible basis structure theorem by Prékopa (1988, 1989a, 1989b, 1990), where r = 1.

Theorem 2. Suppose that f_0 = 0, f_i = 1, i = 1, ..., n. Then every dual feasible basis subscript set has one of the following structures:

minimization problem, m + 1 even: {0, i, i+1, ..., j, j+1, k, k+1, ..., t, t+1, n};
minimization problem, m + 1 odd: {0, i, i+1, ..., j, j+1, k, k+1, ..., t, t+1};
maximization problem, m + 1 even:
  I ⊆ {1, ..., n}, if n − 1 ≥ m,
  {1, i, i+1, ..., j, j+1, k, k+1, ..., t, t+1, n}, if 1 ≤ n − 1,
  {0, 1, i, i+1, ..., j, j+1, k, k+1, ..., t, t+1}, if 1 ≤ n;

maximization problem, m + 1 odd:
  I ⊆ {1, ..., n}, if n − 1 ≥ m,
  {1, i, i+1, ..., j, j+1, k, k+1, ..., t, t+1}, if 1 ≤ n,
  {0, 1, i, i+1, ..., j, j+1, k, k+1, ..., t, t+1, n}, if 1 ≤ n − 1,

where in all parentheses the numbers are arranged in increasing order. Those bases for which I ⊆ {1, ..., n} are dual nondegenerate in the maximization problem if n > m + 1. The bases in all other cases are dual nondegenerate.

3 The Case of the Power Moment Problem

In this section we consider the power moment problem (1.1). We assume that the distribution is either decreasing or increasing or unimodal with a known and fixed modus. We give sharp lower and upper bounds for E[f(ξ)] in case of three problem types: the probabilities p_0, ..., p_n (1) are decreasing; (2) are increasing; (3) form a unimodal sequence.

3.1 TYPE 1: p_0 ≥ p_1 ≥ ... ≥ p_n

We assume that the probabilities p_0, ..., p_n are unknown but satisfy the above inequalities, and the divided differences of order m+1 of the function f are positive. Let us introduce the variables v_i, i = 0, 1, ..., n, as follows:

p_0 − p_1 = v_0, ..., p_{n−1} − p_n = v_{n−1}, p_n = v_n.

Our assumptions imply v_0, ..., v_n ≥ 0. If we write up problem (1.1) by the use of v_0, ..., v_n, then we obtain

min(max) {f_0 v_0 + (f_0 + f_1)v_1 + ... + (f_0 + ... + f_n)v_n}
subject to
a_0 v_0 + (a_0 + a_1)v_1 + ... + (a_0 + ... + a_n)v_n = b   (3.1)
v_0 ≥ 0, v_1 ≥ 0, ..., v_n ≥ 0,

where a_i = (1, z_i, ..., z_i^m)^T, i = 0, ..., n, and b = (1, µ_1, ..., µ_m)^T. Let A be the (m+1) × (n+1) coefficient matrix. In order to use Theorem 1, we need to show that all minors of order m+1 from A and all minors of order m+2 from (f^T; A) are positive, where f = (f_0, f_0 + f_1, ..., f_0 + ... + f_n)^T. To show that the first assertion is true we consider the (m+1) × (m+1) determinant taken from A:

| a_0+...+a_{i_1}   a_0+...+a_{i_2}   ...   a_0+...+a_{i_{m+1}} |,   (3.2)

where 0 ≤ i_1 < ... < i_{m+1} ≤ n. One can easily show that the determinant (3.2) is equal to

| a_0+...+a_{i_1}   a_{i_1+1}+...+a_{i_2}   ...   a_{i_m+1}+...+a_{i_{m+1}} |

=
| i_1+1                 i_2−i_1                   ...   i_{m+1}−i_m                     |
| z_0+...+z_{i_1}       z_{i_1+1}+...+z_{i_2}     ...   z_{i_m+1}+...+z_{i_{m+1}}       |   (3.3)
| ...                   ...                       ...   ...                             |
| z_0^m+...+z_{i_1}^m   z_{i_1+1}^m+...+z_{i_2}^m ...   z_{i_m+1}^m+...+z_{i_{m+1}}^m   |

The determinant (3.3) can be written as a sum of nonnegative determinants. Notice that some of these terms are Vandermonde determinants. Thus, any (m+1) × (m+1) determinant taken from the matrix of the equality constraints is positive.

Similarly, the following (m+2) × (m+2) determinants taken from (f^T; A) are positive, where f is the same as before:

| f_0+...+f_{i_1}   f_0+...+f_{i_2}   ...   f_0+...+f_{i_{m+1}} |
| a_0+...+a_{i_1}   a_0+...+a_{i_2}   ...   a_0+...+a_{i_{m+1}} |   (3.4)

and this is the same as

| f_0+...+f_{i_1}   f_{i_1+1}+...+f_{i_2}   ...   f_{i_m+1}+...+f_{i_{m+1}} |
| a_0+...+a_{i_1}   a_{i_1+1}+...+a_{i_2}   ...   a_{i_m+1}+...+a_{i_{m+1}} |
=
| f_0+...+f_{i_1}       f_{i_1+1}+...+f_{i_2}     ...   f_{i_m+1}+...+f_{i_{m+1}}       |
| i_1+1                 i_2−i_1                   ...   i_{m+1}−i_m                     |
| z_0+...+z_{i_1}       z_{i_1+1}+...+z_{i_2}     ...   z_{i_m+1}+...+z_{i_{m+1}}       |   (3.5)
| ...                   ...                       ...   ...                             |
| z_0^m+...+z_{i_1}^m   z_{i_1+1}^m+...+z_{i_2}^m ...   z_{i_m+1}^m+...+z_{i_{m+1}}^m   |

Since f has positive divided differences of order m+1, the determinant (3.5) is positive. Thus, we can use Theorem 1 to find the dual feasible bases in problem (3.1). Below we present, by the use of Theorem 1, the sharp lower and upper bounds for E[f(ξ)] for the cases m = 1 and m = 2, respectively.

Case 1. m = 1. If m = 1, then the LP in (1.1) is equivalent to

min(max) {f_0 v_0 + (f_0 + f_1)v_1 + ... + (f_0 + ... + f_n)v_n}
subject to
v_0 + 2v_1 + ... + (n+1)v_n = 1
z_0 v_0 + (z_0 + z_1)v_1 + ... + (z_0 + ... + z_n)v_n = µ_1   (3.6)
v_0 ≥ 0, v_1 ≥ 0, ..., v_n ≥ 0.

Let A be the coefficient matrix of the equality constraints in (3.6). Since m+1 is even, by Theorem 1, the dual feasible basis of the minimization problem in (3.6), that we designate by B_min, has the following form:

B_min = (a_j, a_{j+1}),

where 0 ≤ j ≤ n−1. Similarly, by Theorem 1, the only dual feasible basis of the maximization problem in (3.6), that we designate by B_max, is given by

B_max = (a_0, a_n).

First we notice that B_min is also primal feasible if the following conditions are satisfied:

(j+1)v_j + (j+2)v_{j+1} = 1
(z_0 + ... + z_j)v_j + (z_0 + ... + z_{j+1})v_{j+1} = µ_1
v_j ≥ 0, v_{j+1} ≥ 0.

Therefore, by solving the equations given above, B_min is an optimal basis if the index j is determined by the inequality

(z_0 + ... + z_j)/(j+1) ≤ µ_1 ≤ (z_0 + ... + z_{j+1})/(j+2).   (3.7)

In the maximization problem (3.6) there is just one dual feasible basis. It follows that it must also be primal feasible. In case of m = 1 the bounding of E[f(ξ)] is based on the knowledge of µ_1. Using the optimal bases B_min and B_max, we obtain the following sharp lower and upper bounds for E[f(ξ)]:

{[Σ_{i=0}^{j+1} z_i − (j+2)µ_1] / [(j+1)z_{j+1} − Σ_{i=0}^{j} z_i]} [f_0 + ... + f_j]
 + {[(j+1)µ_1 − Σ_{i=0}^{j} z_i] / [(j+1)z_{j+1} − Σ_{i=0}^{j} z_i]} [f_0 + ... + f_{j+1}]
≤ E[f(ξ)] ≤
{[Σ_{i=0}^{n} z_i − (n+1)µ_1] / [Σ_{i=0}^{n} z_i − (n+1)z_0]} f_0
 + {[µ_1 − z_0] / [Σ_{i=0}^{n} z_i − (n+1)z_0]} [f_0 + ... + f_n],   (3.8)

where j satisfies (3.7).

Case 2. m = 2. If m = 2, then the third order divided differences of f are positive. The bounding of E[f(ξ)] is based on the knowledge of µ_1 and µ_2. In case of m = 2, problem (1.1) is equivalent to the following problem:

min(max) {f_0 v_0 + (f_0 + f_1)v_1 + ... + (f_0 + ... + f_n)v_n}
subject to
v_0 + 2v_1 + ... + (n+1)v_n = 1
z_0 v_0 + (z_0 + z_1)v_1 + ... + (z_0 + ... + z_n)v_n = µ_1   (3.9)
z_0^2 v_0 + (z_0^2 + z_1^2)v_1 + ... + (z_0^2 + ... + z_n^2)v_n = µ_2

v_0 ≥ 0, v_1 ≥ 0, ..., v_n ≥ 0.

Let A be the coefficient matrix of the equality constraints in (3.9). Since m+1 is odd, any dual feasible basis in the minimization (maximization) problem is of the form

B_min = (a_0, a_i, a_{i+1})   (B_max = (a_j, a_{j+1}, a_n)).

Let us introduce the notations:

Σ_{i,j} = i Σ_{t=i}^{j} z_t − (j−i+1) Σ_{t=0}^{i−1} z_t
Σ̄_{i,j} = i Σ_{t=i}^{j} z_t^2 − (j−i+1) Σ_{t=0}^{i−1} z_t^2
σ_{i,j} = Σ_{t=i}^{j} z_t − (j−i+1) z_{i−1}
σ̄_{i,j} = Σ_{t=i}^{j} z_t^2 − (j−i+1) z_{i−1}^2
γ_{i,j} = Σ_{t=i}^{j} z_t − (j−i+1) z_{j+1}
γ̄_{i,j} = Σ_{t=i}^{j} z_t^2 − (j−i+1) z_{j+1}^2
α_{i,j} = (n−j) Σ_{t=i}^{j} z_t − (j−i+1) Σ_{t=j+1}^{n} z_t
ᾱ_{i,j} = (n−j) Σ_{t=i}^{j} z_t^2 − (j−i+1) Σ_{t=j+1}^{n} z_t^2.   (3.10)

The basis B_min is also primal feasible if i is determined by the inequality

[Σ_{t=1}^{i} z_t^2 − i z_0^2] / [Σ_{t=1}^{i} z_t − i z_0] ≤ (µ_2 − z_0^2)/(µ_1 − z_0) ≤ [Σ_{t=1}^{i+1} z_t^2 − (i+1)z_0^2] / [Σ_{t=1}^{i+1} z_t − (i+1)z_0].   (3.11)

Similarly, the basis B_max is also primal feasible if j is determined by the inequality

Σ̄_{j+1,n}/Σ_{j+1,n} ≤ [(n+1)µ_2 − Σ_{s=0}^{n} z_s^2] / [(n+1)µ_1 − Σ_{s=0}^{n} z_s] ≤ Σ̄_{j+2,n}/Σ_{j+2,n}.   (3.12)

Using the bases B_min and B_max we have the following lower and upper bounds for E[f(ξ)]:

f_0 + {[(µ_1 − z_0)σ̄_{1,i+1} − (µ_2 − z_0^2)σ_{1,i+1}] / [σ_{1,i}σ̄_{1,i+1} − σ̄_{1,i}σ_{1,i+1}]} (Σ_{t=1}^{i} f_t − i f_0)
 + {[(µ_2 − z_0^2)σ_{1,i} − (µ_1 − z_0)σ̄_{1,i}] / [σ_{1,i}σ̄_{1,i+1} − σ̄_{1,i}σ_{1,i+1}]} (Σ_{t=1}^{i+1} f_t − (i+1) f_0)
≤ E[f(ξ)] ≤   (3.13)
(1/(n+1)) Σ_{s=0}^{n} f_s
 + {Σ̄_{j+2,n}[(n+1)µ_1 − Σ_{s=0}^{n} z_s] − Σ_{j+2,n}[(n+1)µ_2 − Σ_{s=0}^{n} z_s^2]} / {Σ_{j+1,n}Σ̄_{j+2,n} − Σ̄_{j+1,n}Σ_{j+2,n}} · (1/(n+1)) [(n−j) Σ_{s=0}^{j} f_s − (j+1) Σ_{s=j+1}^{n} f_s]
 + {Σ_{j+1,n}[(n+1)µ_2 − Σ_{s=0}^{n} z_s^2] − Σ̄_{j+1,n}[(n+1)µ_1 − Σ_{s=0}^{n} z_s]} / {Σ_{j+1,n}Σ̄_{j+2,n} − Σ̄_{j+1,n}Σ_{j+2,n}} · (1/(n+1)) [(n−j−1) Σ_{s=0}^{j+1} f_s − (j+2) Σ_{s=j+2}^{n} f_s].

The above lower and upper bounds are sharp, because they are the optimum values of the minimization and maximization problems (3.9), respectively.

3.2 TYPE 2: p_0 ≤ p_1 ≤ ... ≤ p_n

We assume that the above inequalities are satisfied and f has positive divided differences of order m+1. If we introduce the new variables v_i, i = 0, 1, ..., n, as follows:

p_0 = v_0, p_1 − p_0 = v_1, ..., p_n − p_{n−1} = v_n,

then problem (1.1) can be written as

min(max) {(f_0 + ... + f_n)v_0 + (f_1 + ... + f_n)v_1 + ... + f_n v_n}
subject to
(a_0 + ... + a_n)v_0 + (a_1 + ... + a_n)v_1 + ... + a_n v_n = b   (3.14)
v_0 ≥ 0, v_1 ≥ 0, ..., v_n ≥ 0.

All minors of order m+1 from A and all minors of order m+2 from (f^T; A) are positive, where A is the coefficient matrix of the equality constraints in problem (3.14) and f = (f_0 + ... + f_n, f_1 + ... + f_n, ..., f_n)^T. In order to show that the first assertion is true we consider the following (m+1) × (m+1) determinant taken from A (0 ≤ i_1 < ... < i_{m+1} ≤ n):

| a_{i_1}+...+a_n   a_{i_2}+...+a_n   ...   a_{i_{m+1}}+...+a_n |   (3.15)

One can easily show that the determinant (3.15) is equal to

| a_{i_1}+...+a_{i_2−1}   a_{i_2}+...+a_{i_3−1}   ...   a_{i_{m+1}}+...+a_n |

=
| i_2−i_1                     i_3−i_2                     ...   n−i_{m+1}+1             |
| z_{i_1}+...+z_{i_2−1}       z_{i_2}+...+z_{i_3−1}       ...   z_{i_{m+1}}+...+z_n     |   (3.16)
| ...                         ...                         ...   ...                     |
| z_{i_1}^m+...+z_{i_2−1}^m   z_{i_2}^m+...+z_{i_3−1}^m   ...   z_{i_{m+1}}^m+...+z_n^m |

The determinant (3.16) can be written as a sum of nonnegative determinants. Notice that some of the terms are Vandermonde determinants. Thus any (m+1) × (m+1) determinant taken from the matrix of the equality constraints is positive.

We can handle, in a similar way, the following (m+2) × (m+2) determinant taken from (f^T; A):

| f_{i_1}+...+f_n   f_{i_2}+...+f_n   ...   f_{i_{m+1}}+...+f_n |
| a_{i_1}+...+a_n   a_{i_2}+...+a_n   ...   a_{i_{m+1}}+...+a_n |   (3.17)

It is equal to

| f_{i_1}+...+f_{i_2−1}   f_{i_2}+...+f_{i_3−1}   ...   f_{i_{m+1}}+...+f_n |
| a_{i_1}+...+a_{i_2−1}   a_{i_2}+...+a_{i_3−1}   ...   a_{i_{m+1}}+...+a_n |
=
| f_{i_1}+...+f_{i_2−1}       f_{i_2}+...+f_{i_3−1}       ...   f_{i_{m+1}}+...+f_n     |
| i_2−i_1                     i_3−i_2                     ...   n−i_{m+1}+1             |
| z_{i_1}+...+z_{i_2−1}       z_{i_2}+...+z_{i_3−1}       ...   z_{i_{m+1}}+...+z_n     |   (3.18)
| ...                         ...                         ...   ...                     |
| z_{i_1}^m+...+z_{i_2−1}^m   z_{i_2}^m+...+z_{i_3−1}^m   ...   z_{i_{m+1}}^m+...+z_n^m |

Since f has positive divided differences of order m+1, the determinant (3.18) is positive. Thus, we can use Theorem 1 to find the dual feasible bases. We give the sharp lower and upper bounds for E[f(ξ)] in case of m = 1 and m = 2.

Case 1. m = 1. If m = 1, then problem (3.14) is equivalent to

min(max) {(f_0 + ... + f_n)v_0 + (f_1 + ... + f_n)v_1 + ... + f_n v_n}
subject to
(n+1)v_0 + n v_1 + ... + v_n = 1
(z_0 + ... + z_n)v_0 + (z_1 + ... + z_n)v_1 + ... + z_n v_n = µ_1   (3.19)
v_0 ≥ 0, v_1 ≥ 0, ..., v_n ≥ 0.

Let A be the coefficient matrix of the equality constraints in (3.19). Since m+1 is even, by Theorem 1, the dual feasible bases in the minimization and maximization problems in (3.19) are

B_min = (a_j, a_{j+1}),   B_max = (a_0, a_n),

respectively, where 0 ≤ j ≤ n−1.
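Since the Type 2 reformulation only regroups the columns of (1.1), it can be sanity-checked numerically. The following self-contained Python sketch (an illustration with made-up data, not part of the paper) builds the power-moment columns a_i, applies the change of variables p → v, and verifies that the cumulative columns of (3.14) acting on v reproduce the right-hand side b = (1, µ_1, µ_2):

```python
# Illustration (not from the paper): sanity-check the Type 2 change of
# variables behind problems (3.14)/(3.19). The data below are made up.

def power_columns(z, m):
    # a_i = (1, z_i, ..., z_i^m) for each support point z_i
    return [[zi ** s for s in range(m + 1)] for zi in z]

z = [0.0, 1.0, 2.0, 3.0, 4.0]
p = [0.125, 0.125, 0.1875, 0.25, 0.3125]  # increasing (Type 2), sums to 1
m = 2
a = power_columns(z, m)
n = len(z) - 1

# change of variables: v_0 = p_0, v_i = p_i - p_{i-1}; all v_i >= 0 for Type 2
v = [p[0]] + [p[i] - p[i - 1] for i in range(1, n + 1)]

# column of v_i in (3.14) is the tail sum a_i + a_{i+1} + ... + a_n
tail = [[sum(a[t][s] for t in range(i, n + 1)) for s in range(m + 1)]
        for i in range(n + 1)]

b = [sum(p[t] * a[t][s] for t in range(n + 1)) for s in range(m + 1)]
b_v = [sum(v[i] * tail[i][s] for i in range(n + 1)) for s in range(m + 1)]

assert all(vi >= 0 for vi in v)
assert all(abs(x - y) < 1e-12 for x, y in zip(b, b_v))
```

Here b comes out as (1, 2.5, 8.125); the identity holds because Σ_i v_i (a_i + ... + a_n) = Σ_t (v_0 + ... + v_t) a_t = Σ_t p_t a_t.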

The basis B_min is also primal feasible if j is determined by the inequality

(z_j + ... + z_n)/(n−j+1) ≤ µ_1 ≤ (z_{j+1} + ... + z_n)/(n−j).   (3.20)

The basis B_max is the only dual feasible basis, hence it must also be primal feasible. In case of m = 1 the bounds on E[f(ξ)] are based on the knowledge of µ_1, and we get the following sharp lower and upper bounds for E[f(ξ)]:

{[Σ_{t=j+1}^{n} z_t − (n−j)µ_1] / σ_{j+1,n}} [f_j + ... + f_n] + {[(n−j+1)µ_1 − Σ_{t=j}^{n} z_t] / σ_{j+1,n}} [f_{j+1} + ... + f_n]
≤ E[f(ξ)] ≤
{(µ_1 − z_n) / γ_{0,n−1}} [f_0 + ... + f_n] + {[Σ_{t=0}^{n} z_t − (n+1)µ_1] / γ_{0,n−1}} f_n.   (3.21)

The bounds are sharp if j satisfies (3.20).

Case 2. m = 2. If m = 2, then the third order divided differences of f are positive and problem (3.14) is equivalent to

min(max) {(f_0 + ... + f_n)v_0 + (f_1 + ... + f_n)v_1 + ... + f_n v_n}
subject to
(n+1)v_0 + n v_1 + ... + v_n = 1
(z_0 + ... + z_n)v_0 + (z_1 + ... + z_n)v_1 + ... + z_n v_n = µ_1   (3.22)
(z_0^2 + ... + z_n^2)v_0 + (z_1^2 + ... + z_n^2)v_1 + ... + z_n^2 v_n = µ_2
v_0 ≥ 0, v_1 ≥ 0, ..., v_n ≥ 0.

The bounding of E[f(ξ)] is based on the knowledge of µ_1 and µ_2. Let A be the coefficient matrix of the equality constraints in (3.22). Since m+1 is odd, any dual feasible bases in the minimization and maximization problems are of the form

B_min = (a_0, a_i, a_{i+1}),   B_max = (a_j, a_{j+1}, a_n),

where 1 ≤ i ≤ n−1, 0 ≤ j ≤ n−2. B_min is also primal feasible if

Σ̄_{i,n}/Σ_{i,n} ≤ [(n+1)µ_2 − Σ_{t=0}^{n} z_t^2] / [(n+1)µ_1 − Σ_{t=0}^{n} z_t] ≤ Σ̄_{i+1,n}/Σ_{i+1,n},   (3.23)

and B_max is also primal feasible if

γ̄_{j,n−1}/γ_{j,n−1} ≤ (µ_2 − z_n^2)/(µ_1 − z_n) ≤ γ̄_{j+1,n−1}/γ_{j+1,n−1}.   (3.24)
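Condition (3.20) can be checked mechanically. The following self-contained Python sketch (an illustration with made-up data, not part of the paper) locates the index j of (3.20) for a sample increasing distribution and verifies that the corresponding basis B_min = (a_j, a_{j+1}) of (3.19) is primal feasible:

```python
# Illustration (not from the paper): locate the index j of condition (3.20)
# and verify primal feasibility of B_min = (a_j, a_{j+1}) in problem (3.19).
# The distribution below is made up and is used only to produce a valid mu_1.
z = [0.0, 1.0, 2.0, 3.0, 4.0]
p = [0.125, 0.125, 0.1875, 0.25, 0.3125]   # increasing, sums to 1
n = len(z) - 1
mu1 = sum(zi * pi for zi, pi in zip(z, p))  # 2.5

# condition (3.20): sum(z_j..z_n)/(n-j+1) <= mu_1 <= sum(z_{j+1}..z_n)/(n-j)
j = next(j for j in range(n)
         if sum(z[j:]) / (n - j + 1) <= mu1 <= sum(z[j + 1:]) / (n - j))

# basis columns of (3.19): a_j = (n-j+1, z_j+...+z_n), a_{j+1} = (n-j, z_{j+1}+...+z_n)
c1 = (n - j + 1, sum(z[j:]))
c2 = (n - j, sum(z[j + 1:]))
det = c1[0] * c2[1] - c1[1] * c2[0]
v_j = (c2[1] - c2[0] * mu1) / det           # Cramer's rule, RHS = (1, mu_1)
v_j1 = (c1[0] * mu1 - c1[1]) / det

assert v_j >= -1e-12 and v_j1 >= -1e-12               # primal feasible
assert abs(c1[0] * v_j + c2[0] * v_j1 - 1.0) < 1e-12  # mass constraint
assert abs(c1[1] * v_j + c2[1] * v_j1 - mu1) < 1e-12  # first-moment constraint
```

With this data, j = 0 and (v_j, v_{j+1}) = (0, 0.25), a (degenerate) feasible basic solution, so the lower bound in (3.21) is attained by it.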

It follows that we have the following lower and upper bounds for E[f(ξ)]:

(1/(n+1)) Σ_{t=0}^{n} f_t
 + {[(n+1)µ_1 − Σ_{t=0}^{n} z_t]Σ̄_{i+1,n} − [(n+1)µ_2 − Σ_{t=0}^{n} z_t^2]Σ_{i+1,n}} / {Σ_{i,n}Σ̄_{i+1,n} − Σ̄_{i,n}Σ_{i+1,n}} · [f_i + ... + f_n − ((n−i+1)/(n+1)) Σ_{t=0}^{n} f_t]
 + {[(n+1)µ_2 − Σ_{t=0}^{n} z_t^2]Σ_{i,n} − [(n+1)µ_1 − Σ_{t=0}^{n} z_t]Σ̄_{i,n}} / {Σ_{i,n}Σ̄_{i+1,n} − Σ̄_{i,n}Σ_{i+1,n}} · [f_{i+1} + ... + f_n − ((n−i)/(n+1)) Σ_{t=0}^{n} f_t]
≤ E[f(ξ)] ≤   (3.25)
f_n + {[(µ_1 − z_n)γ̄_{j+1,n−1} − (µ_2 − z_n^2)γ_{j+1,n−1}] / [γ_{j,n−1}γ̄_{j+1,n−1} − γ̄_{j,n−1}γ_{j+1,n−1}]} [Σ_{s=j}^{n} f_s − (n−j+1) f_n]
 + {[(µ_2 − z_n^2)γ_{j,n−1} − (µ_1 − z_n)γ̄_{j,n−1}] / [γ_{j,n−1}γ̄_{j+1,n−1} − γ̄_{j,n−1}γ_{j+1,n−1}]} [Σ_{s=j+1}^{n} f_s − (n−j) f_n],

where Σ_{i,j}, Σ̄_{i,j}, γ_{i,j}, γ̄_{i,j} are defined in (3.10), i satisfies (3.23) and j satisfies (3.24).

3.3 TYPE 3: p_0 ≤ p_1 ≤ ... ≤ p_k ≥ p_{k+1} ≥ ... ≥ p_n

Now we assume that the distribution is unimodal with a known modus z_k, i.e.,

p_0 ≤ p_1 ≤ ... ≤ p_k ≥ p_{k+1} ≥ ... ≥ p_n.

We also assume that f has positive divided differences of order m+1. Let us introduce the variables v_i, i = 0, 1, ..., k, ..., n, as follows:

v_0 = p_0, v_1 = p_1 − p_0, ..., v_k = p_k − p_{k−1}, v_{k+1} = p_{k+1} − p_{k+2}, ..., v_{n−1} = p_{n−1} − p_n, v_n = p_n.

Then problem (1.1) can be written as

min(max) {(f_0 + ... + f_k)v_0 + (f_1 + ... + f_k)v_1 + ... + f_k v_k + f_{k+1}v_{k+1} + ... + (f_{k+1} + ... + f_n)v_n}
subject to
(a_0 + ... + a_k)v_0 + (a_1 + ... + a_k)v_1 + ... + a_k v_k + a_{k+1}v_{k+1} + ... + (a_{k+1} + ... + a_n)v_n = b   (3.26)
v_0 ≥ 0, v_1 ≥ 0, ..., v_n ≥ 0.

Let A be the coefficient matrix of the equality constraints of problem (3.26). One can easily show, by the method that we applied in Sections 3.1 and 3.2, that all minors of order

m+1 from A and all minors of order m+2 from (f^T; A) are positive, where f = (f_0 + ... + f_k, f_1 + ... + f_k, ..., f_k, f_{k+1}, ..., f_{k+1} + ... + f_n)^T. Thus, if the distribution is unimodal and we know where it takes the largest value, then we can find the dual feasible bases for problem (1.1). This allows for obtaining the bounding formulas for small m values.

Case 1. m = 1. If m = 1, then the problem in (3.26) is equivalent to the problem:

min(max) {(f_0 + ... + f_k)v_0 + (f_1 + ... + f_k)v_1 + ... + f_k v_k + f_{k+1}v_{k+1} + ... + (f_{k+1} + ... + f_n)v_n}
subject to
(k+1)v_0 + k v_1 + ... + v_k + v_{k+1} + 2v_{k+2} + ... + (n−k)v_n = 1
(z_0 + ... + z_k)v_0 + (z_1 + ... + z_k)v_1 + ... + z_k v_k + z_{k+1}v_{k+1} + ... + (z_{k+1} + ... + z_n)v_n = µ_1   (3.27)
v_0 ≥ 0, v_1 ≥ 0, ..., v_n ≥ 0.

Let A be the coefficient matrix of the equality constraints in (3.27). Since m+1 is even, by the use of Theorem 1, a dual feasible basis of the minimization problem in (3.27) is of the form

B_min = (a_j, a_{j+1}),

where 0 ≤ j ≤ n−1. The only dual feasible basis of the maximization problem (3.27) is

B_max = (a_0, a_n).

B_min is also primal feasible if

(z_j + ... + z_k)/(k−j+1) ≤ µ_1 ≤ (z_{j+1} + ... + z_k)/(k−j), if j+1 ≤ k;   (3.28)
(z_{k+1} + ... + z_j)/(j−k) ≤ µ_1 ≤ (z_{k+1} + ... + z_{j+1})/(j−k+1), if j ≥ k+1;   (3.29)
z_k ≤ µ_1 ≤ z_{k+1}, if j = k.   (3.30)

If j+1 ≤ k and (3.28) is satisfied, then the sharp lower and upper bounds for E[f(ξ)] are as follows:

{[Σ_{t=j+1}^{k} z_t − (k−j)µ_1] / [(k−j+1) Σ_{t=j+1}^{k} z_t − (k−j) Σ_{t=j}^{k} z_t]} Σ_{t=j}^{k} f_t
 + {[(k−j+1)µ_1 − Σ_{t=j}^{k} z_t] / [(k−j+1) Σ_{t=j+1}^{k} z_t − (k−j) Σ_{t=j}^{k} z_t]} Σ_{t=j+1}^{k} f_t

≤ E[f(ξ)] ≤ {[Σ_{t=k+1}^{n} z_t − (n−k)µ_1] / Σ_{k+1,n}} Σ_{t=0}^{k} f_t + {[(k+1)µ_1 − Σ_{t=0}^{k} z_t] / Σ_{k+1,n}} Σ_{t=k+1}^{n} f_t.   (3.31)

If j ≥ k+1 and (3.29) is satisfied, then the sharp lower and upper bounds for E[f(ξ)] are as follows:

{[Σ_{t=k+1}^{j+1} z_t − (j−k+1)µ_1] / [(j−k) Σ_{t=k+1}^{j+1} z_t − (j−k+1) Σ_{t=k+1}^{j} z_t]} Σ_{t=k+1}^{j} f_t
 + {[(j−k)µ_1 − Σ_{t=k+1}^{j} z_t] / [(j−k) Σ_{t=k+1}^{j+1} z_t − (j−k+1) Σ_{t=k+1}^{j} z_t]} Σ_{t=k+1}^{j+1} f_t
≤ E[f(ξ)] ≤ {[Σ_{t=k+1}^{n} z_t − (n−k)µ_1] / Σ_{k+1,n}} Σ_{t=0}^{k} f_t + {[(k+1)µ_1 − Σ_{t=0}^{k} z_t] / Σ_{k+1,n}} Σ_{t=k+1}^{n} f_t.   (3.32)

If j = k and (3.30) is satisfied, then the sharp lower and upper bounds for E[f(ξ)] are the following:

{(z_{k+1} − µ_1)/(z_{k+1} − z_k)} f_k + {(µ_1 − z_k)/(z_{k+1} − z_k)} f_{k+1}
≤ E[f(ξ)] ≤ {[Σ_{t=k+1}^{n} z_t − (n−k)µ_1] / Σ_{k+1,n}} Σ_{t=0}^{k} f_t + {[(k+1)µ_1 − Σ_{t=0}^{k} z_t] / Σ_{k+1,n}} Σ_{t=k+1}^{n} f_t,   (3.33)

where Σ_{i,j} is given in (3.10).

Case 2. m = 2. In case of m = 2, the problem in (3.26) is equivalent to the problem:

min(max) {(f_0 + ... + f_k)v_0 + (f_1 + ... + f_k)v_1 + ... + f_k v_k + f_{k+1}v_{k+1} + ... + (f_{k+1} + ... + f_n)v_n}
subject to
(k+1)v_0 + k v_1 + ... + v_k + v_{k+1} + 2v_{k+2} + ... + (n−k)v_n = 1
(z_0 + ... + z_k)v_0 + (z_1 + ... + z_k)v_1 + ... + z_k v_k + z_{k+1}v_{k+1} + ... + (z_{k+1} + ... + z_n)v_n = µ_1   (3.34)
(z_0^2 + ... + z_k^2)v_0 + (z_1^2 + ... + z_k^2)v_1 + ... + z_k^2 v_k + z_{k+1}^2 v_{k+1} + ... + (z_{k+1}^2 + ... + z_n^2)v_n = µ_2
v_0 ≥ 0, v_1 ≥ 0, ..., v_n ≥ 0.
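The Type 3 change of variables behind (3.26) and (3.34) can be checked numerically in the same way as the Type 2 one. The following self-contained Python sketch (an illustration with made-up data, not part of the paper) verifies that the grouped columns acting on v reproduce the moment vector for a unimodal distribution with modus at k = 2:

```python
# Illustration (not from the paper): sanity-check the Type 3 (unimodal) change
# of variables behind (3.26)/(3.34). The data below are made up; modus at k = 2.
z = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
p = [0.0625, 0.125, 0.3125, 0.25, 0.15625, 0.09375]  # unimodal, sums to 1
k, m = 2, 2
n = len(z) - 1
a = [[zi ** s for s in range(m + 1)] for zi in z]     # a_i = (1, z_i, z_i^2)

# v_0 = p_0, v_i = p_i - p_{i-1} (1 <= i <= k),
# v_i = p_i - p_{i+1} (k < i < n), v_n = p_n; all v_i >= 0 by unimodality
v = ([p[0]] + [p[i] - p[i - 1] for i in range(1, k + 1)]
     + [p[i] - p[i + 1] for i in range(k + 1, n)] + [p[n]])

def col(i):
    # column of v_i in (3.26): a_i+...+a_k for i <= k, a_{k+1}+...+a_i for i > k
    rng = range(i, k + 1) if i <= k else range(k + 1, i + 1)
    return [sum(a[t][s] for t in rng) for s in range(m + 1)]

b = [sum(p[t] * a[t][s] for t in range(n + 1)) for s in range(m + 1)]
b_v = [sum(v[i] * col(i)[s] for i in range(n + 1)) for s in range(m + 1)]

assert all(vi >= 0 for vi in v)
assert all(abs(x - y) < 1e-12 for x, y in zip(b, b_v))
```

The identity holds because, on the increasing side, p_t = v_0 + ... + v_t, while on the decreasing side the differences telescope to p_t = v_t + ... + v_n.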

The third order divided differences of f are positive. The bounds on E[f(ξ)] are based on the knowledge of µ_1 and µ_2. Let A be the coefficient matrix of the equality constraints in (3.34). Since m+1 is odd, any dual feasible bases in the minimization and in the maximization problems are of the form

B_min = (a_0, a_i, a_{i+1})   and   B_max = (a_j, a_{j+1}, a_n),

where 1 ≤ i ≤ n−1, 0 ≤ j ≤ n−2. The basis B_min is also primal feasible if one of the following conditions is satisfied:

Σ̄_{i,k}/Σ_{i,k} ≤ [(k+1)µ_2 − Σ_{t=0}^{k} z_t^2] / [(k+1)µ_1 − Σ_{t=0}^{k} z_t] ≤ Σ̄_{i+1,k}/Σ_{i+1,k}, if i+1 ≤ k;   (3.35)
Σ̄_{k+1,i}/Σ_{k+1,i} ≤ [(k+1)µ_2 − Σ_{t=0}^{k} z_t^2] / [(k+1)µ_1 − Σ_{t=0}^{k} z_t] ≤ Σ̄_{k+1,i+1}/Σ_{k+1,i+1}, if i ≥ k+1;   (3.36)
γ̄_{0,k−1}/γ_{0,k−1} ≤ [(k+1)µ_2 − Σ_{t=0}^{k} z_t^2] / [(k+1)µ_1 − Σ_{t=0}^{k} z_t] ≤ γ̄_{0,k}/γ_{0,k}, if i = k,   (3.37)

where Σ_{i,j}, Σ̄_{i,j}, γ_{i,j}, γ̄_{i,j} are defined in (3.10). The basis B_max is also primal feasible if one of the following conditions is satisfied:

ᾱ_{j,k}/α_{j,k} ≤ [(n−k)µ_2 − Σ_{t=k+1}^{n} z_t^2] / [(n−k)µ_1 − Σ_{t=k+1}^{n} z_t] ≤ ᾱ_{j+1,k}/α_{j+1,k}, if j+1 ≤ k;   (3.38)
ᾱ_{k+1,j}/α_{k+1,j} ≤ [(n−k)µ_2 − Σ_{t=k+1}^{n} z_t^2] / [(n−k)µ_1 − Σ_{t=k+1}^{n} z_t] ≤ ᾱ_{k+1,j+1}/α_{k+1,j+1}, if j ≥ k;   (3.39)
σ̄_{k+1,n}/σ_{k+1,n} ≤ [(n−k)µ_2 − Σ_{t=k+1}^{n} z_t^2] / [(n−k)µ_1 − Σ_{t=k+1}^{n} z_t] ≤ σ̄_{k+2,n}/σ_{k+2,n}, if j = k,   (3.40)

where σ_{i,j}, σ̄_{i,j}, α_{i,j}, ᾱ_{i,j} are defined in (3.10).

We have the following sharp lower bound for E[f(ξ)]. If i+1 ≤ k:

(1/(k+1)) Σ_{t=0}^{k} f_t
 + {Σ̄_{i+1,k}[(k+1)µ_1 − Σ_{t=0}^{k} z_t] − Σ_{i+1,k}[(k+1)µ_2 − Σ_{t=0}^{k} z_t^2]} / {Σ_{i,k}Σ̄_{i+1,k} − Σ̄_{i,k}Σ_{i+1,k}} · [Σ_{t=i}^{k} f_t − ((k−i+1)/(k+1)) Σ_{t=0}^{k} f_t]
 + {Σ_{i,k}[(k+1)µ_2 − Σ_{t=0}^{k} z_t^2] − Σ̄_{i,k}[(k+1)µ_1 − Σ_{t=0}^{k} z_t]} / {Σ_{i,k}Σ̄_{i+1,k} − Σ̄_{i,k}Σ_{i+1,k}} · [Σ_{t=i+1}^{k} f_t − ((k−i)/(k+1)) Σ_{t=0}^{k} f_t].   (3.41)

If $i \ge k+1$,

$$\frac{1}{k+1}\sum_{t=0}^{k} f_t + \frac{\overline{\Sigma}_{k+1,i+1}\left[(k+1)\mu_1 - \sum_{t=0}^{k} z_t\right] - \Sigma_{k+1,i+1}\left[(k+1)\mu_2 - \sum_{t=0}^{k} z_t^2\right]}{\Sigma_{k+1,i}\overline{\Sigma}_{k+1,i+1} - \overline{\Sigma}_{k+1,i}\Sigma_{k+1,i+1}}\left[\sum_{t=k+1}^{i} f_t - (i-k)\frac{\sum_{t=0}^{k} f_t}{k+1}\right]$$
$$+ \frac{\Sigma_{k+1,i}\left[(k+1)\mu_2 - \sum_{t=0}^{k} z_t^2\right] - \overline{\Sigma}_{k+1,i}\left[(k+1)\mu_1 - \sum_{t=0}^{k} z_t\right]}{\Sigma_{k+1,i}\overline{\Sigma}_{k+1,i+1} - \overline{\Sigma}_{k+1,i}\Sigma_{k+1,i+1}}\left[\sum_{t=k+1}^{i+1} f_t - (i-k+1)\frac{\sum_{t=0}^{k} f_t}{k+1}\right]. \quad (3.42)$$

If $i = k$,

$$\frac{1}{k+1}\sum_{t=0}^{k} f_t + \frac{\overline{\gamma}_{0,k}\left[(k+1)\mu_1 - \sum_{t=0}^{k} z_t\right] - \gamma_{0,k}\left[(k+1)\mu_2 - \sum_{t=0}^{k} z_t^2\right]}{\gamma_{0,k-1}\overline{\gamma}_{0,k} - \overline{\gamma}_{0,k-1}\gamma_{0,k}}\left[f_k - \frac{\sum_{t=0}^{k} f_t}{k+1}\right]$$
$$+ \frac{\gamma_{0,k-1}\left[(k+1)\mu_2 - \sum_{t=0}^{k} z_t^2\right] - \overline{\gamma}_{0,k-1}\left[(k+1)\mu_1 - \sum_{t=0}^{k} z_t\right]}{\gamma_{0,k-1}\overline{\gamma}_{0,k} - \overline{\gamma}_{0,k-1}\gamma_{0,k}}\left[f_{k+1} - \frac{\sum_{t=0}^{k} f_t}{k+1}\right]. \quad (3.43)$$

The sharp upper bound for $E[f(\xi)]$ can be given as follows:

If $j + 1 \le k$,

$$\frac{1}{n-k}\sum_{t=k+1}^{n} f_t + \frac{\overline{\alpha}_{j+1,k}\left[(n-k)\mu_1 - \sum_{t=k+1}^{n} z_t\right] - \alpha_{j+1,k}\left[(n-k)\mu_2 - \sum_{t=k+1}^{n} z_t^2\right]}{\alpha_{j,k}\overline{\alpha}_{j+1,k} - \overline{\alpha}_{j,k}\alpha_{j+1,k}}\left[\sum_{t=j}^{k} f_t - (k-j+1)\frac{\sum_{t=k+1}^{n} f_t}{n-k}\right]$$
$$+ \frac{\alpha_{j,k}\left[(n-k)\mu_2 - \sum_{t=k+1}^{n} z_t^2\right] - \overline{\alpha}_{j,k}\left[(n-k)\mu_1 - \sum_{t=k+1}^{n} z_t\right]}{\alpha_{j,k}\overline{\alpha}_{j+1,k} - \overline{\alpha}_{j,k}\alpha_{j+1,k}}\left[\sum_{t=j+1}^{k} f_t - (k-j)\frac{\sum_{t=k+1}^{n} f_t}{n-k}\right]. \quad (3.44)$$

If $j \ge k+1$,

$$\frac{1}{n-k}\sum_{t=k+1}^{n} f_t + \frac{\overline{\alpha}_{k+1,j+1}\left[(n-k)\mu_1 - \sum_{t=k+1}^{n} z_t\right] - \alpha_{k+1,j+1}\left[(n-k)\mu_2 - \sum_{t=k+1}^{n} z_t^2\right]}{\alpha_{k+1,j}\overline{\alpha}_{k+1,j+1} - \overline{\alpha}_{k+1,j}\alpha_{k+1,j+1}}\left[\sum_{t=k+1}^{j} f_t - (j-k)\frac{\sum_{t=k+1}^{n} f_t}{n-k}\right]$$
$$+ \frac{\alpha_{k+1,j}\left[(n-k)\mu_2 - \sum_{t=k+1}^{n} z_t^2\right] - \overline{\alpha}_{k+1,j}\left[(n-k)\mu_1 - \sum_{t=k+1}^{n} z_t\right]}{\alpha_{k+1,j}\overline{\alpha}_{k+1,j+1} - \overline{\alpha}_{k+1,j}\alpha_{k+1,j+1}}\left[\sum_{t=k+1}^{j+1} f_t - (j-k+1)\frac{\sum_{t=k+1}^{n} f_t}{n-k}\right]. \quad (3.45)$$

If $j = k$,

$$\frac{1}{n-k}\sum_{t=k+1}^{n} f_t + \frac{\overline{\sigma}_{k+2,n}\left[(n-k)\mu_1 - \sum_{t=k+1}^{n} z_t\right] - \sigma_{k+2,n}\left[(n-k)\mu_2 - \sum_{t=k+1}^{n} z_t^2\right]}{\sigma_{k+1,n}\overline{\sigma}_{k+2,n} - \overline{\sigma}_{k+1,n}\sigma_{k+2,n}}\left[f_k - \frac{\sum_{t=k+1}^{n} f_t}{n-k}\right]$$
$$+ \frac{\sigma_{k+1,n}\left[(n-k)\mu_2 - \sum_{t=k+1}^{n} z_t^2\right] - \overline{\sigma}_{k+1,n}\left[(n-k)\mu_1 - \sum_{t=k+1}^{n} z_t\right]}{\sigma_{k+1,n}\overline{\sigma}_{k+2,n} - \overline{\sigma}_{k+1,n}\sigma_{k+2,n}}\left[f_{k+1} - \frac{\sum_{t=k+1}^{n} f_t}{n-k}\right]. \quad (3.46)$$

In all formulas given above, $\Sigma_{i,j}$, $\overline{\Sigma}_{i,j}$, $\sigma_{i,j}$, $\overline{\sigma}_{i,j}$, $\gamma_{i,j}$, $\overline{\gamma}_{i,j}$, $\alpha_{i,j}$ and $\overline{\alpha}_{i,j}$ are defined as in (3.10).

4 The Case of the Binomial Moment Problem

In the case of the binomial moment problem (1.2) we look at the special case where $z_i = i$, $i = 0, \ldots, n$, and $f_0 = 0$, $f_1 = \cdots = f_n = 1$. In the case of $m = 2$ we give the sharp lower and upper bounds for the probability that at least $r = 1$ out of $n$ events occurs. We look at problem (1.2), but the constraints are supplemented by shape constraints on the unknown probability distribution $p_0, \ldots, p_n$. In the following three subsections we use the same shape constraints that we used in Sections 3.1–3.3.

4.1 TYPE 1: $p_0 \ge p_1 \ge \cdots \ge p_n$

Let $m = 2$. If we introduce the variables
$$v_0 = p_0 - p_1,\ \ldots,\ v_{n-1} = p_{n-1} - p_n,\ v_n = p_n,$$
then problem (1.2) can be written as
$$\min(\max)\ \{v_1 + 2v_2 + \cdots + nv_n\}$$
subject to
$v_0 + 2v_1 + 3v_2 + 4v_3 + \cdots + (n+1)v_n = 1$
$v_1 + \binom{3}{2}v_2 + \binom{4}{2}v_3 + \cdots + \binom{n+1}{2}v_n = S_1$
$v_2 + \left[\binom{2}{2} + \binom{3}{2}\right]v_3 + \cdots + \left[\binom{2}{2} + \cdots + \binom{n}{2}\right]v_n = S_2$
$v_0, \ldots, v_n \ge 0. \quad (4.1)$

Taking into account the equation
$$\binom{2}{2} + \binom{3}{2} + \cdots + \binom{k}{2} = \frac{(k-1)k(k+1)}{6},$$
the problem is the same as the following:
$$\min(\max)\ \sum_{i=1}^{n} i\,v_i$$
subject to
$\sum_{i=0}^{n} (i+1)v_i = 1$
$\sum_{i=0}^{n} (i+1)i\,v_i = 2S_1 \quad (4.2)$
$\sum_{i=0}^{n} (i+1)i(i-1)\,v_i = 6S_2$
$v_0, \ldots, v_n \ge 0.$

Problem (4.2) is equivalent to the following:
$$\min(\max)\ \{v_1 + 2v_2 + \cdots + nv_n\},$$

subject to
$v_0 + 2v_1 + 3v_2 + 4v_3 + \cdots + (n+1)v_n = 1$
$2v_1 + 6v_2 + 12v_3 + \cdots + (n+1)n\,v_n = 2S_1 \quad (4.3)$
$6v_2 + 24v_3 + \cdots + (n+1)n(n-1)\,v_n = 6S_2$
$v_0, \ldots, v_n \ge 0.$

Let $A$ be the coefficient matrix of the equality constraints in (4.3). One can easily show that any $3 \times 3$ minor taken from $A$ has a positive determinant. In fact, any $3 \times 3$ determinant of the form
$$\begin{vmatrix} i+1 & j+1 & k+1 \\ (i+1)i & (j+1)j & (k+1)k \\ (i+1)i(i-1) & (j+1)j(j-1) & (k+1)k(k-1) \end{vmatrix}$$
is positive. Similarly, one can easily show that all minors of order 4 of $\begin{pmatrix} f^T \\ A \end{pmatrix}$ are positive, where $f = (0, 1, 2, \ldots, n)^T$. By Theorem 1, the dual feasible bases of the minimization and maximization problems in (4.3) are of the form $B_{\min} = (a_0, a_i, a_{i+1})$ and $B_{\max} = (a_j, a_{j+1}, a_n)$, respectively, where $1 \le i \le n-1$ and $0 \le j \le n-2$.

The basis $B_{\min}$ is also primal feasible if
$$i - 1 \le \frac{3S_2}{S_1} \le i, \quad \text{i.e.,} \quad i = \left\lceil \frac{3S_2}{S_1} \right\rceil. \quad (4.4)$$

The basis $B_{\max}$ is also primal feasible if
$$2(n+j)S_1 - n(j+1) \le 6S_2 \le 2(n+j-1)S_1 - nj. \quad (4.5)$$

Since a basis that is both primal and dual feasible is optimal, it follows that the objective function values corresponding to these bases are sharp lower and upper bounds for the probability of $\xi \ge 1$. Thus, we have the following sharp lower and upper bounds for $P(\xi \ge 1)$:
$$\frac{2(2i+1)S_1 - 6S_2}{(i+1)(i+2)} \le P(\xi \ge 1) \le \frac{n}{n+1} + \frac{2(2j+n+1)S_1 - 6S_2 - 2n(j+1)}{(n+1)(j+1)(j+2)}. \quad (4.6)$$

The bounds in (4.6) hold for any $i, j$, but they are sharp if (4.4) and (4.5) are satisfied.

Remark. We consider the special case $z_i = i$, $i = 0, \ldots, n$.

Since $\binom{\nu}{1} = \nu$ and $\binom{\nu}{2} = \frac{\nu(\nu-1)}{2}$, the first and the second binomial moments are equal to
$$S_1 = \mu_1 \quad \text{and} \quad S_2 = \frac{\mu_2 - \mu_1}{2}.$$
If we substitute
$$\mu_1 = S_1 \quad \text{and} \quad \mu_2 = 2S_2 + S_1$$
in the closed bound formulas in (3.1) of Section 3, then we obtain the sharp bounds in (4.6).

4.2 TYPE 2: $p_0 \le p_1 \le \cdots \le p_n$

Let us introduce the variables
$$v_0 = p_0,\ v_1 = p_1 - p_0,\ \ldots,\ v_n = p_n - p_{n-1}.$$
In this case problem (1.2) can be written as
$$\min(\max)\ \{nv_0 + nv_1 + (n-1)v_2 + \cdots + v_n\}$$
subject to
$(n+1)v_0 + nv_1 + (n-1)v_2 + \cdots + v_n = 1$
$\binom{n+1}{2}(v_0 + v_1) + \left[\binom{n+1}{2} - 1\right]v_2 + \cdots + \left[\binom{n+1}{2} - \binom{n}{2}\right]v_n = S_1$
$\binom{n+1}{3}(v_0 + v_1 + v_2) + \left[\binom{n+1}{3} - \binom{3}{3}\right]v_3 + \cdots + \left[\binom{n+1}{3} - \binom{n}{3}\right]v_n = S_2$
$v_0, \ldots, v_n \ge 0. \quad (4.7)$

Taking into account the equations
$$\binom{n+1}{2} - \binom{i}{2} = \frac{(n+i)(n-i+1)}{2}, \quad 2 \le i \le n,$$
and
$$\binom{i}{2} + \cdots + \binom{n}{2} = \frac{(n+1)n(n-1) - (i-2)(i-1)i}{6}, \quad 2 \le i \le n,$$
problem (4.7) can be written as
$$\min(\max)\ \{nv_0 + nv_1 + (n-1)v_2 + \cdots + v_n\}$$
subject to
$(n+1)v_0 + nv_1 + (n-1)v_2 + \cdots + v_n = 1$

$(n+1)n(v_0 + v_1) + (n+2)(n-1)v_2 + \cdots + (n+i)(n-i+1)v_i + \cdots + 2n\,v_n = 2S_1$
$(n+1)n(n-1)(v_0 + v_1 + v_2) + \cdots + \left[(n+1)n(n-1) - (i-2)(i-1)i\right]v_i + \cdots + 3n(n-1)v_n = 6S_2$
$v_0, \ldots, v_n \ge 0. \quad (4.8)$

We consider the following $4 \times 4$ matrix:
$$B = \begin{pmatrix} n-i+1 & n-j+1 & n-k+1 & n-l+1 \\ n-i+1 & n-j+1 & n-k+1 & n-l+1 \\ (n+i)(n-i+1) & (n+j)(n-j+1) & (n+k)(n-k+1) & (n+l)(n-l+1) \\ n^3 - n - i^3 + 3i^2 - 2i & n^3 - n - j^3 + 3j^2 - 2j & n^3 - n - k^3 + 3k^2 - 2k & n^3 - n - l^3 + 3l^2 - 2l \end{pmatrix}$$
for all $0 \le i < j < k < l \le n$. Since $\det B = 0$, we cannot use Theorem 1. Let $A$ be the coefficient matrix of the equality constraints in (4.8). By the use of Theorem 2, dual feasible bases in the minimization and the maximization problems in (4.8) are of the form
$$B_{\min} = (a_0, a_i, a_{i+1}) \quad \text{and} \quad B_{\max} = \begin{cases} (a_0, a_1, a_n) \\ (a_1, a_j, a_{j+1}) \\ (a_1, a_s, a_t), \end{cases}$$
respectively, where $1 \le i \le n-1$, $2 \le j, s \le n-1$ and $s+1 < t \le n$.

The basis $B_{\min}$ is also primal feasible if $i$ is determined by the following inequalities:
$$2(n+i-1)S_1 - 6S_2 \ge in, \quad (4.9)$$
$$\left[2(n-i)(n+i-1) + 4(i-1)\right]S_1 - 6(n-i+1)S_2 \le n(i-1)(n-i+1). \quad (4.10)$$

First, we notice that the basis $B_{\max} = (a_0, a_1, a_n)$ is also primal feasible. The basis $B_{\max} = (a_1, a_j, a_{j+1})$ is also primal feasible if $j$ is determined by the following inequalities:
$$2(n+j-1)S_1 - (n+1)j \le 6S_2, \quad (4.11)$$
$$6S_2 \le 2(n+j)S_1 - (n+1)(j+1). \quad (4.12)$$

We also remark that if $B_{\max} = (a_1, a_j, a_{j+1})$ or $B_{\max} = (a_1, a_s, a_t)$, then the objective function and the left-hand side of the first constraint are the same, and therefore the upper bound is equal to 1. Thus, we have the following lower and upper bounds for $P(\xi \ge 1)$:
$$\frac{2(n^2 + ni - 2i^2 + 3i - 1)S_1 - 6(n-i+1)S_2}{(n+1)i(i+1)(n-i+1)} + \frac{n(i-1)}{(n+1)(i+1)} \le P(\xi \ge 1) \le \min\left\{1,\ \frac{2(2n-1)S_1 - 6S_2}{n(n+1)}\right\}. \quad (4.13)$$

The bounds in (4.13) are sharp if $i$ satisfies (4.9) and (4.10).
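The closed-form bounds of this subsection can be checked numerically. The sketch below implements the Type 2 bounds in an equivalent closed form (our own rearrangement, derived directly from the basis $(a_0, a_i, a_{i+1})$), chooses $i$ by the primal feasibility conditions, and evaluates the bounds on the data of Example 2 of Section 5 ($n = 5$, $S_1 = 3.95$, $S_2 = 7$).

```python
# Sketch of the Type 2 (increasing distribution, m = 2) bounds; the closed forms
# are an equivalent rearrangement of the formulas of this subsection.
def type2_bounds(n, S1, S2):
    # choose i by the primal feasibility conditions of B_min = (a_0, a_i, a_{i+1})
    for i in range(1, n):
        c1 = 2 * (n + i - 1) * S1 - 6 * S2 >= i * n
        c2 = 2 * (n + i - 2) * S1 - 6 * S2 <= n * (i - 1)
        if c1 and c2:
            break
    lower = ((2 * (n*n + n*i - 2*i*i + 3*i - 1) * S1 - 6 * (n - i + 1) * S2)
             / ((n + 1) * i * (i + 1) * (n - i + 1))
             + n * (i - 1) / ((n + 1) * (i + 1)))
    upper = min(1.0, (2 * (2*n - 1) * S1 - 6 * S2) / (n * (n + 1)))
    return i, lower, upper

i, lo, up = type2_bounds(5, 3.95, 7.0)
print(i, round(lo, 4), round(up, 4))   # reproduces i = 4 and 0.94 <= P(xi >= 1) <= 0.97
```

This reproduces the sharp bounds $0.94 \le P(\xi \ge 1) \le 0.97$ reported for Example 2 in Section 5.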

4.3 TYPE 3: $p_0 \le \cdots \le p_k \ge \cdots \ge p_n$

Now we assume that the distribution is unimodal with a known mode. If we introduce the variables $v_i$, $i = 0, 1, \ldots, n$:
$$v_0 = p_0,\ v_1 = p_1 - p_0,\ \ldots,\ v_k = p_k - p_{k-1},\ v_{k+1} = p_{k+1} - p_{k+2},\ \ldots,\ v_{n-1} = p_{n-1} - p_n,\ v_n = p_n,$$
then problem (1.2) can be written as
$$\min(\max)\ \{kv_0 + kv_1 + (k-1)v_2 + \cdots + v_k + v_{k+1} + 2v_{k+2} + \cdots + (n-k)v_n\}$$
subject to
$(k+1)v_0 + kv_1 + (k-1)v_2 + \cdots + v_k + v_{k+1} + \cdots + (n-k)v_n = 1$
$\binom{k+1}{2}(v_0 + v_1) + \left[\binom{k+1}{2} - \binom{i}{2}\right]v_i + \cdots + kv_k + (k+1)v_{k+1} + \left[\binom{k+3}{2} - \binom{k+1}{2}\right]v_{k+2} + \cdots + \left[\binom{n+1}{2} - \binom{k+1}{2}\right]v_n = S_1$
$\binom{k+1}{3}(v_0 + v_1 + v_2) + \left[\binom{k+1}{3} - \binom{3}{3}\right]v_3 + \cdots + \binom{k}{2}v_k + \binom{k+1}{2}v_{k+1} + \left[\binom{k+1}{2} + \binom{k+2}{2}\right]v_{k+2} + \cdots + \left[\binom{n+1}{3} - \binom{k+1}{3}\right]v_n = S_2$
$v_0, \ldots, v_n \ge 0. \quad (4.14)$

This is equivalent to
$$\min(\max)\ \{kv_0 + kv_1 + (k-1)v_2 + \cdots + v_k + v_{k+1} + 2v_{k+2} + \cdots + (n-k)v_n\}$$
subject to
$(k+1)v_0 + kv_1 + (k-1)v_2 + \cdots + v_k + v_{k+1} + \cdots + (n-k)v_n = 1$
$(k+1)k(v_0 + v_1) + \cdots + (k+i)(k-i+1)v_i + \cdots + 2kv_k + 2(k+1)v_{k+1} + \cdots + (n-k)(n+k+1)v_n = 2S_1$
$(k+1)k(k-1)(v_0 + v_1 + v_2) + \cdots + \left[(k^3 - k) - (i-2)(i-1)i\right]v_i + \cdots + 3k(k-1)v_k + 3(k+1)k\,v_{k+1} + \cdots + (n-k)(n^2 + nk + k^2 - 1)v_n = 6S_2$
$v_0, \ldots, v_n \ge 0. \quad (4.15)$

Let $A$ be the coefficient matrix of the equality constraints in (4.15). By Theorem 2, a dual feasible basis for the minimization problem in (4.15) is of the form $B_{\min} = (a_0, a_i, a_{i+1})$,

where $1 \le i \le n-1$, and a dual feasible basis for the maximization problem in (4.15) is of the form
$$B_{\max} = \begin{cases} (a_0, a_1, a_n) \\ (a_1, a_j, a_{j+1}) \\ (a_1, a_s, a_t), \end{cases}$$
where $2 \le j, s \le n-1$ and $s+1 < t \le n$.

The basis $B_{\min} = (a_0, a_i, a_{i+1})$ is also primal feasible if one of the following conditions is satisfied:
$$2(k+i-1)S_1 - 6S_2 \ge ik, \quad \left[2(k-i)(k+i-1) + 4(i-1)\right]S_1 - 6(k-i+1)S_2 \le k(i-1)(k-i+1) \quad \text{if } i+1 \le k, \quad (4.16)$$
$$2(i+k-1)S_1 - ik \le 6S_2 \le 2(i+k)S_1 - k(i+1) \quad \text{if } i \ge k+1, \quad (4.17)$$
$$4(k-1)S_1 - k(k-1) \le 6S_2 \le 4kS_1 - k(k+1) \quad \text{if } i = k. \quad (4.18)$$

The basis $B_{\max} = (a_1, a_j, a_{j+1})$ is also primal feasible if one of the following conditions is satisfied:
$$2(k+j-1)S_1 - (k+1)j \le 6S_2 \le 2(k+j)S_1 - (k+1)(j+1) \quad \text{if } j+1 \le k, \quad (4.19)$$
$$2(j+k)S_1 - (k+1)(j+1) \le 6S_2 \le 2(j+k+1)S_1 - (k+1)(j+2) \quad \text{if } j \ge k+1, \quad (4.20)$$
$$2(2k-1)S_1 - k(k+1) \le 6S_2 \le 2(2k+1)S_1 - (k+1)(k+2) \quad \text{if } j = k. \quad (4.21)$$

The basis $B_{\max} = (a_0, a_1, a_n)$ is also primal feasible. We remark that $B_{\min} = (a_0, a_k, a_{k+1})$ and $B_{\max} = (a_1, a_k, a_{k+1})$ are also primal feasible since $k$ is known. Using the same reasoning that we used in Section 4.2, one can easily show that the upper bound is equal to 1 if $B_{\max} = (a_1, a_j, a_{j+1})$ or $B_{\max} = (a_1, a_s, a_t)$. Three possible cases for the lower and upper bounds for $P(\xi \ge 1)$ can be given as follows:

Case 1. $i + 1 \le k$:
$$\frac{2(k^2 + ki - 2i^2 + 3i - 1)S_1 - 6(k-i+1)S_2}{(k+1)i(i+1)(k-i+1)} + \frac{k(i-1)}{(k+1)(i+1)} \le P(\xi \ge 1) \le \min\left\{1,\ \frac{2(n+k)S_1 - 6S_2}{(n+1)(k+1)}\right\}. \quad (4.22)$$

Case 2. $i \ge k+1$:
$$\frac{k}{k+1} + \frac{2(2i+k+1)S_1 - 6S_2 - 2k(i+1)}{(k+1)(i+1)(i+2)} \le P(\xi \ge 1) \le \min\left\{1,\ \frac{2(n+k)S_1 - 6S_2}{(n+1)(k+1)}\right\}. \quad (4.23)$$

Case 3. $i = k$:
$$\frac{6kS_1 - 6S_2 + k^3 - k}{k(k+1)(k+2)} \le P(\xi \ge 1) \le \min\left\{1,\ \frac{2(n+k)S_1 - 6S_2}{(n+1)(k+1)}\right\}. \quad (4.24)$$

The above inequalities are sharp if the bases are primal feasible too, i.e., if the inequalities (4.16)–(4.21) are satisfied.

5 Examples

We present four examples to show that, if the shape of the distribution is given, then by the use of our bounding methodology we can obtain tighter bounds for $P(\xi \ge 1)$. First, let us consider the following binomial moment problem, where the shape information is not given:
$$\min(\max)\ \sum_{i=1}^{n} p_i$$
subject to
$\sum_{i=0}^{n} p_i = 1$
$\sum_{i=1}^{n} i\,p_i = S_1 \quad (5.1)$
$\sum_{i=2}^{n} \binom{i}{2} p_i = S_2$

$p_i \ge 0, \quad i = 0, \ldots, n.$

Let $A$ be the coefficient matrix of the equality constraints in (5.1). By Theorem 2, a dual feasible basis for the minimization problem in (5.1) is of the form $B_{\min} = (a_0, a_i, a_{i+1})$, where $1 \le i \le n-1$. Similarly, a dual feasible basis for the maximization problem in (5.1) is given as follows:
$$B_{\max} = \begin{cases} (a_0, a_1, a_n) \\ (a_1, a_j, a_{j+1}) \\ (a_1, a_s, a_t), \end{cases}$$
where $2 \le j, s \le n-1$ and $s+1 < t \le n$.

The primal feasibility of the basis $B_{\min}$ is ensured if the following condition is satisfied:
$$i - 1 \le \frac{2S_2}{S_1} \le i. \quad (5.2)$$

First, we remark that the basis $B_{\max} = (a_0, a_1, a_n)$ is primal feasible. We also note that the upper bound for $P(\xi \ge 1)$ is equal to 1 if $B_{\max} = (a_1, a_j, a_{j+1})$ or $B_{\max} = (a_1, a_s, a_t)$. The basis $B_{\max} = (a_1, a_j, a_{j+1})$ is also primal feasible if $j$ is determined by the following condition:
$$j \le \frac{2S_2}{S_1 - 1} \le j + 1. \quad (5.3)$$

Similarly, the basis $B_{\max} = (a_1, a_s, a_t)$ is also primal feasible if
$$s \le \frac{2S_2}{S_1 - 1} \le t. \quad (5.4)$$

The sharp lower and upper bounds for $P(\xi \ge 1)$ can be given as follows:
$$\frac{2iS_1 - 2S_2}{i(i+1)} \le P(\xi \ge 1) \le \min\left\{1,\ S_1 - \frac{2S_2}{n}\right\}, \quad (5.5)$$
where $i$ satisfies (5.2).

Example 1. In order to create an example for $S_1$, $S_2$, we take the following probability distribution:
$$p_0 = 0.4,\ p_1 = 0.3,\ p_2 = 0.25,\ p_3 = 0.03,\ p_4 = 0.02.$$
With these probabilities the binomial moments are
$$S_1 = \sum_{i=1}^{4} i\,p_i = 0.97 \quad \text{and} \quad S_2 = \sum_{i=2}^{4} \binom{i}{2} p_i = 0.46.$$
We have the following binomial moment problem:
$$\min(\max)\ \{p_1 + p_2 + p_3 + p_4\}$$

subject to
$p_0 + p_1 + p_2 + p_3 + p_4 = 1$
$\sum_{i=1}^{4} i\,p_i = 0.97$
$\sum_{i=2}^{4} \binom{i}{2} p_i = 0.46$
$p_0, \ldots, p_4 \ge 0. \quad (5.6)$

Let $A$ be the coefficient matrix of the equality constraints in (5.6). By Theorem 2, we have $B_{\min} = (a_0, a_1, a_2)$ and $B_{\max} = (a_0, a_1, a_4)$ for problem (5.6). By the use of (5.5), we obtain the following lower and upper bounds:
$$0.51 \le P(\xi \ge 1) \le 0.74. \quad (5.7)$$

Now we assume that the probability distribution in (5.6) is decreasing, i.e., $p_0 \ge \cdots \ge p_4$. In this case problem (5.6) can be transformed to the Type 1 problem of Section 4.1 as follows:
$$\min(\max)\ \{v_1 + 2v_2 + 3v_3 + 4v_4\}$$
subject to
$v_0 + 2v_1 + 3v_2 + 4v_3 + 5v_4 = 1$
$v_1 + 3v_2 + 6v_3 + 10v_4 = 0.97 \quad (5.8)$
$v_2 + 4v_3 + 10v_4 = 0.46$
$v_0, \ldots, v_4 \ge 0.$

Now let $A$ be the coefficient matrix of the equality constraints in (5.8). By using (4.4) and (4.5), the optimal bases for problem (5.8) are $B_{\min} = (a_0, a_2, a_3)$ and $B_{\max} = (a_1, a_2, a_4)$. The following are the lower and upper bounds obtained from (4.6):
$$0.5783 \le P(\xi \ge 1) \le 0.6273. \quad (5.9)$$
One can easily see that these bounds are the optimum values of problem (5.6) together with the shape constraint $p_0 \ge \cdots \ge p_4$.

Example 2. Let $n = 5$, $S_1 = 3.95$, $S_2 = 7$. The corresponding binomial moment problem is
$$\min(\max)\ \{p_1 + p_2 + p_3 + p_4 + p_5\}$$
subject to
$p_0 + p_1 + p_2 + p_3 + p_4 + p_5 = 1$

$\sum_{i=1}^{5} i\,p_i = 3.95$
$\sum_{i=2}^{5} \binom{i}{2} p_i = 7$
$p_0, \ldots, p_5 \ge 0. \quad (5.10)$

From (5.5) we obtain
$$0.88 \le P(\xi \ge 1) \le 1. \quad (5.11)$$

Now we assume that the distribution is increasing. Problem (5.10) together with the shape constraint $p_0 \le \cdots \le p_5$ can be transformed to the Type 2 problem of Section 4.2 as follows:
$$\min(\max)\ \{5v_0 + 5v_1 + 4v_2 + 3v_3 + 2v_4 + v_5\}$$
subject to
$6v_0 + 5v_1 + 4v_2 + 3v_3 + 2v_4 + v_5 = 1$
$15v_0 + 15v_1 + 14v_2 + 12v_3 + 9v_4 + 5v_5 = 3.95 \quad (5.12)$
$20v_0 + 20v_1 + 20v_2 + 19v_3 + 16v_4 + 10v_5 = 7$
$v_0, \ldots, v_5 \ge 0.$

The optimal bases for (5.12) are $B_{\min} = (a_0, a_4, a_5)$ and $B_{\max} = (a_0, a_1, a_5)$. By the use of problem (5.12) and the formulas given in (4.13), the sharp lower and upper bounds for $P(\xi \ge 1)$ are found to be
$$0.94 \le P(\xi \ge 1) \le 0.97. \quad (5.13)$$

Example 3. Let $n = 10$, $S_1 = 8.393$, $S_2 = 34.65$. The corresponding binomial moment problem is
$$\min(\max)\ \sum_{i=1}^{10} p_i$$
subject to
$\sum_{i=0}^{10} p_i = 1$
$\sum_{i=1}^{10} i\,p_i = 8.393$
$\sum_{i=2}^{10} \binom{i}{2} p_i = 34.65$

$p_0, \ldots, p_{10} \ge 0. \quad (5.14)$

$B_{\min} = (a_0, a_9, a_{10})$ and $B_{\max} = (a_1, a_9, a_{10})$ are the optimal bases for (5.14), and from (5.5) we obtain
$$0.909 \le P(\xi \ge 1) \le 1. \quad (5.15)$$

By adding the shape constraint $p_0 \le \cdots \le p_{10}$, problem (5.14) can be transformed to the Type 2 problem of Section 4.2 as follows:
$$\min(\max)\ \{10v_0 + 10v_1 + 9v_2 + 8v_3 + \cdots + v_{10}\}$$
subject to
$\sum_{i=0}^{10} (11 - i)v_i = 1$
$\sum_{i=0}^{10} \frac{(10+i)(11-i)}{2}\,v_i = 8.393 \quad (5.16)$
$\sum_{i=0}^{10} \frac{990 - (i-2)(i-1)i}{6}\,v_i = 34.65$
$v_0, \ldots, v_{10} \ge 0.$

By the use of problem (5.16) and the sharp bound formulas in (4.13), the sharp lower and upper bounds for $P(\xi \ge 1)$ can be given as follows:
$$0.975 \le P(\xi \ge 1) \le 1. \quad (5.17)$$

Example 4. Let $n = 4$, $S_1 = 1.93$, $S_2 = 1.27$. The corresponding binomial moment problem is
$$\min(\max)\ \sum_{i=1}^{4} p_i$$
subject to
$\sum_{i=0}^{4} p_i = 1$
$\sum_{i=1}^{4} i\,p_i = 1.93 \quad (5.22)$
$\sum_{i=2}^{4} \binom{i}{2} p_i = 1.27$
$p_0, \ldots, p_4 \ge 0.$

By the use of (5.5) and problem (5.22) we have
$$0.8633 \le P(\xi \ge 1) \le 1. \quad (5.23)$$

Now we assume that the distribution is unimodal with $k = 2$. By adding the shape constraint $p_0 \le p_1 \le p_2 \ge p_3 \ge p_4$, problem (5.22) can be transformed to the Type 3 problem of Section 4.3 as follows:
$$\min(\max)\ \{2v_0 + 2v_1 + v_2 + v_3 + 2v_4\}$$

subject to
$3v_0 + 2v_1 + v_2 + v_3 + 2v_4 = 1$
$3v_0 + 3v_1 + 2v_2 + 3v_3 + 7v_4 = 1.93 \quad (5.24)$
$v_0 + v_1 + v_2 + 3v_3 + 9v_4 = 1.27$
$v_0, \ldots, v_4 \ge 0.$

By the use of problem (5.24) and the closed bound formulas in (4.24), the sharp lower and upper bounds for $P(\xi \ge 1)$ can be given as follows:
$$0.8975 \le P(\xi \ge 1) \le 1. \quad (5.25)$$

6 Applications

In this section we present two examples for the application of our bounding technique, where shape information about the unknown probability distribution can be used.

Example 1. Application in PERT. In PERT we are frequently concerned with the problem of approximating the expectation or the values of the probability distribution of the length of the critical path. In the paper by Prékopa et al. (2004) a bounding technique is presented for the c.d.f. of the critical, i.e., the longest, path under moment information. In that paper an enumeration algorithm first finds those paths that are candidates to become critical. Then the probability distribution of the path lengths is approximated by a multivariate normal distribution that serves as a basis for the bounding procedure. In the present example we are concerned with only one path length, but drop the normal approximation to the path length. Instead, we assume that the random length of each arc follows a beta distribution, as is usually assumed in PERT. Arc lengths are assumed to be independent; thus the probability distribution of the path length is the convolution of beta distributions with different parameters. The p.d.f. of the beta distribution on the interval $(0, 1)$ is defined as
$$f(x) = \frac{\Gamma(\alpha+\beta)}{\Gamma(\alpha)\Gamma(\beta)}\,x^{\alpha-1}(1-x)^{\beta-1}, \quad 0 < x < 1, \quad (6.1)$$
where $\Gamma(\cdot)$ is the gamma function,
$$\Gamma(p) = \int_0^{\infty} x^{p-1} e^{-x}\,dx, \quad p > 0.$$
The $k$th moment of this distribution can easily be obtained by the use of the fact that
$$\int_0^1 \frac{\Gamma(\alpha+\beta)}{\Gamma(\alpha)\Gamma(\beta)}\,x^{\alpha-1}(1-x)^{\beta-1}\,dx = 1.$$

In fact,
$$\int_0^1 x^k f(x)\,dx = \frac{\Gamma(\alpha+\beta)}{\Gamma(\alpha)\Gamma(\beta)} \int_0^1 x^{k+\alpha-1}(1-x)^{\beta-1}\,dx = \frac{\Gamma(\alpha+\beta)\,\Gamma(k+\alpha)}{\Gamma(\alpha)\,\Gamma(k+\alpha+\beta)} \quad (6.2)$$
$$= \frac{(\alpha+k-1)(\alpha+k-2)\cdots(\alpha+1)\alpha}{(\alpha+\beta+k-1)(\alpha+\beta+k-2)\cdots(\alpha+\beta+1)(\alpha+\beta)}.$$
If $\alpha$, $\beta$ are integers, then, using the relation $\Gamma(m) = (m-1)!$, the above expression takes a simple form. The beta distribution in PERT is defined over a more general interval $(a, b)$, and we define its p.d.f. as the p.d.f. of $a + (b-a)X$, where $X$ has the p.d.f. given by (6.1). In practical problems the values $a, b, \alpha, \beta$ are obtained by the use of expert estimates of the shortest, the longest, and the most probable times of accomplishing the job represented by the arc (see, e.g., Battersby, 1970).

Let $n$ be the number of arcs in a path and assume that each arc length $\xi_i$ has a beta distribution with known parameters $a_i, b_i, \alpha_i, \beta_i$, $i = 1, \ldots, n$. Assume that $\alpha_i \ge 1$, $\beta_i \ge 1$, $i = 1, \ldots, n$. We are interested in approximating the values of the c.d.f. of the path length $\xi = \xi_1 + \cdots + \xi_n$. The c.d.f. cannot be obtained in closed form, but we know that the p.d.f. of $\xi$ is unimodal. In fact, each $\xi_i$ has a logconcave p.d.f., hence the sum $\xi$ also has a logconcave p.d.f. (for the proof of this assertion see, e.g., Prékopa 1995), and any logconcave function is also unimodal. In order to apply our bounding methodology we discretize the distribution of $\xi$ by subdividing the interval $\left(\sum_{i=1}^{n} a_i, \sum_{i=1}^{n} b_i\right)$, and handle the corresponding discrete distribution as unknown, but unimodal, such that some of its first $m$ moments are also known. In principle any order moment of $\xi$ is known, but for practical calculation it is enough to use the first few moments, at least in many cases, to obtain a good approximation to the values of the c.d.f. of $\xi$. The probability functions obtained by the discretization, using equal length subintervals, are logconcave sequences. Convolutions of logconcave sequences are also logconcave, and any logconcave sequence is unimodal in the sense of Section 3.3.
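The discretization argument above can be illustrated numerically. In the sketch below the path consists of two arcs with beta lengths on $(0, 1)$ (so $a_i = 0$, $b_i = 1$; the beta parameters are illustrative assumptions, not values from the paper); discretizing each p.d.f. on equal subintervals and convolving the resulting sequences yields a sequence that is again unimodal, so the Type 3 bounds of Section 4.3 apply to it.

```python
# Discretize two illustrative beta p.d.f.s, convolve, and check unimodality.
import math

def beta_pdf(x, a, b):
    return (math.gamma(a + b) / (math.gamma(a) * math.gamma(b))
            * x ** (a - 1) * (1 - x) ** (b - 1))

def discretize(a, b, m):
    # midpoint-rule discretization on m equal subintervals of (0, 1), normalized
    w = [beta_pdf((j + 0.5) / m, a, b) for j in range(m)]
    s = sum(w)
    return [x / s for x in w]

def convolve(p, q):
    r = [0.0] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            r[i + j] += pi * qj
    return r

m = 40
path = convolve(discretize(2, 3, m), discretize(3, 2, m))  # two independent arcs
mode = max(range(len(path)), key=lambda t: path[t])
unimodal = (all(path[t] <= path[t + 1] for t in range(mode)) and
            all(path[t] >= path[t + 1] for t in range(mode, len(path) - 1)))
print(mode, unimodal)
```

The discretized sequences are logconcave because the sampled p.d.f.s are logconcave, and the convolution of logconcave sequences is again logconcave, hence unimodal; the mode index found here is the known mode $k$ needed for the Type 3 problem.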
In order to apply our methodology we need to know the mode of the distribution of $\xi$. A heuristic method to obtain it is the following: we take the sum of the modes of the terms in $\xi = \xi_1 + \cdots + \xi_n$ and then compute a few probabilities around it.

Example 2. Application in Reliability. Let $A_1, \ldots, A_n$ be independent events and define the random variables $X_1, \ldots, X_n$ as the characteristic variables corresponding to the above events, respectively, i.e.,
$$X_i = \begin{cases} 1 & \text{if } A_i \text{ occurs} \\ 0 & \text{otherwise.} \end{cases}$$

Let $p_i = P(X_i = 1)$, $i = 1, \ldots, n$. The random variables $X_1, \ldots, X_n$ have logconcave discrete distributions on the nonnegative integers; consequently the distribution of $X = X_1 + \cdots + X_n$ is also logconcave on the same set. In many applications it is an important problem to compute, or at least approximate, e.g., by the use of probability bounds, the probability
$$P(X_1 + \cdots + X_n \ge r), \quad 0 < r \le n. \quad (6.3)$$

If $I_1, \ldots, I_{\binom{n}{k}}$ designate the $k$-element subsets of the set $\{1, \ldots, n\}$ and $J_l = \{1, \ldots, n\} \setminus I_l$, $l = 1, \ldots, \binom{n}{k}$, then we have the equation
$$P(X_1 + \cdots + X_n \ge r) = \sum_{k=r}^{n} \sum_{l=1}^{\binom{n}{k}} \prod_{i \in I_l} p_i \prod_{j \in J_l} (1 - p_j), \quad 0 < r \le n. \quad (6.4)$$

If $n$ is large, then the calculation of the probabilities on the right-hand side of (6.4) may be hard, even impossible. However, we can calculate lower and upper bounds for the probability on the left-hand side of (6.4) by the use of the sums
$$S_k = \sum_{1 \le i_1 < \cdots < i_k \le n} p_{i_1} \cdots p_{i_k} = \sum_{l=1}^{\binom{n}{k}} \prod_{i \in I_l} p_i, \quad k = 1, \ldots, m, \quad (6.5)$$
where $m$ may be much smaller than $n$. Since the random variable $X_1 + \cdots + X_n$ has a logconcave, hence unimodal, distribution, we can impose the unimodality condition on the probability distribution
$$P(X_1 + \cdots + X_n = k), \quad k = 0, \ldots, n. \quad (6.6)$$
Then we solve both the minimization and maximization problems considered in Section 4 to obtain the bounds for the probability (6.3). If $m$ is small, the bounds can be obtained by formulas. Note that the largest probability in (6.6) corresponds to
$$k_{\max} = \left\lfloor (n+1)\,\frac{p_1 + \cdots + p_n}{n} \right\rfloor.$$

Note that a formula first obtained by C. Jordan (1867) provides us with the probability (6.3) in terms of the binomial moments $S_r, \ldots, S_n$:
$$P(X_1 + \cdots + X_n \ge r) = \sum_{k=r}^{n} (-1)^{k-r} \binom{k-1}{r-1} S_k. \quad (6.7)$$
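For small $n$, (6.4), (6.5) and (6.7) can be verified directly. The sketch below uses illustrative, hypothetical probabilities $p_i$: it computes the binomial moments $S_k$ as in (6.5) and checks that Jordan's formula (6.7) reproduces the exact probability obtained by enumerating all outcomes as in (6.4).

```python
# Check of (6.5) and (6.7) for independent events with hypothetical p_i.
from itertools import combinations
from math import comb, prod

p = [0.3, 0.5, 0.2, 0.6]          # illustrative p_i = P(A_i), independent events
n, r = len(p), 2

# S_k as in (6.5): sum of products over all k-element subsets
S = {k: sum(prod(p[i] for i in I) for I in combinations(range(n), k))
     for k in range(r, n + 1)}

# Jordan's formula (6.7)
jordan = sum((-1) ** (k - r) * comb(k - 1, r - 1) * S[k] for k in range(r, n + 1))

# exact probability by direct enumeration of the 2^n outcomes, as in (6.4)
exact = 0.0
for mask in range(2 ** n):
    bits = [(mask >> i) & 1 for i in range(n)]
    if sum(bits) >= r:
        exact += prod(p[i] if b else 1 - p[i] for i, b in enumerate(bits))

print(round(jordan, 12), round(exact, 12))
```

Because Jordan's formula is an exact identity when all of $S_r, \ldots, S_n$ are used, the two printed values agree; the bounding approach of this paper is needed precisely when only the first few $S_k$ are available.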

However, computing the binomial moments involved may be extremely difficult, if not impossible. The advantage of our approach is that we use only the first few binomial moments $S_1, \ldots, S_m$, where $m$ is relatively small, and we can obtain very good bounds, at least in many cases.

References

[1] Boros, E., A. Prékopa (1989). Closed form two-sided bounds for probabilities that exactly r and at least r out of n events occur. Math. Oper. Res. 14, 317–342.

[2] Fréchet, M. (1940/43). Les Probabilités Associées à un Système d'Événements Compatibles et Dépendants. Actualités Scientifiques et Industrielles, Nos. 859, 942, Paris.

[3] Galambos, J., I. Simonelli (1996). Bonferroni-type Inequalities with Applications. Springer, New York.

[4] Kwerel, S.M. (1975). Most stringent bounds on aggregated probabilities of partially specified dependent probability systems. J. Amer. Stat. Assoc. 70, 472–479.

[5] Mandelbrot, B.B. (1977). The Fractal Geometry of Nature. Freeman, New York.

[6] Prékopa, A. (1988). Boole-Bonferroni inequalities and linear programming. Oper. Res. 36, 145–162.

[7] Prékopa, A. (1989a). Totally positive linear programming problems. L.V. Kantorovich Memorial Volume, Oxford Univ. Press, New York, 197–207.

[8] Prékopa, A. (1989b). Sharp bounds on probabilities using linear programming. Oper. Res. 38, 227–239.

[9] Prékopa, A. (1990). The discrete moment problem and linear programming. Discrete Applied Mathematics 27, 235–254.

[10] Prékopa, A. (1995). Stochastic Programming. Kluwer Academic Publishers, Dordrecht, Boston.

[11] Prékopa, A. (1996). A brief introduction to linear programming. Math. Scientist 21, 85–111.

[12] Prékopa, A. (2001). Discrete higher order convex functions and their applications. In: Generalized Convexity and Monotonicity (N. Hadjisavvas, J.E. Martinez-Legaz, J-P. Penot, eds.). Lecture Notes in Economics and Mathematical Systems, Springer, 21–47.

[13] Prékopa, A., J. Long, T. Szántai (2004).
New bounds and approximations for the probability distribution of the length of the critical path. In: Dynamic Stochastic Optimization (K. Marti, Yu. Ermoliev, G. Pflug, eds.). Lecture Notes in Economics and Mathematical Systems 532, Springer, Berlin, Heidelberg, 293–320.