Studia Scientiarum Mathematicarum Hungarica 42 (2), (2005) Communicated by D. Miklós


A METHOD TO FIND THE BEST BOUNDS IN A MULTIVARIATE DISCRETE MOMENT PROBLEM IF THE BASIS STRUCTURE IS GIVEN

G. MÁDI-NAGY

Communicated by D. Miklós

Abstract. The multivariate discrete moment problem (MDMP) is to find the minimum and/or maximum of the expected value of a function of a random vector which has a discrete finite support. The probability distribution is unknown, but some of the moments are given. The MDMP was initiated by Prékopa, who developed a linear programming methodology to solve it. The central results in this respect concern the structure of the dual feasible bases. These bases provide us with bounds without the numerical difficulties that arise in the usual solution methods. In this paper we briefly summarize the properties of the above-mentioned basis structures, and then we present a new method which finds the basis corresponding to the best bound among the known structures by optimizing independently on each variable. We illustrate the efficiency of this method by numerical examples.

1. Introduction

The multivariate discrete moment problem (MDMP) has been introduced and discussed in papers by Prékopa [2, 3, 4] and by Mádi-Nagy and Prékopa [1]. We present the main results of that field. The problem is formulated in connection with a random vector $(X_1, \dots, X_s)$ in the following way. We assume that the support of $X_j$ is a known finite set $Z_j = \{z_{j0}, \dots, z_{jn_j}\}$ consisting of distinct elements, and we define

$$p_{i_1 \dots i_s} = P(X_1 = z_{1i_1}, \dots, X_s = z_{si_s}), \qquad 0 \le i_j \le n_j, \ j = 1, \dots, s,$$

$$\mu_{\alpha_1 \dots \alpha_s} = \sum_{i_1=0}^{n_1} \cdots \sum_{i_s=0}^{n_s} z_{1i_1}^{\alpha_1} \cdots z_{si_s}^{\alpha_s}\, p_{i_1 \dots i_s},$$

where $\alpha_1, \dots, \alpha_s$ are nonnegative integers. The number $\mu_{\alpha_1 \dots \alpha_s}$ will be called the $(\alpha_1, \dots, \alpha_s)$-order moment of the random vector $(X_1, \dots, X_s)$, and the sum $\alpha_1 + \cdots + \alpha_s$ will be called the total order of the moment.

Mathematics Subject Classification: Primary 90C15. Key words and phrases: stochastic programming, linear programming, applied probability. Partially supported by OTKA grants F-4639 and T-4734.
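To make the moment notation concrete, the following minimal sketch (with invented data, not taken from the paper) computes bivariate power moments $\mu_{\alpha_1 \alpha_2}$ for a known discrete distribution:

```python
import numpy as np

def power_moment(Z1, Z2, P, a1, a2):
    """mu_{a1 a2} = sum_{i,j} z_{1i}^a1 * z_{2j}^a2 * p_{ij}."""
    return sum((z1 ** a1) * (z2 ** a2) * P[i, j]
               for i, z1 in enumerate(Z1)
               for j, z2 in enumerate(Z2))

Z1 = Z2 = [0, 1, 2]
P = np.full((3, 3), 1.0 / 9.0)            # invented uniform distribution
mu_00 = power_moment(Z1, Z2, P, 0, 0)     # total probability, i.e. 1
mu_11 = power_moment(Z1, Z2, P, 1, 1)     # the (1,1)-order mixed moment E[X1 X2]
```

In the MDMP the distribution $P$ is unknown; only moments such as these are given.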

Let $Z = Z_1 \times \cdots \times Z_s$ and let $f(z)$, $z \in Z$, be a function for which we will require some assumptions. Let $f_{i_1 \dots i_s} = f(z_{1i_1}, \dots, z_{si_s})$. One way to formulate the multivariate discrete moment problem is the following:

$$\min(\max) \sum_{i_1=0}^{n_1} \cdots \sum_{i_s=0}^{n_s} f_{i_1 \dots i_s}\, p_{i_1 \dots i_s}$$

subject to

(1)
$$\sum_{i_1=0}^{n_1} \cdots \sum_{i_s=0}^{n_s} z_{1i_1}^{\alpha_1} \cdots z_{si_s}^{\alpha_s}\, p_{i_1 \dots i_s} = \mu_{\alpha_1 \dots \alpha_s}$$
for $\alpha_j \ge 0$, $j = 1, \dots, s$, $\alpha_1 + \cdots + \alpha_s \le m$, and for $\alpha_j = 0$, $j = 1, \dots, k-1, k+1, \dots, s$, $m \le \alpha_k \le m_k$, $k = 1, \dots, s$;
$$p_{i_1 \dots i_s} \ge 0 \quad \text{for all } i_1, \dots, i_s.$$

In problem (1) the unknown variables are the $p_{i_1 \dots i_s}$; all other quantities are known. This means that, in addition to all moments of total order at most $m$, the at most $m_k$-th order moments ($m \le m_k$) of the $k$-th univariate marginal distribution are also known, $k = 1, \dots, s$. The above problem serves for bounding

(2) $E[f(X_1, \dots, X_s)]$

under the given moment information. We will use the compact matrix form of problem (1) (compatible with the notation of Mádi-Nagy and Prékopa [1]):

$$\min(\max)\ f^T p$$
subject to
(3) $\hat{A} p = b$, $p \ge 0$.

Let $V_{\min}$ ($V_{\max}$) designate the minimum (maximum) value in problem (1). Let further $\underline{B}$ ($\overline{B}$) designate a dual feasible basis (i.e., a basis for which

the optimality condition is satisfied) for the minimization (maximization) problem. Then, by linear programming theory, we know that

(4)
$$f_{\underline{B}}^T p_{\underline{B}} \le V_{\min} \le E\bigl[f(X_1, \dots, X_s)\bigr] \le V_{\max} \le f_{\overline{B}}^T p_{\overline{B}}.$$

If $\underline{B}$ ($\overline{B}$) is an optimal basis in the minimization (maximization) problem, then the first (last) inequality holds with equality. We say that $V_{\min}$ and $V_{\max}$ are the sharp lower and upper bounds, respectively, for the expectation of $f(X_1, \dots, X_s)$.

Consider the set of subscripts

(5)
$$I = I_0 \cup \Bigl(\bigcup_{j=1}^{s} I_j\Bigr),$$

where

(6)
$$I_0 = \bigl\{(i_1, \dots, i_s) \mid 0 \le i_j \le m \text{ integers}, \ j = 1, \dots, s, \ i_1 + \cdots + i_s \le m\bigr\}$$

and

(7)
$$I_j = \bigl\{(i_1, \dots, i_s) \mid i_j \in K_j, \ i_l = 0, \ l \ne j\bigr\}, \qquad K_j = \{k_j^{(1)}, \dots, k_j^{(|K_j|)}\} \subset \{m, m+1, \dots, n_j\}, \ j = 1, \dots, s.$$

Let us consider four different structures for $K_j$:

(8)
$$
\begin{array}{c|c|c}
 & |K_j| \text{ even} & |K_j| \text{ odd}\\\hline
\min & u^{(j)}, u^{(j)}+1, \dots, v^{(j)}, v^{(j)}+1 & m, u^{(j)}, u^{(j)}+1, \dots, v^{(j)}, v^{(j)}+1\\
\max & m, u^{(j)}, u^{(j)}+1, \dots, v^{(j)}, v^{(j)}+1, n_j & u^{(j)}, u^{(j)}+1, \dots, v^{(j)}, v^{(j)}+1, n_j
\end{array}
$$

(the elements between the endpoints come in consecutive pairs $u, u+1$). In order to make the paper self-contained we recall two theorems and two algorithms from earlier papers. In what follows we will use the following

Assumption 1. The function $f(z)$, $z \in Z$, has nonnegative divided differences of total order $m+1$ and, in addition, in each variable $z_j$ it has nonnegative divided differences of order $m_j + 1 := m + |K_j|$.

Theorem 1 (Mádi-Nagy and Prékopa [1]). Let $z_{j0} < z_{j1} < \cdots < z_{jn_j}$, $j = 1, \dots, s$. Suppose that the function $f$ fulfils Assumption 1, where the set $K_j$ has one of the min structures in (8). Under these conditions the set of columns $B$ of $\hat{A}$ in problem (3), with the subscript set $I$, is a dual feasible basis in the minimization problem (3), and

(9)
$$E\bigl[f(X_1, \dots, X_s)\bigr] \ge f_B^T p_B.$$
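For very small instances, the sharp bounds $V_{\min}$ and $V_{\max}$ can also be obtained directly with a generic LP solver, which makes the role of the dual feasible bases above easy to check experimentally. The sketch below is only an illustration with invented data (and it uses only the moments of total order at most $m$, omitting the extra univariate moments); it is not the authors' Mathematica implementation.

```python
import itertools

import numpy as np
from scipy.optimize import linprog

# Invented tiny bivariate instance; the "true" distribution is used only
# to generate a consistent right-hand side of moments.
Z1, Z2 = [0, 1, 2, 3], [0, 1, 2, 3]
m = 2                                   # mixed moments of total order <= m
P = np.full((4, 4), 1.0 / 16.0)         # uniform, for moment generation only

points = list(itertools.product(range(4), range(4)))
exps = [(a1, a2) for a1 in range(m + 1) for a2 in range(m + 1) if a1 + a2 <= m]

A = np.array([[Z1[i] ** a1 * Z2[j] ** a2 for (i, j) in points]
              for (a1, a2) in exps])
b = np.array([sum(Z1[i] ** a1 * Z2[j] ** a2 * P[i, j] for (i, j) in points)
              for (a1, a2) in exps])
# objective: the indicator of X1 + X2 >= 1, so the LP bounds P(X1+X2 >= 1)
f = np.array([1.0 if Z1[i] + Z2[j] >= 1 else 0.0 for (i, j) in points])

lo = linprog(f, A_eq=A, b_eq=b, bounds=(0, None)).fun        # V_min
hi = -linprog(-f, A_eq=A, b_eq=b, bounds=(0, None)).fun      # V_max
true_val = float(f @ P.reshape(-1))                          # lies in [lo, hi]
```

As the paper stresses, this direct approach becomes numerically unstable and slow for larger supports, which is what the basis-structure machinery avoids.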

If $B$ is also a primal feasible basis in problem (3), then inequality (9) is sharp.

In the next theorem we give both lower and upper bounds for the function $f(z_1, \dots, z_s)$, $(z_1, \dots, z_s) \in Z$, and the expectation $E[f(X_1, \dots, X_s)]$.

Theorem 2 (Mádi-Nagy and Prékopa [1]). Let $z_{j0} > z_{j1} > \cdots > z_{jn_j}$, $j = 1, \dots, s$. Suppose that the function $f$ fulfils Assumption 1, where $K_j$ has one of the structures in (8) that we specify below.

(a) If $m+1$ is even, $|K_j|$ is even and $K_j$ has the max structure in (8), or $m+1$ is even, $|K_j|$ is odd and $K_j$ has the min structure in (8), then the set of columns $B$ in $\hat{A}$, corresponding to the subscripts $I$, is a dual feasible basis in the minimization problem (3). We also have the inequality

(10)
$$E\bigl[f(X_1, \dots, X_s)\bigr] \ge f_B^T p_B.$$

(b) If $m+1$ is odd, $|K_j|$ is even and $K_j$ has the max structure in (8), or $m+1$ is odd, $|K_j|$ is odd and $K_j$ has the min structure in (8), then the basis $B$ is dual feasible in the maximization problem (3). We also have the inequality

(11)
$$E\bigl[f(X_1, \dots, X_s)\bigr] \le f_B^T p_B.$$

The above two theorems yield dual feasible basis structures with the aid of the subscript set $I$ defined in (5), ordering the elements of the sets $Z_j$ increasingly or decreasingly. In the bivariate case ($s = 2$), still under Assumption 1, we can give many more dual feasible bases corresponding to $I$, by suitable (not necessarily increasing or decreasing) orderings of the variables. In the following we sketch these methods; a detailed discussion with illustrative figures and examples can be found in Mádi-Nagy and Prékopa [1].

Consider first the case where we want to construct a lower bound. We present an algorithm to find the suitable orderings. We may assume, without loss of generality, that the sets $Z_1$ and $Z_2$ are the following: $Z_1 = \{0, 1, \dots, n_1\}$, $Z_2 = \{0, 1, \dots, n_2\}$.

Min Algorithm (Mádi-Nagy and Prékopa [1]). Algorithm to find $z_{10}, \dots, z_{1(m-1)}$; $z_{20}, \dots, z_{2(m-1)}$.

Step 0. Initialize $t = 0$, $0 \le q_2 \le m-1$, $L = (0, 1, \dots, q_2)$, $U = (n_2, n_2-1, \dots, n_2-(m-q_2-2))$. Let $(z_{20}, \dots, z_{2(m-1)})$ = (arbitrary merger of the sets $L$, $U$). If $|U|$ is even, then $z_{10} = 0$, $l_0 = 1$, $u_0 = n_1$, and if $|U|$ is odd, then $z_{10} = n_1$, $l_0 = 0$, $u_0 = n_1 - 1$.
If $t = m-1$, then go to Step 2. Otherwise go to Step 1.

Step 1. If $z_{2(m-1-t)} \in L$, then let $z_{1(t+1)} = l_t$, $l_{t+1} = l_t + 1$, $u_{t+1} = u_t$, and if $z_{2(m-1-t)} \in U$, then let $z_{1(t+1)} = u_t$, $u_{t+1} = u_t - 1$, $l_{t+1} = l_t$. Set $t \leftarrow t + 1$. If $t = m-1$, then go to Step 2. Otherwise repeat Step 1.

Step 2. Stop; $z_{10}, \dots, z_{1(m-1)}$; $z_{20}, \dots, z_{2(m-1)}$ have been created.

Let $0, 1, \dots, q_1$; $n_1, \dots, n_1-(m-q_1-2)$ be the numbers used to construct $z_{10}, z_{11}, \dots, z_{1(m-1)}$. Then let

$$(z_{jm}, z_{j(m+1)}, \dots, z_{jn_j}) = \bigl(q_j + 1, q_j + 2, \dots, n_j - (m - q_j - 1)\bigr), \qquad j = 1, 2.$$

If $m - q_j$ is even, then $K_j$ should follow a min structure in (8), and if $m - q_j$ is odd, then $K_j$ should follow a max structure, $j = 1, 2$. We have completed the construction of the dual feasible basis related to the subscript set $I$.

If we want to construct an upper bound, then only a slight modification is needed in the above algorithm to find $z_{10}, \dots, z_{1(m-1)}$; $z_{20}, \dots, z_{2(m-1)}$: we only have to rewrite Step 0, keep the other steps unchanged, and then give the appropriate $K_j$ structures.

Max Algorithm (Mádi-Nagy and Prékopa [1]). Algorithm to find $z_{10}, \dots, z_{1(m-1)}$; $z_{20}, \dots, z_{2(m-1)}$.

Step 0. Initialize $t = 0$, $0 \le q_2 \le m-1$, $L = (0, 1, \dots, q_2)$, $U = (n_2, n_2-1, \dots, n_2-(m-q_2-2))$. Let $(z_{20}, \dots, z_{2(m-1)})$ = (arbitrary merger of the sets $L$, $U$). If $|U|$ is odd, then $z_{10} = 0$, $l_0 = 1$, $u_0 = n_1$, and if $|U|$ is even, then $z_{10} = n_1$, $l_0 = 0$, $u_0 = n_1 - 1$. If $t = m-1$, then go to Step 2. Otherwise go to Step 1, etc.

In case of the upper bound we have to choose $K_j$ the other way around compared with the Min Algorithm: if $m - q_j$ is even, then $K_j$ should follow a max structure, otherwise a min structure. We have completed the construction of the dual feasible basis related to the subscript set $I$.

2. The new method

In the previous section we presented dual feasible bases related to the index set $I$, where $I_0$ was fixed, and we could find several $I_j$, $j = 1, \dots, s$, depending on which $K_j$ we chose among the related structures of (8). Considering the basic columns (which are assigned directly to the elements of $Z$, and only indirectly to the index set $I$): in Theorems 1 and 2 the fixed $I_0$ determines the elements of $Z_{I_0}$, and the elements of $Z_{I_j}$ depend only on $K_j$, $j = 1, \dots, s$;

in the bivariate Min and Max Algorithms the fixed $I_0$ also determines the elements of $Z_{I_0}$ for a given $z_{10}, \dots, z_{1(m-1)}$; $z_{20}, \dots, z_{2(m-1)}$, and then the elements of $Z_{I_j}$ depend only on $K_j$, $j = 1, 2$. This means that in the above cases a basis can be identified unambiguously by the sets $K_j$, $j = 1, \dots, s$.

Below we present a method by which the sets $K_j$ corresponding to the basis giving the best bound among the structures of $Z_I$ can be found independently of each other. This method, on the one hand, clarifies the structure of the $Z_I$-type bases and, on the other hand, gives an effective way to find the best bounds (and the related bases) yielded by the Min and Max Algorithms. The second assertion is illustrated by numerical examples in the last section.

First, let us define the investigated problem precisely: our aim is to find the set system $K_j$ for which the inequality (9) ((11)) gives the best bound, i.e., for which the objective function value is highest in case of the minimum problem (lowest in case of the maximum problem):

$$\max(\min)\ f^T p$$
subject to
(12) $\hat{A} p = b$, $p \in$ the set of the $Z_I$-type basic solutions.

Until now this problem was handled in the simplest way: the objective function value was calculated for all possible $Z_I$-type bases, and then the best bounds (and the related bases) were picked out. This method proved useful for giving (not sharp, but usually close) bounds much faster than the dual method, without numerical difficulties; see Mádi-Nagy and Prékopa [1]. In some examples of that paper suitable bounds could be given in this manner even in cases where the dual method did not yield useful bounds within acceptable time. This phenomenon occurred mostly because of the size of the problem. In the following we show how the best $Z_I$-type bounds can be found in a much shorter way, which makes it possible to bound problems of even larger size.

First we convert the problem into a form which fulfils certain conditions. We will see that the converted problem is equivalent to the original problem. The conditions mentioned above are the following:

(13) $z_{j0} = 0$, $j = 1, \dots, s$,

and

(14) $f(z_{10}, \dots, z_{s0}) = 0$.

If we consider an arbitrary problem (1) which fulfils Assumption 1, then the function

(15)
$$f_{\text{converted}}(z) = f(z_1 + z_{10}, \dots, z_s + z_{s0}) - f(z_{10}, \dots, z_{s0})$$

on the domain

(16)
$$Z_{\text{converted}} = Z - (z_{10}, \dots, z_{s0})$$

(a) satisfies conditions (13) and (14),
(b) has the same convexity properties as the original $f$ (i.e., it fulfils Assumption 1) on the domain $Z_{\text{converted}}$.

Consider the coefficient matrix $\hat{A}_{\text{converted}}$ fitting $Z_{\text{converted}}$. This matrix can be obtained from the original $\hat{A}$ by a nonsingular transformation; i.e., there exists a nonsingular matrix $D$ such that $\hat{A}_{\text{converted}} = D\hat{A}$ (and $\hat{A} = D^{-1}\hat{A}_{\text{converted}}$). Let $b_{\text{converted}} = Db$. Then the converted problem is

$$\min(\max)\ f_{\text{converted}}^T p$$
subject to
(17) $\hat{A}_{\text{converted}}\, p = b_{\text{converted}}$, $p \ge 0$.

The above means that (17) has the same feasible set and the same dual feasible bases (because of assertion (b)) as the original problem. Moreover, taking into account that $e^T p = 1$ for each feasible solution, the objective function value of the converted problem is always lower by the constant $f(z_{10}, \dots, z_{s0})$. This indeed means the equivalence of the original and the converted problems. Hence if we solve (12) for the converted problem, then we obtain the optimal solution of (12) for the original problem as well.

In the following we present our new method for problems which satisfy conditions (13) and (14). This does not mean any restriction, because a general problem (12) can be converted into an equivalent one which fulfils the conditions, as we have shown in the previous paragraph.
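The conversion described above (shifting every support so that it starts at $0$, and normalizing $f$ to vanish at the shifted origin) can be sketched as follows; the supports and the test function below are invented for illustration.

```python
def convert(Z, f):
    """Shift each support so that its first point becomes 0 and normalize f
    to vanish at the shifted origin; returns (Z_converted, f_converted)."""
    shifts = [Zj[0] for Zj in Z]
    Z_conv = [[z - s for z in Zj] for Zj, s in zip(Z, shifts)]
    def f_conv(*z):
        return f(*[zi + s for zi, s in zip(z, shifts)]) - f(*shifts)
    return Z_conv, f_conv

Z = [[2, 3, 5], [1, 4, 6]]                    # invented supports
Zc, fc = convert(Z, lambda x, y: x * x + y)   # invented test function
# Zc == [[0, 1, 3], [0, 3, 5]] and fc(0, 0) == 0
```

Because the shift is affine, divided differences of every order are unchanged, which is why assertion (b) holds.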

To rewrite the problem in an appropriate form, let us consider the following index sets:

(18)
$$I_{\text{int}} = \bigl\{(i_1, \dots, i_s) \mid 0 \le i_j \le m \text{ integer}, \ j = 1, \dots, s, \ i_1 + \cdots + i_s \le m, \text{ at least two of the } i_j \text{ positive}\bigr\},$$
$$I_{\text{axes}}^j = \bigl\{(i_1, \dots, i_s) \mid 1 \le i_j \le n_j \text{ integer}; \ i_l = 0 \text{ for } l \ne j\bigr\}.$$

Fig. 1. Illustration of the index sets (18), where $n_1 = n_2 = 9$, $m = 5$; the elements of $I_{\text{int}}$ and the elements of $I_{\text{axes}}^1$, $I_{\text{axes}}^2$ are designated by two different symbols.

If we reorder the columns and rows of the constraint matrix of problem (12) according to the index sets above, we get a more perspicuous structure. The columns are grouped as the column of the origin (variable $p_0$), the columns of $Z_{I_{\text{axes}}^1}, \dots, Z_{I_{\text{axes}}^s}$, the columns of $Z_{I_{\text{int}}}$ and the remaining ("other") columns; the rows are grouped as the first row $e^T$ (right-hand side $\mu_{0 \cdots 0} = 1$), for each $j$ the rows of the univariate moments of $z_j$ of orders $1, \dots, m_j$ (right-hand sides $\mu_{0 \cdots \alpha_j \cdots 0}$, collected into the vector $\mu^{I_{\text{axes}}^j}$), and the rows corresponding to $I_{\text{int}}$ (right-hand side vector $\mu^{I_{\text{int}}}$):

(19)
$$
\hat{A} = \begin{pmatrix}
1 & e^T & \cdots & e^T & e^T & \cdots\\
0 & \hat{A}^{I_{\text{axes}}^1}_{I_{\text{axes}}^1} & & 0 & \hat{A}^{I_{\text{axes}}^1}_{I_{\text{int}}} & \cdots\\
\vdots & & \ddots & & \vdots & \\
0 & 0 & & \hat{A}^{I_{\text{axes}}^s}_{I_{\text{axes}}^s} & \hat{A}^{I_{\text{axes}}^s}_{I_{\text{int}}} & \cdots\\
0 & 0 & \cdots & 0 & \hat{A}^{I_{\text{int}}}_{I_{\text{int}}} & \cdots
\end{pmatrix},
\qquad
b = \begin{pmatrix} 1\\ \mu^{I_{\text{axes}}^1}\\ \vdots\\ \mu^{I_{\text{axes}}^s}\\ \mu^{I_{\text{int}}} \end{pmatrix},
$$

where $\hat{A}^{I_{\text{axes}}^j}_{I_{\text{axes}}^j} = (z_{jk}^{\alpha})$, $\alpha = 1, \dots, m_j$, $k = 1, \dots, n_j$, and the objective and variable vectors split accordingly as $f^T = (f_0, f^T_{I_{\text{axes}}^1}, \dots, f^T_{I_{\text{axes}}^s}, f^T_{I_{\text{int}}}, \dots)$ and $p^T = (p_0, p^T_{I_{\text{axes}}^1}, \dots, p^T_{I_{\text{axes}}^s}, p^T_{I_{\text{int}}}, \dots)$.

In (19) we have introduced some new notations which will help us in the following arguments. The subscripts denote the columns of the matrix, while the superscripts refer to the rows. Moreover (see later), the negative sign denotes the complement, i.e., the columns and rows which are dropped from the matrix. The vector $p$ denotes an appropriate basic solution, while the vector $p_B$ consists of the components of the basic variables. From the structure of the $Z_I$ bases it follows that there are no basic variables in the last block of $p^T$; hence these components have the value zero. Furthermore, $p_0$ and all components of $p_{I_{\text{int}}}$ are basic variables in every $Z_I$-type basic solution. Considering that the last block of $p$ has only zero values, we can drop the related columns from (19):

(20)
$$
\begin{pmatrix}
1 & e^T & \cdots & e^T & e^T\\
0 & \hat{A}^{I_{\text{axes}}^1}_{I_{\text{axes}}^1} & & 0 & \hat{A}^{I_{\text{axes}}^1}_{I_{\text{int}}}\\
\vdots & & \ddots & & \vdots\\
0 & 0 & & \hat{A}^{I_{\text{axes}}^s}_{I_{\text{axes}}^s} & \hat{A}^{I_{\text{axes}}^s}_{I_{\text{int}}}\\
0 & 0 & \cdots & 0 & \hat{A}^{I_{\text{int}}}_{I_{\text{int}}}
\end{pmatrix}
\begin{pmatrix} p_0\\ p_{I_{\text{axes}}^1}\\ \vdots\\ p_{I_{\text{axes}}^s}\\ p_{I_{\text{int}}} \end{pmatrix}
= \begin{pmatrix} 1\\ \mu^{I_{\text{axes}}^1}\\ \vdots\\ \mu^{I_{\text{axes}}^s}\\ \mu^{I_{\text{int}}} \end{pmatrix}.
$$

Since the minor $\hat{A}^{I_{\text{int}}}_{I_{\text{int}}}$ is a square (and nonsingular) matrix, and at a basic solution $\mu^{I_{\text{int}}}$ is generated by the linear combination of its columns only, the vector $p_{I_{\text{int}}}$ is given by the formula below, independently of the choice of the sets $K_j$:

(21)
$$p_{I_{\text{int}}} = \bigl(\hat{A}^{I_{\text{int}}}_{I_{\text{int}}}\bigr)^{-1} \mu^{I_{\text{int}}}.$$

Considering that the value of $p_{I_{\text{int}}}$ is constant, we can rewrite problem (12) in the following way:

(22)
$$\max(\min)\ f^T_{(0, I_{\text{axes}}^1, \dots, I_{\text{axes}}^s)}\, p_{(0, I_{\text{axes}}^1, \dots, I_{\text{axes}}^s)} + \overbrace{f^T_{I_{\text{int}}}\, p_{I_{\text{int}}}}^{\text{constant}}$$
subject to
$$\hat{A}_{(0, I_{\text{axes}}^1, \dots, I_{\text{axes}}^s)}\, p_{(0, I_{\text{axes}}^1, \dots, I_{\text{axes}}^s)} = b - \hat{A}_{I_{\text{int}}}\, p_{I_{\text{int}}} = \bar{b},$$

$p_{(0, I_{\text{axes}}^1, \dots, I_{\text{axes}}^s)}$ = the corresponding part of a $Z_I$-type basic solution.

In matrix form:

(23)
$$
\begin{pmatrix}
1 & e^T & \cdots & e^T\\
0 & \hat{A}^{I_{\text{axes}}^1}_{I_{\text{axes}}^1} & & 0\\
\vdots & & \ddots & \\
0 & 0 & & \hat{A}^{I_{\text{axes}}^s}_{I_{\text{axes}}^s}\\
0 & 0 & \cdots & 0
\end{pmatrix}
\begin{pmatrix} p_0\\ p_{I_{\text{axes}}^1}\\ \vdots\\ p_{I_{\text{axes}}^s} \end{pmatrix}
= \bar{b} = \begin{pmatrix}
1 - e^T p_{I_{\text{int}}}\\
\mu^{I_{\text{axes}}^1} - \hat{A}^{I_{\text{axes}}^1}_{I_{\text{int}}}\, p_{I_{\text{int}}}\\
\vdots\\
\mu^{I_{\text{axes}}^s} - \hat{A}^{I_{\text{axes}}^s}_{I_{\text{int}}}\, p_{I_{\text{int}}}\\
\mu^{I_{\text{int}}} - \hat{A}^{I_{\text{int}}}_{I_{\text{int}}}\, p_{I_{\text{int}}}
\end{pmatrix}.
$$

In (23) there are only zeros in the rows corresponding to $I_{\text{int}}$ (by (21), the last block of $\bar{b}$ vanishes); thus we can drop these rows. Then we get the following equivalent problem:

(24)
$$\max(\min)\ f^T_{(0, I_{\text{axes}}^1, \dots, I_{\text{axes}}^s)}\, p_{(0, I_{\text{axes}}^1, \dots, I_{\text{axes}}^s)}$$
subject to
$$\hat{A}^{(-I_{\text{int}})}_{(0, I_{\text{axes}}^1, \dots, I_{\text{axes}}^s)}\, p_{(0, I_{\text{axes}}^1, \dots, I_{\text{axes}}^s)} = \bar{b}^{(-I_{\text{int}})},$$
$p_{(0, I_{\text{axes}}^1, \dots, I_{\text{axes}}^s)}$ = the corresponding part of a $Z_I$-type basic solution.

Since $f(z_{10}, \dots, z_{s0}) = 0$ by assumption (14), and furthermore only the first component of the first column of $\hat{A}$ is different from zero, it is more

convenient to sort the constraints in the following way:

(25)
$$
\begin{pmatrix}
1 & e^T & \cdots & e^T\\
0 & \hat{A}^{I_{\text{axes}}^1}_{I_{\text{axes}}^1} & & 0\\
\vdots & & \ddots & \\
0 & 0 & & \hat{A}^{I_{\text{axes}}^s}_{I_{\text{axes}}^s}
\end{pmatrix}
\begin{pmatrix} p_0\\ p_{I_{\text{axes}}^1}\\ \vdots\\ p_{I_{\text{axes}}^s} \end{pmatrix}
= \begin{pmatrix} \bar{b}_0\\ \bar{b}^{I_{\text{axes}}^1}\\ \vdots\\ \bar{b}^{I_{\text{axes}}^s} \end{pmatrix}.
$$

On the one hand, the value of the objective function is independent of the value of $p_0$ (recall that $f_0 = 0$); on the other hand, $p_0$ has a nonzero coefficient only in the first constraint of (25). Considering these, we can optimize the problem by taking into account only the variables corresponding to $I_{\text{axes}}^1, \dots, I_{\text{axes}}^s$, and then calculate the value of $p_0$ by solving the equation of the first constraint, i.e.,

(26)
$$\max(\min)\ f^T_{(I_{\text{axes}}^1, \dots, I_{\text{axes}}^s)}\, p_{(I_{\text{axes}}^1, \dots, I_{\text{axes}}^s)}$$
subject to
$$\hat{A}^{(-0, -I_{\text{int}})}_{(I_{\text{axes}}^1, \dots, I_{\text{axes}}^s)}\, p_{(I_{\text{axes}}^1, \dots, I_{\text{axes}}^s)} = \bar{b}^{(-0, -I_{\text{int}})},$$
$p_{(I_{\text{axes}}^1, \dots, I_{\text{axes}}^s)}$ = the corresponding part of a $Z_I$-type basic solution,

(27)
$$p_0 = \bar{b}_0 - (1, \dots, 1)\, p_{(I_{\text{axes}}^1, \dots, I_{\text{axes}}^s)}.$$
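The two "fixed" computations above can be sketched in a few lines: the interior components come from one linear solve with the square minor $\hat{A}^{I_{\text{int}}}_{I_{\text{int}}}$, independently of the choice of the sets $K_j$, and afterwards $p_0$ is recovered from the first constraint. The small system below is invented purely for illustration.

```python
import numpy as np

# Invented 2x2 stand-ins for the square minor A^{I_int}_{I_int} and the
# right-hand side mu^{I_int}; in the paper these come from the mixed-moment
# rows and columns of the reordered constraint matrix.
A_int = np.array([[1.0, 2.0],
                  [3.0, 4.0]])
mu_int = np.array([5.0, 6.0])
p_int = np.linalg.solve(A_int, mu_int)   # constant across all Z_I-type bases

# After the axis subproblems are solved, p_0 comes from the first constraint:
b0_bar = 1.0                             # stand-in for the first entry of b-bar
p_axes = np.array([0.2, 0.3])            # stand-in for the optimized axis part
p0 = b0_bar - p_axes.sum()
```

Only the axis variables remain to be optimized, which is exactly what makes the per-variable decomposition of the next page possible.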

Rewriting the objective function and the constraints of (26) in matrix form, skipping the minors containing only zeros:

(28)
$$
\begin{pmatrix}
\hat{A}^{I_{\text{axes}}^1}_{I_{\text{axes}}^1} & & \\
 & \ddots & \\
 & & \hat{A}^{I_{\text{axes}}^s}_{I_{\text{axes}}^s}
\end{pmatrix}
\begin{pmatrix} p_{I_{\text{axes}}^1}\\ \vdots\\ p_{I_{\text{axes}}^s} \end{pmatrix}
= \begin{pmatrix} \bar{b}^{I_{\text{axes}}^1}\\ \vdots\\ \bar{b}^{I_{\text{axes}}^s} \end{pmatrix},
\qquad
f^T = \bigl(f^T_{I_{\text{axes}}^1}, \dots, f^T_{I_{\text{axes}}^s}\bigr).
$$

It can be seen that problem (26) splits up into the following type of smaller subproblems:

(29)
$$\max(\min)\ f^T_{I_{\text{axes}}^j}\, p_{I_{\text{axes}}^j}$$
subject to
$$\hat{A}^{I_{\text{axes}}^j}_{I_{\text{axes}}^j}\, p_{I_{\text{axes}}^j} = \bar{b}^{I_{\text{axes}}^j},$$
$p_{I_{\text{axes}}^j}$ = the corresponding part of a $Z_I$-type basic solution, $j = 1, \dots, s$.

The last constraint above means that the subscript set of the basic variables of $p_{I_{\text{axes}}^j}$ is the union of the part of $I_0$ which contains the related axis and the set $I_j$, which is characterized by $K_j$.

So far we have explored the structure of problem (12). Taking advantage of the fact that the best basic solution components can be found independently on each axis, we can now write up the related new method. The steps of the method are the following:

(i) Convert the problem into the equivalent form which satisfies (13) and (14). Then we solve this equivalent problem in the following way.

(ii) From equation (21) we know that $p_{I_{\text{int}}} = \bigl(\hat{A}^{I_{\text{int}}}_{I_{\text{int}}}\bigr)^{-1} \mu^{I_{\text{int}}}$.

(iii) Using the definition in (22): $\bar{b} = b - \hat{A}_{I_{\text{int}}}\, p_{I_{\text{int}}}$. Then we can write up the problems (29).

(iv) We solve the problems (29) for $j = 1, \dots, s$. Let the optimal solutions be denoted by $p^{\text{opt}}_{I_{\text{axes}}^j}$, $j = 1, \dots, s$.

(v) From (27): $p^{\text{opt}}_0 = \bar{b}_0 - (1, \dots, 1)\, p^{\text{opt}}_{(I_{\text{axes}}^1, \dots, I_{\text{axes}}^s)}$.

(vi) The other variables are zero; thus we have obtained the optimal solution of (12):
$$(p^{\text{opt}})^T = \Bigl(p^{\text{opt}}_0,\ \bigl(p^{\text{opt}}_{(I_{\text{axes}}^1, \dots, I_{\text{axes}}^s)}\bigr)^T,\ p_{I_{\text{int}}}^T,\ 0, \dots, 0\Bigr).$$

(vii) This solution is also an optimal solution of the original problem; only the related objective function value will be greater by $f(z_{10}, \dots, z_{s0})$.

3. Numerical Examples

In this section we present the efficiency of the above method. For the sake of simplicity we restrict ourselves to the bivariate case. In the first part, some examples of the paper of Mádi-Nagy and Prékopa [1] are solved in several ways. We give the best lower and upper bounds of the Min and Max Algorithms and the related CPU times (denoted by $CPU_f$ in the tables) when we simply calculate all objective function values corresponding to the bases of the structures and then choose the best one among them (as was done in former works). We also give the CPU times of the new method ($CPU_n$), applying it to the structures of the Min and Max Algorithms. Comparing the running times, we will see how much faster and more effective the method of this paper is. We also present the results of a numerically stable dual method to show that the (not sharp) bounds of the Min and Max Algorithms give quite close results to the sharp bounds, within a much shorter running time. In the second part we show some examples of larger problems, where the dual method and the former method for the Min and Max Algorithms cannot give results within a reasonable time. Fortunately, the new method turns out to be fast enough to yield usable bounds.

We use programs written in Wolfram's Mathematica. In connection with the Min and Max Algorithms, on the one hand we test all possible bases that we can obtain this way and choose the best ones (former method), and on the other hand we also find the best bounds by the new method. In addition, we execute the dual algorithm starting from the bases obtained by the use of Theorem 1 (Theorem 2), where $K_j = \{m, \dots, m_j\}$ ($K_j = \{n_j - m_j, \dots, n_j - m\}$), $j = 1, 2$. We use Bland's rule to avoid cycling.

Example 1. The problem is taken from Prékopa, Vizvári and Regős [5]. A collection of events is subdivided into two equal-sized groups; $X_j$ equals the number of events that occur in the $j$-th group, $j = 1, 2$, so that the support is $Z_1 \times Z_2 = \{0, 1, \dots, n_1\} \times \{0, 1, \dots, n_2\}$. We want to find bounds for the probability that at least one of the events occurs, i.e.,

$$P(X_1 + X_2 \ge 1) = E\bigl[f(X_1, X_2)\bigr],$$
where
(30)
$$f(z_1, z_2) = \begin{cases} 0 & \text{if } (z_1, z_2) = (0, 0),\\ 1 & \text{otherwise.} \end{cases}$$

Prékopa [3] has shown that if $m+1$ is even (odd), then all divided differences of (30) of total order $m+1$ are nonpositive (nonnegative). Consider the cross binomial moments of order $(\alpha_1, \dots, \alpha_s)$ ($\alpha_1, \dots, \alpha_s$ are nonnegative integers):

(31)
$$S_{\alpha_1 \dots \alpha_s} = E\left[\binom{X_1}{\alpha_1} \cdots \binom{X_s}{\alpha_s}\right].$$

We can formulate the related binomial moment problem as an LP problem very similarly to the power moment problem. It can be proved that, for a given objective function and support set, the power moment problem and the binomial moment problem can be transformed into each other by simple nonsingular transformations. This means that, considering the matrices of the equality constraints, there exists a nonsingular square matrix by which (and by whose inverse) the two coefficient matrices can be transformed into each other. This implies that our new method can also be applied to binomial moment problems; cf. Mádi-Nagy and Prékopa [1].
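The cross binomial moments defined above can be computed, for a known distribution, in direct analogy with the power moments; the following minimal sketch uses invented data:

```python
from math import comb

import numpy as np

def binomial_moment(Z1, Z2, P, a1, a2):
    """S_{a1 a2} = E[ C(X1, a1) * C(X2, a2) ] over a finite bivariate support."""
    return sum(comb(z1, a1) * comb(z2, a2) * P[i, j]
               for i, z1 in enumerate(Z1)
               for j, z2 in enumerate(Z2))

Z1 = Z2 = [0, 1, 2]
P = np.full((3, 3), 1.0 / 9.0)           # invented uniform distribution
S_00 = binomial_moment(Z1, Z2, P, 0, 0)  # total probability
S_11 = binomial_moment(Z1, Z2, P, 1, 1)  # equals E[X1 X2], since C(z, 1) = z
```

Replacing the powers $z^{\alpha}$ by the binomial coefficients $\binom{z}{\alpha}$ is exactly the nonsingular change of basis mentioned in the text.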

Suppose that we know the following cross binomial moments of the 1st and 2nd group. [Table of the known cross binomial moments; the numerical values are not recoverable in this copy.] That means that $m = 4$, $m_1 = m_2 = 6$.

Below we present the results. In the Lower (Upper) Bounds columns there are the best bounds of the Min (Max) Algorithm, while in the Dual Min/Max columns there are the sharp bounds obtained by the dual algorithm. CPU times are appended everywhere (in case of the Min and Max Algorithms for the former and for the new method, respectively).

[Table: Lower, $CPU_f$, $CPU_n$, Upper, $CPU_f$, $CPU_n$, Dual Min, CPU, Dual Max, CPU; the numerical values are not recoverable in this copy.]

From the table above we can see the main tendencies: the dual method yields the sharp results, but the bounds of the Min and Max Algorithms also give a good approximation, and we can find them within a much shorter time, especially by the use of our new method.

Example 2. The following problem is taken from Mádi-Nagy and Prékopa [1]. Consider the function $f(z_1, z_2) = e^{z_1/\cdot + z_2/\cdot + z_1 z_2/\cdot}$ (the denominators in the exponent are illegible in this copy), the support set $Z_1 \times Z_2$ of Example 1 and the same binomial moments as in Example 1. We have obtained the following results. In case of $m_1 = m_2 = 6$, $m = 3$:

[Table: Lower, $CPU_f$, $CPU_n$, Upper, $CPU_f$, $CPU_n$; the numerical values are not recoverable in this copy.]

[Table: Dual Min, CPU, Dual Max, CPU; the numerical values are not recoverable in this copy.]

In case of $m_1 = m_2 = 6$, $m = 4$:

[Table: Lower, $CPU_f$, $CPU_n$, Upper, $CPU_f$, $CPU_n$, Dual Min, CPU, Dual Max, CPU; the numerical values are not recoverable in this copy.]

The initial basis corresponding to the numerical value marked by the sign * was obtained by the Max Algorithm, where $(z_{10}, z_{11}, z_{12}) = (\cdot, 9, 8)$, $K_1 = \{4, 5, 6, 7\}$, $K_2 = \{\cdot, 6, 7, 8\}$ (the elements marked $\cdot$ are illegible in this copy).

In case of the next two examples we were unable to carry out the dual method by the use of CPLEX: it always reported infeasibility of the primal problem, even though the moments used in the problem allow for feasibility by construction. Generally, our experience shows that the most popular LP packages quite often fail to solve discrete moment problems because of numerical instability. This stresses the importance of using numerically very stable algorithms.

Example 3. This example is also taken from Mádi-Nagy and Prékopa [1]. Let $Z_1 = Z_2 = \{0, 1, \dots, 14\}$, $m = 6$, $m_1 = m_2 = 14$, and generate the moments by the uniform discrete distribution on $Z$. We obtain the moments $\mu_{ij}$. [Table of the moments $\mu_{ij}$; the numerical values are not recoverable in this copy.]

First, consider the bivariate utility function

(32)
$$f(z_1, z_2) = \log\bigl[(1 - e^{-\alpha z_1 + a})(1 - e^{-\beta z_2 + b})\bigr],$$

defined for $1 - e^{-\alpha z_1 + a} > 0$, $1 - e^{-\beta z_2 + b} > 0$, where $\alpha$, $\beta$ are positive constants.

It is easy to see that

$$\frac{\partial f}{\partial z_j} > 0, \quad \frac{\partial^2 f}{\partial z_j^2} < 0, \quad \frac{\partial^3 f}{\partial z_j^3} > 0, \quad \frac{\partial^4 f}{\partial z_j^4} < 0, \quad \frac{\partial^5 f}{\partial z_j^5} > 0, \dots, \qquad j = 1, 2,$$

and for the mixed derivatives

$$\frac{\partial^2 f}{\partial z_1 \partial z_2} \le 0, \quad \frac{\partial^3 f}{\partial z_1 \partial z_2^2} \ge 0, \quad \text{etc.}$$

All even (odd) total order derivatives of the function are nonpositive (nonnegative). If we restrict the definition of the function to $Z = Z_1 \times Z_2$, then it satisfies the conditions of Theorems 1 and 2. Assume that $\alpha = \beta = 1$ and $a = b = -1$. We get the following results:

[Table: Lower, $CPU_f$, $CPU_n$, Upper, $CPU_f$, $CPU_n$, Dual Min, CPU, Dual Max, CPU; the numerical values are not recoverable in this copy.]

Secondly, consider the function (30). Then the bounds are the following:

[Table: Lower, $CPU_f$, $CPU_n$, Upper, $CPU_f$, $CPU_n$, Dual Min, CPU, Dual Max, CPU; the numerical values are not recoverable in this copy.]

The sign – means that the dual algorithm does not solve the problem in acceptable time.

Finally, consider the function $f(z_1, z_2) = e^{z_1/5 + z_1 z_2/4 + z_2/5}$. We have obtained the results:

[Table: Lower, $CPU_f$, $CPU_n$, Upper, $CPU_f$, $CPU_n$, Dual Min, CPU, Dual Max, CPU; the numerical values are not recoverable in this copy.]
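The alternating sign pattern of such utility functions can be checked numerically through divided differences, which is the form in which Assumption 1 actually needs it. The sketch below (not the authors' code) does this for one factor of the utility function, with the assumed values $\alpha = 1$, $a = -1$:

```python
import math

def divided_difference(zs, fs):
    """f[z_0, ..., z_k] computed by the standard recursion."""
    if len(zs) == 1:
        return fs[0]
    left = divided_difference(zs[:-1], fs[:-1])
    right = divided_difference(zs[1:], fs[1:])
    return (right - left) / (zs[-1] - zs[0])

def g(z):
    # one factor of the utility function, assuming alpha = 1, a = -1
    return math.log(1.0 - math.exp(-z - 1.0))

Z = list(range(8))
vals = [g(z) for z in Z]
# Second-order divided differences are negative (g is concave); third-order
# ones are positive, matching the alternating sign pattern claimed above.
order2 = [divided_difference(Z[i:i + 3], vals[i:i + 3]) for i in range(6)]
order3 = [divided_difference(Z[i:i + 4], vals[i:i + 4]) for i in range(5)]
```

A $k$-th order divided difference has the sign of the $k$-th derivative at some intermediate point, so such a check is a quick sanity test before applying Theorems 1 and 2.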

Example 4. The last problem we recalculate from Mádi-Nagy and Prékopa [1] is the following. Let $Z_1 = Z_2 = \{0, 1, \dots, 19\}$, $m = 6$, $m_1 = m_2 = 14$. Consider the random variables $X$, $Y_1$, $Y_2$ having Poisson distributions with parameters $\lambda = 3, 4, 5$, respectively. We generate the moments of the random vector $\bigl(\min(X + Y_1, 19), \min(X + Y_2, 19)\bigr)$ and obtain the moments $\mu_{ij}$. [Table of the generated moments; the numerical values are not recoverable in this copy.]

Considering the bivariate utility function (32) with $\alpha = 0.75$, $\beta = 0.5$, $a = -1$, $b = -3$, we get the following results:

[Table: Lower, $CPU_f$, $CPU_n$, Upper, $CPU_f$, $CPU_n$, Dual Min, CPU, Dual Max, CPU; the numerical values are not recoverable in this copy.]

Considering the running times above, we can conclude that for problems of larger size the numerically very stable dual method cannot give a result within a reasonable time. However, we can give approximations based on the Min and Max Algorithms by the application of our new method. The following example is a numerical illustration of this; i.e., we bound problems of even larger size with the aid of the method of this paper.

Example 5. Let $Z_1 = Z_2 = \{0, 1, \dots, 20\}$, $m = 6$, $m_1 = m_2 = 14$. First, we generate the moments by the uniform discrete distribution on $Z$. We also consider the moments of the random vector $\bigl(\min(X + Y_1, 20), \min(X + Y_2, 20)\bigr)$, where $X$, $Y_1$, $Y_2$ are random variables having Poisson distributions with parameters $\lambda = 3, 4, 5$, respectively. The results of the new method, corresponding to the above moments in case of some functions, are shown below.

$\log\bigl[(1 - e^{-0.75 z_1 - 1})(1 - e^{-0.5 z_2 - 3})\bigr]$:

[Table: Moments (Uniform, Poisson), Lower, $CPU_n$, Upper, $CPU_n$; the numerical values are not recoverable in this copy.]

$e^{z_1/\cdot + z_1 z_2/5 + z_2/4}$ (the first denominator is illegible in this copy):

[Table: Moments (Uniform, Poisson), Lower, $CPU_n$, Upper, $CPU_n$; the numerical values are not recoverable in this copy.]

REFERENCES

[1] Mádi-Nagy, G. and Prékopa, A., On Multivariate Discrete Moment Problems and Their Applications to Bounding Expectations and Probabilities, Mathematics of Operations Research 29 (2004), no. 2, 229–258.

[2] Prékopa, A., Inequalities on Expectations Based on the Knowledge of Multivariate Moments, in: Shaked, M. and Tong, Y. L. (Eds.), Stochastic Inequalities, Institute of Mathematical Statistics, Lecture Notes – Monograph Series, Vol. 22 (1992).

[3] Prékopa, A., Bounds on Probabilities and Expectations Using Multivariate Moments of Discrete Distributions, Studia Scientiarum Mathematicarum Hungarica 34 (1998).

[4] Prékopa, A., On Multivariate Discrete Higher Order Convex Functions and their Applications, RUTCOR Research Report 39 (2000).

[5] Prékopa, A., Vizvári, B. and Regős, G., A Method of Disaggregation for Bounding Probabilities of Boolean Functions of Events, RUTCOR Research Report 1-97 (1997).

(Received October 9, 2003)

DEPARTMENT OF DIFFERENTIAL EQUATIONS
BUDAPEST UNIVERSITY OF TECHNOLOGY AND ECONOMICS
MŰEGYETEM RAKPART 3
BUDAPEST, H-1111
HUNGARY
gnagy@math.bme.hu


CO350 Linear Programming Chapter 6: The Simplex Method CO350 Linear Programming Chapter 6: The Simplex Method 8th June 2005 Chapter 6: The Simplex Method 1 Minimization Problem ( 6.5) We can solve minimization problems by transforming it into a maximization

More information

ORIE 6300 Mathematical Programming I August 25, Lecture 2

ORIE 6300 Mathematical Programming I August 25, Lecture 2 ORIE 6300 Mathematical Programming I August 25, 2016 Lecturer: Damek Davis Lecture 2 Scribe: Johan Bjorck Last time, we considered the dual of linear programs in our basic form: max(c T x : Ax b). We also

More information

R u t c o r Research R e p o r t. The Optimization of the Move of Robot Arm by Benders Decomposition. Zsolt Robotka a RRR , DECEMBER 2005

R u t c o r Research R e p o r t. The Optimization of the Move of Robot Arm by Benders Decomposition. Zsolt Robotka a RRR , DECEMBER 2005 R u t c o r Research R e p o r t The Optimization of the Move of Robot Arm by Benders Decomposition Zsolt Robotka a Béla Vizvári b RRR 43-2005, DECEMBER 2005 RUTCOR Rutgers Center for Operations Research

More information

Motivating examples Introduction to algorithms Simplex algorithm. On a particular example General algorithm. Duality An application to game theory

Motivating examples Introduction to algorithms Simplex algorithm. On a particular example General algorithm. Duality An application to game theory Instructor: Shengyu Zhang 1 LP Motivating examples Introduction to algorithms Simplex algorithm On a particular example General algorithm Duality An application to game theory 2 Example 1: profit maximization

More information

Solving Dual Problems

Solving Dual Problems Lecture 20 Solving Dual Problems We consider a constrained problem where, in addition to the constraint set X, there are also inequality and linear equality constraints. Specifically the minimization problem

More information

Spring 2017 CO 250 Course Notes TABLE OF CONTENTS. richardwu.ca. CO 250 Course Notes. Introduction to Optimization

Spring 2017 CO 250 Course Notes TABLE OF CONTENTS. richardwu.ca. CO 250 Course Notes. Introduction to Optimization Spring 2017 CO 250 Course Notes TABLE OF CONTENTS richardwu.ca CO 250 Course Notes Introduction to Optimization Kanstantsin Pashkovich Spring 2017 University of Waterloo Last Revision: March 4, 2018 Table

More information

Lecture 5. Theorems of Alternatives and Self-Dual Embedding

Lecture 5. Theorems of Alternatives and Self-Dual Embedding IE 8534 1 Lecture 5. Theorems of Alternatives and Self-Dual Embedding IE 8534 2 A system of linear equations may not have a solution. It is well known that either Ax = c has a solution, or A T y = 0, c

More information

Four new upper bounds for the stability number of a graph

Four new upper bounds for the stability number of a graph Four new upper bounds for the stability number of a graph Miklós Ujvári Abstract. In 1979, L. Lovász defined the theta number, a spectral/semidefinite upper bound on the stability number of a graph, which

More information

U.C. Berkeley CS294: Beyond Worst-Case Analysis Handout 12 Luca Trevisan October 3, 2017

U.C. Berkeley CS294: Beyond Worst-Case Analysis Handout 12 Luca Trevisan October 3, 2017 U.C. Berkeley CS94: Beyond Worst-Case Analysis Handout 1 Luca Trevisan October 3, 017 Scribed by Maxim Rabinovich Lecture 1 In which we begin to prove that the SDP relaxation exactly recovers communities

More information

Supplementary lecture notes on linear programming. We will present an algorithm to solve linear programs of the form. maximize.

Supplementary lecture notes on linear programming. We will present an algorithm to solve linear programs of the form. maximize. Cornell University, Fall 2016 Supplementary lecture notes on linear programming CS 6820: Algorithms 26 Sep 28 Sep 1 The Simplex Method We will present an algorithm to solve linear programs of the form

More information

CS261: A Second Course in Algorithms Lecture #9: Linear Programming Duality (Part 2)

CS261: A Second Course in Algorithms Lecture #9: Linear Programming Duality (Part 2) CS261: A Second Course in Algorithms Lecture #9: Linear Programming Duality (Part 2) Tim Roughgarden February 2, 2016 1 Recap This is our third lecture on linear programming, and the second on linear programming

More information

F 1 F 2 Daily Requirement Cost N N N

F 1 F 2 Daily Requirement Cost N N N Chapter 5 DUALITY 5. The Dual Problems Every linear programming problem has associated with it another linear programming problem and that the two problems have such a close relationship that whenever

More information

The Simplex Algorithm

The Simplex Algorithm 8.433 Combinatorial Optimization The Simplex Algorithm October 6, 8 Lecturer: Santosh Vempala We proved the following: Lemma (Farkas). Let A R m n, b R m. Exactly one of the following conditions is true:.

More information

Introduction to Mathematical Programming IE406. Lecture 10. Dr. Ted Ralphs

Introduction to Mathematical Programming IE406. Lecture 10. Dr. Ted Ralphs Introduction to Mathematical Programming IE406 Lecture 10 Dr. Ted Ralphs IE406 Lecture 10 1 Reading for This Lecture Bertsimas 4.1-4.3 IE406 Lecture 10 2 Duality Theory: Motivation Consider the following

More information

SIMPLEX LIKE (aka REDUCED GRADIENT) METHODS. REDUCED GRADIENT METHOD (Wolfe)

SIMPLEX LIKE (aka REDUCED GRADIENT) METHODS. REDUCED GRADIENT METHOD (Wolfe) 19 SIMPLEX LIKE (aka REDUCED GRADIENT) METHODS The REDUCED GRADIENT algorithm and its variants such as the CONVEX SIMPLEX METHOD (CSM) and the GENERALIZED REDUCED GRADIENT (GRG) algorithm are approximation

More information

Sensitivity Analysis and Duality

Sensitivity Analysis and Duality Sensitivity Analysis and Duality Part II Duality Based on Chapter 6 Introduction to Mathematical Programming: Operations Research, Volume 1 4th edition, by Wayne L. Winston and Munirpallam Venkataramanan

More information

CLASSICAL FORMS OF LINEAR PROGRAMS, CONVERSION TECHNIQUES, AND SOME NOTATION

CLASSICAL FORMS OF LINEAR PROGRAMS, CONVERSION TECHNIQUES, AND SOME NOTATION (Revised) October 12, 2004 CLASSICAL FORMS OF LINEAR PROGRAMS, CONVERSION TECHNIQUES, AND SOME NOTATION Linear programming is the minimization (maximization) of a linear objective, say c1x 1 + c2x 2 +

More information

Integer Programming. Wolfram Wiesemann. December 6, 2007

Integer Programming. Wolfram Wiesemann. December 6, 2007 Integer Programming Wolfram Wiesemann December 6, 2007 Contents of this Lecture Revision: Mixed Integer Programming Problems Branch & Bound Algorithms: The Big Picture Solving MIP s: Complete Enumeration

More information

Linear Programming Duality P&S Chapter 3 Last Revised Nov 1, 2004

Linear Programming Duality P&S Chapter 3 Last Revised Nov 1, 2004 Linear Programming Duality P&S Chapter 3 Last Revised Nov 1, 2004 1 In this section we lean about duality, which is another way to approach linear programming. In particular, we will see: How to define

More information

1 Review Session. 1.1 Lecture 2

1 Review Session. 1.1 Lecture 2 1 Review Session Note: The following lists give an overview of the material that was covered in the lectures and sections. Your TF will go through these lists. If anything is unclear or you have questions

More information

Farkas Lemma, Dual Simplex and Sensitivity Analysis

Farkas Lemma, Dual Simplex and Sensitivity Analysis Summer 2011 Optimization I Lecture 10 Farkas Lemma, Dual Simplex and Sensitivity Analysis 1 Farkas Lemma Theorem 1. Let A R m n, b R m. Then exactly one of the following two alternatives is true: (i) x

More information

Solution of Probabilistic Constrained Stochastic Programming Problems with Poisson, Binomial and Geometric Random Variables

Solution of Probabilistic Constrained Stochastic Programming Problems with Poisson, Binomial and Geometric Random Variables R u t c o r Research R e p o r t Solution of Probabilistic Constrained Stochastic Programming Problems with Poisson, Binomial and Geometric Random Variables Tongyin Liu a András Prékopa b RRR 29-2005,

More information

Introduction to optimization

Introduction to optimization Introduction to optimization Geir Dahl CMA, Dept. of Mathematics and Dept. of Informatics University of Oslo 1 / 24 The plan 1. The basic concepts 2. Some useful tools (linear programming = linear optimization)

More information

Ω R n is called the constraint set or feasible set. x 1

Ω R n is called the constraint set or feasible set. x 1 1 Chapter 5 Linear Programming (LP) General constrained optimization problem: minimize subject to f(x) x Ω Ω R n is called the constraint set or feasible set. any point x Ω is called a feasible point We

More information

+ 5x 2. = x x. + x 2. Transform the original system into a system x 2 = x x 1. = x 1

+ 5x 2. = x x. + x 2. Transform the original system into a system x 2 = x x 1. = x 1 University of California, Davis Department of Agricultural and Resource Economics ARE 5 Optimization with Economic Applications Lecture Notes Quirino Paris The Pivot Method for Solving Systems of Equations...................................

More information

IE 5531: Engineering Optimization I

IE 5531: Engineering Optimization I IE 5531: Engineering Optimization I Lecture 7: Duality and applications Prof. John Gunnar Carlsson September 29, 2010 Prof. John Gunnar Carlsson IE 5531: Engineering Optimization I September 29, 2010 1

More information

15-780: LinearProgramming

15-780: LinearProgramming 15-780: LinearProgramming J. Zico Kolter February 1-3, 2016 1 Outline Introduction Some linear algebra review Linear programming Simplex algorithm Duality and dual simplex 2 Outline Introduction Some linear

More information

"SYMMETRIC" PRIMAL-DUAL PAIR

SYMMETRIC PRIMAL-DUAL PAIR "SYMMETRIC" PRIMAL-DUAL PAIR PRIMAL Minimize cx DUAL Maximize y T b st Ax b st A T y c T x y Here c 1 n, x n 1, b m 1, A m n, y m 1, WITH THE PRIMAL IN STANDARD FORM... Minimize cx Maximize y T b st Ax

More information

MAT016: Optimization

MAT016: Optimization MAT016: Optimization M.El Ghami e-mail: melghami@ii.uib.no URL: http://www.ii.uib.no/ melghami/ March 29, 2011 Outline for today The Simplex method in matrix notation Managing a production facility The

More information

Introduction to Mathematical Programming

Introduction to Mathematical Programming Introduction to Mathematical Programming Ming Zhong Lecture 22 October 22, 2018 Ming Zhong (JHU) AMS Fall 2018 1 / 16 Table of Contents 1 The Simplex Method, Part II Ming Zhong (JHU) AMS Fall 2018 2 /

More information

MATH2070 Optimisation

MATH2070 Optimisation MATH2070 Optimisation Linear Programming Semester 2, 2012 Lecturer: I.W. Guo Lecture slides courtesy of J.R. Wishart Review The standard Linear Programming (LP) Problem Graphical method of solving LP problem

More information

Operations Research Lecture 6: Integer Programming

Operations Research Lecture 6: Integer Programming Operations Research Lecture 6: Integer Programming Notes taken by Kaiquan Xu@Business School, Nanjing University May 12th 2016 1 Integer programming (IP) formulations The integer programming (IP) is the

More information

Bounding in Multi-Stage. Stochastic Programming. Problems. Olga Fiedler a Andras Prekopa b

Bounding in Multi-Stage. Stochastic Programming. Problems. Olga Fiedler a Andras Prekopa b R utcor Research R eport Bounding in Multi-Stage Stochastic Programming Problems Olga Fiedler a Andras Prekopa b RRR 24-95, June 1995 RUTCOR Rutgers Center for Operations Research Rutgers University P.O.

More information

MATH 445/545 Test 1 Spring 2016

MATH 445/545 Test 1 Spring 2016 MATH 445/545 Test Spring 06 Note the problems are separated into two sections a set for all students and an additional set for those taking the course at the 545 level. Please read and follow all of these

More information

IE 400: Principles of Engineering Management. Simplex Method Continued

IE 400: Principles of Engineering Management. Simplex Method Continued IE 400: Principles of Engineering Management Simplex Method Continued 1 Agenda Simplex for min problems Alternative optimal solutions Unboundedness Degeneracy Big M method Two phase method 2 Simplex for

More information

Lecture 2: The Simplex method

Lecture 2: The Simplex method Lecture 2 1 Linear and Combinatorial Optimization Lecture 2: The Simplex method Basic solution. The Simplex method (standardform, b>0). 1. Repetition of basic solution. 2. One step in the Simplex algorithm.

More information

Ann-Brith Strömberg. Lecture 4 Linear and Integer Optimization with Applications 1/10

Ann-Brith Strömberg. Lecture 4 Linear and Integer Optimization with Applications 1/10 MVE165/MMG631 Linear and Integer Optimization with Applications Lecture 4 Linear programming: degeneracy; unbounded solution; infeasibility; starting solutions Ann-Brith Strömberg 2017 03 28 Lecture 4

More information

CO350 Linear Programming Chapter 8: Degeneracy and Finite Termination

CO350 Linear Programming Chapter 8: Degeneracy and Finite Termination CO350 Linear Programming Chapter 8: Degeneracy and Finite Termination 27th June 2005 Chapter 8: Finite Termination 1 The perturbation method Recap max c T x (P ) s.t. Ax = b x 0 Assumption: B is a feasible

More information

Cowles Foundation for Research in Economics at Yale University

Cowles Foundation for Research in Economics at Yale University Cowles Foundation for Research in Economics at Yale University Cowles Foundation Discussion Paper No. 1904 Afriat from MaxMin John D. Geanakoplos August 2013 An author index to the working papers in the

More information

Understanding the Simplex algorithm. Standard Optimization Problems.

Understanding the Simplex algorithm. Standard Optimization Problems. Understanding the Simplex algorithm. Ma 162 Spring 2011 Ma 162 Spring 2011 February 28, 2011 Standard Optimization Problems. A standard maximization problem can be conveniently described in matrix form

More information

4.6 Linear Programming duality

4.6 Linear Programming duality 4.6 Linear Programming duality To any minimization (maximization) LP we can associate a closely related maximization (minimization) LP Different spaces and objective functions but in general same optimal

More information

Chapter 5 Linear Programming (LP)

Chapter 5 Linear Programming (LP) Chapter 5 Linear Programming (LP) General constrained optimization problem: minimize f(x) subject to x R n is called the constraint set or feasible set. any point x is called a feasible point We consider

More information

An introductory example

An introductory example CS1 Lecture 9 An introductory example Suppose that a company that produces three products wishes to decide the level of production of each so as to maximize profits. Let x 1 be the amount of Product 1

More information

Lecture 1. 1 Conic programming. MA 796S: Convex Optimization and Interior Point Methods October 8, Consider the conic program. min.

Lecture 1. 1 Conic programming. MA 796S: Convex Optimization and Interior Point Methods October 8, Consider the conic program. min. MA 796S: Convex Optimization and Interior Point Methods October 8, 2007 Lecture 1 Lecturer: Kartik Sivaramakrishnan Scribe: Kartik Sivaramakrishnan 1 Conic programming Consider the conic program min s.t.

More information

A Review of Linear Programming

A Review of Linear Programming A Review of Linear Programming Instructor: Farid Alizadeh IEOR 4600y Spring 2001 February 14, 2001 1 Overview In this note we review the basic properties of linear programming including the primal simplex

More information

15. Conic optimization

15. Conic optimization L. Vandenberghe EE236C (Spring 216) 15. Conic optimization conic linear program examples modeling duality 15-1 Generalized (conic) inequalities Conic inequality: a constraint x K where K is a convex cone

More information

Chapter 1: Linear Programming

Chapter 1: Linear Programming Chapter 1: Linear Programming Math 368 c Copyright 2013 R Clark Robinson May 22, 2013 Chapter 1: Linear Programming 1 Max and Min For f : D R n R, f (D) = {f (x) : x D } is set of attainable values of

More information

A LINEAR PROGRAMMING BASED ANALYSIS OF THE CP-RANK OF COMPLETELY POSITIVE MATRICES

A LINEAR PROGRAMMING BASED ANALYSIS OF THE CP-RANK OF COMPLETELY POSITIVE MATRICES Int J Appl Math Comput Sci, 00, Vol 1, No 1, 5 1 A LINEAR PROGRAMMING BASED ANALYSIS OF HE CP-RANK OF COMPLEELY POSIIVE MARICES YINGBO LI, ANON KUMMER ANDREAS FROMMER Department of Electrical and Information

More information

Section Notes 9. IP: Cutting Planes. Applied Math 121. Week of April 12, 2010

Section Notes 9. IP: Cutting Planes. Applied Math 121. Week of April 12, 2010 Section Notes 9 IP: Cutting Planes Applied Math 121 Week of April 12, 2010 Goals for the week understand what a strong formulations is. be familiar with the cutting planes algorithm and the types of cuts

More information

Facets for Node-Capacitated Multicut Polytopes from Path-Block Cycles with Two Common Nodes

Facets for Node-Capacitated Multicut Polytopes from Path-Block Cycles with Two Common Nodes Facets for Node-Capacitated Multicut Polytopes from Path-Block Cycles with Two Common Nodes Michael M. Sørensen July 2016 Abstract Path-block-cycle inequalities are valid, and sometimes facet-defining,

More information

PROGRAMMING UNDER PROBABILISTIC CONSTRAINTS WITH A RANDOM TECHNOLOGY MATRIX

PROGRAMMING UNDER PROBABILISTIC CONSTRAINTS WITH A RANDOM TECHNOLOGY MATRIX Math. Operationsforsch. u. Statist. 5 974, Heft 2. pp. 09 6. PROGRAMMING UNDER PROBABILISTIC CONSTRAINTS WITH A RANDOM TECHNOLOGY MATRIX András Prékopa Technological University of Budapest and Computer

More information

Relation of Pure Minimum Cost Flow Model to Linear Programming

Relation of Pure Minimum Cost Flow Model to Linear Programming Appendix A Page 1 Relation of Pure Minimum Cost Flow Model to Linear Programming The Network Model The network pure minimum cost flow model has m nodes. The external flows given by the vector b with m

More information

Appendix C Vector and matrix algebra

Appendix C Vector and matrix algebra Appendix C Vector and matrix algebra Concepts Scalars Vectors, rows and columns, matrices Adding and subtracting vectors and matrices Multiplying them by scalars Products of vectors and matrices, scalar

More information

The dual simplex method with bounds

The dual simplex method with bounds The dual simplex method with bounds Linear programming basis. Let a linear programming problem be given by min s.t. c T x Ax = b x R n, (P) where we assume A R m n to be full row rank (we will see in the

More information

Chapter 2: Linear Programming Basics. (Bertsimas & Tsitsiklis, Chapter 1)

Chapter 2: Linear Programming Basics. (Bertsimas & Tsitsiklis, Chapter 1) Chapter 2: Linear Programming Basics (Bertsimas & Tsitsiklis, Chapter 1) 33 Example of a Linear Program Remarks. minimize 2x 1 x 2 + 4x 3 subject to x 1 + x 2 + x 4 2 3x 2 x 3 = 5 x 3 + x 4 3 x 1 0 x 3

More information

Optimization (168) Lecture 7-8-9

Optimization (168) Lecture 7-8-9 Optimization (168) Lecture 7-8-9 Jesús De Loera UC Davis, Mathematics Wednesday, April 2, 2012 1 DEGENERACY IN THE SIMPLEX METHOD 2 DEGENERACY z =2x 1 x 2 + 8x 3 x 4 =1 2x 3 x 5 =3 2x 1 + 4x 2 6x 3 x 6

More information

2.098/6.255/ Optimization Methods Practice True/False Questions

2.098/6.255/ Optimization Methods Practice True/False Questions 2.098/6.255/15.093 Optimization Methods Practice True/False Questions December 11, 2009 Part I For each one of the statements below, state whether it is true or false. Include a 1-3 line supporting sentence

More information

Integer programming: an introduction. Alessandro Astolfi

Integer programming: an introduction. Alessandro Astolfi Integer programming: an introduction Alessandro Astolfi Outline Introduction Examples Methods for solving ILP Optimization on graphs LP problems with integer solutions Summary Introduction Integer programming

More information

Algorithms and Theory of Computation. Lecture 13: Linear Programming (2)

Algorithms and Theory of Computation. Lecture 13: Linear Programming (2) Algorithms and Theory of Computation Lecture 13: Linear Programming (2) Xiaohui Bei MAS 714 September 25, 2018 Nanyang Technological University MAS 714 September 25, 2018 1 / 15 LP Duality Primal problem

More information

II. Analysis of Linear Programming Solutions

II. Analysis of Linear Programming Solutions Optimization Methods Draft of August 26, 2005 II. Analysis of Linear Programming Solutions Robert Fourer Department of Industrial Engineering and Management Sciences Northwestern University Evanston, Illinois

More information

Duality Theory, Optimality Conditions

Duality Theory, Optimality Conditions 5.1 Duality Theory, Optimality Conditions Katta G. Murty, IOE 510, LP, U. Of Michigan, Ann Arbor We only consider single objective LPs here. Concept of duality not defined for multiobjective LPs. Every

More information

TRINITY COLLEGE DUBLIN THE UNIVERSITY OF DUBLIN. School of Mathematics

TRINITY COLLEGE DUBLIN THE UNIVERSITY OF DUBLIN. School of Mathematics JS and SS Mathematics JS and SS TSM Mathematics TRINITY COLLEGE DUBLIN THE UNIVERSITY OF DUBLIN School of Mathematics MA3484 Methods of Mathematical Economics Trinity Term 2015 Saturday GOLDHALL 09.30

More information

Convex Optimization and SVM

Convex Optimization and SVM Convex Optimization and SVM Problem 0. Cf lecture notes pages 12 to 18. Problem 1. (i) A slab is an intersection of two half spaces, hence convex. (ii) A wedge is an intersection of two half spaces, hence

More information

An Alternative Proof of Primitivity of Indecomposable Nonnegative Matrices with a Positive Trace

An Alternative Proof of Primitivity of Indecomposable Nonnegative Matrices with a Positive Trace An Alternative Proof of Primitivity of Indecomposable Nonnegative Matrices with a Positive Trace Takao Fujimoto Abstract. This research memorandum is aimed at presenting an alternative proof to a well

More information

CS 6820 Fall 2014 Lectures, October 3-20, 2014

CS 6820 Fall 2014 Lectures, October 3-20, 2014 Analysis of Algorithms Linear Programming Notes CS 6820 Fall 2014 Lectures, October 3-20, 2014 1 Linear programming The linear programming (LP) problem is the following optimization problem. We are given

More information

Global Optimization of Polynomials

Global Optimization of Polynomials Semidefinite Programming Lecture 9 OR 637 Spring 2008 April 9, 2008 Scribe: Dennis Leventhal Global Optimization of Polynomials Recall we were considering the problem min z R n p(z) where p(z) is a degree

More information

Lecture 7: Semidefinite programming

Lecture 7: Semidefinite programming CS 766/QIC 820 Theory of Quantum Information (Fall 2011) Lecture 7: Semidefinite programming This lecture is on semidefinite programming, which is a powerful technique from both an analytic and computational

More information

4. Duality Duality 4.1 Duality of LPs and the duality theorem. min c T x x R n, c R n. s.t. ai Tx = b i i M a i R n

4. Duality Duality 4.1 Duality of LPs and the duality theorem. min c T x x R n, c R n. s.t. ai Tx = b i i M a i R n 2 4. Duality of LPs and the duality theorem... 22 4.2 Complementary slackness... 23 4.3 The shortest path problem and its dual... 24 4.4 Farkas' Lemma... 25 4.5 Dual information in the tableau... 26 4.6

More information

Discrete Optimization 23

Discrete Optimization 23 Discrete Optimization 23 2 Total Unimodularity (TU) and Its Applications In this section we will discuss the total unimodularity theory and its applications to flows in networks. 2.1 Total Unimodularity:

More information

LS.1 Review of Linear Algebra

LS.1 Review of Linear Algebra LS. LINEAR SYSTEMS LS.1 Review of Linear Algebra In these notes, we will investigate a way of handling a linear system of ODE s directly, instead of using elimination to reduce it to a single higher-order

More information

CHAPTER 2. The Simplex Method

CHAPTER 2. The Simplex Method CHAPTER 2 The Simplex Method In this chapter we present the simplex method as it applies to linear programming problems in standard form. 1. An Example We first illustrate how the simplex method works

More information

Here each term has degree 2 (the sum of exponents is 2 for all summands). A quadratic form of three variables looks as

Here each term has degree 2 (the sum of exponents is 2 for all summands). A quadratic form of three variables looks as Reading [SB], Ch. 16.1-16.3, p. 375-393 1 Quadratic Forms A quadratic function f : R R has the form f(x) = a x. Generalization of this notion to two variables is the quadratic form Q(x 1, x ) = a 11 x

More information

A primal-simplex based Tardos algorithm

A primal-simplex based Tardos algorithm A primal-simplex based Tardos algorithm Shinji Mizuno a, Noriyoshi Sukegawa a, and Antoine Deza b a Graduate School of Decision Science and Technology, Tokyo Institute of Technology, 2-12-1-W9-58, Oo-Okayama,

More information

Lectures 9 and 10: Constrained optimization problems and their optimality conditions

Lectures 9 and 10: Constrained optimization problems and their optimality conditions Lectures 9 and 10: Constrained optimization problems and their optimality conditions Coralia Cartis, Mathematical Institute, University of Oxford C6.2/B2: Continuous Optimization Lectures 9 and 10: Constrained

More information

Linear Programming in Matrix Form

Linear Programming in Matrix Form Linear Programming in Matrix Form Appendix B We first introduce matrix concepts in linear programming by developing a variation of the simplex method called the revised simplex method. This algorithm,

More information

A Parametric Simplex Algorithm for Linear Vector Optimization Problems

A Parametric Simplex Algorithm for Linear Vector Optimization Problems A Parametric Simplex Algorithm for Linear Vector Optimization Problems Birgit Rudloff Firdevs Ulus Robert Vanderbei July 9, 2015 Abstract In this paper, a parametric simplex algorithm for solving linear

More information

Introduction to linear programming

Introduction to linear programming Chapter 2 Introduction to linear programming 2.1 Single-objective optimization problem We study problems of the following form: Given a set S and a function f : S R, find, if possible, an element x S that

More information

Structured Problems and Algorithms

Structured Problems and Algorithms Integer and quadratic optimization problems Dept. of Engg. and Comp. Sci., Univ. of Cal., Davis Aug. 13, 2010 Table of contents Outline 1 2 3 Benefits of Structured Problems Optimization problems may become

More information

CS261: A Second Course in Algorithms Lecture #8: Linear Programming Duality (Part 1)

CS261: A Second Course in Algorithms Lecture #8: Linear Programming Duality (Part 1) CS261: A Second Course in Algorithms Lecture #8: Linear Programming Duality (Part 1) Tim Roughgarden January 28, 2016 1 Warm-Up This lecture begins our discussion of linear programming duality, which is

More information

3.10 Lagrangian relaxation

3.10 Lagrangian relaxation 3.10 Lagrangian relaxation Consider a generic ILP problem min {c t x : Ax b, Dx d, x Z n } with integer coefficients. Suppose Dx d are the complicating constraints. Often the linear relaxation and the

More information

Linear and Integer Programming - ideas

Linear and Integer Programming - ideas Linear and Integer Programming - ideas Paweł Zieliński Institute of Mathematics and Computer Science, Wrocław University of Technology, Poland http://www.im.pwr.wroc.pl/ pziel/ Toulouse, France 2012 Literature

More information

CHAPTER 10 Shape Preserving Properties of B-splines

CHAPTER 10 Shape Preserving Properties of B-splines CHAPTER 10 Shape Preserving Properties of B-splines In earlier chapters we have seen a number of examples of the close relationship between a spline function and its B-spline coefficients This is especially

More information

Note 3: LP Duality. If the primal problem (P) in the canonical form is min Z = n (1) then the dual problem (D) in the canonical form is max W = m (2)

Note 3: LP Duality. If the primal problem (P) in the canonical form is min Z = n (1) then the dual problem (D) in the canonical form is max W = m (2) Note 3: LP Duality If the primal problem (P) in the canonical form is min Z = n j=1 c j x j s.t. nj=1 a ij x j b i i = 1, 2,..., m (1) x j 0 j = 1, 2,..., n, then the dual problem (D) in the canonical

More information