INEXACT CUTS IN BENDERS' DECOMPOSITION∗

GOLBON ZAKERI, ANDREW B. PHILPOTT, AND DAVID M. RYAN†

Abstract. Benders' decomposition is a well-known technique for solving large linear programs with a special structure. In particular it is a popular technique for solving multi-stage stochastic linear programming problems. Early termination in the subproblems generated during Benders' decomposition (assuming dual feasibility) produces valid cuts which are inexact in the sense that they are not as constraining as cuts derived from an exact solution. We describe an inexact cut algorithm, prove its convergence under easily verifiable assumptions, and discuss a corresponding Dantzig-Wolfe decomposition algorithm. The paper is concluded with some computational results from applying the algorithm to a class of stochastic programming problems which arise in hydro-electric scheduling.

Key words. stochastic programming, Benders' decomposition, inexact cuts

AMS subject classifications. 90C15, 90C05, 90C06, 90C90

∗ This research has been supported by the New Zealand Public Good Science Fund, FRST contract 403.
† Operations Research Group, Department of Engineering Science, University of Auckland, Private Bag 92019, Auckland, New Zealand (g.zakeri@auckland.ac.nz).

1. Introduction. Many large linear programming problems have a block diagonal structure which makes them amenable to decomposition techniques such as Dantzig-Wolfe decomposition ([4], [5]) or its dual, Benders' decomposition [2]. The latter technique has become increasingly popular in stochastic linear programming, starting with the independent publication of the L-shaped method by Van Slyke and Wets [15] for two-stage stochastic linear programming. (The L-shaped method is often referred to as stochastic Benders' decomposition.) In this paper we shall be concerned with Benders' decomposition applied to linear programs of the form

P: minimize $c^\top x + q^\top y$
   subject to $Ax = b$,
              $Tx + Wy = h$,
              $x \ge 0$, $y \ge 0$.

If we define

\[ Q(x) = \min \{ q^\top y \mid Wy = h - Tx,\ y \ge 0 \}, \]

then P can be written

P: minimize $c^\top x + Q(x)$
   subject to $Ax = b$,
              $x \ge 0$.
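To make the role of the recourse function concrete, here is a minimal Python sketch (our illustration, not part of the paper) of evaluating $Q(x)$ with an off-the-shelf LP solver. The dense numpy data layout and the use of scipy's HiGHS interface are our own assumptions.

```python
import numpy as np
from scipy.optimize import linprog

def evaluate_Q(q, W, h, T, x):
    """Evaluate Q(x) = min{ q^T y : W y = h - T x, y >= 0 } at a trial x,
    returning the optimal value and the dual vector pi of the equality
    rows, from which a Benders cut  theta >= pi^T (h - T x)  is formed."""
    res = linprog(c=q, A_eq=W, b_eq=h - T @ x,
                  bounds=[(0, None)] * W.shape[1], method="highs")
    if res.status != 0:
        raise RuntimeError("SP(x) did not solve: " + res.message)
    pi = res.eqlin.marginals  # shadow prices of W y = h - T x; verify the
                              # solver's sign convention before reusing this
    return res.fun, pi
```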

Using this notation we can now write down the following algorithm.

Benders' Decomposition Algorithm
Set $i := 0$, $U_0 := \infty$, $L_0 := -\infty$, $F := \mathbb{R}^n \times \{\theta \mid \theta \ge l_0\}$.
While $U_i - L_i > 0$
1. Set $i := i + 1$.
2. Solve the master problem

   MP: minimize $c^\top x + \theta$
       subject to $Ax = b$, $(x, \theta) \in F$, $x \ge 0$,

   to give optimal primal variables $(x_i, \theta_i)$.
3. Set $L_i := c^\top x_i + \theta_i$.
4. Solve the subproblem

   SP($x_i$): minimize $q^\top y$
              subject to $Wy = h - Tx_i$, $y \ge 0$,

   to give optimal primal variables $y_i$ and dual variables $\pi_i$.
5. Set $U_i := \min\{U_{i-1},\ c^\top x_i + q^\top y_i\}$.
6. Set $F := F \cap \{(x, \theta) \mid \pi_i^\top (h - Tx) \le \theta\}$.

In the classical case the cut defined by Step 6 comes from an optimal basic feasible solution to the subproblem. Since there are a finite number of basis matrices for this problem, finite termination of the algorithm at the optimal solution can be guaranteed (see e.g. [11]). In this paper we explore the Benders' decomposition algorithm in the case where the cuts are not computed from an optimal extreme-point solution to a linear programming subproblem. For example, when the subproblems are very large, it makes sense to determine the cuts by applying a primal-dual interior-point method to the subproblem. Terminating this procedure when it yields a feasible dual solution will still define a valid cut. We call this an inexact cut. If the dual solution is close to optimal then an inexact cut will also separate the optimal solution from the current iterate (except when this is optimal). As observed by a number of authors (see e.g. [1]), inexact cuts may be less effort to compute than exact cuts, especially for linear programming algorithms which yield an approximately optimal dual feasible solution before termination.

In theoretical terms, Benders' decomposition is a special case of a more general class of convex cutting plane algorithms first introduced by Kelley [12]. Cutting plane algorithms construct a sequence of hyperplanes which separate the current iterate from the optimal solution. In the case where the cutting planes are computed inexactly, the asymptotic convergence of this process to the optimal solution has been investigated by a number of authors [6], [7], [9], [12]. In the context of Benders' decomposition applied to linear programs of the form P, all of the convergence results in these papers assume that the sets containing $x$ and $\pi^\top T$ are both bounded. In the convergence theorem we prove for inexact cuts we require that $X = \{x \ge 0 \mid Ax = b\}$ is bounded and that $X \subseteq \operatorname{dom} Q$. The latter assumption, which is known as relatively complete recourse in stochastic programming, is weaker than requiring that $\{\pi^\top T\}$ is bounded.
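Before turning to the inexact variant, the classical loop above can be written out concretely. The following self-contained Python sketch is our illustration, not the authors' implementation: it re-solves the master from scratch at each iteration, uses scipy's HiGHS interface, and introduces a crude lower bound theta_lb standing in for $l_0$; all of these are simplifying assumptions.

```python
import numpy as np
from scipy.optimize import linprog

def benders(c, A, b, q, W, T, h, tol=1e-6, theta_lb=-1e8, max_iter=200):
    """Sketch of the classical Benders loop for problem P.

    'cuts' holds pairs (g, r) encoding theta >= g @ x + r, which is the
    cut theta >= pi @ (h - T @ x) with g = -T.T @ pi and r = pi @ h.
    """
    n = A.shape[1]
    cuts, U = [], np.inf
    for _ in range(max_iter):
        # master MP over z = (x, theta): min c@x + theta s.t. A@x = b, cuts
        A_ub = np.array([np.append(g, -1.0) for g, _ in cuts])
        b_ub = np.array([-r for _, r in cuts])
        m = linprog(np.append(c, 1.0),
                    A_ub=A_ub if cuts else None, b_ub=b_ub if cuts else None,
                    A_eq=np.hstack([A, np.zeros((A.shape[0], 1))]), b_eq=b,
                    bounds=[(0, None)] * n + [(theta_lb, None)],
                    method="highs")
        x, L = m.x[:n], m.fun                 # L_i = c@x_i + theta_i
        # subproblem SP(x): min q@y  s.t.  W@y = h - T@x, y >= 0
        s = linprog(q, A_eq=W, b_eq=h - T @ x,
                    bounds=[(0, None)] * W.shape[1], method="highs")
        pi = s.eqlin.marginals                # dual vector of the equality rows
        U = min(U, c @ x + s.fun)
        if U - L <= tol:
            return x, U
        cuts.append((-T.T @ pi, pi @ h))      # theta >= pi@h - (T.T@pi)@x
    return x, U
```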

In the next section we describe a Benders' decomposition algorithm which terminates the solution of the subproblem before optimality to produce an inexact cut. The steps of the algorithm ensure that this cut separates the optimal solution from the current iterate. In section 3 we consider the convergence of the inexact cut algorithm under the above assumptions, and in section 4 we discuss the implications of our results for Dantzig-Wolfe decomposition. In section 5 we give some computational results.

2. The Algorithm. We start the algorithm by choosing a convergence tolerance $\epsilon$, setting an iteration counter $i := 0$, and choosing some decreasing sequence $\{\epsilon_i\}$ which converges to 0. We also set $U_0 := \infty$ and $L_0 := -\infty$. The remaining steps of the algorithm are as follows.

Inexact Cut Algorithm
While $U_i - L_i > \epsilon$
1. Set $i := i + 1$.
2. Solve MP to obtain $(x_i, \theta_i)$.
3. Set $L_i := c^\top x_i + \theta_i$.
4. Perform an inexact optimization to generate a vector $\pi_i$ feasible for the dual of SP($x_i$) such that
(1) \[ \pi_i^\top (h - Tx_i) + \epsilon_i \ge Q(x_i). \]
5. Set $U_i := \min\{U_{i-1},\ c^\top x_i + \pi_i^\top (h - Tx_i) + \epsilon_i\}$.
6. If $\pi_i^\top (h - Tx_i) > \theta_i$ then add the cut $\pi_i^\top (h - Tx) \le \theta$ to MP; else set $i := i + 1$, $x_{i+1} := x_i$, $\theta_{i+1} := \theta_i$, $L_{i+1} := L_i$, $U_{i+1} := U_i$ and go to Step 4.¹

¹ Note that in this case $x$ and $\theta$ remain fixed and only $\pi$ (possibly) changes.

We denote by $v_i$ the value of the inexact optimization in Step 4; thus $v_i = \pi_i^\top (h - Tx_i)$. In Step 6 of each iteration of this method we check whether $v_i > \theta_i$, which ensures that the hyperplane $\pi_i^\top (h - Tx) = \theta$ will strictly separate the current iterate $(x_i, \theta_i)$ from any optimal solution of P. If this check fails then we decrease the duality gap tolerance and continue with the solution of SP($x_i$), until either $\epsilon_i \to 0$ with no change in $(x_i, \theta_i)$, or $(x_i, \theta_i)$ is separated from an optimal solution of P by a cut.
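The control flow of Steps 4-6, including the return to Step 4 with a tighter tolerance when the separation test fails, can be sketched as follows. This is our illustration only: solve_master and inexact_dual_solve are caller-supplied stand-ins (the latter for an interior-point solve terminated at duality gap eps), not functions from the paper.

```python
import numpy as np

def inexact_cut_loop(solve_master, inexact_dual_solve, c, T, h, eps_seq, tol):
    """Control-flow sketch of the inexact cut algorithm (Steps 1-6).

    solve_master(cuts) -> (x, theta): solves MP with the cuts collected so
        far, each cut being a dual vector pi encoding theta >= pi@(h - T@x).
    inexact_dual_solve(x, eps) -> pi: any dual-feasible point of SP(x) with
        pi@(h - T@x) + eps >= Q(x), e.g. an interior-point method stopped
        once its duality gap falls below eps.
    eps_seq: an iterator over the decreasing tolerances eps_1, eps_2, ... -> 0.
    """
    cuts, U = [], np.inf
    x, theta = solve_master(cuts)
    L = c @ x + theta
    eps = next(eps_seq)
    while U - L > tol:
        pi = inexact_dual_solve(x, eps)     # Step 4
        v = pi @ (h - T @ x)
        U = min(U, c @ x + v + eps)         # Step 5
        if v > theta:                       # Step 6: cut separates (x, theta)
            cuts.append(pi)
            x, theta = solve_master(cuts)   # Steps 2-3 of the next iteration
            L = c @ x + theta
        eps = next(eps_seq)                 # eps_i decreases in either branch
    return x, U, L
```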

In order to show that this algorithm converges we make use of the following simple results.

Lemma 2.1. $-\pi_i^\top T$ is an $\epsilon_i$-subgradient of $Q$ at $x_i$.

Proof. Since $\pi_i$ is dual feasible for SP($x$) for every $x$,
\[ Q(x) \ge \pi_i^\top (h - Tx), \]
and by (1),
\[ Q(x_i) \le v_i + \epsilon_i. \]
Thus
\[ Q(x) - Q(x_i) \ge \pi_i^\top (h - Tx) - \pi_i^\top (h - Tx_i) - \epsilon_i, \]
giving
\[ Q(x) \ge Q(x_i) - \pi_i^\top T (x - x_i) - \epsilon_i, \]
which gives the result.

Lemma 2.2. Let $U_i$, $L_i$, $x_i$ and $\theta_i$ be generated by applying the inexact cut algorithm with $\epsilon_i$. Then
\[ 0 \le U_i - L_i \le v_i + \epsilon_i - \theta_i. \]

Proof. Since $U_i$ is an upper bound on the value of P and $L_i$ a lower bound, $U_i - L_i \ge 0$. Moreover, since $U_i \le c^\top x_i + v_i + \epsilon_i$ and $L_i = c^\top x_i + \theta_i$, we have
\[ 0 \le U_i - L_i \le c^\top x_i + v_i + \epsilon_i - c^\top x_i - \theta_i. \]
Hence $0 \le U_i - L_i \le v_i + \epsilon_i - \theta_i$.

3. Convergence of the Algorithm. In this section we prove that the sequence $\{(x_i, \theta_i)\}$ generated by the inexact cut algorithm converges to an optimal solution of P. As alluded to above, abstract proofs of convergence for cutting plane methods (see [12]) typically invoke a compactness argument, which in our context relies on an assumption that the sets containing $x$ and $\pi^\top T$ are both bounded. Since this might not always be the case for P, it is instructive to prove a convergence result directly, to see to what extent these boundedness assumptions might be relaxed.

We begin by showing that the sequence $\{-\pi_i^\top T\}$ generated by the inexact cut algorithm is bounded provided that the set $X = \{x \ge 0 \mid Ax = b\}$ is bounded and $\operatorname{dom} Q$ is $\mathbb{R}^n$. (In stochastic programming the latter is known as complete recourse.) We make use of the following technical result.

Lemma 3.1. If for some pair $(b, \beta)$ the epigraph of $f(x) = \max_{1 \le k \le N} \{b_k^\top x + \beta_k\}$ lies in the half space $H = \{(x, \theta) \mid b^\top x + \beta \le \theta\}$, then $\|b\| \le \max_{1 \le k \le N} \|b_k\|$.

Proof. Suppose $\|b\| > \max_{1 \le k \le N} \|b_k\| = M$, and let $\tilde\beta = \max_{1 \le k \le N} |\beta_k|$. Let
\[ n > \frac{|\tilde\beta - \beta|}{\|b\|^2 - M\|b\|} \]
and define $z := nb$. We will show that $f(z) < b^\top z + \beta$, contradicting the hypothesis. Formally,
\[ b^\top z + \beta = n\|b\|^2 + \beta > n\|b\|M + \beta + |\tilde\beta - \beta| \ge nM\|b\| + \tilde\beta \ge \max_{1 \le k \le N} [(nb^\top) b_k] + \tilde\beta = \max_{1 \le k \le N} [(nb^\top) b_k] + \max_{1 \le k \le N} |\beta_k| \ge \max_{1 \le k \le N} [(nb^\top) b_k + \beta_k] = f(z). \]

This contradicts the assumption that the epigraph of $\max_{1 \le k \le N} \{b_k^\top x + \beta_k\}$ lies in $H$, which implies that $\|b\| \le M$, thereby completing the proof.

Lemma 3.2. If $\operatorname{dom} Q = \mathbb{R}^n$ then the sequence $\{-\pi_i^\top T\}$ is bounded.

Proof. Let $\{\hat\pi_k \mid 1 \le k \le N\}$ be the set of basic feasible solutions of $W^\top \pi \le q$. Since $\pi_i$ is dual feasible for SP($x$) for every $x$ we have
\[ \pi_i^\top (h - Tx) \le \max_{1 \le k \le N} \hat\pi_k^\top (h - Tx) = Q(x), \]
where the equation follows by virtue of $\operatorname{dom} Q = \mathbb{R}^n$. Therefore the epigraph of $Q$ lies in the half space $H = \{(x, \theta) \mid b^\top x + \beta \le \theta\}$, where $b = -T^\top \pi_i$ and $\beta = \pi_i^\top h$. The conclusion is then immediate by applying Lemma 3.1.

Next we will show that the inexact cut algorithm terminates in a finite number of iterations with an $\epsilon$-optimal solution. If the inexact cut algorithm does not terminate in a finite number of iterations, then it will produce an infinite sequence $\{(x_i, \theta_i)\}$ which satisfies one of the following conditions:
1. There exists $m$ such that for all $i \ge m$, $\theta_i \ge v_i$.
2. There exists a subsequence $\{(x_{\sigma(i)}, \theta_{\sigma(i)})\}$ such that $\theta_{\sigma(i)} < v_{\sigma(i)}$.

Lemma 3.3. If there exists $m$ such that for all $i \ge m$, $\theta_i \ge v_i$, then $U_i - L_i \downarrow 0$.

Proof. Since $\theta_i \ge v_i$, Lemma 2.2 implies
\[ 0 \le U_i - L_i \le v_i + \epsilon_i - \theta_i \le \epsilon_i. \]
The result follows since $\epsilon_i \to 0$.

Lemma 3.4. If there exists a convergent subsequence $\{(x_{\sigma(i)}, \theta_{\sigma(i)})\}$ such that $\theta_{\sigma(i)} < v_{\sigma(i)}$, then
1. $0 < v_{\sigma(i)} - \theta_{\sigma(i)} \le v_{\sigma(i)} - v_{\sigma(i-1)} + \pi_{\sigma(i-1)}^\top T (x_{\sigma(i)} - x_{\sigma(i-1)})$;
2. $\lim v_{\sigma(i)} - v_{\sigma(i-1)} = 0$;
3. $\liminf \pi_{\sigma(i-1)}^\top T (x_{\sigma(i)} - x_{\sigma(i-1)}) \ge 0$.

Proof. It is clear that $0 < v_{\sigma(i)} - \theta_{\sigma(i)}$ from the assumption. To obtain the second inequality, observe that $(x_{\sigma(i)}, \theta_{\sigma(i)})$ is constrained to satisfy the cut we added at iteration $\sigma(i-1)$. Therefore
\[ \theta_{\sigma(i)} \ge \pi_{\sigma(i-1)}^\top (h - Tx_{\sigma(i)}), \]
which implies
(2) \[ v_{\sigma(i)} - \theta_{\sigma(i)} \le v_{\sigma(i)} - \pi_{\sigma(i-1)}^\top (h - Tx_{\sigma(i)}) = v_{\sigma(i)} - v_{\sigma(i-1)} + \pi_{\sigma(i-1)}^\top T (x_{\sigma(i)} - x_{\sigma(i-1)}). \]
Now $(x_{\sigma(i)}, \theta_{\sigma(i)}) \to (x^*, \theta^*)$ by assumption. Furthermore, from the algorithm we have
\[ Q(x_{\sigma(i)}) - \epsilon_{\sigma(i)} \le v_{\sigma(i)} \le Q(x_{\sigma(i)}). \]

Therefore
(3) \[ v_{\sigma(i)} \to Q(x^*), \]
which implies
\[ \lim v_{\sigma(i)} - v_{\sigma(i-1)} = 0. \]
Furthermore, (2) and (3) imply
\[ \liminf \pi_{\sigma(i-1)}^\top T (x_{\sigma(i)} - x_{\sigma(i-1)}) \ge 0. \]

Lemma 3.5. Suppose $X = \{x \ge 0 \mid Ax = b\}$ is bounded and $\operatorname{dom} Q$ is $\mathbb{R}^n$. If there exists a subsequence $\{(x_{\sigma(i)}, \theta_{\sigma(i)})\}$ such that $\theta_{\sigma(i)} < v_{\sigma(i)}$, then $U_i - L_i \downarrow 0$.

Proof. The subsequence $\{(x_{\sigma(i)}, \theta_{\sigma(i)})\}$ is bounded since $X$ is bounded. Thus we may assume, by extracting a further subsequence if necessary, that $\{(x_{\sigma(i)}, \theta_{\sigma(i)})\}$ converges to $(x^*, \theta^*)$, say. We proceed to show that $U_{\sigma(i)} - L_{\sigma(i)}$ converges to zero, which implies the result. By Lemma 2.2 we have that
\[ 0 \le U_{\sigma(i)} - L_{\sigma(i)} \le v_{\sigma(i)} + \epsilon_{\sigma(i)} - \theta_{\sigma(i)}, \]
so if we let
\[ V_{\sigma(i)} = v_{\sigma(i)} + \epsilon_{\sigma(i)} - \theta_{\sigma(i)}, \]
then by Lemma 3.4
(4) \[ 0 < V_{\sigma(i)} \le v_{\sigma(i)} - v_{\sigma(i-1)} + \pi_{\sigma(i-1)}^\top T (x_{\sigma(i)} - x_{\sigma(i-1)}) + \epsilon_{\sigma(i)} \]
and
(5) \[ \lim v_{\sigma(i)} - v_{\sigma(i-1)} = 0. \]
Furthermore, since $\operatorname{dom} Q = \mathbb{R}^n$, by Lemma 3.2 $\{\pi_{\sigma(i-1)}^\top T\}$ is bounded, and so
\[ \pi_{\sigma(i-1)}^\top T (x_{\sigma(i)} - x_{\sigma(i-1)}) \to 0. \]
Substituting into (4) and taking the limit as $i \to \infty$ yields $V_{\sigma(i)} \to 0$. Since $V_{\sigma(i)}$ is an upper bound on $U_{\sigma(i)} - L_{\sigma(i)}$, and this is bounded below by 0, it converges to 0. Now, by their definitions, $\{U_i\}$ is decreasing and $\{L_i\}$ is increasing. Hence $\{U_i - L_i\}$ is decreasing, and since a subsequence of this sequence converges, it follows that the whole sequence converges, which completes the proof.

Theorem 3.6. If $\{x \ge 0 : Ax = b\}$ is bounded and $\operatorname{dom} Q = \mathbb{R}^n$, then the inexact cut algorithm terminates in a finite number of iterations with an $\epsilon$-optimal solution of P.

Proof. From Lemma 3.3 and Lemma 3.5 we have that $U_i - L_i \downarrow 0$. Therefore there exists some $I$ such that $U_I - L_I < \epsilon$, so the algorithm terminates in at most $I$ iterations. Let $x_k$ be such that $U_I = c^\top x_k + v_k + \epsilon_k$. Then
\[ c^\top x_k + Q(x_k) \le c^\top x_k + v_k + \epsilon_k < L_I + \epsilon, \]
and so $c^\top x_k + Q(x_k)$ is within $\epsilon$ of the optimum.

We shall now consider relaxing the assumption that $\operatorname{dom} Q = \mathbb{R}^n$ to $X \subseteq \operatorname{dom} Q$. (We retain the assumption that $X$ is bounded.) In this case we are no longer guaranteed that $\{-\pi_i^\top T\}$ is a bounded sequence, since $\{x_i\}$ could lie on the boundary of the domain of $Q$. At such points it is possible to have unbounded $\epsilon_i$-subgradients. Since Lemma 3.4 remains valid without our assumption, in what follows we confine attention to the term $\pi_{\sigma(i-1)}^\top T (x_{\sigma(i)} - x_{\sigma(i-1)})$ and demonstrate that for some subsequence $\{x_{\tau(i)}\}$ of $\{x_{\sigma(i)}\}$
(6) \[ \lim_{i \to \infty} -\pi_{\tau(i-1)}^\top T (x_{\tau(i)} - x_{\tau(i-1)}) = 0. \]

We do this by showing in Lemma 3.9 that for some subsequence $\{x_{\tau(i)}\}$ of $\{x_{\sigma(i)}\}$
(7) \[ \liminf -\pi_{\tau(i-1)}^\top T (x_{\tau(i)} - x_{\tau(i-1)}) \ge 0. \]
Since by virtue of Lemma 3.4 we get
\[ \liminf \pi_{\tau(i-1)}^\top T (x_{\tau(i)} - x_{\tau(i-1)}) \ge 0, \]
we have
\[ \limsup -\pi_{\tau(i-1)}^\top T (x_{\tau(i)} - x_{\tau(i-1)}) \le 0, \]
which with (7) yields (6).

The proof of Lemma 3.9 relies on the fact that some subsequence of $\{x_i\}$ must lie in a bounded polyhedral set. The inequality (7) is then proven by appealing to the following two lemmas for polyhedral sets.

Lemma 3.7. Let $A = \{x \mid b_j^\top x \le \beta_j,\ 1 \le j \le m\}$, and suppose $b_j^\top \bar{x} = \beta_j$, $1 \le j \le m$. Then $\bar{x} + y \in A$ implies that $y$ is in the recession cone of $A$.

Proof. Since $\bar{x} + y \in A$ it follows that for every $j = 1, 2, \ldots, m$, $b_j^\top y \le 0$, and so for any $x \in A$, $\lambda \ge 0$ and any $j$,
(8) \[ b_j^\top (x + \lambda y) = b_j^\top x + \lambda b_j^\top y \le \beta_j + \lambda b_j^\top y \le \beta_j, \]
which shows that $y$ is in the recession cone of $A$.

Lemma 3.8. Suppose $\{x_i\}$ is a sequence of points in
\[ G = \{x \mid b_j^\top x \le \beta_j,\ 1 \le j \le m\}, \]
converging to $x^*$, with
\[ b_j^\top x^* = \beta_j \ \text{if } 1 \le j \le k, \qquad b_j^\top x^* < \beta_j \ \text{otherwise}. \]
Then there is some $\delta > 0$ and $N$ such that for every $y$ in the recession cone of $\{x \mid b_j^\top x \le \beta_j,\ 1 \le j \le k\}$,
\[ i > N \implies x_i + \delta \frac{y}{\|y\|} \in G. \]

Proof. Let
\[ A = \{x \mid b_j^\top x \le \beta_j,\ 1 \le j \le k\}, \]
and define $C$ to be the recession cone of $A$. Since
\[ G = A \cap \{x \mid b_j^\top x \le \beta_j,\ k < j \le m\}, \]
every member of $\{x_i\}$ lies in $A$ and satisfies
(9) \[ x_i + \lambda y \in A, \qquad \lambda \ge 0,\ y \in C. \]
Now choose
\[ \delta = \tfrac{1}{2} \min_{k < j \le m} \inf \{\|z - x^*\| : b_j^\top z \ge \beta_j\} > 0 \]
if this is finite (or set $\delta = 1$ otherwise). If $N$ is chosen sufficiently large so that
\[ i > N \implies \|x_i - x^*\| < \delta, \]
then for every $y \in C$
(10) \[ \left\| x_i + \delta \frac{y}{\|y\|} - x^* \right\| \le \|x_i - x^*\| + \delta < 2\delta. \]
If $b_j^\top z \ge \beta_j$ for any $j = k+1, \ldots, m$, then $\|z - x^*\| \ge 2\delta$, and so setting $z = x_i + \delta \frac{y}{\|y\|}$ it follows from (10) that
\[ b_j^\top \left( x_i + \delta \frac{y}{\|y\|} \right) < \beta_j, \qquad k < j \le m. \]
Furthermore, by (9),
\[ b_j^\top \left( x_i + \delta \frac{y}{\|y\|} \right) \le \beta_j, \qquad 1 \le j \le k, \]
and so $x_i + \delta \frac{y}{\|y\|} \in G$.

We now apply the above lemmas to prove Lemma 3.9. The proof proceeds by showing that for an appropriately chosen convergent subsequence $\{x_{\tau(i)}\}$, the projection of $\pi_{\tau(i)}^\top T$ in the direction of $x_{\tau(i+1)} - x_{\tau(i)}$ is uniformly bounded. Once this is established the conclusion of Lemma 3.9 is immediate.

Lemma 3.9. Suppose $\{(x_{\sigma(i)}, \theta_{\sigma(i)})\}$ is a subsequence of the sequence of solutions generated by the inexact cut algorithm, and let $\{\pi_{\sigma(i)}\}$ be the corresponding approximately optimal solutions to the dual of SP($x_{\sigma(i)}$). Then there exists a subsequence of $\{x_{\sigma(i)}\}$, indexed by $\tau(i)$, such that $x_{\tau(i)} \to x^*$ and
\[ \liminf -\pi_{\tau(i)}^\top T (x_{\tau(i+1)} - x_{\tau(i)}) \ge 0. \]

Proof. Since $X$ is bounded, convex, and polyhedral, the (finite) collection of all relative interiors of the faces of $X$ partitions it ([14, Theorem 18.2]). Hence there is a subsequence of $\{x_{\sigma(i)}\}$, which after relabeling we continue to denote by $\{x_{\sigma(i)}\}$, that lies in the relative interior of a face $G$ of $X$ and converges to a point $x^* \in G$. Since $G$ is polyhedral we may represent it by
\[ G = \{x \mid b_j^\top x \le \beta_j,\ 1 \le j \le m\}. \]

If $x^*$ is in the interior of $G$ then define $C$ to be $\mathbb{R}^n$. In this case there is clearly some $\delta > 0$ such that for every $y \in C$, and $i$ sufficiently large, $x_{\sigma(i)} + \delta \frac{y}{\|y\|} \in G$. Otherwise, without loss of generality define $k$ to be such that
\[ b_j^\top x^* = \beta_j,\ 1 \le j \le k, \qquad b_j^\top x^* < \beta_j,\ k < j \le m, \]
and define $C$ to be the recession cone of
\[ \{x \mid b_j^\top x \le \beta_j,\ 1 \le j \le k\}. \]
By Lemma 3.8 there is some $\delta > 0$ such that for every $y \in C$, and $i$ sufficiently large,
(11) \[ x_{\sigma(i)} + \delta \frac{y}{\|y\|} \in G. \]
Since we are concerned here with the limiting behaviour of $\{x_{\sigma(i)}\}$, we shall henceforth assume that (11) holds for all members of $\{x_{\sigma(i)}\}$.

We now show that we can choose a subsequence $\{x_{\tau(i)}\}$ of $\{x_{\sigma(i)}\}$ such that $x_{\tau(i-1)} - x_{\tau(i)} \in C$. When $C = \mathbb{R}^n$ this is trivial. Otherwise we construct the subsequence by choosing $x_{\tau(l)}$ given $x_{\tau(l-1)}$ in the following manner. Since $x_{\tau(l-1)} \in \operatorname{ri}(G)$ there exists $\gamma > 0$ such that
\[ (x_{\tau(l-1)} + \gamma B) \cap \operatorname{aff}(G) \subseteq G, \]
where $B$ is the open unit ball and $\operatorname{aff}(G)$ is the affine hull of $G$. Now for $\sigma(i)$ large enough we have that $x^* - x_{\sigma(i)} \in \gamma B$, and so if we choose $\tau(l) = \sigma(i)$ then
\[ x^* + (x_{\tau(l-1)} - x_{\tau(l)}) = x_{\tau(l-1)} + x^* - x_{\sigma(i)} \in G, \]
since $x_{\tau(l-1)} + x^* - x_{\sigma(i)}$ is also in $\operatorname{aff}(G)$. Therefore
\[ x^* + (x_{\tau(l-1)} - x_{\tau(l)}) \in \{x \mid b_j^\top x \le \beta_j,\ 1 \le j \le k\}, \]
and by Lemma 3.7 we deduce that
(12) \[ x_{\tau(l-1)} - x_{\tau(l)} \in C. \]
Since $x_{\sigma(i)} \in \operatorname{ri}(G)$ this construction may be repeated to yield an infinite sequence.

Applying Lemma 2.1 to members of $\{x_{\tau(i)}\}$, we have for any $x$ that
\[ Q(x) \ge Q(x_{\tau(i-1)}) - \pi_{\tau(i-1)}^\top T (x - x_{\tau(i-1)}) - \epsilon_{\tau(i-1)}. \]
If we choose
\[ x = x_{\tau(i-1)} + \delta \frac{x_{\tau(i-1)} - x_{\tau(i)}}{\|x_{\tau(i-1)} - x_{\tau(i)}\|}, \]
then Lemma 3.8, (12) and (11) yield $x \in G$ and give
\[ -\pi_{\tau(i-1)}^\top T\, \delta \frac{x_{\tau(i-1)} - x_{\tau(i)}}{\|x_{\tau(i-1)} - x_{\tau(i)}\|} \le Q(x) - Q(x_{\tau(i-1)}) + \epsilon_{\tau(i-1)} \le \sup_{x \in G} Q(x) - \inf_{x \in G} Q(x) + \epsilon_{\tau(i-1)}. \]
If we set $M = \sup_{x \in G} Q(x) - \inf_{x \in G} Q(x) + \epsilon_1$, then since $\{\epsilon_i\}$ is decreasing we obtain
\[ -\pi_{\tau(i-1)}^\top T \frac{x_{\tau(i-1)} - x_{\tau(i)}}{\|x_{\tau(i-1)} - x_{\tau(i)}\|} \le \frac{M}{\delta}. \]
Therefore
\[ -\pi_{\tau(i-1)}^\top T (x_{\tau(i)} - x_{\tau(i-1)}) \ge -\frac{M}{\delta} \|x_{\tau(i-1)} - x_{\tau(i)}\|, \]
which implies
\[ \liminf -\pi_{\tau(i-1)}^\top T (x_{\tau(i)} - x_{\tau(i-1)}) \ge 0. \]

Theorem 3.10. If $X = \{x \ge 0 : Ax = b\}$ is bounded and $X \subseteq \operatorname{dom} Q$, then the inexact cut algorithm terminates in a finite number of iterations with an $\epsilon$-optimal solution of P.

Proof. The proof is similar to that of Theorem 3.6. We start by showing $U_i - L_i \downarrow 0$. If there exists $m$ such that for all $i \ge m$, $\theta_i \ge v_i$, then Lemma 3.3 delivers the conclusion. Otherwise, there exists a subsequence $\{(x_{\sigma(i)}, \theta_{\sigma(i)})\}$ such that $\theta_{\sigma(i)} < v_{\sigma(i)}$, and since $X$ is bounded, without loss of generality we may assume that $\{(x_{\sigma(i)}, \theta_{\sigma(i)})\}$ converges, to $(x^*, \theta^*)$ say. Then by Lemma 3.4
\[ \liminf \pi_{\sigma(i-1)}^\top T (x_{\sigma(i)} - x_{\sigma(i-1)}) \ge 0. \]
Thus
(13) \[ \limsup -\pi_{\sigma(i-1)}^\top T (x_{\sigma(i)} - x_{\sigma(i-1)}) \le 0. \]
Now we can apply Lemma 3.9 to extract a subsequence $\{x_{\tau(i)}\}$ of $\{x_{\sigma(i)}\}$ such that
(14) \[ \liminf -\pi_{\tau(i)}^\top T (x_{\tau(i+1)} - x_{\tau(i)}) \ge 0. \]
From (13) and (14) we have
\[ -\pi_{\tau(i)}^\top T (x_{\tau(i+1)} - x_{\tau(i)}) \to 0. \]
This yields $U_{\tau(i)} - L_{\tau(i)} \to 0$, implying that the decreasing sequence $\{U_i - L_i\}$ tends to 0, which then gives the result as in the proof of Theorem 3.6.

4. Dantzig-Wolfe Decomposition. It is well known that Benders' decomposition is dual to Dantzig-Wolfe decomposition. Therefore some form of inexact optimization procedure should apply to the latter algorithm, mirroring the steps of the inexact cut algorithm described in section 2. In fact such a scheme has already been outlined in the literature by Kim and Nazareth [13], who discuss the computational advantages of using interior-point methods in such an approach. We digress briefly in this section to explore the asymptotic convergence properties of such an algorithm.

The dual problem of P can be formulated as

D: maximize $b^\top u + h^\top v$
   subject to $A^\top u + T^\top v \le c$,
              $W^\top v \le q$.

Suppose for the moment that the set $\{v \mid W^\top v \le q\}$ is bounded, with extreme points $\{v_i \mid i = 1, 2, \ldots, N\}$. Then Dantzig-Wolfe decomposition solves a restricted master problem

MD: maximize $b^\top u + \sum_i \lambda_i h^\top v_i$
    subject to $A^\top u + \sum_i \lambda_i T^\top v_i \le c$,
               $\sum_i \lambda_i = 1$, $\lambda \ge 0$.

The columns given by $v_i$ are generated iteratively by solving MD, obtaining optimal dual variables $(x, \theta)$, and then solving the subproblem

SD($x$): maximize $(h^\top - x^\top T^\top) v$
         subject to $W^\top v \le q$,

to give a new column $\begin{pmatrix} T^\top v_i \\ 1 \end{pmatrix}$ to be added to the restricted master problem, in the event that this column has a positive reduced cost, defined by $(h^\top - x^\top T^\top) v_i - \theta$.

In our inexact Dantzig-Wolfe decomposition algorithm we first choose a convergence tolerance $\epsilon$, set an iteration counter $i := 0$, and choose some decreasing sequence $\{\epsilon_i\}$ which converges to 0. We do not require that $V = \{v \mid W^\top v \le q\}$ is bounded, but following [13] we require an initial set of points $\{v_1, v_2, \ldots, v_N\} \subseteq V$ so that MD has a feasible solution. The remaining steps of the algorithm are as follows.

Inexact Dantzig-Wolfe Decomposition Algorithm
While $U_i - L_i > \epsilon$
1. Set $i := i + 1$.
2. Solve MD to obtain $(u_i, \lambda)$ and dual variables $x_i$ and $\theta_i$.
3. Set $L_i := b^\top u_i + \sum_j \lambda_j h^\top v_j$.
4. Perform an inexact optimization to generate a vector $v_i$ feasible for SD($x_i$) such that
(15) \[ v_i^\top (h - Tx_i) + \epsilon_i \ge V(\mathrm{SD}(x_i)). \]
5. Set $U_i := \min\{U_{i-1},\ c^\top x_i + v_i^\top (h - Tx_i) + \epsilon_i\}$.
6. If $v_i^\top (h - Tx_i) > \theta_i$ then add the column $\begin{pmatrix} T^\top v_i \\ 1 \end{pmatrix}$ to MD; else set $i := i + 1$, $x_{i+1} := x_i$, $\theta_{i+1} := \theta_i$, $L_{i+1} := L_i$, $U_{i+1} := U_i$ and go to Step 4.

Here $V(\mathrm{SD}(x_i))$ is the optimal value of SD($x_i$). Since the dual of SD($x_i$) is easily seen to be SP($x_i$), $V(\mathrm{SD}(x_i)) = Q(x_i)$, and so Step 4 of this algorithm is identical to the same step of the inexact cut algorithm of section 2. In classical Dantzig-Wolfe decomposition each solution $v_i$ obtained for SD is an extreme point, of which there are a finite number, thus guaranteeing finite termination. In the inexact algorithm this is no longer true. However, the theorem of the previous section may be invoked to yield the following corollary.

Corollary 4.1. If $X = \{x \ge 0 : Ax = b\}$ is a bounded set and for every $x \in X$ the problem SD($x$) is bounded, then the inexact Dantzig-Wolfe algorithm terminates in a finite number of iterations with an $\epsilon$-optimal solution of D.

Since SD($x$) will always have a feasible solution (if D does), the boundedness condition on SD($x$) is equivalent to SP($x$) being feasible, which is the relatively complete recourse assumption of the previous section.
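To make the restricted master concrete, here is a sketch (ours, not Kim and Nazareth's implementation) of setting up MD with scipy and recovering the pricing duals $(x, \theta)$. Since linprog minimizes, the objective is negated and the dual signs are flipped; the sign conventions of the HiGHS marginals should be verified before reuse.

```python
import numpy as np
from scipy.optimize import linprog

def solve_restricted_master(A, b, c, T, h, V):
    """Solve MD over the columns V = [v_1, ..., v_k] generated so far:
        max  b@u + sum_j lam_j*(h@v_j)
        s.t. A.T@u + sum_j lam_j*(T.T@v_j) <= c,  sum(lam) = 1,  lam >= 0,
    returning (u, lam) and the duals (x, theta) that price SD(x)."""
    m, n = A.shape                       # u in R^m; the <= block has n rows
    k = len(V)
    obj = -np.concatenate([b, [h @ v for v in V]])   # negate: linprog minimizes
    A_ub = np.hstack([A.T, np.column_stack([T.T @ v for v in V])])
    A_eq = np.concatenate([np.zeros(m), np.ones(k)])[None, :]  # convexity row
    bounds = [(None, None)] * m + [(0, None)] * k
    res = linprog(obj, A_ub=A_ub, b_ub=c, A_eq=A_eq, b_eq=[1.0],
                  bounds=bounds, method="highs")
    u, lam = res.x[:m], res.x[m:]
    x = -res.ineqlin.marginals           # duals of the <= c rows (sign flipped
    theta = -res.eqlin.marginals[0]      # because we minimized -objective)
    return u, lam, x, theta
```

A new column $v$ produced by SD($x$) is then appended to V whenever $v^\top (h - Tx) > \theta$, matching Step 6 above.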

Although it seems natural in the context of Benders' decomposition, the boundedness condition on $X$ is rather restrictive in the current context, and fails to hold in the case when $A$ and $b$ are both identically zero, a situation which is typical in most applications of Dantzig-Wolfe decomposition. We require $X$ to be bounded to enable the extraction of convergent subsequences. When $A$ and $b$ are both identically zero, this can be guaranteed by imposing a condition that the restricted master problems solved in the course of the algorithm produce a sequence of optimal dual variables which lies in some compact set. This might prove to be difficult to verify a priori, but its counterpart in Benders' decomposition seems a natural assumption that can be imposed if necessary by placing a priori bounds on the components of $x$.

5. Computational Results. We conclude this paper by presenting some computational results of applying the inexact cut algorithm to a set of problems which arise in the planning of hydro-electric power generation. The problems are all based on a multi-stage stochastic programming model developed by Broad [3], in which the New Zealand electricity system is represented as a side-constrained network model with nodes representing hydro-electric reservoirs, hydro-electric generation facilities, thermal generation facilities, and demand points, and arcs with constant losses representing the transmission network. The model consists of 6 reservoirs, 6 thermal stations and 22 hydro stations. Each stage is a week long, and demand in each week is represented by a piecewise linear load duration curve with three linear sections. At each stage a number of random outcomes are possible for the inflows into the reservoirs in the current week. We impose a lower bound on the final level of the reservoirs at the end of the final stage. This lower bound is a fixed fraction of the original initial level of the reservoirs in the very first stage. Additional side constraints include DC load flow constraints which govern the transmission flows and conservation of water flow equations in hydro-electric systems. The linear program for each stage has 273 variables and 120 constraints. The objective in each stage is to minimize the cost of thermal electricity generation over the current week plus the expected future cost of thermal generation.

The multi-stage models described above were converted into two-stage and three-stage problems by aggregating consecutive stages into large problems. For example, to obtain a two-stage problem from a multi-stage problem we aggregate each second-stage problem and its descendants into a single deterministic equivalent linear program. Table 1 presents the sizes and characteristics of the resulting problems. Although the problems in each pair have the same sizes, they differ in the lower bounds imposed on the final levels of the reservoirs. Column 1 of Table 1 gives the problem identifiers, column 2 presents the number of stages in the problem (after aggregation), and column 3 contains the size of the deterministic equivalent problem. Column 4 contains the size of each subproblem after aggregation. Column 5 contains the number of stages in the problem before aggregation. For example, problem P5 is a 6-stage problem, in which we have aggregated the last 5 stages to produce a 2-stage problem. The last column contains the number of random inflows at each stage.

Problem   # agg stg   P (det. equiv.)    Subproblem         # stg   # scen. per stg
P1        2           10,920 x 24,843    1,200 x 2,...      ...     ...
P2        2           10,920 x 24,843    1,200 x 2,...      ...     ...
P3        2           14,520 x 33,033    4,800 x 10,...     ...     ...
P4        2           14,520 x 33,033    4,800 x 10,...     ...     ...
P5        2           43,680 x 99,372    14,520 x 33,033    6       ...
P6        2           43,680 x 99,372    14,520 x 33,033    6       ...
P7        3           14,520 x 33,033    ... x ...          ...     ...
P8        3           14,520 x 33,033    ... x ...          ...     ...
P9        3           43,680 x 99,372    4,800 x 10,...     ...     ...
P10       3           43,680 x 99,372    4,800 x 10,...     ...     ...
P11       3           17,154 x 42,966    1,404 x 1,...      ...     ...
Table 1: Problem sizes.

When applied to stochastic programs, Benders' decomposition and the inexact cut algorithm must solve a number of subproblems in each iteration. The resulting cut has as coefficients the expectation of the subproblem coefficients.
In the case of three-stage problems we traverse the scenario tree depth-first using the fast-pass procedure (see [10] and [16]). Benders' decomposition and the inexact cut algorithm were both implemented using CPLEX 4.0's primal-dual interior-point solver, baropt, to solve the subproblems, and the simplex solver, optimize, to solve the first-stage problems. We do not apply the crossover operation (hybbaropt) in solving the subproblems. For the inexact cut algorithm we start with $\epsilon = 10{,}000$ and reduce it by a factor of 10 at each iteration, and we terminate baropt when both primal and dual feasibility are attained in the subproblem and the dual objective is at most $\epsilon_i$ away from the primal objective. The termination criterion for both algorithms requires a relative gap of $10^{-5}$ between the upper and lower bounds (i.e. we stop when $(U - L)/U < 10^{-5}$).

Table 2 contains a comparison of the computational results for the two methods. All times are reported on an SGI Power Challenge. Column 1 contains the problem identifiers. Columns 2 and 3 contain the number of cuts under the exact and inexact cut algorithms respectively. Columns 4 and 5 contain the timing in seconds for the exact and inexact methods respectively. The last column contains the percentage improvement of the inexact cut algorithm over the exact Benders' decomposition algorithm. The entries in this column are calculated as
\[ \left( \frac{\text{exact time} - \text{inexact time}}{\text{exact time}} \right) \times 100\%. \]

Problem   # BD cuts   # inex cuts   BD time   inex time   % improvement
P1-P11    ...         ...           ...       ...         ...
Table 2: Performance comparison.
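The tolerance schedule and stopping rule just described translate directly into code; this small fragment (an illustration, not the authors' code) merely restates the reported settings.

```python
def eps_schedule(eps0=1e4, factor=10.0):
    """Yield the decreasing tolerance sequence used in the experiments:
    eps = 10,000 initially, divided by 10 at each iteration."""
    eps = eps0
    while True:
        yield eps
        eps /= factor

def converged(U, L, rel_gap=1e-5):
    """Stopping test used for both algorithms: (U - L)/U < 1e-5."""
    return (U - L) / U < rel_gap
```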

Note that traditionally the subproblems are not aggregated, and they are solved using the (dual) simplex method with warm starting. For some problems this is more efficient than using an interior-point method on an aggregated subproblem, although in other cases (e.g. P3, P7, and P11) we experienced significant speed-up by aggregating and using the interior-point method versus Benders' decomposition with warm-started simplex. It may be possible to warm-start the interior-point method effectively when solving the subproblems, using recent research developed to this end (see for example [17, 8]).

6. Conclusions. In every one of our problems the inexact cut algorithm improved the time to obtain a solution with the same accuracy as that of the Benders' decomposition algorithm. In our experiments the choice of $\{\epsilon_i\}$ is made independently of the problem. Further improvements in speed can be achieved by making a problem-dependent choice of $\{\epsilon_i\}$. In Table 2 the greatest improvements were obtained in cases where Benders' decomposition requires a large number of cuts. In these cases we observed that often during the course of the exact algorithm the lower bounds did not change over the course of several iterations. The inexact cut algorithm does not display this behaviour, and reaches an approximately optimal solution with fewer cuts. This suggests that computing cuts inexactly is a promising and simple improvement strategy for operations research practitioners who observe similar behaviour in Benders' decomposition applied to their stochastic linear programming models.

REFERENCES

[1] O. Bahn, O. du Merle, J.-L. Goffin, and J.-P. Vial, A cutting plane method from analytic centers for stochastic programming, Mathematical Programming, Series B, 69 (1995), pp. 45-73.
[2] J. F. Benders, Partitioning procedures for solving mixed-variables programming problems, Numerische Mathematik, 4 (1962), pp. 238-252.
[3] K. P. Broad, Power generation planning using scenario aggregation, master's thesis, University of Auckland, Auckland, New Zealand.
[4] G. B. Dantzig and P. Wolfe, Decomposition principle for linear programs, Operations Research, 8 (1960), pp. 101-111.
[5] G. B. Dantzig and P. Wolfe, The decomposition algorithm for linear programs, Econometrica, 29 (1961), pp. 767-778.
[6] E. Flippo and A. Rinnooy Kan, Decomposition in general mathematical programming, Mathematical Programming, 60 (1993), pp. 361-382.
[7] A. M. Geoffrion, Generalized Benders decomposition, Journal of Optimization Theory and Applications, 10 (1972), pp. 237-260.
[8] J. Gondzio, Warm start of the primal-dual method applied in the cutting plane scheme, Mathematical Programming, (1997), to appear.
[9] W. Hogan, Application of general convergence theory for outer approximation algorithms, Mathematical Programming, 5 (1973), pp. 151-168.
[10] J. Jacobs, G. Freeman, J. Grygier, D. Morton, G. Schultz, K. Staschus, and J. Stedinger, SOCRATES: a system for scheduling hydroelectric generation under uncertainty, Annals of Operations Research, 59 (1995), pp. 99-133.
[11] P. Kall and S. W. Wallace, Stochastic Programming, John Wiley & Sons, 1994.
[12] J. E. Kelley Jr., The cutting-plane method for convex programs, Journal of the SIAM, 8 (1960), pp. 703-712.
[13] K. Kim and J. L. Nazareth, The decomposition principle and algorithm for linear programming, Linear Algebra and its Applications, 152 (1991), pp. 119-133.
[14] R. T. Rockafellar, Convex Analysis, Princeton University Press, Princeton, NJ, 1970.
[15] R. Van Slyke and R. J.-B. Wets, L-shaped linear programs with applications to optimal control and stochastic linear programs, SIAM Journal on Applied Mathematics, 17 (1969), pp. 638-663.
[16] R. J. Wittrock, Advances in a nested decomposition algorithm for solving staircase linear programs, Tech. Report SOL 83-2, Systems Optimization Laboratory, Department of Operations Research, Stanford University, 1983.

[17] G. Zakeri, D. Ryan, and A. Philpott, Techniques for solving large scale set partitioning problems, Computational Optimization and Applications, (1997), submitted.


More information

1 Solutions to selected problems

1 Solutions to selected problems 1 Solutions to selected problems 1. Let A B R n. Show that int A int B but in general bd A bd B. Solution. Let x int A. Then there is ɛ > 0 such that B ɛ (x) A B. This shows x int B. If A = [0, 1] and

More information

A characterization of consistency of model weights given partial information in normal linear models

A characterization of consistency of model weights given partial information in normal linear models Statistics & Probability Letters ( ) A characterization of consistency of model weights given partial information in normal linear models Hubert Wong a;, Bertrand Clare b;1 a Department of Health Care

More information

Convex Analysis and Optimization Chapter 2 Solutions

Convex Analysis and Optimization Chapter 2 Solutions Convex Analysis and Optimization Chapter 2 Solutions Dimitri P. Bertsekas with Angelia Nedić and Asuman E. Ozdaglar Massachusetts Institute of Technology Athena Scientific, Belmont, Massachusetts http://www.athenasc.com

More information

Methods for a Class of Convex. Functions. Stephen M. Robinson WP April 1996

Methods for a Class of Convex. Functions. Stephen M. Robinson WP April 1996 Working Paper Linear Convergence of Epsilon-Subgradient Descent Methods for a Class of Convex Functions Stephen M. Robinson WP-96-041 April 1996 IIASA International Institute for Applied Systems Analysis

More information

Lagrangian Relaxation in MIP

Lagrangian Relaxation in MIP Lagrangian Relaxation in MIP Bernard Gendron May 28, 2016 Master Class on Decomposition, CPAIOR2016, Banff, Canada CIRRELT and Département d informatique et de recherche opérationnelle, Université de Montréal,

More information

LECTURE 10 LECTURE OUTLINE

LECTURE 10 LECTURE OUTLINE LECTURE 10 LECTURE OUTLINE Min Common/Max Crossing Th. III Nonlinear Farkas Lemma/Linear Constraints Linear Programming Duality Convex Programming Duality Optimality Conditions Reading: Sections 4.5, 5.1,5.2,

More information

Limit Analysis with the. Department of Mathematics and Computer Science. Odense University. Campusvej 55, DK{5230 Odense M, Denmark.

Limit Analysis with the. Department of Mathematics and Computer Science. Odense University. Campusvej 55, DK{5230 Odense M, Denmark. Limit Analysis with the Dual Ane Scaling Algorithm Knud D. Andersen Edmund Christiansen Department of Mathematics and Computer Science Odense University Campusvej 55, DK{5230 Odense M, Denmark e-mail:

More information

Coins with arbitrary weights. Abstract. Given a set of m coins out of a collection of coins of k unknown distinct weights, we wish to

Coins with arbitrary weights. Abstract. Given a set of m coins out of a collection of coins of k unknown distinct weights, we wish to Coins with arbitrary weights Noga Alon Dmitry N. Kozlov y Abstract Given a set of m coins out of a collection of coins of k unknown distinct weights, we wish to decide if all the m given coins have the

More information

WARDROP EQUILIBRIA IN AN INFINITE NETWORK

WARDROP EQUILIBRIA IN AN INFINITE NETWORK LE MATEMATICHE Vol. LV (2000) Fasc. I, pp. 1728 WARDROP EQUILIBRIA IN AN INFINITE NETWORK BRUCE CALVERT In a nite network, there is a classical theory of trafc ow, which gives existence of a Wardrop equilibrium

More information

Reformulation of capacitated facility location problems: How redundant information can help. Karen Aardal. Utrecht University. P.O.

Reformulation of capacitated facility location problems: How redundant information can help. Karen Aardal. Utrecht University. P.O. Reformulation of capacitated facility location problems: How redundant information can help Karen Aardal Department of Computer Science Utrecht University P.O. Box 80089 3508 TB Utrecht, The Netherlands

More information

A Parallel Approximation Algorithm. for. Positive Linear Programming. mal values for the primal and dual problems are

A Parallel Approximation Algorithm. for. Positive Linear Programming. mal values for the primal and dual problems are A Parallel Approximation Algorithm for Positive Linear Programming Michael Luby Noam Nisan y Abstract We introduce a fast parallel approximation algorithm for the positive linear programming optimization

More information

BUNDLE-BASED DECOMPOSITION FOR LARGE-SCALE CONVEX OPTIMIZATION: ERROR ESTIMATE AND APPLICATION TO BLOCK-ANGULAR LINEAR PROGRAMS.

BUNDLE-BASED DECOMPOSITION FOR LARGE-SCALE CONVEX OPTIMIZATION: ERROR ESTIMATE AND APPLICATION TO BLOCK-ANGULAR LINEAR PROGRAMS. BUNDLE-BASED DECOMPOSITION FOR LARGE-SCALE CONVEX OPTIMIZATION: ERROR ESTIMATE AND APPLICATION TO BLOCK-ANGULAR LINEAR PROGRAMS Deepankar Medhi Computer Science Telecommunications Program University of

More information

56:270 Final Exam - May

56:270  Final Exam - May @ @ 56:270 Linear Programming @ @ Final Exam - May 4, 1989 @ @ @ @ @ @ @ @ @ @ @ @ @ @ Select any 7 of the 9 problems below: (1.) ANALYSIS OF MPSX OUTPUT: Please refer to the attached materials on the

More information

Analytic Center Cutting-Plane Method

Analytic Center Cutting-Plane Method Analytic Center Cutting-Plane Method S. Boyd, L. Vandenberghe, and J. Skaf April 14, 2011 Contents 1 Analytic center cutting-plane method 2 2 Computing the analytic center 3 3 Pruning constraints 5 4 Lower

More information

CS711008Z Algorithm Design and Analysis

CS711008Z Algorithm Design and Analysis CS711008Z Algorithm Design and Analysis Lecture 8 Linear programming: interior point method Dongbo Bu Institute of Computing Technology Chinese Academy of Sciences, Beijing, China 1 / 31 Outline Brief

More information

y Ray of Half-line or ray through in the direction of y

y Ray of Half-line or ray through in the direction of y Chapter LINEAR COMPLEMENTARITY PROBLEM, ITS GEOMETRY, AND APPLICATIONS. THE LINEAR COMPLEMENTARITY PROBLEM AND ITS GEOMETRY The Linear Complementarity Problem (abbreviated as LCP) is a general problem

More information

Subgradients. subgradients. strong and weak subgradient calculus. optimality conditions via subgradients. directional derivatives

Subgradients. subgradients. strong and weak subgradient calculus. optimality conditions via subgradients. directional derivatives Subgradients subgradients strong and weak subgradient calculus optimality conditions via subgradients directional derivatives Prof. S. Boyd, EE364b, Stanford University Basic inequality recall basic inequality

More information

Topics in Mathematical Economics. Atsushi Kajii Kyoto University

Topics in Mathematical Economics. Atsushi Kajii Kyoto University Topics in Mathematical Economics Atsushi Kajii Kyoto University 25 November 2018 2 Contents 1 Preliminary Mathematics 5 1.1 Topology.................................. 5 1.2 Linear Algebra..............................

More information

A Proximal Method for Identifying Active Manifolds

A Proximal Method for Identifying Active Manifolds A Proximal Method for Identifying Active Manifolds W.L. Hare April 18, 2006 Abstract The minimization of an objective function over a constraint set can often be simplified if the active manifold of the

More information

Improvements to Benders' decomposition: systematic classification and performance comparison in a Transmission Expansion Planning problem

Improvements to Benders' decomposition: systematic classification and performance comparison in a Transmission Expansion Planning problem Improvements to Benders' decomposition: systematic classification and performance comparison in a Transmission Expansion Planning problem Sara Lumbreras & Andrés Ramos July 2013 Agenda Motivation improvement

More information

Werner Romisch. Humboldt University Berlin. Abstract. Perturbations of convex chance constrained stochastic programs are considered the underlying

Werner Romisch. Humboldt University Berlin. Abstract. Perturbations of convex chance constrained stochastic programs are considered the underlying Stability of solutions to chance constrained stochastic programs Rene Henrion Weierstrass Institute for Applied Analysis and Stochastics D-7 Berlin, Germany and Werner Romisch Humboldt University Berlin

More information

Kaisa Joki Adil M. Bagirov Napsu Karmitsa Marko M. Mäkelä. New Proximal Bundle Method for Nonsmooth DC Optimization

Kaisa Joki Adil M. Bagirov Napsu Karmitsa Marko M. Mäkelä. New Proximal Bundle Method for Nonsmooth DC Optimization Kaisa Joki Adil M. Bagirov Napsu Karmitsa Marko M. Mäkelä New Proximal Bundle Method for Nonsmooth DC Optimization TUCS Technical Report No 1130, February 2015 New Proximal Bundle Method for Nonsmooth

More information

Topics in Mathematical Economics. Atsushi Kajii Kyoto University

Topics in Mathematical Economics. Atsushi Kajii Kyoto University Topics in Mathematical Economics Atsushi Kajii Kyoto University 26 June 2018 2 Contents 1 Preliminary Mathematics 5 1.1 Topology.................................. 5 1.2 Linear Algebra..............................

More information

Hybrid Systems Course Lyapunov stability

Hybrid Systems Course Lyapunov stability Hybrid Systems Course Lyapunov stability OUTLINE Focus: stability of an equilibrium point continuous systems decribed by ordinary differential equations (brief review) hybrid automata OUTLINE Focus: stability

More information

Convex Optimization & Lagrange Duality

Convex Optimization & Lagrange Duality Convex Optimization & Lagrange Duality Chee Wei Tan CS 8292 : Advanced Topics in Convex Optimization and its Applications Fall 2010 Outline Convex optimization Optimality condition Lagrange duality KKT

More information

Week 3 Linear programming duality

Week 3 Linear programming duality Week 3 Linear programming duality This week we cover the fascinating topic of linear programming duality. We will learn that every minimization program has associated a maximization program that has the

More information

CO 250 Final Exam Guide

CO 250 Final Exam Guide Spring 2017 CO 250 Final Exam Guide TABLE OF CONTENTS richardwu.ca CO 250 Final Exam Guide Introduction to Optimization Kanstantsin Pashkovich Spring 2017 University of Waterloo Last Revision: March 4,

More information