Department of Social Systems and Management
Discussion Paper Series No.

Outer Approximation Method for the Minimum Maximal Flow Problem

Yoshitsugu Yamamoto and Daisuke Zenke

April 2005

UNIVERSITY OF TSUKUBA
Tsukuba, Ibaraki, JAPAN
OUTER APPROXIMATION METHOD FOR THE MINIMUM MAXIMAL FLOW PROBLEM

Yoshitsugu Yamamoto (University of Tsukuba) and Daisuke Zenke (University of Tsukuba)

April 22, 2005

Abstract. The minimum maximal flow problem is the problem of minimizing the flow value over the set of maximal flows of a given network. The optimal value indicates how inefficiently the network can be utilized in the presence of some uncontrollability. After extending the gap function characterizing the set of maximal flows, we reformulate the problem as a D.C. optimization problem, and then propose an outer approximation algorithm. The algorithm, based on the idea of an ε-optimal solution and a local search technique, terminates after finitely many iterations with the optimal value of the problem.

Keywords: Network flow, minimum maximal flow, optimization over the efficient set, D.C. optimization, outer approximation, global optimization.

1. Introduction

We are given a network (V, s, t, E, c), where V is the set of m + 2 nodes containing the source node s and the sink node t, E is the set of n arcs, and c is the n-dimensional column vector whose h-th element c_h is the capacity of arc h. The set of feasible flows, denoted by X, is given by

    X = { x ∈ R^n | Ax = 0, 0 ≤ x ≤ c },    (1.1)

where the m × n matrix A is the incidence matrix whose (v, h) element a_vh is

    a_vh = +1 if arc h leaves node v, −1 if arc h enters node v, 0 otherwise.

The well-known conventional maximum flow problem is

    max dx  s.t. x ∈ X,

where d is the n-dimensional row vector whose h-th element is

    d_h = +1 if arc h leaves source s, −1 if arc h enters source s, 0 otherwise.

Definition 1.1 (minimum maximal flow problem) A vector x ∈ X is said to be a maximal flow if there is no y ∈ X such that y ≥ x and y ≠ x. We use X_M to denote the set of maximal flows, i.e.,

    X_M = { x ∈ X | there is no y ∈ X such that y ≥ x and y ≠ x }.    (1.2)
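The conventional maximum flow value max{ dx | x ∈ X } can be computed combinatorially. As a small illustration (our own sketch, not part of the paper's development), the following Edmonds-Karp routine computes it; the example network and node numbering in the usage note are assumptions made for illustration only.

```python
from collections import deque

def max_flow(n, arcs, cap, s, t):
    """Edmonds-Karp: repeatedly augment along a shortest s-t path
    of the residual network.  arcs is a list of (tail, head) pairs."""
    res = {}                                  # residual capacities
    adj = {u: set() for u in range(n)}
    for (u, v), c in zip(arcs, cap):
        res[(u, v)] = res.get((u, v), 0) + c
        res.setdefault((v, u), 0)
        adj[u].add(v)
        adj[v].add(u)
    value = 0
    while True:
        parent = {s: None}                    # BFS for an augmenting path
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v in adj[u]:
                if v not in parent and res[(u, v)] > 0:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return value                      # no augmenting path left
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        b = min(res[e] for e in path)         # bottleneck capacity
        for u, v in path:
            res[(u, v)] -= b
            res[(v, u)] += b
        value += b
```

On a hypothetical unit-capacity network with nodes s = 0, 1, 2, t = 3 and arcs (0,1), (0,2), (1,2), (1,3), (2,3), the routine returns the maximum flow value 2.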
A minimum maximal flow problem, abbreviated to (mmf), is defined as

    (mmf)  min dx  s.t. x ∈ X_M.

The purpose of this paper is to propose an algorithm for (mmf) based on the outer approximation method (OA method for short) for a D.C. optimization problem. Note that D.C. stands for the difference of two convex sets (or functions), which will be defined in Section 3.

Our motivation for considering (mmf) is as follows. When we attempt to solve a maximum flow problem under the condition that arc flows may not be decreased, we often fail to obtain a maximum flow and must settle for a maximal flow. Under this restricted controllability, the minimum flow value attained by a maximal flow, i.e., the optimal value of (mmf), indicates how inefficiently the network can be utilized. Figure 1 highlights the difference between the maximum flow value and the minimum maximal flow value. For network (a), both are 3. For network (b), on the other hand, the minimum maximal flow value drops to 2 while the maximum flow value remains 3. Thus the minimum maximal flow value does not monotonically increase as the capacities grow.

[Figure 1: Maximum flow vs. minimum maximal flow — two small networks (a) and (b) with source s, sink t and intermediate nodes v1, v2, whose arc capacities differ between 1 and 2.]

Shi-Yamamoto [24] first posed (mmf) and proposed an algorithm. Several algorithms for (mmf) combining local search and global optimization techniques have since been proposed, e.g., in Gotoh-Thoai-Yamamoto [15] and Shigeno-Takahashi-Yamamoto [25]. An approach via D.C. optimization is found in Muu-Shi [18]. The difficulty of (mmf) is mainly due to the nonconvexity of X_M. Indeed, (mmf) embraces the minimum maximal matching problem, which is NP-hard (see, e.g., Garey-Johnson [14]). It is readily seen that (mmf) is a special case of the optimization problem over the efficient set of a multiple objective optimization problem, which was first studied by Philip [20].
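The maximal-but-not-maximum phenomenon can be reproduced computationally. Since arc flows may only be increased, a flow x is maximal exactly when the subgraph of unsaturated arcs (those with x_h < c_h) contains neither an s-t path nor a directed cycle: either one would yield y ∈ X with y ≥ x and y ≠ x. The following check runs on a hypothetical 4-node network of our own devising (not the network of Figure 1).

```python
def is_maximal(nodes, arcs, cap, x, s, t):
    """x is a maximal flow iff no y in X satisfies y >= x, y != x.
    Equivalently: the subgraph of unsaturated arcs (x_h < c_h) has
    neither an s-t path nor a directed cycle."""
    adj = {u: [] for u in nodes}
    for (u, v), c, f in zip(arcs, cap, x):
        if f < c:
            adj[u].append(v)
    seen, stack = set(), [s]                  # s-t reachability (DFS)
    while stack:
        u = stack.pop()
        if u == t:
            return False                      # flow value can still grow
        if u not in seen:
            seen.add(u)
            stack.extend(adj[u])
    color = {u: 0 for u in nodes}             # 0 new, 1 on stack, 2 done
    def has_cycle(u):
        color[u] = 1
        for v in adj[u]:
            if color[v] == 1 or (color[v] == 0 and has_cycle(v)):
                return True
        color[u] = 2
        return False
    return not any(color[u] == 0 and has_cycle(u) for u in nodes)
```

On the unit-capacity network with arcs (0,1), (0,2), (1,2), (1,3), (2,3), s = 0 and t = 3, routing one unit along 0→1→2→3, i.e., x = (1, 0, 1, 0, 1), gives a maximal flow of value 1 even though the maximum flow value is 2.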
Applying a well-known result of multiple objective optimization, X_M is characterized as follows: a point x is in X_M if and only if there exists λ ∈ R_p++ (the set of p-dimensional positive row vectors) such that x is an optimal solution of

    (SC(λ))  max λx  s.t. x ∈ X.

Therefore we can easily obtain a point of X_M by solving (SC(λ)) for an arbitrarily chosen λ ∈ R_p++. Furthermore, for a sufficiently large M > 0 the following set Λ can substitute
for R_p++ above:

    Λ = { λ ∈ R_p++ | λ ≥ e, λ1 = M }.    (1.3)

Shigeno-Takahashi-Yamamoto [25] showed that M = n^2 suffices in the definition (1.3) of Λ for (mmf). It is also known, and easily seen by applying a parametric optimization technique to (SC(λ)), that X_M is a connected union of several faces of X. For the optimization problem over the efficient set, the reader is referred to, e.g., White [31], Sawaragi-Nakayama-Tanino [22], Steuer [26] and Yamamoto [33]. For solution methods, see Benson [4-6], Bolintineanu [7], Ecker-Song [11], Fülöp [13], Dauer-Fosnaugh [10], Thach-Konno-Yokota [27], Sayin [23], Phong-Tuyen [21], Thoai [28], Muu-Luc [17], An-Tao-Thoai [3] and An-Tao-Muu [1, 2].

For simplicity we assume throughout this paper that the given network satisfies the following three assumptions.

Assumption 1.2
(i) Each capacity is a positive integer, i.e., c_h ∈ Z and c_h > 0 for each h ∈ E.
(ii) There is some point x ∈ X such that x ≠ 0.
(iii) There is no t-s-path.

Note that Assumption 1.2 (i) ensures the integrality of the vertices of X as well as of the optimal value of (mmf). Note also that 0 ∈ X\X_M by Assumption 1.2 (ii), and min{ dx | x ∈ X } = 0 by Assumption 1.2 (iii).

In the next section we first introduce a gap function, then extend its domain to R^n and reformulate (mmf). Section 3 is devoted to a review of the OA method for D.C. optimization problems. Based on this method, we propose an algorithm for (mmf) in Section 4, in which we introduce an ε-optimal solution and investigate the proper range of the parameter ε for the optimality condition. To make the algorithm more efficient, we incorporate a local search technique, and we show that the algorithm with the local search technique terminates after finitely many iterations. Further work is described in the last section.

Throughout this paper we use the following notation: R^n denotes the set of n-dimensional column vectors. Let R^n_+ = { x ∈ R^n | x ≥ 0 } and R^n_++ = { x ∈ R^n | x > 0 }.
Let R_n denote the set of n-dimensional row vectors; R_n+ and R_n++ are defined similarly. We use e to denote the row vector of ones, 1 to denote the column vector of ones, and e_i to denote the i-th unit row or column vector of the appropriate dimension. Let I denote the identity matrix of the appropriate size. We use a^T and A^T to denote the transpose of a vector a and of a matrix A, respectively. For a set S, we denote the interior of S by int S, the closure of S by cl S, and the relative boundary of S by ∂S. We use P_V to denote the set of vertices of a polyhedron P. For two vectors v and w, let [v, w] denote the line segment with endpoints v and w, and let (v, w] = [v, w]\{v}; [v, w) and (v, w) are defined similarly.

2. Reformulation of (mmf) by the Extended Gap Function

It is known that the gap function g : R^n → [−∞, +∞] given by

    g(x) = max{ ey | y ∈ X, y ≥ x } − ex    (2.1)

defines the set of maximal flows X_M as

    X_M = { x ∈ X | g(x) ≤ 0 }.
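On a tiny network with integer capacities, g can be evaluated by brute force: by the integrality property of X, the maximum of ey over { y ∈ X | y ≥ x } is attained at an integer point when x is integral. A sketch on a hypothetical 5-arc unit-capacity network of our own (not an example from the paper):

```python
from itertools import product

# hypothetical unit-capacity network: nodes 0 = s, 1, 2, 3 = t
arcs = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)]
cap = [1, 1, 1, 1, 1]
inner = [1, 2]                                # nodes where Ax = 0 must hold

def conserves(y):
    return all(sum(f for (u, w), f in zip(arcs, y) if u == v) ==
               sum(f for (u, w), f in zip(arcs, y) if w == v)
               for v in inner)

def gap(x):
    """g(x) = max{ ey : y in X, y >= x } - ex, by brute force over the
    integer points of X (sufficient here by the integrality of X)."""
    best = max(sum(y) for y in product(*(range(c + 1) for c in cap))
               if conserves(y) and all(yf >= xf for yf, xf in zip(y, x)))
    return best - sum(x)
```

Here gap((0, 0, 0, 0, 0)) = 4 > 0 (the zero flow is not maximal), while the maximal flows (1, 0, 1, 0, 1) and (1, 1, 0, 1, 1) both have gap 0, matching the characterization X_M = { x ∈ X | g(x) ≤ 0 }.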
Then we can rewrite (mmf) as

    (mmf)  min dx  s.t. x ∈ X, g(x) ≤ 0.

The function g has some nice properties such as piecewise linearity and concavity; for more information see, e.g., Benson [4] and White [32]. The domain of g, denoted by dom g, is the set { x ∈ R^n | g(x) > −∞ }. When we apply the OA method to (mmf), we need to evaluate g at points outside of X. Unless there is a point y ∈ X satisfying y ≥ v, g(v) takes the value −∞, and hence no information is available about how far the point v is from the domain of g. We therefore extend the gap function g to R^n. The extended gap function ḡ : R^n → R is defined as

    ḡ(x) = max{ ey − βt | y ∈ X, y + t ≥ x, t ≥ 0 } − ex,    (2.2)

where the n-dimensional row vector β will be specified later. Clearly ḡ is also a piecewise linear concave function. The following theorem of Yamamoto-Zenke [34] shows that ḡ is an extension of g.

Theorem 2.1 (i) The domain of ḡ is R^n for any β ≥ 0. (ii) If β ≥ ne then ḡ = g on the domain of g.

Proof: See Appendix for the proof.

By Theorem 2.1, fixing β = ne, we can replace the constraint g(x) ≤ 0 in (mmf) with ḡ(x) ≤ 0 to obtain an equivalent formulation of (mmf):

    (mmf)  min dx  s.t. x ∈ X, ḡ(x) ≤ 0,

which is equivalent to

    (mmf)  min dx  s.t. x ∈ X\int Ḡ,

where

    Ḡ = { x ∈ R^n | ḡ(x) ≥ 0 }.    (2.3)

Note that Ḡ is a convex set since ḡ is a concave function. By the definition of ḡ, it is clear that ḡ(x) ≥ 0 for all x ∈ X, i.e., X ⊆ Ḡ. Since Assumption 1.2 (ii) implies 0 ∈ X\X_M, we see that ḡ(0) = g(0) > 0, i.e., 0 ∈ int Ḡ. Additionally we have the following lemma.

Lemma 2.2 ḡ(x) > 0 for every point x in the relative interior of X.

Proof: Let x be a point in the relative interior of X, i.e., Ax = 0 and 0 < x < c. Letting x′ = (1 + ε)x for a sufficiently small ε > 0, we see that Ax′ = 0 and 0 ≤ x′ ≤ c, i.e., x′ ∈ X and x′ ≥ x with x′ ≠ x. Therefore ḡ(x) = g(x) ≥ e(x′ − x) = εex > 0.

3. Outer Approximation Method for D.C. Optimization Problems

A set S is said to be a D.C. set if there are two convex sets Q and R such that S = Q\R. An optimization problem over a D.C. set is called a D.C. optimization problem; such problems are
studied in, e.g., Tuy [29, 30] and Horst-Tuy [16]. In this section we explain the OA method for the canonical form D.C. optimization problem, abbreviated to (CDC), which is defined as

    (CDC)  min px  s.t. x ∈ D, h(x) ≥ 0,

where p ∈ R_n is the cost vector, D ⊆ R^n is a nonempty compact convex set and h : R^n → R ∪ {+∞} is a convex function. We assume that

    int { x ∈ R^n | h(x) ≤ 0 } = { x ∈ R^n | h(x) < 0 }.

Defining the convex set

    H = { x ∈ R^n | h(x) ≤ 0 },

(CDC) can be written as

    (CDC)  min px  s.t. x ∈ D\int H,

and hence (CDC) is a D.C. optimization problem. To simplify the problem we further assume that

    0 ∈ D ∩ int H, and min{ px | x ∈ D } = 0.    (3.1)

Note that (CDC) reduces to (mmf) when D = X, H = Ḡ and p = d.

3.1. Regularity and optimality condition

Problem (CDC) is said to be regular when

    D\int H = cl(D\H).    (3.2)

Figure 2 shows an example of (CDC) that is not regular: there is a point x ∈ D\int H with x ∉ cl(D\H).

[Figure 2: The case where (CDC) is not regular.]

The regularity assumption yields the optimality condition of Theorem 3.1, which was given by Horst-Tuy [16]. To keep this paper self-contained, we give the proof in the Appendix. In what follows we write

    D(η) = { x ∈ D | px ≤ η }    (3.3)

for η ∈ R.

Theorem 3.1 Let x* be a feasible solution of (CDC). Then x* is an optimal solution if D(px*) ⊆ H.

Proof: See Appendix for the proof.
3.2. OA method for (CDC)

Let x* be an optimal solution of (CDC) and let x^k ∈ D\int H be the incumbent at iteration k. In the OA method we construct polytopes P^0, P^1, ..., P^k, ... such that

    P^0 ⊇ P^1 ⊇ ... ⊇ P^k ⊇ ... ⊇ D(px*).

If px^k = 0, we are done by (3.1). In the case px^k > 0, we check the optimality condition D(px^k) ⊆ H by evaluating h(v) at each vertex v of P^k. Namely, if h(v) ≤ 0 for each vertex v of P^k, meaning P^k ⊆ H, then x^k solves (CDC). Otherwise we construct P^{k+1} by adding a linear inequality to P^k. The OA method for (CDC) reads as follows.

/** OA method for (CDC) **/
Step 0 (initialization) Find an initial feasible solution x^0 of (CDC) and construct an initial polytope P^0 such that P^0 ⊇ D(px^0). Compute the vertex set P^0_V of P^0. Set k := 0.
Step k (iteration k) Solve max{ h(v) | v ∈ P^k_V } to obtain v^k.
Step k1 (termination) If either px^k = 0 or h(v^k) ≤ 0, meaning P^k ⊆ H, then stop. (The current incumbent x^k is an optimal solution of (CDC).) Otherwise, obtain the point x̄^k ∈ [0, v^k) ∩ ∂H.
Step k2 (cutting the polytope) If x̄^k ∉ D, set x^{k+1} := x^k and P^{k+1} := P^k ∩ { x ∈ R^n | l(x) ≤ 0 } for some affine function l : R^n → R such that l(v^k) > 0 and l(x) ≤ 0 for all x ∈ D(px*). If x̄^k ∈ D, set x^{k+1} := x̄^k and P^{k+1} := P^k ∩ { x ∈ R^n | px ≤ px^{k+1} }.
Step k3 Compute the vertex set P^{k+1}_V of P^{k+1}. Set k := k + 1 and go to Step k.

Remark 3.2 Adding a linear inequality to P^k yields P^{k+1}, and the vertex set P^k_V of P^k is already at hand. Subroutines computing the vertex set P^{k+1}_V from the knowledge of P^k_V are provided in, e.g., Chen-Hansen-Jaumard [8], Subsection 7.4 of Padberg [19] and Chapter 18 of Chvátal [9]. Due to the possible degeneracy of P^k, a sophisticated implementation may be needed, e.g., Fukuda-Prodon [12].

4. Outer Approximation Method for (mmf)

By Assumption 1.2 (ii)-(iii) we have

    0 ∈ X ∩ int Ḡ, and min{ dx | x ∈ X } = 0,    (4.1)

which corresponds to (3.1).
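Before specializing to (mmf), the generic OA loop of Section 3.2 can be traced on a toy two-dimensional instance of (CDC) of our own making (D a box, H a max-norm ball, p = (1, 1)). For simplicity, vertices are enumerated from scratch at each step instead of being updated incrementally as in Remark 3.2, and on this instance the boundary point x̄^k always lies in D, so only the objective cut px ≤ px^{k+1} is ever added:

```python
from itertools import combinations

# Toy instance of (CDC) (our own illustration, not from the paper):
#   D = [0,2]^2,  H = { x : max(x1, x2) <= 1 },  p = (1, 1)
# min px over D \ int H; the optimal value is 1, attained e.g. at (1, 0).
p = (1.0, 1.0)
h = lambda x: max(x) - 1.0          # convex; H = { x : h(x) <= 0 }

def vertices(halfplanes):
    """Vertices of the 2-D polytope { x : a.x <= b for all (a, b) }."""
    vs = []
    for (a1, b1), (a2, b2) in combinations(halfplanes, 2):
        det = a1[0] * a2[1] - a1[1] * a2[0]
        if abs(det) < 1e-9:
            continue                # parallel constraint pair
        x = ((b1 * a2[1] - b2 * a1[1]) / det,
             (a1[0] * b2 - a2[0] * b1) / det)
        if all(a[0] * x[0] + a[1] * x[1] <= b + 1e-9 for a, b in halfplanes):
            vs.append(x)
    return vs

# P^0 = D itself; it contains D(px) for any incumbent
P = [((1.0, 0.0), 2.0), ((0.0, 1.0), 2.0),
     ((-1.0, 0.0), 0.0), ((0.0, -1.0), 0.0)]
xk = (2.0, 2.0)                     # initial feasible point: in D, h >= 0
for _ in range(50):
    vk = max(vertices(P), key=h)    # vertex maximizing h
    if p[0] * xk[0] + p[1] * xk[1] == 0 or h(vk) <= 1e-9:
        break                       # optimality: P^k is contained in H
    t = 1.0 / max(vk)               # boundary point of H on [0, v^k)
    xk = (t * vk[0], t * vk[1])     # x̄^k lies in D here, so it becomes the
    P.append((p, p[0] * xk[0] + p[1] * xk[1]))  # incumbent; cut px <= px^k
```

The loop stops once every vertex of the current polytope satisfies h(v) ≤ 0, with incumbent value px^k = 1, the optimal value of this toy instance.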
Then we can apply the OA method to (mmf) provided that the regularity condition is met.

4.1. Regularity and optimality condition

Unfortunately, the problem (mmf) is not regular. We therefore introduce a positive tolerance ε and consider, instead of (mmf),

    (mmf_ε)  min dx  s.t. x ∈ X\int Ḡ_ε,

where

    Ḡ_ε = { x ∈ R^n | ḡ(x) ≥ ε }.    (4.2)

We call an optimal solution of (mmf_ε) an ε-optimal solution of (mmf). First we show that any positive ε ensures the regularity of (mmf_ε).
Theorem 4.1 The problem (mmf_ε) is regular for any ε > 0.

Proof: We show that

    X\int Ḡ_ε = cl(X\Ḡ_ε)    (4.3)

holds for any ε > 0.
(⊇) Since X\int Ḡ_ε is closed and X\int Ḡ_ε ⊇ X\Ḡ_ε, we have X\int Ḡ_ε = cl(X\int Ḡ_ε) ⊇ cl(X\Ḡ_ε).
(⊆) Let x be an arbitrary point of X\int Ḡ_ε and let N_δ(x) denote its δ-neighborhood, i.e., N_δ(x) = { x′ ∈ R^n | ‖x′ − x‖ < δ }. We show that there is always a point, say x′, in N_δ(x) ∩ (X\Ḡ_ε). If ḡ(x) > ε then, by the continuity of ḡ, there exists γ > 0 such that ḡ(x′) > ε for any point x′ ∈ N_γ(x). This implies N_γ(x) ⊆ Ḡ_ε, and hence x ∈ int Ḡ_ε. Therefore the assumption x ∈ X\int Ḡ_ε implies that x ∈ X and ḡ(x) ≤ ε. By Theorem 2.1 we have ḡ(x) = g(x). When ḡ(x) < ε, take x itself as x′; clearly x′ ∉ Ḡ_ε and x′ ∈ N_δ(x), and we are done. When g(x) = ḡ(x) = ε, there is an optimal solution y of max{ ey | y ∈ X, y ≥ x } such that e(y − x) = ε, and hence y ≠ x. Take λ such that 0 < λ < min{ 1, δ/‖y − x‖ } and let x′ = λy + (1 − λ)x. Since ‖x′ − x‖ = λ‖y − x‖ < δ, we see x′ ∈ N_δ(x). We also see that x′ ∈ X by the convexity of X, and hence g(x′) = ḡ(x′) by Theorem 2.1 again. Since x′ ≥ x and x′ ≠ x, we have

    ḡ(x′) = g(x′) = max{ ey | y ∈ X, y ≥ x′ } − ex′ < max{ ey | y ∈ X, y ≥ x } − ex = e(y − x) = ε.

Therefore x′ ∉ Ḡ_ε. This completes the proof.

Next we discuss the upper bound on ε, which will be crucial for the convergence of the algorithm.

Lemma 4.2 If ε ∈ (0, 1), then 0 ∈ int Ḡ_ε, and (0, v) ∩ ∂Ḡ_ε ≠ ∅ for any point v such that ḡ(v) ≤ 0.

Proof: We have ḡ(0) > 0 since 0 ∈ int Ḡ. Note that ḡ(0), which coincides with g(0), takes an integer value by the integrality property of X, and hence ḡ(0) ≥ 1. Then ḡ(0) > ε, i.e., 0 ∈ int Ḡ_ε, for any ε ∈ (0, 1). The continuity of ḡ ensures the last assertion.

For the following lemma, we use δ_s to denote the number of arcs leaving node s, i.e.,

    δ_s = |{ i | d_i = +1 }|.    (4.4)

Lemma 4.3 Let x* and x*_ε be an optimal solution and an ε-optimal solution of (mmf), respectively. Then 0 ≤ dx* − dx*_ε ≤ εδ_s.

Proof: Since x* ∈ X and ḡ(x*) ≤ 0 ≤ ε, x* is a feasible solution of (mmf_ε), and hence dx*_ε ≤ dx*. Let y_ε be an optimal solution of max{ ey | y ∈ X, y ≥ x*_ε }.
Clearly y_ε ∈ X_M, i.e., y_ε is a feasible solution of (mmf), and hence dx* ≤ dy_ε. We see that y_{ε,i} − x*_{ε,i} ≤ ε for each i = 1, ..., n, since y_ε − x*_ε ≥ 0 and e(y_ε − x*_ε) ≤ ε. This implies d(y_ε − x*_ε) ≤ ε|{ i | d_i = +1 }| = εδ_s, and hence

    dx*_ε ≤ dx* ≤ dy_ε ≤ dx*_ε + εδ_s.
Theorem 4.4 Let x*_ε be an ε-optimal solution for some ε ∈ (0, 1/δ_s). Then ⌈dx*_ε⌉ coincides with the optimal value of (mmf).

Proof: From Lemma 4.3 we see that 0 ≤ dx* − dx*_ε < 1. This inequality and the integrality of dx* give the assertion.

In the sequel we choose ε from the open interval (0, 1/δ_s). Note that ḡ(x*_ε) ≤ ε holds for an ε-optimal solution x*_ε of (mmf). Therefore ḡ(x) ≤ 0 for any accumulation point x of {x*_ε} as ε → 0+. This observation leads to the following corollary.

Corollary 4.5 Let {x*_ε} be a sequence of ε-optimal solutions of (mmf) for ε converging to 0 from above. Then any accumulation point of {x*_ε} is an optimal solution of (mmf).

As seen in Theorem 3.1, letting

    X(η) = { x ∈ X | dx ≤ η }    (4.5)

for η ∈ R, the optimality condition of the OA method is X(dx_ε) ⊆ Ḡ_ε for some x_ε ∈ X\int Ḡ_ε. We can further relax this condition.

Theorem 4.6 Let x_ε ∈ X\int Ḡ_ε for some ε ∈ (0, 1/δ_s). If X(⌈dx_ε⌉ − 1) ⊆ Ḡ_{ε′} for some ε′ > 0, then ⌈dx_ε⌉ coincides with the optimal value of (mmf).

Proof: Let x* and x*_ε be an optimal solution and an ε-optimal solution of (mmf), respectively. Since x_ε is a feasible solution of (mmf_ε), we have dx*_ε ≤ dx_ε. It is also clear that dx*_ε ≤ dx*. If dx* < ⌈dx_ε⌉ then, since dx* is an integer, we have x* ∈ X(⌈dx_ε⌉ − 1) ⊆ Ḡ_{ε′}, and hence ḡ(x*) ≥ ε′ > 0, which contradicts ḡ(x*) = 0. Then ⌈dx_ε⌉ ≤ dx*. Hence by Lemma 4.3 we obtain

    ⌈dx_ε⌉ ≤ dx* ≤ dx*_ε + εδ_s ≤ dx_ε + εδ_s < dx_ε + 1 ≤ ⌈dx_ε⌉ + 1,

and the integrality of dx* yields dx* = ⌈dx_ε⌉. This completes the proof.

We construct a polytope P satisfying X(⌈dx_ε⌉ − 1) ⊆ P for some x_ε ∈ X\int Ḡ_ε. Let v* be a vertex minimizing ḡ(v) over P_V and set ε′ = ḡ(v*). For any x ∈ P we have ḡ(x) ≥ ḡ(v*), i.e., 0 ≤ ḡ(x) − ḡ(v*) = ḡ(x) − ε′, and hence P ⊆ Ḡ_{ε′}. This implies X(⌈dx_ε⌉ − 1) ⊆ Ḡ_{ε′}. Therefore, if ε′ > 0, then the optimal value of (mmf) is obtained by Theorem 4.6.

4.2. Local search

For v ∈ X_M ∩ X_V, we define the set of efficient vertices linked to v by an edge as

    N_M(v) = { v′ ∈ X_M ∩ X_V | [v, v′] is an edge of X }    (4.6)
           = { v′ ∈ X_V | [v, v′] is an edge of X and g(v′) ≤ 0 }.
Whenever we find a feasible solution w ∈ X_M, we apply the local search procedure starting from w (LS(w) for short) for further improvement. The procedure is as follows.

/** LS(w) procedure **/
Step 0 (initialization) If w ∉ X_V then find the face F of X containing w in its relative interior and solve min{ dx | x ∈ F } to obtain a vertex v^0 ∈ X_M ∩ X_V; otherwise set v^0 := w. Set k := 0.
Step k (iteration k) Solve min{ dv′ | v′ ∈ N_M(v^k) } to obtain a solution v*. If dv* ≥ dv^k then stop; v^k is a local optimal vertex of (mmf). Otherwise set v^{k+1} := v*, k := k + 1, and go to Step k.

Remark 4.7 If w ∈ X_M, the face F of X containing w in its relative interior is contained in X_M, since X_M is a connected union of several faces of X.

4.3. OA method for (mmf)

The OA method for (mmf) reads as follows.

/** OA method for (mmf) **/
Step 0 (initialization) Find an initial feasible vertex w^0 ∈ X_M ∩ X_V of (mmf). If N_M(w^0) = ∅ then stop (w^0 is the unique feasible solution of (mmf)). Otherwise, apply the LS(w^0) procedure to obtain a local optimal vertex x^0 ∈ X_M ∩ X_V. Solve ζ := max{ ex | x ∈ X, dx ≤ dx^0 − 1 } and construct an initial polytope P^0 ⊇ X(dx^0 − 1) by setting P^0 := { x ∈ R^n | ex ≤ ζ, dx ≤ dx^0 − 1, x ≥ 0 }. Compute the vertex set P^0_V of P^0. Set k := 0.
Step k (iteration k) Solve min{ ḡ(v) | v ∈ P^k_V } to obtain a vertex v^k.
Step k1 (termination) If either dx^k = 0 or ḡ(v^k) > 0 then stop. (The optimal value of (mmf) is ⌈dx^k⌉.) Otherwise, obtain the point x̄^k_ε ∈ (0, v^k) ∩ ∂Ḡ_ε. (Note that Lemma 4.2 ensures (0, v^k) ∩ ∂Ḡ_ε ≠ ∅.)
Step k2 (update) If x̄^k_ε ∈ X, obtain the point x̄^k ∈ (0, v^k] ∩ ∂Ḡ.
Step k2.1 If x̄^k ∈ X, meaning x̄^k ∈ X_M, then obtain a local optimal vertex z^k ∈ X_M ∩ X_V by applying the LS(x̄^k) procedure, and further obtain the point z^k_ε ∈ (0, z^k) ∩ ∂Ḡ_ε. Set x^{k+1} := argmin{ dz^k_ε, dx̄^k_ε } and P^{k+1} := P^k ∩ { x ∈ R^n | dx ≤ ⌈dx^{k+1}⌉ − 1 }.
Step k2.2 If x̄^k ∉ X, meaning x̄^k ∉ X_M, then set x^{k+1} := x̄^k_ε and P^{k+1} := P^k ∩ { x ∈ R^n | dx ≤ ⌈dx^{k+1}⌉ − 1, l(x) ≤ 0 }.
Step k3 If x̄^k_ε ∉ X then set x^{k+1} := x^k and P^{k+1} := P^k ∩ { x ∈ R^n | l(x) ≤ 0 }.
Step k4 Compute the vertex set P^{k+1}_V of P^{k+1}. Set k := k + 1 and go to Step k.

Remark 4.8 The function l : R^n → R in Steps k2.2 and k3 is given by one of the inequalities ±Ax ≤ 0 and x ≤ c not satisfied by the point v^k, i.e., (i) l(x) = e_j x − c_j for some j ∈ {1, ..., n} such that v^k_j > c_j, or (ii) l(x) = sgn(a_i v^k) a_i x for some i ∈ {1, ..., m} such that a_i v^k ≠ 0, where a_i is the i-th row of A and sgn(α) = +1 when α > 0, −1 when α < 0.
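The cut of Remark 4.8 can be generated mechanically: scan the capacity inequalities x ≤ c, then the conservation rows of ±Ax ≤ 0, and return the first one that the vertex v^k violates. A sketch (the helper function and the example data in the usage note are ours, not the paper's):

```python
def violated_cut(A, c, v):
    """Remark-4.8-style cut: return (l, r) with l.x <= r valid for
    X = { x : Ax = 0, 0 <= x <= c } but violated by v (l.v > r).
    v is assumed to violate some capacity or conservation inequality."""
    n = len(c)
    for j in range(n):                  # (i) capacity cut  e_j x <= c_j
        if v[j] > c[j]:
            l = [0.0] * n
            l[j] = 1.0
            return l, float(c[j])
    for a_i in A:                       # (ii) conservation cut sgn(a_i v) a_i x <= 0
        s = sum(a * x for a, x in zip(a_i, v))
        if s != 0:
            sgn = 1.0 if s > 0 else -1.0
            return [sgn * a for a in a_i], 0.0
    raise ValueError("v satisfies every inequality defining X")
```

For example, with A = [[-1, 0, 1, 1, 0], [0, -1, -1, 0, 1]] (one row per intermediate node of a hypothetical 5-arc network) and unit capacities, the point v = (2, 0, 0, 0, 0) yields the capacity cut x_1 ≤ 1, while v = (1, 0, 0, 0, 0) yields a conservation cut l with l(v) = 1 > 0.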
Lemma 4.9 Let z^k be the local optimal vertex obtained by applying the LS(x̄^k) procedure in Step k2.1 at iteration k, and suppose dz^k > 0. Then dz^{k′} < dz^k for every later iteration k′ > k at which Step k2.1 is performed.

Proof: It suffices to show that dz^{k+1} < dz^k. By the construction of P^{k+1} we have P^{k+1} ⊆ { x | dx ≤ ⌈dx^{k+1}⌉ − 1 }. Since x̄^{k+1} ∈ (0, v^{k+1}] with v^{k+1} ∈ P^{k+1}, and z^{k+1} is obtained by LS(x̄^{k+1}), we have dz^{k+1} ≤ dx̄^{k+1} ≤ dv^{k+1} ≤ ⌈dx^{k+1}⌉ − 1. Since we assume dz^k > 0, we have 0 < dz^k_ε < dz^k by the choice of z^k_ε. Therefore, in Step k2.1,

    ⌈dx^{k+1}⌉ − 1 < dx^{k+1} = min{ dx̄^k_ε, dz^k_ε } ≤ dz^k_ε < dz^k.

Combining the two chains of inequalities yields the desired result.

Theorem 4.10 The OA method for (mmf) works correctly and terminates after finitely many iterations.

Proof: (correctness) If N_M(w^0) = ∅ at the initialization step, we conclude from the connectedness of X_M that w^0 is the unique feasible solution of (mmf) and hence solves the problem. When the algorithm terminates in Step k1, the optimal value of (mmf) is equal either to zero, by Assumption 1.2 (iii), or to ⌈dx^k⌉, by Theorem 4.6. So the optimal value is obtained whenever the algorithm terminates. Now suppose that the algorithm has not yet terminated at iteration k, i.e., dx^k > 0 and ḡ(v^k) ≤ 0; we show that each step of the algorithm can be carried out. Lemma 4.2 ensures that the points x̄^k_ε ∈ (0, v^k) ∩ ∂Ḡ_ε and z^k_ε ∈ (0, z^k) ∩ ∂Ḡ_ε exist in Step k1 and Step k2.1, respectively. Since 0 ∈ int Ḡ and v^k ∉ int Ḡ, there also exists a point x̄^k ∈ (0, v^k] ∩ ∂Ḡ. When x̄^k_ε ∉ X, clearly v^k ∉ X, and hence the function l of Remark 4.8 can be found in Step k3. To show that such an l can also be found in Step k2.2, we only have to show that v^k ∉ X. Suppose the contrary, i.e., v^k ∈ X. By the assumption ḡ(v^k) ≤ 0 and the fact that ḡ(x) ≥ 0 for all x ∈ X, we have ḡ(v^k) = 0, i.e., v^k ∈ ∂Ḡ, and hence v^k ∈ X\int Ḡ = X_M. This implies x̄^k = v^k ∈ X_M by the choice of x̄^k, which contradicts our being in Step k2.2. Therefore v^k ∉ X in Step k2.2.

(finiteness) Suppose that the polytope P^ν at iteration ν meets the condition

    P^ν ⊆ X and P^ν ∩ X_M = ∅    (4.7)

after being updated either in Step k2 or in Step k3, and consider the next iteration. Since v^ν is chosen from P^ν, we have v^ν ∈ X\X_M and consequently ḡ(v^ν) > 0. Then the algorithm stops at Step k1. Therefore we only have to prove that (4.7) holds within a finite number of iterations.

Note first that both Step k2.2 and Step k3 are performed at most finitely many times. Indeed, the polytope, say P^k̄, obtained once 2m + n cuts l(x) ≤ 0 have been added to the initial polytope P^0, is contained in X. Then v^k as well as x̄^k_ε lies in X, and hence x̄^k = v^k ∈ X_M in the same way as in the first part of this proof. Therefore neither Step k2.2 nor Step k3 is visited after iteration k̄; that is, Step k2.1 followed by Step k4 repeats itself after iteration k̄. For every iteration k ≥ k̄ + 1 we have x̄^k ∈ X_M. We then locate z^k ∈ X_M ∩ X_V by applying the LS(x̄^k) procedure and obtain a point z^k_ε ∈ (0, z^k) ∩ ∂Ḡ_ε. If dz^k = 0 for some k ≥ k̄ + 1, then we set x^{k+1} := z^k_ε, since dz^k_ε = dz^k = 0 ≤ dx̄^k_ε. The incumbent value dx^{k+1} then becomes zero, and the algorithm stops in Step k1 at the next iteration. If dz^k > 0 for all k ≥ k̄ + 1, then dz^{k+1} < dz^k for all k ≥ k̄ + 1 by Lemma 4.9. Since X_M ∩ X_V is finite, we eventually obtain a point z^{ν−1} ∈ X_M ∩ X_V such that dz^{ν−1} ≤ dz for all z ∈ X_M ∩ X_V. We also have dz^{ν−1}_ε < dz^{ν−1} by the choice of z^{ν−1}_ε. The polytope P^ν is then defined as P^ν := P^{ν−1} ∩ { x | dx ≤ ⌈dx^ν⌉ − 1 }, where x^ν satisfies dx^ν = min{ dx̄^{ν−1}_ε, dz^{ν−1}_ε } < dz^{ν−1}. This means that P^ν ∩ (X_M ∩ X_V) = ∅. Since X_M is a connected union of several faces of X, we see that dz^{ν−1} ≤ dx for all x ∈ X_M. Therefore we conclude that P^ν ∩ X_M = ∅.
We illustrate the OA method for (mmf) in Figure 3, in which we use a two-dimensional general polyhedron X = { x ∈ R^2 | Bx ≤ b, x ≥ 0 } for B ∈ R^{m×2} and b ∈ R^m, because the set of feasible flows X = { x ∈ R^n | Ax = 0, 0 ≤ x ≤ c } is unsuitable for illustration. We obtain a local optimal vertex x^0 ∈ X_M ∩ X_V and set up an initial polytope P^0 (see (a)). It is easy to enumerate all vertices of P^0 because this polytope is simply given by P^0 := { x ∈ R^n | ex ≤ ζ, dx ≤ dx^0 − 1, x ≥ 0 }. We obtain a point v^0 minimizing ḡ(v) over P^0_V, and a point x̄^0_ε ∈ (0, v^0) ∩ ∂Ḡ_ε (see (b)). We see that x̄^0_ε ∉ X, and hence set x^1 := x^0 and cut off v^0 from P^0 (see (c)). Using P^0_V, we compute P^1_V. In the next iteration, we obtain points v^1, x̄^1_ε and x̄^1. Since x̄^1 ∈ X_M, we apply the LS(x̄^1) procedure to obtain a point z^1, and then a point z^1_ε ∈ (0, z^1) ∩ ∂Ḡ_ε (see (d)). We find that z^1_ε ∈ X\int Ḡ_ε and dz^1_ε < dx̄^1_ε. We then set x^2 := z^1_ε and construct P^2 by adding the cut dx ≤ ⌈dx^2⌉ − 1 to P^1 (see (e)). Because ḡ(v) > 0 for all vertices v of P^2 (see (f)), we terminate at the next iteration with the optimal value ⌈dx^2⌉.

5. Further Work

The OA method provides the optimal value but may fail to provide an optimal solution of (mmf). Finding an optimal solution is still a hard task even when the optimal value is known; however, the following lemma affords a clue to finding an optimal solution.

Lemma 5.1 Let ε ∈ (0, 1), let x*_ε be an ε-optimal solution of (mmf), and let

    Ξ_ε = { ξ ∈ R^n | Aξ = 0, ξ ≥ 0, eξ ≤ ε }.    (5.1)

If x*_ε + ξ̄ is an integer vector for some ξ̄ ∈ Ξ_ε, then x*_ε + ξ̄ is an optimal solution of (mmf).

Proof: (feasibility) Let x̂ = x*_ε + ξ̄ and let ŷ be an optimal solution of max{ ey | y ∈ X, y ≥ x̂ }. Note that

    ex̂ is an integer,    (5.2)

and also, since X ∩ { y | y ≥ x̂ } inherits the integrality property of X,

    eŷ is an integer.    (5.4)

Suppose we have the inequalities

    ex*_ε ≤ ex̂ ≤ eŷ    (5.3)

and

    eŷ < ex*_ε + 1.    (5.5)

Then by (5.3) and (5.5), together with the integrality (5.2) and (5.4) of ex̂ and eŷ, we see that ex̂ = eŷ. Hence ḡ(x̂) = eŷ − ex̂ = 0, meaning that x̂ ∈ X_M. The inequality (5.5) is seen as follows.
Let y_ε be an optimal solution of max{ ey | y ∈ X, y ≥ x*_ε }, and let ξ* = y_ε − x*_ε. We see that Aξ* = Ay_ε − Ax*_ε = 0, ξ* ≥ 0 and eξ* = e(y_ε − x*_ε) = g(x*_ε) ≤ ε, and hence ξ* ∈ Ξ_ε. Then ey_ε = e(x*_ε + ξ*) ≤ ex*_ε + ε < ex*_ε + 1. The point ŷ is a feasible solution of max{ ey | y ∈ X, y ≥ x*_ε }, since ŷ ∈ X and ŷ ≥ x̂ = x*_ε + ξ̄ ≥ x*_ε. Then we see that e(y_ε − ŷ) ≥ 0, and hence eŷ ≤ ey_ε < ex*_ε + 1.

(optimality) We show that x̂ solves (mmf). Clearly dξ̄ ≤ eξ̄, since d ≤ e and ξ̄ ≥ 0. For any v ∈ X_M ∩ X_V, we see that g(v) ≤ ε, and v is an integer vector by the integrality property of X. Since x*_ε is an optimal solution of (mmf_ε), we have dx*_ε ≤ dx for
[Figure 3: An example of the OA method for (mmf). Panels: (a) polytope P^0; (b) obtaining v^0 and x̄^0_ε; (c) cutting off v^0 from P^0; (d) obtaining v^1, x̄^1_ε and z^1_ε; (e) cutting off v^1 from P^1; (f) termination with ḡ(v^3) > 0.]
all x ∈ X such that g(x) ≤ ε, and hence dx*_ε ≤ dv for all v ∈ X_M ∩ X_V. Then we see that

    dx̂ = dx*_ε + dξ̄ ≤ dv + eξ̄ < dv + 1.

Since both x̂ and v are integer vectors, we have dx̂ ≤ dv for all v ∈ X_M ∩ X_V.

Since the dimension of X is n − m, it would be desirable to reduce the number of variables the algorithm has to handle. Yamamoto-Zenke explain such an idea in [34], with the proviso, however, that it does not work in general. Computational experiments should be carried out to improve the efficiency of the algorithm of this paper.

Appendix

Proof of Theorem 2.1

Proof: (i) The extended gap function ḡ(x) of (2.2) is, up to the term −ex, the optimal value of

    (P_G(x))  max_{y,t} ey − βt  s.t. Ay = 0, 0 ≤ y ≤ c, y + t ≥ x, t ≥ 0,

whose dual problem is

    (D̄_G(x))  min_{π,α,β̄} αc − β̄x  s.t. (π, α, β̄) ∈ Ω̄,

where Ω̄ = { (π, α, β̄) ∈ R_{m+2n} | πA + α − β̄ ≥ e, α ≥ 0, 0 ≤ β̄ ≤ β }. For any x ∈ R^n the problem (D̄_G(x)) is feasible (take, e.g., π = 0, β̄ = 0 and α ≥ e) and has a finite optimal value. By the duality theorem of linear programming, (P_G(x)) then also has a finite optimal value for any x ∈ R^n, and hence ḡ(x) is finite on all of R^n.

(ii) Let x be a point in the domain of g. By an observation similar to (i), the gap function g(x) of (2.1) is, up to the term −ex, the optimal value of

    (D_G(x))  min_{π,α,β̄} αc − β̄x  s.t. (π, α, β̄) ∈ Ω,

where Ω = { (π, α, β̄) ∈ R_{m+2n} | πA + α − β̄ ≥ e, α ≥ 0, β̄ ≥ 0 }. If β is so large that every vertex (π, α, β̄) of Ω satisfies β̄ ≤ β, then ḡ(x) = g(x) by the theory of linear programming. Replacing π by π_1 − π_2 with π_1, π_2 ≥ 0 and introducing a slack variable vector γ ≥ 0, Ω is rewritten as

    Ω = { (π_1, π_2, α, β̄, γ) ≥ 0 | (A^T  −A^T  I  −I  −I)(π_1, π_2, α, β̄, γ)^T = 1 }.

Let v be a vertex of Ω. Then v is a basic solution of the system defining Ω, i.e., v = (w_B, w_N) = (B^{-1}1, 0) for some nonsingular n × n submatrix B of (A^T −A^T I −I −I). Since the incidence matrix A is totally unimodular, so is (A^T −A^T I −I −I). Therefore the matrix B^{-1} is composed of −1, 0 and +1, and hence B^{-1}1 ≤ n1. Thus every vertex of Ω satisfies β̄ ≤ ne, so β = ne suffices. This completes the proof.

Proof of Theorem 3.1

Proof: Suppose that x* ∈ D\int H is not an optimal solution of (CDC), i.e., there exists y ∈ D\int H such that py < px*. Clearly y ∈ D(px*) and h(y) ≥ 0. If h(y) > 0, then y is not contained in H, and hence y ∈ D(px*)\H. If h(y) = 0, i.e., y ∈ ∂H, then by the regularity assumption we can take a point y′ ∈ N_δ(y) ∩ D such that py′ < px* and h(y′) > 0 for a sufficiently small δ > 0, where N_δ(y) = { y′ ∈ R^n | ‖y′ − y‖ < δ }; hence y′ ∈ D(px*)\H. In either case D(px*) is not contained in H, which proves the contrapositive of the assertion.

Acknowledgement

This research was supported in part by a Grant-in-Aid for Scientific Research (B)(2) from the Ministry of Education, Culture, Sports, Science and Technology of Japan.

References

[1] An, L. T. H., Tao, P. D. and Muu, L. D.: Numerical Solution for Optimization over the Efficient Set by D.C. Optimization Algorithms, Oper. Res. Lett., Vol. 19 (1996).
[2] An, L. T. H., Tao, P. D. and Muu, L. D.: Simplicially-Constrained DC Optimization over the Efficient and Weakly Efficient Sets, J. Optim. Theory Appl., Vol. 117 (2003).
[3] An, L. T. H., Tao, P. D. and Thoai, N. V.: Combination between Global and Local Methods for Solving an Optimization Problem over the Efficient Set, European J. Oper. Res., Vol. 142 (2002).
[4] Benson, H. P.: Optimization over the Efficient Set, J. Math. Anal. Appl., Vol. 98 (1984).
[5] Benson, H. P.: An All-Linear Programming Relaxation Algorithm for Optimizing over the Efficient Set, J. Global Optim., Vol. 1 (1991).
[6] Benson, H. P.: A Finite, Nonadjacent Extreme-Point Search Algorithm for Optimization over the Efficient Set, J. Optim. Theory Appl., Vol. 73 (1992).
[7] Bolintineanu, S.: Minimization of a Quasi-Concave Function over an Efficient Set, Math. Program., Vol. 61 (1993).
[8] Chen, P. C., Hansen, P. and Jaumard, B.: On-Line and Off-Line Vertex Enumeration by Adjacency Lists, Oper. Res. Lett., Vol. 10 (1991).
[9] Chvátal, V.: Linear Programming, W. H. Freeman and Company, New York.
[10] Dauer, J. P. and Fosnaugh, T. A.: Optimization over the Efficient Set, J. Global Optim., Vol. 7 (1995).
[11] Ecker, J. G. and Song, J. H.: Optimizing a Linear Function over an Efficient Set, J. Optim. Theory Appl., Vol. 83 (1994).
[12] Fukuda, K. and Prodon, A.: Double Description Method Revisited, in Combinatorics and Computer Science.
[13] Fülöp, J.: A Cutting Plane Algorithm for Linear Optimization over the Efficient Set, Lecture Notes in Economics and Mathematical Systems 405, Springer-Verlag, Heidelberg.
[14] Garey, M. R. and Johnson, D. S.: Computers and Intractability: A Guide to the Theory of NP-Completeness, Freeman, San Francisco.
[15] Gotoh, J., Thoai, N. V. and Yamamoto, Y.: Global Optimization Method for Solving the Minimum Maximal Flow Problem, Optim. Methods Softw., Vol. 18 (2003).
[16] Horst, R. and Tuy, H.: Global Optimization, third, revised and enlarged edition, Springer-Verlag, Berlin.
[17] Muu, L. D. and Luc, L. T.: Global Optimization Approach to Optimization over the Efficient Set, Lecture Notes in Economics and Mathematical Systems 452, Springer-Verlag, Heidelberg.
[18] Muu, L. D. and Shi, J.: D.C. Optimization Methods for Solving Minimum Maximal Network Flow Problem, Lecture Notes 1349 (Captivation of Convexity; Fascination of Nonconvexity), Research Institute for Mathematical Sciences, Kyoto University.
[19] Padberg, M.: Linear Programming, Algorithms and Combinatorics 12, Springer-Verlag, Heidelberg.
[20] Philip, J.: Algorithms for the Vector Maximization Problem, Math. Program., Vol. 2 (1972).
[21] Phong, T. Q. and Tuyen, H. Q.: Bisection Search Algorithm for Optimizing over the Efficient Set, Vietnam J. Math., Vol. 28 (2000).
[22] Sawaragi, Y., Nakayama, H. and Tanino, T.: Theory of Multiobjective Optimization, Monographs and Textbooks 176, Academic Press, Orlando.
[23] Sayin, S.: Optimizing over the Efficient Set Using a Top-Down Search of Faces, Oper. Res., Vol. 48 (2000).
[24] Shi, J. and Yamamoto, Y.: A Global Optimization Method for Minimum Maximal Flow Problem, Acta Math. Vietnam., Vol. 22 (1997).
[25] Shigeno, M., Takahashi, I. and Yamamoto, Y.: Minimum Maximal Flow Problem - An Optimization over the Efficient Set -, J. Global Optim., Vol. 25 (2003).
[26] Steuer, R. E.: Multiple Criteria Optimization: Theory, Computation, and Application, John Wiley & Sons, New York.
[27] Thach, P. T., Konno, H. and Yokota, D.: Dual Approach to Minimization on the Set of Pareto-Optimal Solutions, J. Optim. Theory Appl., Vol. 88 (1996).
[28] Thoai, N. V.: Conical Algorithm in Global Optimization for Optimizing over Efficient Sets, J. Global Optim., Vol. 18 (2000).
[29] Tuy, H.: D.C. Optimization: Theory, Methods and Algorithms, in Horst, R. and Pardalos, P. M. (eds.), Handbook of Global Optimization, Kluwer Academic Publishers, Netherlands.
[30] Tuy, H.: Convex Analysis and Global Optimization, Kluwer, Dordrecht.
[31] White, D. J.: Optimality and Efficiency, John Wiley & Sons, Chichester.
[32] White, D. J.: The Maximization of a Function over the Efficient Set via a Penalty Function Approach, European J. Oper. Res., Vol. 94 (1996).
[33] Yamamoto, Y.: Optimization over the Efficient Set: Overview, J. Global Optim., Vol. 22 (2002).
[34] Yamamoto, Y. and Zenke, D.: Cut and Split Method for the Minimum Maximal Flow Problem, to appear in Pacific Journal of Optimization.
Journal of the Operations Research Society of Japan, 2007, Vol. 50, No. 1, pp. 14-30.