An Algorithm and a Core Set Result for the Weighted Euclidean One-Center Problem


Piyush Kumar, Department of Computer Science, Florida State University, Tallahassee, FL, USA, piyush@cs.fsu.edu

E. Alper Yıldırım, Department of Industrial Engineering, Bilkent University, Bilkent, Ankara, Turkey, yildirim@bilkent.edu.tr

Given $\mathcal{A} := \{a^1, \dots, a^m\} \subset \mathbb{R}^n$ with corresponding positive weights $\mathcal{W} := \{\omega_1, \dots, \omega_m\}$, the weighted Euclidean one-center problem, which is a generalization of the minimum enclosing ball problem, involves the computation of a point $c_{\mathcal{A}} \in \mathbb{R}^n$ that minimizes the maximum weighted Euclidean distance from $c_{\mathcal{A}}$ to each point in $\mathcal{A}$. In this paper, given $\epsilon > 0$, we propose and analyze an algorithm that computes a $(1+\epsilon)$-approximate solution to the weighted Euclidean one-center problem. Our algorithm explicitly constructs a small subset $\mathcal{X} \subseteq \mathcal{A}$, called an $\epsilon$-core set of $\mathcal{A}$, for which the optimal solution of the corresponding weighted Euclidean one-center problem is a close approximation to that of $\mathcal{A}$. In addition, we establish that $|\mathcal{X}|$ depends only on $\epsilon$ and on the ratio of the smallest and largest weights, but is independent of the number of points $m$ and the dimension $n$. This result subsumes and generalizes the previously known core set results for the minimum enclosing ball problem. Our algorithm computes a $(1+\epsilon)$-approximate solution to the weighted Euclidean one-center problem for $\mathcal{A}$ in $O(mn|\mathcal{X}|)$ arithmetic operations. Our computational results indicate that the size of the $\epsilon$-core set computed by the algorithm is in general significantly smaller than the theoretical worst-case estimate, which contributes to the efficiency of the algorithm, especially for large-scale instances. We shed some light on the possible reasons for this discrepancy between the theoretical estimate and the practical performance.

Key words: Weighted Euclidean one-center problem, minimum enclosing balls, core sets, approximation algorithms.

History: Submitted February.

1. Introduction

Given a finite set of points $\mathcal{A} := \{a^1, \dots, a^m\} \subset \mathbb{R}^n$ with corresponding positive weights $\mathcal{W} := \{\omega_1, \dots, \omega_m\}$, the weighted Euclidean one-center problem is concerned with finding the point $c_{\mathcal{A}} \in \mathbb{R}^n$ that minimizes the maximum weighted Euclidean distance from $c_{\mathcal{A}}$ to each point in $\mathcal{A}$.

Formally, it amounts to solving the following optimization problem:

$$\rho_{\mathcal{A}} := \min_{c \in \mathbb{R}^n} \max_{i=1,\dots,m} \omega_i \|a^i - c\|.$$

The weighted Euclidean one-center problem reduces to the minimum enclosing ball (or Euclidean one-center) problem when all the weights are identical. It follows that $c_{\mathcal{A}}$ and $\rho_{\mathcal{A}}$ are simply the center and the radius of the minimum enclosing ball of $\mathcal{A}$, respectively, if all weights $\omega_i$ are equal to one. Henceforth, we use $(\mathcal{A}, \mathcal{W})$ to denote an instance of this problem.

The weights $\omega_i$ can be viewed as a measure of the importance of the input point $a^i$. More precisely, input points with larger weights have a higher tendency to attract the optimal center towards themselves in comparison with points with smaller weights. As such, the weighted Euclidean one-center problem has extensive applications in facility location (Drezner and Gavish, 1985). Typically, the objective is to minimize the maximum weighted response time (as in the examples of emergency services, health care, and fire fighting) or to minimize the maximum weighted travel time (as in the examples of post offices, warehouses, and schools).

For $c \in \mathbb{R}^n$, let
$$\rho(c) := \max_{i=1,\dots,m} \omega_i \|a^i - c\|. \qquad (1)$$
Given $\epsilon > 0$, we say that $(c, \rho(c)) \in \mathbb{R}^n \times \mathbb{R}$ is a $(1+\epsilon)$-approximate solution to the weighted Euclidean one-center problem for the instance $(\mathcal{A}, \mathcal{W})$ if
$$\rho_{\mathcal{A}} \le \rho(c) \le (1+\epsilon)\rho_{\mathcal{A}}. \qquad (2)$$
A subset $\mathcal{X} \subseteq \mathcal{A}$ is said to be an $\epsilon$-core set (or a core set) of $\mathcal{A}$ if
$$\rho_{\mathcal{X}} \le \rho_{\mathcal{A}} \le (1+\epsilon)\rho_{\mathcal{X}}, \qquad (3)$$
where $(c_{\mathcal{X}}, \rho_{\mathcal{X}}) \in \mathbb{R}^n \times \mathbb{R}$ denotes the optimal solution of the weighted Euclidean one-center problem of the instance $(\mathcal{X}, \{\omega_j : a^j \in \mathcal{X}\})$. Since $c_{\mathcal{A}}$ lies in the convex hull of $\mathcal{A}$ (cf. Section 2), it follows that there always exists a 0-core set of size at most $n+1$.

Small core sets provide a compact representation of a given instance of an optimization problem. Furthermore, the existence of small core sets paves the way for the design of efficient algorithms, especially for large-scale instances.
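For concreteness, the basic quantities in (1) and (2) can be computed as follows. This is a Python/NumPy sketch; the paper itself reports a MATLAB implementation, and all names here are illustrative.

```python
import numpy as np

def rho(c, A, w):
    """Maximum weighted Euclidean distance from center c to the points.

    A is an m x n array whose rows are the points a^i; w is the length-m
    vector of positive weights omega_i.  Implements equation (1).
    """
    return np.max(w * np.linalg.norm(A - c, axis=1))

def is_approx_solution(c, A, w, rho_opt, eps):
    """Check the (1+eps)-approximation condition (2)."""
    val = rho(c, A, w)
    return rho_opt <= val <= (1.0 + eps) * rho_opt
```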

Recently, several approximation algorithms have been developed for various classes of geometric optimization problems based on the existence of small core sets (Bădoiu et al., 2002; Kumar et al., 2003; Bădoiu and Clarkson, 2003; Tsang et al., 2005; Kumar and Yıldırım, 2005; Agarwal et al., 2005; Yıldırım, 2007; Todd and Yıldırım, 2007). Computational experience indicates that such algorithms are especially well-suited for large-scale instances for which a moderately small accuracy (e.g., $\epsilon = 10^{-3}$) suffices.

The weighted Euclidean one-center problem and its variants have been the center of study of many papers (Francis, 1967; Megiddo, 1983; Chandrasekaran, 1982; Hansen et al., 1985; Drezner and Gavish, 1985; Megiddo, 1989). In particular, the problem can be solved in time proportional to the number of points for fixed dimension $n = O(1)$ (Megiddo, 1983). However, the dependence on the dimension is exponential. For the case when the dimension is not fixed, Drezner and Gavish proposed a variant of the ellipsoid method that computes a $(1+\epsilon)$-approximate solution in $O(n^3 m \log(1/\epsilon))$ arithmetic operations (Drezner and Gavish, 1985). Incidentally, this asymptotic complexity bound matches the one arising from the application of the ellipsoid method (Grötschel et al., 1988) to approximately solve the problem. Since the problem can be formulated as an instance of second-order cone programming, interior-point methods can be applied to compute a $(1+\epsilon)$-approximate solution in polynomial time. However, the cost per iteration becomes prohibitively high as the size of the problem instance increases. We refer the reader to the computational results reported in Zhou et al. (2005) for the special case of the minimum enclosing ball problem.

In this paper, we focus on computing a $(1+\epsilon)$-approximate solution for large-scale instances of the weighted Euclidean one-center problem. Our algorithm explicitly constructs an $\epsilon$-core set $\mathcal{X}$ of $\mathcal{A}$ such that $|\mathcal{X}| = O(1/(\tau\epsilon))$, where $\tau$ is the squared ratio of the minimum weight to the maximum weight. The asymptotic bound on the core set size reduces to $O(1/\epsilon)$ for the special case of the minimum enclosing ball problem, which matches the previously known core set results (Bădoiu and Clarkson, 2003; Kumar et al., 2003; Yıldırım, 2007). It has also been shown that this bound is worst-case optimal (Bădoiu and Clarkson, 2002). We establish that our algorithm computes a $(1+\epsilon)$-approximate solution in $O(mn|\mathcal{X}|)$ arithmetic operations. Our extensive computational results indicate that the practical performance of our algorithm is usually much better than that predicted by the worst-case theoretical estimate. We provide some insights into the reasons for this discrepancy between the theoretical estimate and the practical performance.
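As noted above, the problem is a second-order cone program, so for moderate sizes it can be handed to an off-the-shelf interior-point solver. The following sketch uses the CVXPY modeling package — an assumption of this illustration, not a tool used in the paper — to express formulation (P1) of Section 2 directly:

```python
import cvxpy as cp

def weighted_one_center_socp(A, w):
    """Solve the weighted Euclidean one-center problem as an SOCP.

    Each constraint omega_i * ||a^i - c|| <= rho is written as the
    second-order cone constraint ||a^i - c|| <= rho / omega_i.
    """
    m, n = A.shape
    c = cp.Variable(n)
    rho = cp.Variable(nonneg=True)
    constraints = [cp.norm(A[i, :] - c, 2) <= rho / w[i] for i in range(m)]
    prob = cp.Problem(cp.Minimize(rho), constraints)
    prob.solve()
    return c.value, rho.value
```

As the paper remarks, the per-iteration cost of interior-point methods grows quickly with $m$ and $n$, which motivates the first-order algorithm developed below.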

This paper is organized as follows. In the remainder of this section, we define our notation. In Section 2, we discuss optimization formulations for the weighted Euclidean one-center problem. Section 3 is devoted to the presentation and the analysis of our algorithm; we also compare our results with other related results in the literature in that section. The computational results are presented in Section 4. Finally, Section 5 concludes the paper.

1.1. Notation

Vectors are denoted by lower-case Roman letters. For a vector $p$, $p_i$ denotes its $i$th component. Inequalities on vectors apply to each component. We reserve $e^j$ for the $j$th unit vector, $e$ for the vector of all ones, and $I$ for the identity matrix in the appropriate dimensions, which will always be clear from the context. Upper-case Roman letters are reserved for matrices, and $M_{ij}$ denotes the $(i,j)$ component of the matrix $M$. We use $\log$ to denote the natural logarithm. Functions and operators are denoted by upper-case Greek letters. Scalars, except for $m$ and $n$, are represented by lower-case Greek letters, unless they represent components of a vector or elements of a sequence of scalars, vectors, or matrices. We reserve $i$, $j$, and $k$ for such indexing purposes. Upper-case script letters are used for all other objects such as sets and balls.

2. Optimization Formulations

The weighted Euclidean one-center problem for the instance $(\mathcal{A}, \mathcal{W})$ admits the following formulation as an optimization problem:

$$\text{(P1)} \quad \min_{c,\rho}\; \rho \quad \text{subject to} \quad \omega_i\|a^i - c\| \le \rho, \quad i = 1,\dots,m,$$

where $c \in \mathbb{R}^n$ and $\rho \in \mathbb{R}$ are the decision variables. By squaring the constraints and defining $\gamma := \rho^2$, (P1) can be converted into the following optimization problem with smooth, convex quadratic constraints:

$$\text{(P2)} \quad \min_{c,\gamma}\; \gamma \quad \text{subject to} \quad \nu_i\left((a^i)^T a^i - 2(a^i)^T c + c^T c\right) \le \gamma, \quad i = 1,\dots,m,$$

where
$$\nu_i := \omega_i^2, \quad i = 1,\dots,m. \qquad (4)$$

The Lagrangian dual of (P2) is given by

$$\text{(D)} \quad \max_u\; \Phi(u) := \sum_{i=1}^m u_i \nu_i (a^i)^T a^i - \frac{1}{\sum_{i=1}^m u_i \nu_i}\left(\sum_{i=1}^m u_i \nu_i a^i\right)^T\left(\sum_{i=1}^m u_i \nu_i a^i\right) \quad \text{subject to} \quad \sum_{i=1}^m u_i = 1, \quad u_i \ge 0, \quad i = 1,\dots,m,$$

where $u \in \mathbb{R}^m$ is the decision variable. It is easy to verify that (D) reduces to the dual formulation of the minimum enclosing ball problem if all the weights are identical (Yıldırım, 2007). In contrast with the minimum enclosing ball problem, the objective function of (D) is no longer quadratic for the general weighted problem. We discuss the implications of this observation in further detail in Section 3.3.

By the Karush-Kuhn-Tucker optimality conditions, $(c_{\mathcal{A}}, \gamma_{\mathcal{A}}) \in \mathbb{R}^n \times \mathbb{R}$ is an optimal solution of (P2) if and only if there exists $u^* \in \mathbb{R}^m$ such that

$$\sum_{i=1}^m u_i^* = 1, \qquad (5a)$$
$$c_{\mathcal{A}} = \frac{1}{\sum_{i=1}^m u_i^* \nu_i} \sum_{i=1}^m u_i^* \nu_i a^i, \qquad (5b)$$
$$\nu_i\left((a^i)^T a^i - 2(a^i)^T c_{\mathcal{A}} + c_{\mathcal{A}}^T c_{\mathcal{A}}\right) \le \gamma_{\mathcal{A}}, \quad i = 1,\dots,m, \qquad (5c)$$
$$u_i^*\left[\nu_i\left((a^i)^T a^i - 2(a^i)^T c_{\mathcal{A}} + c_{\mathcal{A}}^T c_{\mathcal{A}}\right) - \gamma_{\mathcal{A}}\right] = 0, \quad i = 1,\dots,m, \qquad (5d)$$
$$u^* \ge 0. \qquad (5e)$$

A simple manipulation of the optimality conditions reveals that
$$\gamma_{\mathcal{A}} = \Phi(u^*), \qquad (6)$$
which implies that $u^* \in \mathbb{R}^m$ is an optimal solution of (D) and that strong duality holds between (P2) and (D). Note that the weighted center $c_{\mathcal{A}}$ of $\mathcal{A}$ is given by a convex combination of the points in $\mathcal{A}$ by (5b). The existence of the weighted Euclidean one-center of $\mathcal{A}$ directly follows from the maximization of a continuous function over a compact domain in the dual formulation. It is also straightforward to establish uniqueness by the following simple contradiction argument: if there were two such weighted centers, one could improve the solution by considering an appropriate convex combination of these two centers.
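In code, the dual objective and the center recovery (5b) read as follows (again a Python/NumPy sketch with illustrative names, not the authors' implementation):

```python
import numpy as np

def phi_and_center(u, A, w):
    """Evaluate the dual objective Phi(u) of (D) and the center of (5b).

    With nu_i = omega_i^2, Phi(u) = sum_i u_i nu_i ||a^i||^2
    - ||sum_i u_i nu_i a^i||^2 / (sum_i u_i nu_i).
    """
    nu = w ** 2
    s = u * nu                               # u_i * nu_i
    total = s.sum()                          # sum_i u_i nu_i
    center = (s[:, None] * A).sum(axis=0) / total        # equation (5b)
    phi = (s * (A * A).sum(axis=1)).sum() - total * (center @ center)
    return phi, center
```

By weak duality, $\Phi(u)^{1/2} \le \rho_{\mathcal{A}} \le \rho(c(u))$ for every feasible $u$, which is exactly the sandwich that the algorithm below tightens until condition (2) holds.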

It follows from the optimality conditions that the solution of the weighted Euclidean one-center problem can be obtained by solving the dual problem (D). If $u^* \in \mathbb{R}^m$ denotes an optimal solution of (D), the optimal solution $(c_{\mathcal{A}}, \rho_{\mathcal{A}})$ of (P1) is given by
$$c_{\mathcal{A}} = \frac{1}{\sum_{i=1}^m u_i^* \nu_i} \sum_{i=1}^m u_i^* \nu_i a^i, \qquad \rho_{\mathcal{A}} = \gamma_{\mathcal{A}}^{1/2} = \Phi(u^*)^{1/2}. \qquad (7)$$

3. The Algorithm

In this section, given an input set $\mathcal{A} := \{a^1,\dots,a^m\} \subset \mathbb{R}^n$ with corresponding positive weights $\mathcal{W} = \{\omega_1,\dots,\omega_m\}$ and $\epsilon > 0$, we present an algorithm that computes a $(1+\epsilon)$-approximate solution to the weighted Euclidean one-center problem by approximately solving the dual problem (D). In the next subsection, we describe a procedure to compute an initial feasible solution of (D) whose objective function value provides a good approximation of the optimal value. The main algorithm is presented and analyzed in the following subsection. We compare our results with the other relevant results in the literature in the last subsection.

3.1. Initial Feasible Solution

As observed in Megiddo (1983), the weighted Euclidean one-center problem has the following geometric interpretation. Given $\rho > 0$, consider the balls defined by
$$\mathcal{B}_i(\rho) = \left\{x \in \mathbb{R}^n : \|x - a^i\| \le \frac{\rho}{\omega_i}\right\}, \quad i = 1,\dots,m.$$
Let $(c_{\mathcal{A}}, \rho_{\mathcal{A}})$ denote the optimal solution of (P1). Then $\rho_{\mathcal{A}}$ is the smallest value of $\rho$ such that the balls $\mathcal{B}_i(\rho)$ have a nonempty intersection, and $c_{\mathcal{A}}$ is the unique point in the intersection of the balls $\mathcal{B}_i(\rho_{\mathcal{A}})$.

Motivated by this geometric interpretation, let $a^j \in \mathcal{A}$ be the point with the largest corresponding weight $\omega_j$. We now construct the balls $\mathcal{B}_i(\rho)$ for increasing values of $\rho > 0$. For each $i = 1,\dots,m$, $i \ne j$, there exists a unique value $\rho_i > 0$ such that the balls $\mathcal{B}_i(\rho)$ and $\mathcal{B}_j(\rho)$ intersect for the first time when $\rho = \rho_i$. Let $\bar\rho := \max_{i=1,\dots,m,\, i \ne j} \rho_i > 0$. It follows from the geometric interpretation above that $\bar\rho \le \rho_{\mathcal{A}}$. It turns out that $\bar\rho$ is a provably good approximation to $\rho_{\mathcal{A}}$. We describe the procedure more formally in Algorithm 3.1.
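Before the formal statement, the same initialization can be written out as code — a small Python/NumPy sketch of the procedure formalized in Algorithm 3.1 below (the index names j and jp are illustrative):

```python
import numpy as np

def initial_solution(A, w):
    """Initial feasible solution u^0 of (D), following Algorithm 3.1.

    Returns u0 together with the index j of the heaviest point and the
    index jp attaining rho_bar, so that X^0 = {a^j, a^jp}.
    """
    m = A.shape[0]
    j = int(np.argmax(w))
    # rho_i = ||a^i - a^j|| / (1/omega_i + 1/omega_j) for i != j
    dists = np.linalg.norm(A - A[j], axis=1)
    rho = dists / (1.0 / w + 1.0 / w[j])
    rho[j] = -np.inf
    jp = int(np.argmax(rho))
    u0 = np.zeros(m)
    u0[j] = w[jp] / (w[j] + w[jp])
    u0[jp] = w[j] / (w[j] + w[jp])
    return u0, j, jp
```

The pair $(a^j, a^{j'})$ realizing $\bar\rho$ defines the two-point instance whose exact weighted center seeds the main algorithm.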

Algorithm 3.1 The algorithm that computes an initial feasible solution of (D).

Require: Input set of points $\mathcal{A} = \{a^1,\dots,a^m\} \subset \mathbb{R}^n$, $\mathcal{W} = \{\omega_1,\dots,\omega_m\}$.
1: $j \leftarrow \arg\max_{i=1,\dots,m} \omega_i$;
2: for all $i$ such that $1 \le i \le m$, $i \ne j$ do
3: $\rho_i \leftarrow \|a^i - a^j\| / (1/\omega_i + 1/\omega_j)$;
4: end for
5: $\bar\rho \leftarrow \max_{i=1,\dots,m,\, i \ne j} \rho_i$; $j' \leftarrow \arg\max_{i=1,\dots,m,\, i \ne j} \rho_i$;
6: $u^0 \leftarrow 0$; $u^0_j \leftarrow \omega_{j'}/(\omega_j + \omega_{j'})$; $u^0_{j'} \leftarrow \omega_j/(\omega_j + \omega_{j'})$;
7: Output: $u^0$, $a^j$, $a^{j'}$.

Lemma 3.1 Algorithm 3.1 computes a feasible solution $u^0 \in \mathbb{R}^m$ of (D) in $O(mn)$ arithmetic operations such that
$$\Phi(u^0) \le \Phi(u^*) = \gamma_{\mathcal{A}} \le 9\,\Phi(u^0). \qquad (8)$$

Proof. Clearly, Algorithm 3.1 terminates in $O(mn)$ operations. Note that the first inequality in (8) simply follows from the fact that $u^0 \in \mathbb{R}^m$ is a feasible solution of the maximization problem (D). It is easy to verify that
$$\|(1-\sigma)y + \sigma z\|^2 = (1-\sigma)\|y\|^2 + \sigma\|z\|^2 - \sigma(1-\sigma)\|y - z\|^2 \qquad (9)$$
for all $y, z \in \mathbb{R}^n$ and $\sigma \in \mathbb{R}$. Let us define $\sigma := u^0_{j'}\nu_{j'}/(u^0_j\nu_j + u^0_{j'}\nu_{j'})$. In order to prove the second inequality in (8), we have

$$\Phi(u^0) = u^0_j\nu_j\|a^j\|^2 + u^0_{j'}\nu_{j'}\|a^{j'}\|^2 - \frac{\left(u^0_j\nu_j a^j + u^0_{j'}\nu_{j'} a^{j'}\right)^T\left(u^0_j\nu_j a^j + u^0_{j'}\nu_{j'} a^{j'}\right)}{u^0_j\nu_j + u^0_{j'}\nu_{j'}}$$
$$= u^0_j\nu_j\|a^j\|^2 + u^0_{j'}\nu_{j'}\|a^{j'}\|^2 - \left(u^0_j\nu_j + u^0_{j'}\nu_{j'}\right)\|(1-\sigma)a^j + \sigma a^{j'}\|^2$$
$$= \frac{u^0_j\nu_j\, u^0_{j'}\nu_{j'}}{u^0_j\nu_j + u^0_{j'}\nu_{j'}}\,\|a^j - a^{j'}\|^2 = \frac{\|a^j - a^{j'}\|^2}{(1/\omega_j + 1/\omega_{j'})^2} = \bar\rho^2,$$

where we used (9) in the third line and (4) in the next-to-last one. For each $i = 1,\dots,m$, $i \ne j$, $\rho_i$ is the optimal value of the weighted Euclidean one-center problem for the instance $(\{a^i, a^j\}, \{\omega_i, \omega_j\})$. Let $c^0 \in \mathbb{R}^n$ denote the optimal weighted center of the instance $(\{a^j, a^{j'}\}, \{\omega_j, \omega_{j'}\})$.

It is easy to verify that $c^0 = \alpha a^j + (1-\alpha)a^{j'}$, where $\alpha := \omega_j/(\omega_j + \omega_{j'})$. For any $i = 1,\dots,m$, we have
$$\|c^0 - a^i\| \le \|c^0 - a^j\| + \|a^j - a^i\| = \bar\rho\,\frac{1}{\omega_j} + \rho_i\left(\frac{1}{\omega_j} + \frac{1}{\omega_i}\right) \le \bar\rho\left(\frac{2}{\omega_j} + \frac{1}{\omega_i}\right) \le \frac{3\bar\rho}{\omega_i},$$
where we used the inequalities $\rho_i \le \bar\rho$ and $\omega_i \le \omega_j$ in the second and third inequalities, respectively. It follows then that $\omega_i\|c^0 - a^i\| \le 3\bar\rho$, $i = 1,\dots,m$. This implies that $(c, \rho) := (c^0, 3\bar\rho)$ is a feasible solution of (P1), and the second inequality in (8) immediately follows. $\square$

It follows from Lemma 3.1 that Algorithm 3.1 is a simple 3-approximation algorithm for the weighted Euclidean one-center problem. Drezner and Gavish propose a very similar algorithm and establish that $(a^j, \rho(a^j))$ is a 2-approximate solution (Drezner and Gavish, 1985, Theorem 1), where $\rho(\cdot)$ is defined as in (1) and $j$ is the index of the point in $\mathcal{A}$ with the maximum weight. In the context of the dual problem (D), the feasible solution produced by their algorithm is given by $u^0 = e^j$. Since $\Phi(e^j) = 0$, the objective function value of this initial feasible solution cannot be used to obtain an upper bound on the optimal value $\Phi(u^*)$ of (D) such as that given by Lemma 3.1.

3.2. The Algorithm

Algorithm 3.2 describes the main algorithm that computes a $(1+\epsilon)$-approximate solution to the weighted Euclidean one-center problem for the instance $(\mathcal{A}, \mathcal{W})$.

We now explain Algorithm 3.2 in more detail. The algorithm is initialized by calling Algorithm 3.1, which computes an initial feasible solution $u^0 \in \mathbb{R}^m$ of the dual formulation (D). At each iteration, Algorithm 3.2 maintains a dual feasible solution $u^k \in \mathbb{R}^m$ and computes a trial solution $(c^k, (\gamma^k)^{1/2}) = (c^k, \Phi(u^k)^{1/2})$. By (7), this solution coincides with the optimal solution $(c_{\mathcal{A}}, \rho_{\mathcal{A}})$ if and only if $u^k$ is an optimal solution of (D). Otherwise, by dual feasibility of $u^k$, we have $(\gamma^k)^{1/2} < \rho_{\mathcal{A}}$.

Algorithm 3.2 The algorithm that computes a $(1+\epsilon)$-approximate solution to the weighted Euclidean one-center of $(\mathcal{A}, \mathcal{W})$.

Require: Input set of points $\mathcal{A} = \{a^1,\dots,a^m\} \subset \mathbb{R}^n$, $\mathcal{W} = \{\omega_1,\dots,\omega_m\}$, $\epsilon > 0$.
1: Run Algorithm 3.1 to compute $u^0 \in \mathbb{R}^m$, $a^j$, $a^{j'}$.
2: $\mathcal{X}^0 \leftarrow \{a^j, a^{j'}\}$; $\nu_i \leftarrow \omega_i^2$, $i = 1,\dots,m$;
3: $c^0 \leftarrow \left(1/\sum_{i=1}^m u^0_i\nu_i\right)\sum_{i=1}^m u^0_i\nu_i a^i$; $k \leftarrow 0$;
4: repeat
5: $\gamma^k \leftarrow \Phi(u^k)$;
6: $k^* \leftarrow \arg\max_{i=1,\dots,m} \nu_i\|a^i - c^k\|^2$; $k^{**} \leftarrow \arg\min_{i:\,u^k_i > 0} \nu_i\|a^i - c^k\|^2$;
7: $\delta_k^+ \leftarrow \nu_{k^*}\|a^{k^*} - c^k\|^2/\gamma^k - 1$; $\delta_k^- \leftarrow 1 - \nu_{k^{**}}\|a^{k^{**}} - c^k\|^2/\gamma^k$;
8: $\delta_k \leftarrow \max\{\delta_k^+, \delta_k^-\}$;
9: if $\delta_k \le (1+\epsilon)^2 - 1$ then break
10: if $\delta_k > \delta_k^-$ then
11: $\mu_k \leftarrow \left(\sum_{i=1}^m u^k_i\nu_i\right)/\nu_{k^*}$;
12: $\lambda_k \leftarrow \frac{\mu_k}{1-\mu_k}\left(\sqrt{\frac{1+\delta_k}{1+\mu_k\delta_k}} - 1\right)$ if $\mu_k < 1$; $\lambda_k \leftarrow \frac{\delta_k}{2(1+\delta_k)}$ if $\mu_k = 1$; $\lambda_k \leftarrow \frac{\mu_k}{\mu_k-1}\left(1 - \sqrt{\frac{1+\delta_k}{1+\mu_k\delta_k}}\right)$ if $\mu_k > 1$;
13: $u^{k+1} \leftarrow (1-\lambda_k)u^k + \lambda_k e^{k^*}$;
14: $c^{k+1} \leftarrow \frac{1}{(1-\lambda_k)\mu_k + \lambda_k}\left[(1-\lambda_k)\mu_k c^k + \lambda_k a^{k^*}\right]$;
15: $\mathcal{X}^{k+1} \leftarrow \mathcal{X}^k \cup \{a^{k^*}\}$;
16: else
17: $\mu_k \leftarrow \left(\sum_{i=1}^m u^k_i\nu_i\right)/\nu_{k^{**}}$;
18: $\lambda_k \leftarrow +\infty$ if $\delta_k = 1$; $\lambda_k \leftarrow \frac{\mu_k}{1-\mu_k}\left(1 - \sqrt{\frac{1-\delta_k}{1-\mu_k\delta_k}}\right)$ if $\mu_k < 1$; $\lambda_k \leftarrow \frac{\delta_k}{2(1-\delta_k)}$ if $\mu_k = 1$; $\lambda_k \leftarrow \frac{\mu_k}{\mu_k-1}\left(\sqrt{\frac{1-\delta_k}{1-\mu_k\delta_k}} - 1\right)$ if $\mu_k > 1$ and $\mu_k\delta_k < 1$; $\lambda_k \leftarrow +\infty$ if $\mu_k > 1$ and $\mu_k\delta_k \ge 1$;
19: $\lambda_k \leftarrow \min\{\lambda_k,\; u^k_{k^{**}}/(1 - u^k_{k^{**}})\}$;
20: if $\lambda_k = u^k_{k^{**}}/(1 - u^k_{k^{**}})$ then
21: $\mathcal{X}^{k+1} \leftarrow \mathcal{X}^k \setminus \{a^{k^{**}}\}$;
22: else
23: $\mathcal{X}^{k+1} \leftarrow \mathcal{X}^k$;
24: end if
25: $u^{k+1} \leftarrow (1+\lambda_k)u^k - \lambda_k e^{k^{**}}$;
26: $c^{k+1} \leftarrow \frac{1}{(1+\lambda_k)\mu_k - \lambda_k}\left[(1+\lambda_k)\mu_k c^k - \lambda_k a^{k^{**}}\right]$;
27: end if
28: $k \leftarrow k+1$;
29: until $\delta_k \le (1+\epsilon)^2 - 1$
30: Output $c^k$, $\mathcal{X}^k$, $u^k$, $[(1+\delta_k)\gamma^k]^{1/2}$.

At each iteration, Algorithm 3.2 computes two parameters $\delta_k^+$ and $\delta_k^-$. Note that $\delta_k^+$ is the smallest value of $\delta$ such that $(c, \gamma^{1/2}) = (c^k, [(1+\delta)\gamma^k]^{1/2})$ is a feasible solution of the primal formulation (P1). Similarly, $\delta_k^-$ is the largest value of $\delta$ such that $\omega_i\|a^i - c^k\| \ge [(1-\delta)\gamma^k]^{1/2}$ for all $a^i \in \mathcal{X}^k$. Since $\delta_k := \max\{\delta_k^+, \delta_k^-\} \ge \delta_k^+$, it follows that
$$(\gamma^k)^{1/2} \le \rho_{\mathcal{A}} \le (1+\delta_k)^{1/2}(\gamma^k)^{1/2}. \qquad (10)$$

Following Todd and Yıldırım (2007), iteration $k$ is called a plus-iteration if $\delta_k > \delta_k^-$. It is called a minus-iteration if $\delta_k = \delta_k^-$ and $\lambda_k < u^k_{k^{**}}/(1 - u^k_{k^{**}})$. Otherwise, we call it a drop-iteration, since $\mathcal{X}^{k+1}$ is then obtained by removing $a^{k^{**}}$ from $\mathcal{X}^k$.

At a plus-iteration, the next feasible solution $u^{k+1} \in \mathbb{R}^m$ is given by an appropriate convex combination of $u^k$ and $e^{k^*}$. The weight used in the convex combination is determined by
$$\lambda_k = \arg\max_{\lambda \in [0,1]} \Phi\left((1-\lambda)u^k + \lambda e^{k^*}\right). \qquad (11)$$
Note that $u^{k+1} = (1-\lambda_k)u^k + \lambda_k e^{k^*}$ is a feasible solution of (D), and the algorithm computes the new trial solution $(c^{k+1}, (\gamma^{k+1})^{1/2})$ as a function of $u^{k+1}$. It turns out that $c^{k+1}$ is obtained by moving $c^k$ towards $a^{k^*} \in \mathcal{A}$ in this case.

At a minus- or drop-iteration, the next feasible solution $u^{k+1}$ is obtained by moving $u^k$ away from $e^{k^{**}}$. In this case, $\lambda_k$ is given by
$$\lambda_k = \arg\max_{\lambda \in \left[0,\; u^k_{k^{**}}/(1 - u^k_{k^{**}})\right]} \Phi\left((1+\lambda)u^k - \lambda e^{k^{**}}\right). \qquad (12)$$
Note that the range of $\lambda$ is chosen to ensure the nonnegativity of $u^{k+1}$. In contrast with a plus-iteration, $c^{k+1}$ is obtained by moving $c^k$ away from $a^{k^{**}} \in \mathcal{A}$ at a minus- or drop-iteration.

Algorithm 3.2 is the adaptation of the Frank-Wolfe algorithm (Frank and Wolfe, 1956) using Wolfe's away steps (Wolfe, 1970) to the weighted Euclidean one-center problem, using the initialization procedure given by Algorithm 3.1. It is a sequential linear programming algorithm for the dual problem (D) and generates a sequence of feasible solutions with nondecreasing objective function values. At each iteration, the nonlinear objective function $\Phi(u)$ is linearized at the current feasible solution $u^k$. At a plus-iteration, the new feasible solution $u^{k+1}$ is obtained by moving towards the vertex of the unit simplex that maximizes this linear approximation. At a minus- or drop-iteration, $u^{k+1}$ is obtained by moving away from the vertex that minimizes the linear approximation, where the minimization is restricted to the smallest face of the unit simplex that includes $u^k$. In either case, the parameter $\lambda_k$ is chosen so as to ensure the maximum improvement in the original objective function $\Phi(u)$.
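For concreteness, the following Python sketch implements the main loop just described (it reuses initial_solution from the sketch in Section 3.1; all names are illustrative, and this is not the authors' MATLAB code). The closed-form step sizes are the maximizers of the line-search problems (11) and (12) in the cases listed in Algorithm 3.2:

```python
import numpy as np

def lam_plus(mu, d):
    """Maximizer of the plus-iteration line search (11)."""
    if abs(mu - 1.0) < 1e-12:
        return d / (2.0 * (1.0 + d))
    return mu / (mu - 1.0) * (1.0 - np.sqrt((1.0 + d) / (1.0 + mu * d)))

def lam_minus(mu, d):
    """Maximizer of the minus-iteration line search (12); inf triggers a drop."""
    if d >= 1.0 or mu * d >= 1.0:
        return np.inf
    if abs(mu - 1.0) < 1e-12:
        return d / (2.0 * (1.0 - d))
    return mu / (mu - 1.0) * (np.sqrt((1.0 - d) / (1.0 - mu * d)) - 1.0)

def weighted_one_center(A, w, eps):
    """A sketch of Algorithm 3.2 (Python stand-in for the MATLAB original)."""
    nu = w ** 2
    u, j, jp = initial_solution(A, w)                    # Algorithm 3.1
    tol = (1.0 + eps) ** 2 - 1.0
    while True:
        s = u * nu
        total = s.sum()
        c = (s[:, None] * A).sum(axis=0) / total         # center (5b)
        sq = ((A - c) ** 2).sum(axis=1)
        gamma = (s * sq).sum()                           # = Phi(u), cf. (14)
        wd = nu * sq                                     # nu_i ||a^i - c||^2
        kp = int(np.argmax(wd))                          # furthest point k*
        support = np.where(u > 0)[0]
        km = support[int(np.argmin(wd[support]))]        # nearest support k**
        d_plus = wd[kp] / gamma - 1.0
        d_minus = 1.0 - wd[km] / gamma
        d = max(d_plus, d_minus)
        if d <= tol:
            break
        if d_plus > d_minus:                             # plus-iteration
            lam = lam_plus(total / nu[kp], d)
            u = (1.0 - lam) * u
            u[kp] += lam
        else:                                            # minus- or drop-iteration
            lam = min(lam_minus(total / nu[km], d), u[km] / (1.0 - u[km]))
            u = (1.0 + lam) * u
            u[km] -= lam
            u[km] = max(u[km], 0.0)                      # guard against round-off
    return c, np.sqrt((1.0 + d) * gamma), u
```

A drop-iteration occurs exactly when the away step is clipped at $u^k_{k^{**}}/(1 - u^k_{k^{**}})$, which drives that coordinate of $u$ to zero and removes $a^{k^{**}}$ from the working core set.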

We remark that Algorithm 3.2 reduces to Algorithm 4.1 of Yıldırım (2007) if all weights $\omega_i$ are identical. Furthermore, $\mu_k$ is always equal to one in this case, which implies that the optimal solution $\lambda_k$ of each of the line search problems (11) and (12) has a much simpler expression. In the presence of nonidentical weights, it turns out that the expression for $\lambda_k$ depends on the value of $\mu_k$ at each iteration.

3.2.1. Analysis of the Algorithm

We analyze Algorithm 3.2 in this section. First, we establish lower bounds on the improvement at each plus- or minus-iteration.

Lemma 3.2 At each plus- or minus-iteration, we have
$$\gamma^{k+1} \ge \gamma^k \cdot \begin{cases} 1 + \dfrac{\mu_k\delta_k^2}{4(1+\delta_k)} & \text{if } \mu_k < 1, \\[2mm] 1 + \dfrac{\delta_k^2}{4(1+\delta_k)} & \text{otherwise.} \end{cases} \qquad (13)$$

Proof. By definition of $c^k$,
$$\gamma^k = \Phi(u^k) = \sum_{i=1}^m u^k_i\nu_i\|a^i\|^2 - \left(\sum_{i=1}^m u^k_i\nu_i\right)\|c^k\|^2. \qquad (14)$$
Let us first consider a plus-iteration. In this case, $u^{k+1} = (1-\lambda_k)u^k + \lambda_k e^{k^*}$, where $a^{k^*} \in \mathcal{A}$ is the point with the largest weighted distance from $c^k$. Furthermore, $c^{k+1} = (1-\sigma)c^k + \sigma a^{k^*}$, where $\sigma = \lambda_k/((1-\lambda_k)\mu_k + \lambda_k)$. Therefore,

$$\gamma^{k+1} = \Phi\left((1-\lambda_k)u^k + \lambda_k e^{k^*}\right)$$
$$= (1-\lambda_k)\sum_{i=1}^m u^k_i\nu_i\|a^i\|^2 + \lambda_k\nu_{k^*}\|a^{k^*}\|^2 - \left[(1-\lambda_k)\sum_{i=1}^m u^k_i\nu_i + \lambda_k\nu_{k^*}\right]\left[(1-\sigma)\|c^k\|^2 + \sigma\|a^{k^*}\|^2 - \sigma(1-\sigma)\|a^{k^*} - c^k\|^2\right]$$
$$= (1-\lambda_k)\left[\sum_{i=1}^m u^k_i\nu_i\|a^i\|^2 - \left(\sum_{i=1}^m u^k_i\nu_i\right)\|c^k\|^2\right] + \lambda_k(1-\sigma)\nu_{k^*}\|a^{k^*} - c^k\|^2$$
$$= (1-\lambda_k)\gamma^k + \lambda_k(1-\sigma)(1+\delta_k)\gamma^k = \gamma^k(1-\lambda_k)\left[1 + \frac{\lambda_k\mu_k(1+\delta_k)}{(1-\lambda_k)\mu_k + \lambda_k}\right],$$

where we used (9) for the computation of $\|c^{k+1}\|^2$ in the second equality, the definitions of $\mu_k$ and $\sigma$ in the third one, and the definitions of $\mu_k$ and $\delta_k$ in the fourth one. It follows that $\gamma^{k+1} = \gamma^k\,\Delta^+_k(\lambda_k)$, where
$$\Delta^+_k(\lambda) := (1-\lambda)\left[1 + \frac{\lambda(1+\delta_k)}{1 - \lambda + \lambda/\mu_k}\right].$$

It is straightforward to verify that the first and second derivatives of $\Delta^+_k$ with respect to $\lambda$ are given by
$$(\Delta^+_k)'(\lambda) = \frac{\lambda^2(\mu_k-1)(1+\mu_k\delta_k) - 2\lambda\mu_k(1+\mu_k\delta_k) + \mu_k^2\delta_k}{\left(\lambda(\mu_k-1) - \mu_k\right)^2}, \qquad (\Delta^+_k)''(\lambda) = \frac{2(1+\delta_k)\mu_k^2}{\left(\lambda(\mu_k-1) - \mu_k\right)^3},$$
which together imply that $\Delta^+_k(\lambda)$ is a strictly concave function on $\lambda \in [0,1]$ for each $\mu_k > 0$ and that $\lambda_k \in (0,1)$ is its unique maximizer. The proof is based on establishing a lower bound on $\Delta^+_k(\lambda_k)$.

Suppose first that $\mu_k < 1$. In this case, we have
$$\lambda_k = \frac{\mu_k}{1-\mu_k}\left(\sqrt{1 + \frac{(1-\mu_k)\delta_k}{1+\mu_k\delta_k}} - 1\right) = \frac{\mu_k\delta_k}{2\sqrt{1+\vartheta_1}\,(1+\mu_k\delta_k)} \ge \hat\lambda_k := \frac{\mu_k\delta_k}{2(1+\delta_k)},$$
where we used the mean value theorem on the function $\sqrt{1+x}$ to derive the second equality, with $\vartheta_1 \in \left(0,\; \frac{(1-\mu_k)\delta_k}{1+\mu_k\delta_k}\right)$, and we used the upper bound on $\vartheta_1$ and the fact that $\mu_k < 1$ to arrive at the last inequality. Since $\lambda_k$ is the maximizer of $\Delta^+_k(\lambda)$, it follows that
$$\Delta^+_k(\lambda_k) \ge \Delta^+_k(\hat\lambda_k) = 1 + \frac{\mu_k\delta_k^2\,(1+2\delta_k-\mu_k\delta_k)}{2(1+\delta_k)(2+3\delta_k-\mu_k\delta_k)} \ge 1 + \frac{\mu_k\delta_k^2}{4(1+\delta_k)},$$
where we used $\mu_k < 1$ to derive the last inequality. This establishes the first part of (13) at a plus-iteration.

Suppose now that $\mu_k = 1$. Since $\lambda_k = \delta_k/[2(1+\delta_k)]$, we have $\Delta^+_k(\lambda_k) = 1 + \delta_k^2/(4(1+\delta_k))$. Finally, if $\mu_k > 1$ at a plus-iteration, then we have
$$\lambda_k = \frac{\mu_k}{\mu_k-1}\left(1 - \sqrt{1 - \frac{(\mu_k-1)\delta_k}{1+\mu_k\delta_k}}\right) \ge \frac{\delta_k}{2(1/\mu_k + \delta_k)} \ge \frac{\delta_k}{2(1+\delta_k)},$$
where we used the inequality $\sqrt{1-x} \le 1 - x/2$ for $x \le 1$ and the fact that $\mu_k > 1$. The second part of the inequality (13) then follows from the previous case since $1 - \lambda_k + \lambda_k/\mu_k < 1$, which completes the proof for a plus-iteration.

Let us now consider a minus-iteration. In this case, $u^{k+1} = (1+\lambda_k)u^k - \lambda_k e^{k^{**}}$, where $a^{k^{**}} \in \mathcal{X}^k$ is the point with the smallest weighted distance from $c^k$. Similarly to a plus-iteration, we obtain $\gamma^{k+1} = \Phi\left((1+\lambda_k)u^k - \lambda_k e^{k^{**}}\right) = \gamma^k\,\Delta^-_k(\lambda_k)$, where
$$\Delta^-_k(\lambda) := (1+\lambda)\left[1 - \frac{\lambda(1-\delta_k)}{1 + \lambda - \lambda/\mu_k}\right].$$
Note that $\delta_k \in (0,1]$ at a minus-iteration. The first and second derivatives of $\Delta^-_k$ are given by
$$(\Delta^-_k)'(\lambda) = \frac{\lambda^2(\mu_k-1)(\mu_k\delta_k-1) + 2\lambda\mu_k(\mu_k\delta_k-1) + \mu_k^2\delta_k}{\left(\mu_k + (\mu_k-1)\lambda\right)^2}, \qquad (\Delta^-_k)''(\lambda) = \frac{2(\delta_k-1)\mu_k^2}{\left(\mu_k + (\mu_k-1)\lambda\right)^3}.$$
If $\delta_k = 1$, then $\Delta^-_k(\lambda) \to +\infty$ as $\lambda \to +\infty$. Similarly, if $\delta_k < 1$ and $\mu_k\delta_k \ge 1$, then $\Delta^-_k(\lambda)$ is a strictly increasing function on $\lambda \ge 0$. Therefore, Algorithm 3.2 sets $\lambda_k = +\infty$ in either one of these two cases, which subsequently leads to a drop-iteration.

Suppose first that $\mu_k < 1$. In this case, $\Delta^-_k(\lambda)$ is a strictly concave function on $\lambda \in [0, \mu_k/(1-\mu_k))$, since $\delta_k \in (0,1)$ at a minus-iteration. The unique maximizer $\lambda_k$ is given by
$$\lambda_k = \frac{\mu_k}{1-\mu_k}\left(1 - \sqrt{1 - \frac{(1-\mu_k)\delta_k}{1-\mu_k\delta_k}}\right) \ge \hat\lambda_k := \frac{\mu_k\delta_k}{2(1-\mu_k\delta_k)},$$
where we again used the inequality $\sqrt{1-x} \le 1 - x/2$ for $x \le 1$. Therefore,
$$\Delta^-_k(\lambda_k) \ge \Delta^-_k(\hat\lambda_k) = 1 + \frac{\mu_k\delta_k^2}{2(2-\delta_k-\mu_k\delta_k)} \ge 1 + \frac{\mu_k\delta_k^2}{4(1+\delta_k)}.$$

This establishes the first part of (13) at a minus-iteration. Suppose now that $\mu_k = 1$. Since $\lambda_k = \delta_k/[2(1-\delta_k)]$ at a minus-iteration, we have
$$\Delta^-_k(\lambda_k) = 1 + \frac{\delta_k^2}{4(1-\delta_k)} \ge 1 + \frac{\delta_k^2}{4(1+\delta_k)}.$$
Finally, if $\mu_k > 1$ at a minus-iteration, note that we should necessarily have $\mu_k\delta_k < 1$. In this case, $\Delta^-_k(\lambda)$ is a strictly concave function on $\lambda \ge 0$, and the unique maximizer $\lambda_k$ is given by
$$\lambda_k = \frac{\mu_k}{\mu_k-1}\left(\sqrt{1 + \frac{(\mu_k-1)\delta_k}{1-\mu_k\delta_k}} - 1\right) = \frac{\mu_k\delta_k}{2\sqrt{1+\vartheta_2}\,(1-\mu_k\delta_k)} \ge \frac{\delta_k}{2(1-\delta_k)},$$
where we once again invoked the mean value theorem, with $\vartheta_2 \in \left(0,\; \frac{(\mu_k-1)\delta_k}{1-\mu_k\delta_k}\right)$, to derive the second equality, and we used the upper bound on $\vartheta_2$ and the fact that $\mu_k > 1$ to obtain the inequality. The second part of the inequality (13) follows from the previous case since $1 + \lambda_k - \lambda_k/\mu_k > 1$, which completes the proof. $\square$

Note that Lemma 3.2 establishes lower bounds on the improvement at each plus- or minus-iteration. On the other hand, no such lower bound can be derived for drop-iterations, since $\lambda_k$ can be arbitrarily small. Therefore, we can only say that the dual objective function value does not decrease at a drop-iteration. We also remark that the lower bounds on the improvement at each plus- or minus-iteration depend on $\mu_k$. The following result is an immediate consequence of Lemma 3.2.

Corollary 3.1 Let $\mu^* := \min\{1,\; \min_{k=0,1,\dots} \mu_k\} > 0$. Then, at each plus- or minus-iteration,
$$\gamma^{k+1} \ge \gamma^k\left(1 + \frac{\mu^*\delta_k^2}{4(1+\delta_k)}\right). \qquad (15)$$

We next analyze the complexity of Algorithm 3.2. For $\delta > 0$, let us define the following parameter:
$$\theta(\delta) := \min\{k : \delta_k \le \delta\}. \qquad (16)$$
Also, we denote by $\phi(\delta)$ the number of drop-iterations in the first $\theta(\delta)$ iterations of Algorithm 3.2.

Lemma 3.3 $\theta$ and $\phi$ satisfy the following relationships:
$$\phi(1) = 0, \qquad (17a)$$
$$\theta(1) = O(1/\mu^*), \qquad (17b)$$
$$\theta(1/2^i) - \theta(1/2^{i-1}) = O(2^i/\mu^*) + \phi(1/2^i) - \phi(1/2^{i-1}), \quad i = 1, 2, \dots \qquad (17c)$$

Proof. Note that Algorithm 3.2 cannot have any minus- or drop-iterations until $\delta_k \le 1$ (since $\delta_k^- \le 1$ by definition), which implies that $\phi(1) = 0$. Therefore, at each plus-iteration $k$ with $\delta_k > 1$, it follows from Corollary 3.1 that
$$\gamma^{k+1} \ge \gamma^k\left(1 + \frac{\mu^*\delta_k^2}{4(1+\delta_k)}\right) \ge \gamma^k\left(1 + \frac{\mu^*}{8}\right),$$
where we used the fact that $x^2/(1+x)$ is an increasing function of $x \ge 0$. Iterating the inequality above and using the fact that $\gamma^0 \le \gamma_{\mathcal{A}} \le 9\gamma^0$ (cf. Lemma 3.1), we obtain
$$\gamma_{\mathcal{A}} \ge \gamma^{k+1} \ge (1 + \mu^*/8)^{k+1}\gamma^0 \ge (1 + \mu^*/8)^{k+1}\gamma_{\mathcal{A}}/9,$$
which implies that $\theta(1) = O\left(\frac{\log 9}{\log(1+\mu^*/8)}\right) = O(1/\mu^*)$, where we used the inequality $\log(1+x) \ge x/(x+1)$ for all $x > -1$. This establishes (17b).

Let $i$ be any positive integer and let $k := \theta(1/2^{i-1})$. At each plus- or minus-iteration with $\delta_k > 1/2^i$, it follows from Corollary 3.1 that
$$\gamma^{k+1} \ge \gamma^k\left(1 + \frac{\mu^*\delta_k^2}{4(1+\delta_k)}\right) \ge \gamma^k\left(1 + \frac{\mu^*}{2^{i+2}(2^i+1)}\right).$$
At a drop-iteration, we only have $\gamma^{k+1} \ge \gamma^k$. Let $\phi_i := \phi(1/2^i) - \phi(1/2^{i-1})$ denote the number of drop-iterations between iteration number $\theta(1/2^{i-1})$ and iteration number $\theta(1/2^i)$ of Algorithm 3.2. Iterating the above inequality and using the fact that $\gamma^k \le \gamma_{\mathcal{A}} \le (1 + 1/2^{i-1})\gamma^k$ (cf. (10)), we can bound the number $\zeta$ of plus- or minus-iterations between iteration $\theta(1/2^{i-1})$ and iteration $\theta(1/2^i)$ using
$$\gamma_{\mathcal{A}} \ge \gamma^{k+\zeta+\phi_i} \ge \left(1 + \frac{\mu^*}{2^{i+2}(2^i+1)}\right)^{\zeta}\gamma^k,$$
which implies that
$$\zeta \le \frac{\log\left(1 + 1/2^{i-1}\right)}{\log\left(1 + \frac{\mu^*}{2^{i+2}(2^i+1)}\right)} \le \frac{1}{2^{i-1}}\cdot\frac{2^{i+2}(2^i+1) + \mu^*}{\mu^*} = O(2^i/\mu^*),$$
where we used the inequalities $\log(1+x) \le x$ and $\log(1+x) \ge x/(x+1)$.

This implies that $\theta(1/2^i) - \theta(1/2^{i-1}) \le \zeta + \phi_i = O(2^i/\mu^*) + \phi(1/2^i) - \phi(1/2^{i-1})$, which completes the proof. $\square$

We are now in a position to establish the iteration complexity of Algorithm 3.2.

Lemma 3.4 Let $\epsilon \in (0,1)$. Then, Algorithm 3.2 computes a $(1+\epsilon)$-approximate solution in $\theta(\epsilon) = O(1/(\mu^*\epsilon))$ iterations.

Proof. Let $i$ be a positive integer such that $1/2^i \le \epsilon < 1/2^{i-1}$. Then $\theta(\epsilon) \le \theta(1/2^i)$. By Lemma 3.3,
$$\theta(1/2^i) = \theta(1) + \sum_{j=1}^{i}\left[\theta(1/2^j) - \theta(1/2^{j-1})\right] = O(1/\mu^*) + \sum_{j=1}^{i} O(2^j/\mu^*) + \phi(1/2^i) = O(1/(\mu^*\epsilon)) + \phi(1/2^i),$$
where we used the fact that $2^i < 2/\epsilon$. The proof will be complete if we can establish that $\phi(1/2^i) = O(1/(\mu^*\epsilon))$. Note that we cannot bound the improvement from below at a drop-iteration. However, each such iteration can be coupled with the latest previous plus-iteration in which the component of $u$ that just dropped to zero was increased from zero. In order to account for the two initial positive components of $u^0$, we may have to increase the iteration count by two. It follows that $\phi(1/2^i) = O(1/(\mu^*\epsilon))$. $\square$

The following theorem establishes the overall complexity of Algorithm 3.2.

Theorem 3.1 Given $\mathcal{A} := \{a^1,\dots,a^m\} \subset \mathbb{R}^n$ with corresponding weights $\mathcal{W} = \{\omega_1,\dots,\omega_m\}$ and $\epsilon \in (0,1)$, Algorithm 3.2 computes a $(1+\epsilon)$-approximate solution for the instance $(\mathcal{A}, \mathcal{W})$ of the weighted Euclidean one-center problem in $O(mn/(\mu^*\epsilon))$ arithmetic operations.

Proof. Let $u^\eta$ denote the final iterate computed by Algorithm 3.2 and let $\gamma^\eta = \Phi(u^\eta)$. By (10), $\gamma^\eta \le \gamma_{\mathcal{A}} \le (1+\delta_\eta)\gamma^\eta$. Since $\delta_\eta \le (1+\epsilon)^2 - 1$ by the termination criterion, it follows that
$$(\gamma^\eta)^{1/2} \le \rho_{\mathcal{A}} \le \left[(1+\delta_\eta)\gamma^\eta\right]^{1/2} \le (1+\epsilon)(\gamma^\eta)^{1/2},$$
which implies that $(c^\eta, \rho(c^\eta)) = (c^\eta, [(1+\delta_\eta)\gamma^\eta]^{1/2})$ is a $(1+\epsilon)$-approximate solution.

At each iteration, the dominating work is the computation of the largest weighted distance from the current center, which can be performed in $O(mn)$ operations. The initial constant-factor approximation can also be computed in $O(mn)$ operations. Therefore, Algorithm 3.2 terminates in $O(mn/(\mu^*\epsilon))$ operations. $\square$

Next, we establish that Algorithm 3.2 computes an $\epsilon$-core set upon termination.

Theorem 3.2 Let $\epsilon \in (0,1)$ and let $u^\eta$ denote the final iterate computed by Algorithm 3.2. Then, $\mathcal{X}^\eta \subseteq \mathcal{A}$ is an $\epsilon$-core set of $\mathcal{A}$. Furthermore, $|\mathcal{X}^\eta| = O(1/(\mu^*\epsilon))$.

Proof. We first prove the second statement. Note that $\mathcal{X}^0$ is initialized with two elements and each iteration adds at most one element to $\mathcal{X}^k$. Therefore, $|\mathcal{X}^\eta| = O(1/(\mu^*\epsilon))$ by Lemma 3.4.

Note that $u^\eta$ is a feasible solution of the dual formulation of the instance $(\mathcal{X}^\eta, \mathcal{W}^\eta)$, where $\mathcal{W}^\eta := \{\omega_j : a^j \in \mathcal{X}^\eta\}$. Therefore,
$$\gamma^\eta \le \gamma_{\mathcal{X}^\eta} \le \gamma_{\mathcal{A}} \le (1+\delta_\eta)\gamma^\eta \le (1+\epsilon)^2\gamma^\eta,$$
where $\gamma_{\mathcal{X}^\eta}$ denotes the optimal value of the dual formulation corresponding to the instance $(\mathcal{X}^\eta, \mathcal{W}^\eta)$. It follows that $\rho_{\mathcal{X}^\eta} \le \rho_{\mathcal{A}} \le (1+\epsilon)\rho_{\mathcal{X}^\eta}$, where $\rho_{\mathcal{X}^\eta} = (\gamma_{\mathcal{X}^\eta})^{1/2}$, which implies that $\mathcal{X}^\eta$ is an $\epsilon$-core set of $\mathcal{A}$. $\square$

Note that each of the previous results depends on the parameter $\mu^*$, which can be determined only upon the termination of Algorithm 3.2. However, this parameter can be bounded below a priori by
$$\tau := \frac{\min_{i=1,\dots,m} \nu_i}{\max_{i=1,\dots,m} \nu_i}, \qquad (18)$$
where $\nu_i$ is defined as in (4), since each $\mu_k$ is the ratio of a convex combination of the $\nu_i$ to some $\nu_j$. Therefore, each of the results established in Theorem 3.1 and Theorem 3.2 holds true if $\mu^*$ is replaced by $\tau$. This implies that Algorithm 3.2 terminates in $O(mn/(\tau\epsilon))$ arithmetic operations and computes an $\epsilon$-core set of size $O(1/(\tau\epsilon))$ for $\epsilon \in (0,1)$. We remark that the overall complexity of Algorithm 3.2 and the asymptotic core set size reduce to $O(mn/\epsilon)$ and $O(1/\epsilon)$, respectively, for the special case of the minimum enclosing ball problem, since $\tau = 1$. These results match the currently best known bounds for the minimum enclosing ball problem (Yıldırım, 2007; Bădoiu and Clarkson, 2003).

3.3. Relation to Other Core Set Results

Recently, Clarkson (2008) studied the properties of several variants of the Frank-Wolfe algorithm for general concave maximization problems over the unit simplex, of which the dual formulation of the weighted Euclidean one-center problem is a special case. In particular, he proposed a general definition of an additive core set, based on an additive error on the optimal value as opposed to the multiplicative one (see (3)) adopted in our setting. He derived upper bounds on the size of an additive core set for the general problem. He also established that his definition of an additive core set almost coincides with the usual definition of a multiplicative core set in the special case of the dual formulation of the minimum enclosing ball problem. As such, his results imply the known bound of $O(1/\epsilon)$ on the size of an $\epsilon$-core set for this problem.

In this subsection, we discuss the relations between his bound on the size of an additive core set and our bound on the size of a multiplicative one. In particular, we establish that Clarkson's additive core set result can be transformed into a multiplicative core set result for the weighted Euclidean one-center problem. However, it turns out that these implied bounds are not asymptotically better than our bounds.

3.3.1. The Nonlinearity Measure $C_\Psi$

Consider the following optimization problem:
$$\max_u \; \Psi(u) \quad \text{s.t.} \quad u \in \mathcal{S}, \qquad (19)$$
where $\Psi : \mathbb{R}^m \to \mathbb{R}$ is a twice differentiable concave function and $\mathcal{S} := \{u \in \mathbb{R}^m : e^T u = 1,\; u \ge 0\}$ is the unit simplex. Clearly, this class of problems includes the dual optimization problem (D). Using the Frank-Wolfe algorithm and some of its variants, Clarkson established that, for any $\delta' > 0$, one can compute a feasible solution $u \in \mathcal{S}$ such that $\Psi(u^*) - \Psi(u) \le \delta'$, where $u^* \in \mathcal{S}$ is an optimal solution of (19), in at most $O(C_\Psi/\delta')$ iterations. Since his initial solution has only one nonzero component, $u$ has at most $O(C_\Psi/\delta')$ positive components due to the nature of add-iterations in the Frank-Wolfe algorithm. Here $C_\Psi$ is a measure of the nonlinearity of the objective function $\Psi$ and is defined as
$$C_\Psi := \sup_{u,z \in \mathcal{S},\; y = u + \alpha(z-u) \in \mathcal{S}} \frac{1}{\alpha^2}\left[\Psi(u) + (y-u)^T\nabla\Psi(u) - \Psi(y)\right]. \qquad (20)$$

Essentially, $C_\Psi$ is an upper bound on the scaled difference between the function $\Psi$ and the linear approximation to $\Psi$, measured over all feasible solutions. For instance, $C_\Psi = 0$ for a linear function $\Psi$. Therefore, $C_\Psi$ can be viewed as a measure of the flatness of $\Psi$ (Clarkson, 2008).

Clarkson's upper bound on the size of the additive core set is useful if $C_\Psi$ can be bounded above for a given function $\Psi$. For instance, Clarkson showed that an upper bound on $C_\Psi$ can be easily derived if $\Psi$ is a quadratic function, which is the case for the objective function of the dual formulation of the minimum enclosing ball problem. We now establish that $C_\Phi$ can be similarly bounded above for the objective function $\Phi$ of the problem (D), even though $\Phi$ is not a quadratic function for the weighted problem. Recall that
$$\Phi(u) = \sum_{i=1}^m u_i\nu_i(a^i)^Ta^i - \frac{1}{u^T\nu}\left(\sum_{i=1}^m u_i\nu_i a^i\right)^T\left(\sum_{i=1}^m u_i\nu_i a^i\right).$$
It follows that
$$\nabla\Phi(u) = d - \frac{2}{u^T\nu}\,Mu + \frac{u^TMu}{(u^T\nu)^2}\,\nu, \qquad \nabla^2\Phi(u) = -\frac{2}{(u^T\nu)^3}\,P(u)\,M\,P(u)^T,$$
where $d \in \mathbb{R}^m$ is defined by $d_i := \nu_i\|a^i\|^2$, $i = 1,\dots,m$; $\nu := [\nu_1,\dots,\nu_m]^T$; $M := \mathrm{Diag}(\nu)A^TA\,\mathrm{Diag}(\nu)$; and $P(u) := \nu u^T - (u^T\nu)I$. By the second-order mean value theorem,
$$C_\Phi \le \sup_{u,z \in \mathcal{S}}\; -\frac{1}{2}(z-u)^T\nabla^2\Phi(\tilde u)(z-u),$$
where $\tilde u \in \mathcal{S}$ is a point that lies on the line segment from $u$ to $z$. Therefore,
$$C_\Phi \le \sup_{u,z \in \mathcal{S}}\; \frac{1}{(\bar u^T\nu)^3}(z-u)^T P(\bar u)\,M\,P(\bar u)^T(z-u),$$
where $\bar u \in \mathcal{S}$ is any point that lies on the line segment from $u$ to $z$. The factor $1/(\bar u^T\nu)^3$ can be bounded above by $1/(\min_i \nu_i)^3$. Using the fact that $\bar u = u + \beta(z-u)$ for some $\beta \in \mathbb{R}$, it follows that
$$\left\|P(\bar u)^T(z-u)\right\| = \left\|(\nu^Tz)u - (u^T\nu)z\right\| \le (\nu^Tz)\|u\| + (u^T\nu)\|z\| \le 2\max_i \nu_i,$$
since $u$ and $z$ are on the unit simplex and have Euclidean norm at most one.

Furthermore, $\|M\| \le \|A\|^2\,\|\mathrm{Diag}(\nu)\|^2 = (\max_i \nu_i)^2\|A\|^2$, where $\|\cdot\|$ denotes the operator norm of a matrix. Therefore, we obtain
$$C_\Phi \le \frac{4(\max_i \nu_i)^4\,\|A\|^2}{(\min_i \nu_i)^3} = \frac{4(\max_i \nu_i)\,\|A\|^2}{\tau^3}, \qquad (21)$$
where $\tau$ is defined as in (18). By (21), we immediately obtain an upper bound of $O\left((\max_i \nu_i)\|A\|^2/(\tau^3\delta')\right)$ on the size of a $\delta'$-additive core set for the weighted Euclidean one-center problem.

3.3.2. Additive versus Multiplicative Error

In this subsection, given a feasible solution of (D) that has a small multiplicative (or relative) error with respect to the optimal value $\Phi(u^*)$, we establish a bound on the corresponding additive error. This will enable us to relate our bounds to those arising from Clarkson's results.

Given $\epsilon > 0$, Algorithm 3.2 computes a feasible solution $u^k \in \mathcal{S}$ such that
$$\Phi(u^k) \le \Phi(u^*) \le (1+\delta)\Phi(u^k), \qquad (22)$$
where $\delta \le (1+\epsilon)^2 - 1 = O(\epsilon)$. Therefore, (22) is satisfied with an additive error $\delta'$ (i.e., $\Phi(u^*) - \Phi(u^k) \le \delta'$ implies (22)) if
$$\delta' \le \delta\,\Phi(u^k). \qquad (23)$$
We now establish a lower bound on $\Phi(u^k)$, independent of $k$, in order to compute a lower bound on $C_\Phi/\delta'$. Note that
$$\Phi(u^k) \ge \Phi(u^0) = \frac{\|a^j - a^{j'}\|^2}{(1/\omega_j + 1/\omega_{j'})^2} \ge \frac{\|a^j - a^l\|^2}{(1/\omega_j + 1/\omega_l)^2} \ge \frac{1}{4}\left(\min_i \nu_i\right)\|a^j - a^l\|^2,$$
where $j$ and $j'$ are defined as in Algorithm 3.1 and $a^l \in \mathcal{A}$ is the point with the largest Euclidean distance from $a^j$.

It follows that (23) is satisfied if
$$\delta' \le \frac{1}{4}\,\delta\left(\min_i \nu_i\right)\|a^j - a^l\|^2. \qquad (24)$$
We remark that the inequality (24), which establishes the relation between $\delta'$ and $\delta$, is asymptotically tight, as illustrated by the following example. Let $\mathcal{A} = \{-1, 0, 1\} \subset \mathbb{R}$ and $\mathcal{W} = \{1, 1+\kappa, 1\}$, where $\kappa > 0$. It is easy to verify that $\Phi(u^0) = [(1+\kappa)/(2+\kappa)]^2$ and $\Phi(u^*) = 1$. Clearly, $\Phi(u^*) = (1+\delta)\Phi(u^0)$ with $\delta = [1/(1+\kappa)]^2 + 2/(1+\kappa)$, and $\Phi(u^*) - \Phi(u^0) = \delta'$ with $\delta' = (2\kappa+3)/(2+\kappa)^2$. Therefore, both $\delta'$ and the right-hand side of (24) tend to $3/4$ as $\kappa$ goes to zero.

Next, we establish a lower bound on $C_\Phi$. Recall that
$$C_\Phi := \sup_{u,z \in \mathcal{S},\; y = (1-\alpha)u + \alpha z \in \mathcal{S}} \frac{1}{\alpha^2}\left[\Phi(u) + (y-u)^T\nabla\Phi(u) - \Phi(y)\right],$$
which implies that any feasible choices of $u, y, z \in \mathcal{S}$ will yield a lower bound on $C_\Phi$. Let $u = e^l$, $z = e^j$, and $y = z = e^j$ (i.e., $\alpha = 1$), where the indices $j$ and $l$ are chosen such that $a^j \in \mathcal{A}$ is the point with the largest weight $\omega_j$ and $a^l \in \mathcal{A}$ is the point with the largest Euclidean distance from $a^j$. With these choices, we have $\Phi(u) = \Phi(y) = 0$. Hence,
$$C_\Phi \ge (e^j - e^l)^T\left[d - \frac{2}{\nu_l}Me^l + \frac{M_{ll}}{\nu_l^2}\,\nu\right] = \nu_j\|a^j\|^2 - \nu_l\|a^l\|^2 - \frac{2}{\nu_l}M_{jl} + \frac{2}{\nu_l}M_{ll} + \frac{M_{ll}\nu_j}{\nu_l^2} - \frac{M_{ll}}{\nu_l} = \nu_j\|a^j\|^2 - 2\nu_j(a^j)^Ta^l + \nu_j\|a^l\|^2,$$
where we used the fact that $M_{ik} = \nu_i\nu_k(a^i)^Ta^k$. Therefore,
$$C_\Phi \ge \nu_j\|a^j - a^l\|^2. \qquad (25)$$
Combining (24) with (25), it follows that
$$\frac{C_\Phi}{\delta'} \ge \frac{4\nu_j}{\delta\,\min_i \nu_i} \ge \frac{4}{\tau\delta}, \qquad (26)$$
which implies that Clarkson's result does not improve our upper bound of $O(1/(\tau\epsilon)) = O(1/(\tau\delta))$ on the size of an $\epsilon$-core set, even if a matching upper bound for $C_\Phi$ could be found.
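The limits in the tightness example above are easy to confirm numerically. The following short check (plain Python, purely illustrative) evaluates $\delta$, $\delta'$, and the right-hand side of (24) for shrinking $\kappa$:

```python
# Tightness check for (24): A = {-1, 0, 1}, W = {1, 1+kappa, 1}.
for kappa in [1.0, 0.1, 0.01, 0.001]:
    phi_u0 = ((1 + kappa) / (2 + kappa)) ** 2   # value of the initial solution
    phi_opt = 1.0                               # optimal value (center c = 0)
    delta = phi_opt / phi_u0 - 1.0              # multiplicative error of u^0
    delta_add = phi_opt - phi_u0                # additive error of u^0
    # min_i nu_i = 1 and ||a^j - a^l||^2 = 1 in this instance
    rhs_24 = 0.25 * delta
    print(kappa, delta_add, rhs_24)             # both tend to 3/4 as kappa -> 0
```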

22 We remark that Clarkson s analysis is quite general and some of his results yield the tightest possible bounds on the size of core sets as in the case of the minimum enclosing ball problem. However, for specific problems such as the problem considered in this manuscript, our line of analysis may lead to core set bounds that are at least as good as the ones implied by his results. Furthermore, as pointed out in Clarkson, 2008, there are certain problems of the form 19 with objective functions Ψ for which CΨ is unbounded. For instance, the objective function of the dual formulation of the minimum enclosing ellipsoid problem satisfies this property. For such problems, bounds that depend on CΨ are not useful whereas the line of analysis adopted in this paper may still yield small core set results Kumar and Yıldırım, 2005; Todd and Yıldırım, These observations seem to suggest that problem-specific approaches, though narrower in scope, may lead to sharper bounds than a general-purpose approach with a much wider scope. 4. Computational Experiments In this section, we present and discuss our computational results. We implemented Algorithm 3.2 in MATLAB and conducted our computational experiments on input sets generated randomly using various distributions. Specifically, we considered the following two classes of input sets: 1. Normal distribution: Each coordinate of each input point was generated using the standard normal distribution. 2. Uniform distribution: Each coordinate of each input point was generated using the uniform distribution on the interval 0, 1. For each input point, the corresponding weight was chosen uniformly from the interval 0, 1. Our experiments were performed on a notebook computer with Intel Core 2 CPU T Ghz processor, 2GB RAM and a 120GB, 5400rpm hard drive. Our first experiment provides information about the performance of Algorithm 3.2 on instances of the weighted Euclidean one-center problem in small dimensions see Table 4. For each instance, the number of points m was set at All points were uniformly generated from the n-dimensional unit cube. We used ɛ = 10 4 in our experiments. Table 4 reports, for each dimension n, the core set sizes, CPU times, number of iterations, value of τ defined by 18, and µ defined as in Corollary 3.1 averaged over 50 runs. 22

[Table 1: Computational results with the uniform distribution. Columns: $n$, $|\mathcal{X}|$, time (sec.), iterations, $\tau$, $\mu^*$. The numerical entries of the table could not be recovered from the source.]

Table 1 reveals that Algorithm 3.2 is capable of quickly computing a highly accurate solution in small dimensions. In particular, the sizes of the core sets computed by the algorithm are significantly smaller than the worst-case theoretical estimate. Furthermore, the sizes of the core sets are also considerably smaller than the numbers of iterations, which suggests that drop-iterations may be effective in maintaining small core sets. Next, the values of $\tau$ are much smaller than the values of $\mu^*$, which implies that $\tau$ can be a rather loose lower bound on $\mu^*$. Therefore, the expression of the complexity results in terms of $\tau$ seems to be a gross overestimate, at least for the experimental setup used in Table 1. Finally, we remark that Drezner and Gavish used essentially the same experimental settings in the implementation of their ellipsoid algorithm (Drezner and Gavish, 1985). The numbers of iterations reported in Table 1 are noticeably smaller than their counterparts in their implementation. We find it remarkable that a simple first-order algorithm can outperform a polynomial-time algorithm in terms of the total number of iterations.

Figure 1 presents several graphs in an attempt to provide further insights into the performance of Algorithm 3.2 for larger instances of the weighted Euclidean one-center problem. In particular, we aim to establish how the practical performance is affected by input parameters such as the number of points $m$, the dimension $n$, and the accuracy $\epsilon$, using the two classes of input sets. Figure 1 is organized such that the vertical axis denotes the CPU time on each graph in the first column and the size of the core set on each graph in the second column. In each row, exactly two of the three parameters $m$, $n$, and $\epsilon$ are fixed, and the third one is varied along the horizontal axis.

[Figure 1: Various experimental results from the implementation of our algorithm. Each panel compares the uniform and normal distributions; the first column reports CPU time in seconds and the second column core set size. Panels: (a), (b) versus the dimension $n$ with $m = 10^4$, $\epsilon = 10^{-2}$; (c), (d) versus $n$ with $m = 10^4$, $\epsilon = 10^{-3}$; (e), (f) versus the number of points $m$ with $n = 50$, $\epsilon = 10^{-3}$; (g), (h) versus the accuracy $\epsilon = 2^{-i}$ with $n = 50$, $m = 10^4$.]

All of the data points in Figure 1 were generated by averaging the CPU times and core set sizes over 10 runs.

Figures 1(a) and 1(b) present the CPU times of Algorithm 3.2 and the sizes of the core sets returned by the algorithm, respectively, for different values of the dimension $n$, using $m = 10^4$ and $\epsilon = 10^{-2}$. Figures 1(c) and 1(d) present the corresponding results with the same experimental setup, except for the choice of $10^{-3}$ for $\epsilon$. A study of these graphs reveals that the CPU times tend to increase linearly with the dimension $n$ for both classes of input sets, as predicted by the theoretical complexity results. The sizes of the core sets seem to exhibit a weaker dependence on $n$ but are usually much smaller than the corresponding worst-case estimate of $O(1/(\tau\epsilon))$. Clearly, both the CPU times and the sizes of the core sets increase as $\epsilon$ decreases.

The CPU times and the sizes of the core sets for different numbers of input points $m$ are presented in Figures 1(e) and 1(f), using $n = 50$ and $\epsilon = 10^{-3}$. Once again, the CPU times seem to exhibit a linear dependence on $m$ for each class of input sets. On the other hand, there seems to be no relation between the size of the core set and the number of input points.

Finally, Figures 1(g) and 1(h) illustrate the relationship between the performance of Algorithm 3.2 and the accuracy $\epsilon$, using $n = 50$ and $m = 10^4$. The accuracy $\epsilon$ is chosen as decreasing powers of 2. Note that the horizontal axis has a logarithmic scale. As predicted by our theoretical analysis, both the CPU times and the core set sizes increase as $\epsilon$ decreases for both classes of input sets. It is worth noticing that core set sizes tend to increase only linearly with respect to $\log(1/\epsilon)$, which contributes to the efficiency of Algorithm 3.2 in practice.

In conclusion, Algorithm 3.2 seems to be able to compute an approximate solution for larger instances of the weighted Euclidean one-center problem in a very reasonable amount of time. We remark that the CPU times are remarkably small; in particular, an instance of the problem with $10^6$ input points in 50 dimensions can be solved in under ten seconds. Also notable is the fact that core set sizes tend to be quite small, which is one of the main driving forces behind the practical efficiency of Algorithm 3.2.

26 the previously known results for the minimum enclosing ball problem to a larger class of problems. Computational experiments reveal the efficiency of our algorithm in practice. In the near future, we intend to work on different variants of this problem such as the weighted Euclidean one-center problem with outliers. Another interesting research direction would be the investigation of the tightest bounds on the size of core sets. 6. Acknowledgments The first author was partially supported by NSF through CAREER Grant CCF , and the second author by TÜBİTAK Turkish Scientific and Technological Research Council Grant 107M411 and by Bilkent University Faculty Development Grant. References Agarwal, P., S. Har-Peled, K. Varadarajan Geometric approximations via coresets. Combinatorial and Computational Geometry - MSRI Publications Bădoiu, M., K. L. Clarkson Optimal core-sets for balls. Unpublished manuscript. Bădoiu, M., K. L. Clarkson Smaller core-sets for balls. Proceedings of the 14th Annual Symposium on Discrete Algorithms Bădoiu, M., S. Har-Peled, P. Indyk Approximate clustering via core-sets. Proceedings of 34th Annual ACM Symposium on Theory of Computing Chandrasekaran, R The weighted Euclidean 1-center problem. Operations Research Letters Clarkson, K. L Coresets, sparse greedy approximation and the Frank-Wolfe algorithm. Proceedings of the 19th Annual Symposium on Discrete Algorithms. To appear. Drezner, Z., B. Gavish ε-approximations for multidimensional weighted location problems. Operations Research Francis, R. L Some aspects of a minimax location problem. Operations Research

Frank, M., P. Wolfe. 1956. An algorithm for quadratic programming. Naval Research Logistics Quarterly.

Grötschel, M., L. Lovász, A. Schrijver. 1988. Geometric Algorithms and Combinatorial Optimization, Algorithms and Combinatorics, vol. 2. Springer-Verlag.

Hansen, P., D. Peeters, D. Richard, J.-F. Thisse. 1985. The minisum and minimax location problems revisited. Operations Research.

Kumar, P., J. S. B. Mitchell, E. A. Yıldırım. 2003. Approximate minimum enclosing balls in high dimensions using core-sets. The ACM Journal of Experimental Algorithmics 8.

Kumar, P., E. A. Yıldırım. 2005. Minimum volume enclosing ellipsoids and core sets. Journal of Optimization Theory and Applications.

Megiddo, N. 1983. The weighted Euclidean 1-center problem. Mathematics of Operations Research.

Megiddo, N. 1989. On the ball spanned by balls. Discrete and Computational Geometry.

Todd, M. J., E. A. Yıldırım. 2007. On Khachiyan's algorithm for the computation of minimum volume enclosing ellipsoids. Discrete Applied Mathematics.

Tsang, I., J. Kwok, P.-M. Cheung. 2005. Very large SVM training using core vector machines. Robert G. Cowell, Zoubin Ghahramani, eds., Proceedings of the Tenth International Workshop on Artificial Intelligence and Statistics, Jan 6-8, 2005, Savannah Hotel, Barbados. Society for Artificial Intelligence and Statistics.

Wolfe, P. 1970. Convergence theory in nonlinear programming. J. Abadie, ed., Integer and Nonlinear Programming. North-Holland, Amsterdam.

Yıldırım, E. A. 2007. Two algorithms for the minimum enclosing ball problem. Tech. rep., Bilkent University, Department of Industrial Engineering, Bilkent, Ankara, Turkey.

Zhou, G., K. C. Toh, B. Sun. 2005. Efficient algorithms for the smallest enclosing ball problem. Computational Optimization and Applications.


Functional Analysis. Franck Sueur Metric spaces Definitions Completeness Compactness Separability... Functional Analysis Franck Sueur 2018-2019 Contents 1 Metric spaces 1 1.1 Definitions........................................ 1 1.2 Completeness...................................... 3 1.3 Compactness......................................

More information

A Unified Analysis of Nonconvex Optimization Duality and Penalty Methods with General Augmenting Functions

A Unified Analysis of Nonconvex Optimization Duality and Penalty Methods with General Augmenting Functions A Unified Analysis of Nonconvex Optimization Duality and Penalty Methods with General Augmenting Functions Angelia Nedić and Asuman Ozdaglar April 16, 2006 Abstract In this paper, we study a unifying framework

More information

Absolute Value Programming

Absolute Value Programming O. L. Mangasarian Absolute Value Programming Abstract. We investigate equations, inequalities and mathematical programs involving absolute values of variables such as the equation Ax + B x = b, where A

More information

Appendix PRELIMINARIES 1. THEOREMS OF ALTERNATIVES FOR SYSTEMS OF LINEAR CONSTRAINTS

Appendix PRELIMINARIES 1. THEOREMS OF ALTERNATIVES FOR SYSTEMS OF LINEAR CONSTRAINTS Appendix PRELIMINARIES 1. THEOREMS OF ALTERNATIVES FOR SYSTEMS OF LINEAR CONSTRAINTS Here we consider systems of linear constraints, consisting of equations or inequalities or both. A feasible solution

More information

Lecture 9 Monotone VIs/CPs Properties of cones and some existence results. October 6, 2008

Lecture 9 Monotone VIs/CPs Properties of cones and some existence results. October 6, 2008 Lecture 9 Monotone VIs/CPs Properties of cones and some existence results October 6, 2008 Outline Properties of cones Existence results for monotone CPs/VIs Polyhedrality of solution sets Game theory:

More information

A strongly polynomial algorithm for linear systems having a binary solution

A strongly polynomial algorithm for linear systems having a binary solution A strongly polynomial algorithm for linear systems having a binary solution Sergei Chubanov Institute of Information Systems at the University of Siegen, Germany e-mail: sergei.chubanov@uni-siegen.de 7th

More information

Math 61CM - Solutions to homework 6

Math 61CM - Solutions to homework 6 Math 61CM - Solutions to homework 6 Cédric De Groote November 5 th, 2018 Problem 1: (i) Give an example of a metric space X such that not all Cauchy sequences in X are convergent. (ii) Let X be a metric

More information

7. Lecture notes on the ellipsoid algorithm

7. Lecture notes on the ellipsoid algorithm Massachusetts Institute of Technology Michel X. Goemans 18.433: Combinatorial Optimization 7. Lecture notes on the ellipsoid algorithm The simplex algorithm was the first algorithm proposed for linear

More information

Some Properties of the Augmented Lagrangian in Cone Constrained Optimization

Some Properties of the Augmented Lagrangian in Cone Constrained Optimization MATHEMATICS OF OPERATIONS RESEARCH Vol. 29, No. 3, August 2004, pp. 479 491 issn 0364-765X eissn 1526-5471 04 2903 0479 informs doi 10.1287/moor.1040.0103 2004 INFORMS Some Properties of the Augmented

More information

EE/ACM Applications of Convex Optimization in Signal Processing and Communications Lecture 17

EE/ACM Applications of Convex Optimization in Signal Processing and Communications Lecture 17 EE/ACM 150 - Applications of Convex Optimization in Signal Processing and Communications Lecture 17 Andre Tkacenko Signal Processing Research Group Jet Propulsion Laboratory May 29, 2012 Andre Tkacenko

More information

Interior-Point Methods for Linear Optimization

Interior-Point Methods for Linear Optimization Interior-Point Methods for Linear Optimization Robert M. Freund and Jorge Vera March, 204 c 204 Robert M. Freund and Jorge Vera. All rights reserved. Linear Optimization with a Logarithmic Barrier Function

More information

3.10 Lagrangian relaxation

3.10 Lagrangian relaxation 3.10 Lagrangian relaxation Consider a generic ILP problem min {c t x : Ax b, Dx d, x Z n } with integer coefficients. Suppose Dx d are the complicating constraints. Often the linear relaxation and the

More information

Lagrangian-Conic Relaxations, Part I: A Unified Framework and Its Applications to Quadratic Optimization Problems

Lagrangian-Conic Relaxations, Part I: A Unified Framework and Its Applications to Quadratic Optimization Problems Lagrangian-Conic Relaxations, Part I: A Unified Framework and Its Applications to Quadratic Optimization Problems Naohiko Arima, Sunyoung Kim, Masakazu Kojima, and Kim-Chuan Toh Abstract. In Part I of

More information

Primal-dual first-order methods with O(1/ǫ) iteration-complexity for cone programming

Primal-dual first-order methods with O(1/ǫ) iteration-complexity for cone programming Mathematical Programming manuscript No. (will be inserted by the editor) Primal-dual first-order methods with O(1/ǫ) iteration-complexity for cone programming Guanghui Lan Zhaosong Lu Renato D. C. Monteiro

More information

3. Linear Programming and Polyhedral Combinatorics

3. Linear Programming and Polyhedral Combinatorics Massachusetts Institute of Technology 18.433: Combinatorial Optimization Michel X. Goemans February 28th, 2013 3. Linear Programming and Polyhedral Combinatorics Summary of what was seen in the introductory

More information

Duality Theory of Constrained Optimization

Duality Theory of Constrained Optimization Duality Theory of Constrained Optimization Robert M. Freund April, 2014 c 2014 Massachusetts Institute of Technology. All rights reserved. 1 2 1 The Practical Importance of Duality Duality is pervasive

More information

Optimization Theory. A Concise Introduction. Jiongmin Yong

Optimization Theory. A Concise Introduction. Jiongmin Yong October 11, 017 16:5 ws-book9x6 Book Title Optimization Theory 017-08-Lecture Notes page 1 1 Optimization Theory A Concise Introduction Jiongmin Yong Optimization Theory 017-08-Lecture Notes page Optimization

More information

UNIQUENESS OF POSITIVE SOLUTION TO SOME COUPLED COOPERATIVE VARIATIONAL ELLIPTIC SYSTEMS

UNIQUENESS OF POSITIVE SOLUTION TO SOME COUPLED COOPERATIVE VARIATIONAL ELLIPTIC SYSTEMS TRANSACTIONS OF THE AMERICAN MATHEMATICAL SOCIETY Volume 00, Number 0, Pages 000 000 S 0002-9947(XX)0000-0 UNIQUENESS OF POSITIVE SOLUTION TO SOME COUPLED COOPERATIVE VARIATIONAL ELLIPTIC SYSTEMS YULIAN

More information

Navigation and Obstacle Avoidance via Backstepping for Mechanical Systems with Drift in the Closed Loop

Navigation and Obstacle Avoidance via Backstepping for Mechanical Systems with Drift in the Closed Loop Navigation and Obstacle Avoidance via Backstepping for Mechanical Systems with Drift in the Closed Loop Jan Maximilian Montenbruck, Mathias Bürger, Frank Allgöwer Abstract We study backstepping controllers

More information

A Simpler and Tighter Redundant Klee-Minty Construction

A Simpler and Tighter Redundant Klee-Minty Construction A Simpler and Tighter Redundant Klee-Minty Construction Eissa Nematollahi Tamás Terlaky October 19, 2006 Abstract By introducing redundant Klee-Minty examples, we have previously shown that the central

More information

Proximal-like contraction methods for monotone variational inequalities in a unified framework

Proximal-like contraction methods for monotone variational inequalities in a unified framework Proximal-like contraction methods for monotone variational inequalities in a unified framework Bingsheng He 1 Li-Zhi Liao 2 Xiang Wang Department of Mathematics, Nanjing University, Nanjing, 210093, China

More information

Nonsymmetric potential-reduction methods for general cones

Nonsymmetric potential-reduction methods for general cones CORE DISCUSSION PAPER 2006/34 Nonsymmetric potential-reduction methods for general cones Yu. Nesterov March 28, 2006 Abstract In this paper we propose two new nonsymmetric primal-dual potential-reduction

More information

A Parametric Simplex Algorithm for Linear Vector Optimization Problems

A Parametric Simplex Algorithm for Linear Vector Optimization Problems A Parametric Simplex Algorithm for Linear Vector Optimization Problems Birgit Rudloff Firdevs Ulus Robert Vanderbei July 9, 2015 Abstract In this paper, a parametric simplex algorithm for solving linear

More information

New Class of duality models in discrete minmax fractional programming based on second-order univexities

New Class of duality models in discrete minmax fractional programming based on second-order univexities STATISTICS, OPTIMIZATION AND INFORMATION COMPUTING Stat., Optim. Inf. Comput., Vol. 5, September 017, pp 6 77. Published online in International Academic Press www.iapress.org) New Class of duality models

More information

Convex Optimization on Large-Scale Domains Given by Linear Minimization Oracles

Convex Optimization on Large-Scale Domains Given by Linear Minimization Oracles Convex Optimization on Large-Scale Domains Given by Linear Minimization Oracles Arkadi Nemirovski H. Milton Stewart School of Industrial and Systems Engineering Georgia Institute of Technology Joint research

More information

Contents: 1. Minimization. 2. The theorem of Lions-Stampacchia for variational inequalities. 3. Γ -Convergence. 4. Duality mapping.

Contents: 1. Minimization. 2. The theorem of Lions-Stampacchia for variational inequalities. 3. Γ -Convergence. 4. Duality mapping. Minimization Contents: 1. Minimization. 2. The theorem of Lions-Stampacchia for variational inequalities. 3. Γ -Convergence. 4. Duality mapping. 1 Minimization A Topological Result. Let S be a topological

More information

AN ELEMENTARY PROOF OF THE SPECTRAL RADIUS FORMULA FOR MATRICES

AN ELEMENTARY PROOF OF THE SPECTRAL RADIUS FORMULA FOR MATRICES AN ELEMENTARY PROOF OF THE SPECTRAL RADIUS FORMULA FOR MATRICES JOEL A. TROPP Abstract. We present an elementary proof that the spectral radius of a matrix A may be obtained using the formula ρ(a) lim

More information

CS261: Problem Set #3

CS261: Problem Set #3 CS261: Problem Set #3 Due by 11:59 PM on Tuesday, February 23, 2016 Instructions: (1) Form a group of 1-3 students. You should turn in only one write-up for your entire group. (2) Submission instructions:

More information

5. Duality. Lagrangian

5. Duality. Lagrangian 5. Duality Convex Optimization Boyd & Vandenberghe Lagrange dual problem weak and strong duality geometric interpretation optimality conditions perturbation and sensitivity analysis examples generalized

More information

1.1.1 Algebraic Operations

1.1.1 Algebraic Operations 1.1.1 Algebraic Operations We need to learn how our basic algebraic operations interact. When confronted with many operations, we follow the order of operations: Parentheses Exponentials Multiplication

More information

IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 57, NO. 11, NOVEMBER On the Performance of Sparse Recovery

IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 57, NO. 11, NOVEMBER On the Performance of Sparse Recovery IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 57, NO. 11, NOVEMBER 2011 7255 On the Performance of Sparse Recovery Via `p-minimization (0 p 1) Meng Wang, Student Member, IEEE, Weiyu Xu, and Ao Tang, Senior

More information

Lecture 8 Plus properties, merit functions and gap functions. September 28, 2008

Lecture 8 Plus properties, merit functions and gap functions. September 28, 2008 Lecture 8 Plus properties, merit functions and gap functions September 28, 2008 Outline Plus-properties and F-uniqueness Equation reformulations of VI/CPs Merit functions Gap merit functions FP-I book:

More information

Contents Real Vector Spaces Linear Equations and Linear Inequalities Polyhedra Linear Programs and the Simplex Method Lagrangian Duality

Contents Real Vector Spaces Linear Equations and Linear Inequalities Polyhedra Linear Programs and the Simplex Method Lagrangian Duality Contents Introduction v Chapter 1. Real Vector Spaces 1 1.1. Linear and Affine Spaces 1 1.2. Maps and Matrices 4 1.3. Inner Products and Norms 7 1.4. Continuous and Differentiable Functions 11 Chapter

More information

Lecture notes on the ellipsoid algorithm

Lecture notes on the ellipsoid algorithm Massachusetts Institute of Technology Handout 1 18.433: Combinatorial Optimization May 14th, 007 Michel X. Goemans Lecture notes on the ellipsoid algorithm The simplex algorithm was the first algorithm

More information

12. Interior-point methods

12. Interior-point methods 12. Interior-point methods Convex Optimization Boyd & Vandenberghe inequality constrained minimization logarithmic barrier function and central path barrier method feasibility and phase I methods complexity

More information

Approximating the Minimum Closest Pair Distance and Nearest Neighbor Distances of Linearly Moving Points

Approximating the Minimum Closest Pair Distance and Nearest Neighbor Distances of Linearly Moving Points Approximating the Minimum Closest Pair Distance and Nearest Neighbor Distances of Linearly Moving Points Timothy M. Chan Zahed Rahmati Abstract Given a set of n moving points in R d, where each point moves

More information

A Note on Tiling under Tomographic Constraints

A Note on Tiling under Tomographic Constraints A Note on Tiling under Tomographic Constraints arxiv:cs/8v3 [cs.cc] 9 Apr Marek Chrobak Peter Couperus Christoph Dürr Gerhard Woeginger February, 8 Abstract Given a tiling of a D grid with several types

More information

Topological properties of Z p and Q p and Euclidean models

Topological properties of Z p and Q p and Euclidean models Topological properties of Z p and Q p and Euclidean models Samuel Trautwein, Esther Röder, Giorgio Barozzi November 3, 20 Topology of Q p vs Topology of R Both R and Q p are normed fields and complete

More information

A note on the minimal volume of almost cubic parallelepipeds

A note on the minimal volume of almost cubic parallelepipeds A note on the minimal volume of almost cubic parallelepipeds Daniele Micciancio Abstract We prove that the best way to reduce the volume of the n-dimensional unit cube by a linear transformation that maps

More information

Two-Term Disjunctions on the Second-Order Cone

Two-Term Disjunctions on the Second-Order Cone Noname manuscript No. (will be inserted by the editor) Two-Term Disjunctions on the Second-Order Cone Fatma Kılınç-Karzan Sercan Yıldız the date of receipt and acceptance should be inserted later Abstract

More information

Geometric problems. Chapter Projection on a set. The distance of a point x 0 R n to a closed set C R n, in the norm, is defined as

Geometric problems. Chapter Projection on a set. The distance of a point x 0 R n to a closed set C R n, in the norm, is defined as Chapter 8 Geometric problems 8.1 Projection on a set The distance of a point x 0 R n to a closed set C R n, in the norm, is defined as dist(x 0,C) = inf{ x 0 x x C}. The infimum here is always achieved.

More information

The Frank-Wolfe Algorithm:

The Frank-Wolfe Algorithm: The Frank-Wolfe Algorithm: New Results, and Connections to Statistical Boosting Paul Grigas, Robert Freund, and Rahul Mazumder http://web.mit.edu/rfreund/www/talks.html Massachusetts Institute of Technology

More information

Lecture 6: Conic Optimization September 8

Lecture 6: Conic Optimization September 8 IE 598: Big Data Optimization Fall 2016 Lecture 6: Conic Optimization September 8 Lecturer: Niao He Scriber: Juan Xu Overview In this lecture, we finish up our previous discussion on optimality conditions

More information

Research Note. A New Infeasible Interior-Point Algorithm with Full Nesterov-Todd Step for Semi-Definite Optimization

Research Note. A New Infeasible Interior-Point Algorithm with Full Nesterov-Todd Step for Semi-Definite Optimization Iranian Journal of Operations Research Vol. 4, No. 1, 2013, pp. 88-107 Research Note A New Infeasible Interior-Point Algorithm with Full Nesterov-Todd Step for Semi-Definite Optimization B. Kheirfam We

More information

The Ellipsoid Algorithm

The Ellipsoid Algorithm The Ellipsoid Algorithm John E. Mitchell Department of Mathematical Sciences RPI, Troy, NY 12180 USA 9 February 2018 Mitchell The Ellipsoid Algorithm 1 / 28 Introduction Outline 1 Introduction 2 Assumptions

More information

RESEARCH ARTICLE. An extension of the polytope of doubly stochastic matrices

RESEARCH ARTICLE. An extension of the polytope of doubly stochastic matrices Linear and Multilinear Algebra Vol. 00, No. 00, Month 200x, 1 15 RESEARCH ARTICLE An extension of the polytope of doubly stochastic matrices Richard A. Brualdi a and Geir Dahl b a Department of Mathematics,

More information

Near-Potential Games: Geometry and Dynamics

Near-Potential Games: Geometry and Dynamics Near-Potential Games: Geometry and Dynamics Ozan Candogan, Asuman Ozdaglar and Pablo A. Parrilo January 29, 2012 Abstract Potential games are a special class of games for which many adaptive user dynamics

More information

A LOCALIZATION PROPERTY AT THE BOUNDARY FOR MONGE-AMPERE EQUATION

A LOCALIZATION PROPERTY AT THE BOUNDARY FOR MONGE-AMPERE EQUATION A LOCALIZATION PROPERTY AT THE BOUNDARY FOR MONGE-AMPERE EQUATION O. SAVIN. Introduction In this paper we study the geometry of the sections for solutions to the Monge- Ampere equation det D 2 u = f, u

More information

January 29, Introduction to optimization and complexity. Outline. Introduction. Problem formulation. Convexity reminder. Optimality Conditions

January 29, Introduction to optimization and complexity. Outline. Introduction. Problem formulation. Convexity reminder. Optimality Conditions Olga Galinina olga.galinina@tut.fi ELT-53656 Network Analysis Dimensioning II Department of Electronics Communications Engineering Tampere University of Technology, Tampere, Finl January 29, 2014 1 2 3

More information

Optimality, Duality, Complementarity for Constrained Optimization

Optimality, Duality, Complementarity for Constrained Optimization Optimality, Duality, Complementarity for Constrained Optimization Stephen Wright University of Wisconsin-Madison May 2014 Wright (UW-Madison) Optimality, Duality, Complementarity May 2014 1 / 41 Linear

More information

Strongly Polynomial Algorithm for a Class of Minimum-Cost Flow Problems with Separable Convex Objectives

Strongly Polynomial Algorithm for a Class of Minimum-Cost Flow Problems with Separable Convex Objectives Strongly Polynomial Algorithm for a Class of Minimum-Cost Flow Problems with Separable Convex Objectives László A. Végh April 12, 2013 Abstract A well-studied nonlinear extension of the minimum-cost flow

More information

DS-GA 1002 Lecture notes 0 Fall Linear Algebra. These notes provide a review of basic concepts in linear algebra.

DS-GA 1002 Lecture notes 0 Fall Linear Algebra. These notes provide a review of basic concepts in linear algebra. DS-GA 1002 Lecture notes 0 Fall 2016 Linear Algebra These notes provide a review of basic concepts in linear algebra. 1 Vector spaces You are no doubt familiar with vectors in R 2 or R 3, i.e. [ ] 1.1

More information

Near-Potential Games: Geometry and Dynamics

Near-Potential Games: Geometry and Dynamics Near-Potential Games: Geometry and Dynamics Ozan Candogan, Asuman Ozdaglar and Pablo A. Parrilo September 6, 2011 Abstract Potential games are a special class of games for which many adaptive user dynamics

More information

Elements of Convex Optimization Theory

Elements of Convex Optimization Theory Elements of Convex Optimization Theory Costis Skiadas August 2015 This is a revised and extended version of Appendix A of Skiadas (2009), providing a self-contained overview of elements of convex optimization

More information

Support Vector Machines

Support Vector Machines Support Vector Machines Le Song Machine Learning I CSE 6740, Fall 2013 Naïve Bayes classifier Still use Bayes decision rule for classification P y x = P x y P y P x But assume p x y = 1 is fully factorized

More information

On the Power of Robust Solutions in Two-Stage Stochastic and Adaptive Optimization Problems

On the Power of Robust Solutions in Two-Stage Stochastic and Adaptive Optimization Problems MATHEMATICS OF OPERATIONS RESEARCH Vol. 35, No., May 010, pp. 84 305 issn 0364-765X eissn 156-5471 10 350 084 informs doi 10.187/moor.1090.0440 010 INFORMS On the Power of Robust Solutions in Two-Stage

More information

1 Problem Formulation

1 Problem Formulation Book Review Self-Learning Control of Finite Markov Chains by A. S. Poznyak, K. Najim, and E. Gómez-Ramírez Review by Benjamin Van Roy This book presents a collection of work on algorithms for learning

More information

A VARIATIONAL METHOD FOR THE ANALYSIS OF A MONOTONE SCHEME FOR THE MONGE-AMPÈRE EQUATION 1. INTRODUCTION

A VARIATIONAL METHOD FOR THE ANALYSIS OF A MONOTONE SCHEME FOR THE MONGE-AMPÈRE EQUATION 1. INTRODUCTION A VARIATIONAL METHOD FOR THE ANALYSIS OF A MONOTONE SCHEME FOR THE MONGE-AMPÈRE EQUATION GERARD AWANOU AND LEOPOLD MATAMBA MESSI ABSTRACT. We give a proof of existence of a solution to the discrete problem

More information

Inequality Constraints

Inequality Constraints Chapter 2 Inequality Constraints 2.1 Optimality Conditions Early in multivariate calculus we learn the significance of differentiability in finding minimizers. In this section we begin our study of the

More information

Computing closest stable non-negative matrices

Computing closest stable non-negative matrices Computing closest stable non-negative matrices Yu. Nesterov and V.Yu. Protasov August 22, 2017 Abstract Problem of finding the closest stable matrix for a dynamical system has many applications. It is

More information

Lecture 1 Introduction

Lecture 1 Introduction L. Vandenberghe EE236A (Fall 2013-14) Lecture 1 Introduction course overview linear optimization examples history approximate syllabus basic definitions linear optimization in vector and matrix notation

More information

A CONSTANT-FACTOR APPROXIMATION FOR MULTI-COVERING WITH DISKS

A CONSTANT-FACTOR APPROXIMATION FOR MULTI-COVERING WITH DISKS A CONSTANT-FACTOR APPROXIMATION FOR MULTI-COVERING WITH DISKS Santanu Bhowmick, Kasturi Varadarajan, and Shi-Ke Xue Abstract. We consider the following multi-covering problem with disks. We are given two

More information

Semidefinite Programming

Semidefinite Programming Chapter 2 Semidefinite Programming 2.0.1 Semi-definite programming (SDP) Given C M n, A i M n, i = 1, 2,..., m, and b R m, the semi-definite programming problem is to find a matrix X M n for the optimization

More information

March 2002, December Introduction. We investigate the facial structure of the convex hull of the mixed integer knapsack set

March 2002, December Introduction. We investigate the facial structure of the convex hull of the mixed integer knapsack set ON THE FACETS OF THE MIXED INTEGER KNAPSACK POLYHEDRON ALPER ATAMTÜRK Abstract. We study the mixed integer knapsack polyhedron, that is, the convex hull of the mixed integer set defined by an arbitrary

More information