A tight iteration-complexity upper bound for the MTY predictor-corrector algorithm via redundant Klee-Minty cubes


Murat Mut    Tamás Terlaky
Department of Industrial and Systems Engineering, Lehigh University, Bethlehem, PA
August, 2014

Abstract

It is an open question whether there is an interior-point algorithm for linear optimization problems with a lower iteration-complexity than the classical bound O(√n log(µ_0/µ_1)). This paper provides a negative answer to that question for a variant of the Mizuno-Todd-Ye predictor-corrector algorithm. In fact, we prove that for any ɛ > 0, there is a redundant Klee-Minty cube for which the aforementioned algorithm requires Ω(n^{1/2-ɛ}) iterations to reduce the barrier parameter by at least a constant. This is provably the first case of an adaptive step interior-point algorithm where the classical iteration-complexity upper bound is shown to be tight.

1 Introduction

The paper of Karmarkar [5] in 1984 launched the field of interior-point methods (IPMs). Since then, IPMs have changed the landscape of optimization theory and have been extended successfully to linear, nonlinear, and conic linear optimization [10]. For linear optimization (LO) problems, to reduce the barrier parameter from µ_0 to µ_1, the best known iteration-complexity upper bound is O(√n log(µ_0/µ_1)). In practice, however, IPMs require far fewer iterations than predicted by the theory. It has been conjectured that the required number of iterations grows logarithmically in the number of variables [4]. Sonnevend et al. [13] showed that for two distinct special classes of LO problems, we have the

complexity upper bounds O(n^{1/4} log(µ_0/µ_1)) and O(n^{3/8} log(µ_0/µ_1)). Using an anticipated iteration-complexity analysis, [6] gives an O(n^{1/4} log(µ_0/µ_1)) iteration-complexity bound for the Mizuno-Todd-Ye predictor-corrector (MTY P-C) algorithm. Huhn and Borgwardt [3] present a thorough probabilistic analysis of the iteration-complexity of IPMs and establish that, under the rotation-symmetry model with certain probabilistic assumptions, the average iteration-complexity is strongly polynomial. Another direction of research regarding the iteration-complexity of IPMs is to construct worst-case examples. Sonnevend et al. [13] showed that a variant of the MTY predictor-corrector algorithm requires Ω(n^{1/3}) iterations to reduce the duality gap by log n for certain LO problems. A similar result has been obtained by Todd [15] for the primal-dual affine scaling algorithm, and has later been extended by Todd and Ye [16] to long-step primal-dual IPMs; they showed that these algorithms take Ω(n^{1/3}) iterations to reduce the duality gap by a constant. In a series of papers [1, 2, 8, 9], LO problems have been constructed with central paths making a large number of sharp turns, with the intuitive idea that for a path-following algorithm each turn should lead to an extra Newton step. These constructions share a common feature: the (dual) feasible set is a perturbed Klee-Minty (KM) cube, and the central path visits all the vertices of the KM cube. In [9], for instance, the authors show that the central path makes Ω(√(n/log n)) sharp turns. A curvature integral developed in [13, 14] accurately estimates the number of iterations of a variant of the MTY predictor-corrector algorithm; see Section 2. This curvature integral is one of the main tools in our paper, and we will refer to this curvature as Sonnevend's curvature. In this paper, we build our work upon the KM construction in [9].
The main argument of the paper can be summarized as follows. We first prove that a KM construction [9], with a carefully chosen neighborhood of the central path which depends on the dimension of the cube, visits every vertex of the cube in such a way that following the central path within that neighborhood requires an exponential number of steps. By Theorem 2.1, this yields a large lower bound for the Sonnevend curvature. Then, by using a modified hybrid version of that construction as well as Theorem 2.1 once again, we are able to conclude that for any ɛ > 0, there is a redundant hybrid version of the KM cube for which the MTY predictor-corrector algorithm requires Ω(n^{1/2-ɛ} log(µ_0/µ_1)) iterations, where log(µ_0/µ_1) = O(log n). Hence, by a rigorous analysis, our modified KM construction provides the first case of an IPM, the MTY predictor-corrector algorithm, for which the classical iteration-complexity upper bound is essentially tight.
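The two bounds the argument plays against each other can be restated compactly, with µ_0 > µ_1 > 0; the notation here follows the summary above.

```latex
% Classical worst-case upper bound for path-following IPMs:
N_{\mathrm{MTY}} \;=\; O\!\left(\sqrt{n}\,\log\frac{\mu_0}{\mu_1}\right),
\qquad
% Lower bound established in this paper, for every fixed
% eps > 0 on a suitable redundant Klee-Minty instance:
N_{\mathrm{MTY}} \;=\; \Omega\!\left(n^{1/2-\epsilon}\,\log\frac{\mu_0}{\mu_1}\right).
```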

In the rest of this section, the basic terminology used in this paper is presented. Let A be an m × n matrix of full rank. For c ∈ R^n and b ∈ R^m, we consider the standard form primal and dual linear optimization problems

    min c^T x        max b^T y
    s.t. Ax = b      s.t. A^T y + s = c        (1)
         x ≥ 0,           s ≥ 0,

where x, s ∈ R^n and y ∈ R^m are vectors of variables. Denote the sets of primal and dual feasible solutions by P = {x ∈ R^n : Ax = b, x ≥ 0} and D = {(y, s) ∈ R^m × R^n : A^T y + s = c, s ≥ 0}, and the sets of strictly feasible primal and dual solutions by P^+ and D^+, respectively. Without loss of generality, see e.g. [12], we may assume that P^+ ≠ ∅ and D^+ ≠ ∅. For a parameter µ > 0 and a vector w > 0, the w-weighted path equations are given by

    Ax = b, x ≥ 0
    A^T y + s = c, s ≥ 0        (2)
    xs = µw,

where uv denotes [u_1 v_1, …, u_n v_n]^T for u, v ∈ R^n. For w = e, with e being the all-one vector, equation (2) gives the central path equations.

2 IPMs and Sonnevend's curvature of the central path

First we briefly review the algorithms relevant to this paper. Roughly speaking, path-following IPMs differ by the way the barrier parameter µ^+ := (1 − θ)µ is chosen and for which values of µ the Newton steps are calculated. While for short-step IPMs we have θ = Ω(1/√n), predictor-corrector type algorithms allow a larger θ, hence a larger reduction in µ. Given µ > 0 and β > 0, we define the β-neighborhood of the point on the central path corresponding to µ as

    N(β, µ) := {(x, s) ∈ P^+ × D^+ : ‖xs/µ − e‖ ≤ β}.        (3)

The β-neighborhood of the central path is defined as N(β) := ∪_{µ>0} N(β, µ). Both the algorithm of [14] and the MTY predictor-corrector algorithm use two nested neighborhoods N(β_0) and N(β_1) for 0 < β_0 < β_1 < 1. The MTY predictor-corrector algorithm alternates between two search directions: the predictor search direction is used within the smaller neighborhood N(β_0), and it aims to reduce µ to zero. Let (x, s) be the current iterate, (∆x, ∆s) the predictor search direction, and (x^+, s^+) := (x + θ∆x, s + θ∆s). The MTY

predictor-corrector algorithm and the algorithm in [14] differ in the way the value of θ is determined. In the MTY predictor-corrector algorithm, θ is determined as the largest step for which (x^+, s^+) stays within the larger neighborhood N(β_1). In the algorithm of [14], the value of θ is determined as the largest number for which ‖x^+ s^+ / µ^+ − ξ‖ ≤ β_1, where ξ = xs/µ. Then a pure centering step is taken, which takes the iterate back to the smaller neighborhood N(β_0) in such a way that the normalized duality gap µ = x^T s / n does not change. Both algorithms can take long steps; in fact, it is known [11, 14] that θ_k → 1 as k → ∞, where θ_k is the step length of the predictor direction at iteration k. For the rest of the paper, we will refer to both algorithms as the MTY predictor-corrector algorithm.

Sonnevend's curvature, introduced in [13], is closely related to the iteration-complexity of a variant of the MTY predictor-corrector algorithm. Let κ(µ) = ‖µ ẋ(µ) ṡ(µ)‖^{1/2}. Stoer et al. [14] proved that their predictor-corrector algorithm has a complexity bound which can be expressed in terms of κ(µ).

Theorem 2.1. [14] Let the nested neighborhood parameters β_0, β_1 of the MTY predictor-corrector algorithm satisfy β_0 + β_1 < 1/2. Let N be the number of iterations of the MTY predictor-corrector algorithm to reduce the barrier parameter from µ_0 to µ_1. Suppose κ(µ) ≥ ν for some constant ν > 0 on µ ∈ [µ_1, µ_0]. Then for some universal constants C_1 and C_2 that depend only on the neighborhood of the central path, we have

    C_3 ∫_{µ_1}^{µ_0} κ(µ)/µ dµ ≤ N ≤ C_1 ∫_{µ_1}^{µ_0} κ(µ)/µ dµ + C_2 log(µ_0/µ_1).

Constant C_3 depends on ν as well as on the neighborhood of the central path.

The following proposition states the basic properties of Sonnevend's curvature.

Proposition 2.2. [13] The following hold.
1. We have κ(µ)^2 = ‖ µṡ(µ)/s(µ) − (µṡ(µ)/s(µ))^2 ‖.
2. We have ‖µṡ(µ)/s(µ)‖ ≤ √n and κ(µ) ≤ √(2n), implying that ∫_{µ_1}^{µ_0} κ(µ)/µ dµ = O(√n log(µ_0/µ_1)).
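As a concrete illustration of these definitions on a hypothetical toy instance (not an instance from the paper), the central path and κ(µ) can be computed in closed form for max y s.t. 0 ≤ y ≤ 1, written in the dual form of (1) with A = [−1, 1], b = 1, c = (0, 1)^T, so that s = (y, 1 − y) and n = 2. A sketch, in which the Euclidean norm and the finite-difference step are assumptions:

```python
import numpy as np

def central_path(mu):
    # FOC of max y + mu*(log y + log(1-y)): y^2 - (1-2mu)y - mu = 0.
    y = ((1 - 2*mu) + np.sqrt((1 - 2*mu)**2 + 4*mu)) / 2
    s = np.array([y, 1 - y])
    x = mu / s                    # complementarity xs = mu*e on the path
    return x, s

def kappa(mu, h=1e-6):
    # kappa(mu) = ||mu * x'(mu) * s'(mu)||^(1/2), central differences.
    xp, sp = central_path(mu + h)
    xm, sm = central_path(mu - h)
    dx, ds = (xp - xm) / (2*h), (sp - sm) / (2*h)
    return np.sqrt(np.linalg.norm(mu * dx * ds))

def kappa_via_identity(mu, h=1e-6):
    # Proposition 2.2, part 1: kappa^2 = ||mu*s'/s - (mu*s'/s)^2||.
    _, sp = central_path(mu + h)
    _, sm = central_path(mu - h)
    _, s = central_path(mu)
    w = mu * (sp - sm) / (2*h) / s
    return np.linalg.norm(w - w**2) ** 0.5
```

Both routines agree, and the values respect the bound κ(µ) ≤ √(2n) = 2 from Proposition 2.2, part 2.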

3 KM cube construction

First we recall the KM construction in [9] and review its fundamental properties.

    max −y_m
    s.t. 0 ≤ y_1 ≤ 1
         ρy_{k-1} ≤ y_k ≤ 1 − ρy_{k-1}   for k = 2, …, m
         0 ≤ d_1 + y_1   repeated h_1 times
         0 ≤ d_2 + y_2   repeated h_2 times        (5)
         ⋮
         0 ≤ d_m + y_m   repeated h_m times.

Certain variants of the simplex method take 2^m − 1 pivots to solve this problem. The simplex path for these variants starts from (0, …, 0, 1)^T; it visits all the vertices, ordered by the decreasing value of the last coordinate y_m, until reaching the optimal point, which is the origin. As in [9], we fix ρ(m) := m/(2(m+1)) and d := (1/ρ^{m-1}, 1/ρ^{m-2}, …, 1/ρ, 0). We denote the m-dimensional KM cube by KM(m, ρ(m)). See Figure 1 for KM(m, ρ(m)) with m = 2. Let the slack variables be s̄_k = 1 − ρy_{k-1} − y_k and s_k = y_k − ρy_{k-1} for k = 2, …, m, with the convention s_1 = y_1 and s̄_1 = 1 − y_1. There is a one-to-one correspondence between the vertices of KM(m, ρ(m)) and the m-tuples v_i ∈ {0, 1}^m, i = 1, …, 2^m, as follows. Each vertex of KM(m, ρ(m)) is determined by whether exactly one of s_i = 0 or s̄_i = 0 holds for each i = 1, …, m in (5). If s_i = 0, the i-th coordinate of the corresponding m-tuple in {0, 1}^m is 0; if s̄_i = 0, it is 1. For our purpose, we describe the relevant terms of KM(m, ρ(m)) inductively as follows. First we describe the order of the set V(m) of the vertices of KM(m, ρ(m)) which the simplex path visits. Note that V(m) is an encoding of the vertices of KM(m, ρ(m)); its elements are not the actual vertex points in R^m. For m = 2, let

    V(2) = {v_1, v_2, v_3, v_4} = {(0, 1), (1, 1), (1, 0), (0, 0)}.        (6)

Figure 1 shows the vertices of KM(m, ρ(m)). Then let

    V(m + 1) = {(v_{2^m}, 1), (v_{2^m - 1}, 1), …, (v_1, 1), (v_1, 0), (v_2, 0), …, (v_{2^m}, 0)}.        (7)

It can be shown [9] that there exists a redundant KM(m, ρ(m)) whose central path, denoted by CP(m), visits the vertices in the order given in V(m). Figures 2 and 3 show the central path for m = 2 and m = 3.
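The inductive ordering (6)-(7) is easy to generate programmatically. The following sketch is a reading of (7); the base case V(1) = {(1), (0)} is an assumption consistent with (6). It also makes the key property visible: consecutive encodings differ in exactly one coordinate, i.e., V(m) is a Gray-code ordering of {0, 1}^m.

```python
def km_vertex_order(m):
    # V(1): last coordinate in decreasing order, as on the simplex path.
    order = [(1,), (0,)]
    for _ in range(2, m + 1):
        # (7): reversed V(k) with a 1 appended, then V(k) with a 0 appended.
        order = [v + (1,) for v in reversed(order)] + \
                [v + (0,) for v in order]
    return order
```

For m = 2 this reproduces (6) exactly.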

Figure 1: V(2) = {v_1, v_2, v_3, v_4} = {(0, 1), (1, 1), (1, 0), (0, 0)}: the vertices of the KM(m, ρ(m)) cube for m = 2.

Figure 2: The central path visits the vertices V(2) = {v_1, v_2, v_3, v_4} of the KM(m, ρ(m)) cube for m = 2 in the given order as µ decreases.

Next we define inductively a tube along the edges of the simplex path in KM(m, ρ(m)) as follows. Let δ ≤ 1/(4(m+1)). Let T^U_δ(2) = {y ∈ R^2 : s̄_2 ≤ δ}, T^L_δ(2) = {y ∈ R^2 : s_2 ≤ δ}, and C_δ(m) = {y ∈ R^m : s_m ≥ δ, s̄_m ≥ δ} for m ≥ 2. Note that T^U_δ(2) and T^L_δ(2) correspond to tubes along the upper and lower facets of KM(2, ρ(2)), respectively, while C_δ(2) corresponds to the central part of KM(2, ρ(2)); see Figure 1. Denote by T_δ(m) the union T^L_δ(m) ∪ T^U_δ(m) ∪ C_δ(m). Then for m ≥ 2, define T^U_δ(m + 1) = {y ∈ R^{m+1} : s̄_{m+1} ≤ δ, (y_1, …, y_m) ∈ T_δ(m)} and T^L_δ(m + 1) = {y ∈ R^{m+1} : s_{m+1} ≤ δ, (y_1, …, y_m) ∈ T_δ(m)}. Notice that T^U_δ(3) is a tube that corresponds to the upper facet of KM(3, ρ(3)), where y_3 = 1 − ρy_2. Similarly, T^L_δ(3) is a tube that corresponds to the lower facet of KM(3, ρ(3)), where y_3 = ρy_2. Also, these upper and lower facets are KM(2, ρ(3)) cubes themselves; see Figure 3. Hence, by

Figure 3: Central path in the redundant KM(m, ρ(m)) cube for m = 3.

Figure 4: Illustration of the tube T_δ(m) for m = 3.

identifying the first m coordinates of (y_1, …, y_m, y_{m+1}) inside KM(m + 1, ρ(m + 1)) with (y_1, …, y_m) ∈ KM(m, ρ(m + 1)), and considering the assumption that δ is decreasing in m, we can write T^U_δ(m + 1) ⊆ T_δ(m) and T^L_δ(m + 1) ⊆ T_δ(m); see Figure 4. We also define a δ-neighborhood of a vertex of KM(m, ρ(m)) by requiring that exactly one of s_i ≤ δ or s̄_i ≤ δ holds for each i = 1, …, m in (5). Figure 1 displays the δ-neighborhoods of the vertices V(2) = {v_1, v_2, v_3, v_4} of the KM(m, ρ(m)) cube for m = 2. The following proposition is essentially Proposition 2.2 in [9].

Proposition 3.1. In (5), one can choose the parameters in such a way that the central path CP(m) in KM(m, ρ(m)) stays inside the tube T_δ(m). In particular, one can choose ρ = m/(2(m+1)) and δ ≤ 1/(4(m+1)) so that n = O(m 2^{2m}). As µ decreases, the central path visits the δ-neighborhoods of the vertices in the order given by (7). Moreover, the number of inequalities n is linear in 1/δ.

Proof. See Proposition 2.2 in [9].
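The encoding above can be turned into coordinates: t_k = 0 activates s_k = 0 (so y_k = ρ y_{k-1}), and t_k = 1 activates s̄_k = 0 (so y_k = 1 − ρ y_{k-1}), with y_0 := 0. The sketch below reproduces the choice ρ(m) = m/(2(m+1)) from Section 3, and checks, for m = 2 with the order (6), that y_m decreases along the path, as stated for the simplex path in Section 3:

```python
def rho_of(m):
    # the choice rho(m) = m / (2*(m+1)) from Section 3
    return m / (2.0 * (m + 1))

def km_vertex_coords(t, rho):
    # Map an encoding t in {0,1}^m to the vertex of KM(m, rho):
    # t_k = 0 -> s_k = 0, i.e. y_k = rho*y_{k-1};
    # t_k = 1 -> sbar_k = 0, i.e. y_k = 1 - rho*y_{k-1}; with y_0 = 0.
    y, prev = [], 0.0
    for tk in t:
        prev = (1.0 - rho * prev) if tk == 1 else rho * prev
        y.append(prev)
    return y
```

For m = 2 and ρ(2) = 1/3 the vertex order (6) yields y_2 values 1, 2/3, 1/3, 0, matching Figure 2.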

Now for KM(m, ρ(m)), we identify two regions R^U_δ and R^L_δ within the tube T_δ(m) in such a way that going from R^U_δ to R^L_δ (and vice versa) with line segments staying inside the tube T_δ(m) requires Ω(2^m) steps. Let

    R^U_δ := {y ∈ KM(m, ρ(m)) : s_1 ≤ δ, s_2 ≤ δ, …, s_{m-1} ≤ δ, s̄_m ≤ δ}        (8)

and

    R^L_δ := {y ∈ KM(m, ρ(m)) : s_1 ≤ δ, s_2 ≤ δ, …, s_{m-1} ≤ δ, s_m ≤ δ}.        (9)

We have the following.

Proposition 3.2. For KM(m, ρ(m)), let y^U ∈ R^U_δ and y^L ∈ R^L_δ. Then, staying inside the tube T_δ(m), one requires at least 2^{m-1} line segments to reach y^U from y^L, and vice versa.

Proof. With the parameters chosen as in Proposition 3.1, we first show that T^U_δ(m) and T^L_δ(m) do not intersect for any m. Suppose by contradiction that there is a y ∈ T^U_δ(m) ∩ T^L_δ(m). From the definitions of T^U_δ(m) and T^L_δ(m), we have s̄_m = 1 − ρy_{m-1} − y_m ≤ δ and s_m = y_m − ρy_{m-1} ≤ δ. Adding these two inequalities, we get 1 − 2ρy_{m-1} ≤ 2δ. By the choice of ρ and δ, it is easy to see that this leads to the contradiction y_{m-1} > 1. Hence T^U_δ(m) ∩ T^L_δ(m) = ∅.

The rest of the proof is by induction on m. For m = 2, let y^U ∈ R^U_δ and y^L ∈ R^L_δ with δ ≤ 1/(4(m+1)) = 1/12. Then for y^U we have s_1 = y_1 ≤ δ and s̄_2 ≤ δ, which implies y_2 ≥ 1 − δ − ρδ ≥ 1 − 2δ = 5/6. Analogously, for y^L we have s_1 = y_1 ≤ δ and s_2 ≤ δ, which implies y_2 ≤ δ + ρy_1 ≤ 2δ = 1/6. Clearly, staying inside the tube T_δ(2), it takes at least 2 line segments to reach a point with y_2 ≤ 1/6 from a point with y_2 ≥ 5/6; see Figure 1.

As inductive step, suppose that to reach any point in R^L_δ from a point in R^U_δ, with R^L_δ ⊆ KM(m−1, ρ(m−1)) and R^U_δ ⊆ KM(m−1, ρ(m−1)), one requires at least 2^{m-2} line segments staying inside T_δ(m−1). Let y^U ∈ R^U_δ and y^L ∈ R^L_δ inside T_δ(m) ⊆ KM(m, ρ(m)). We distinguish two points p_1 and p_2 such that p_1 ∈ {y ∈ KM(m, ρ(m)) : s_1 ≤ δ, s_2 ≤ δ, …, s̄_{m-1} ≤ δ, s̄_m ≤ δ} and p_2 ∈ {y ∈ KM(m, ρ(m)) : s_1 ≤ δ, s_2 ≤ δ, …, s̄_{m-1} ≤ δ, s_m ≤ δ}. Note that the point p_1 belongs to the δ-neighborhood of the vertex v_{2^{m-1}} = (0, 0, …, 0, 1, 1), and the point p_2 belongs to the δ-neighborhood of the vertex v_{2^{m-1}+1} = (0, 0, …, 0, 1, 0).

Then, using the inductive definitions of T^U_δ(m) and T^L_δ(m), it is easy to see that y^U, p_1 ∈ T^U_δ(m) and p_2, y^L ∈ T^L_δ(m). By the inductive hypothesis, one needs at least 2^{m-2} line segments to reach p_1 from y^U staying inside the tube T^U_δ(m) ⊆ T_δ(m−1). Similarly, one needs at least 2^{m-2} line segments to reach y^L from p_2 staying inside the tube T^L_δ(m) ⊆ T_δ(m−1). Moreover, since by the first part of the proof we have T^U_δ(m) ∩ T^L_δ(m) = ∅, it follows that to reach y^L from y^U one needs to traverse within T_δ(m−1) twice, each time requiring at least 2^{m-2} line segments. This proves that one requires at least 2^{m-1} line segments to reach y^U from y^L, hence the proof is complete.

4 Neighborhood of the KM cube central path

In Section 3, we showed that with n = O(m 2^{2m}) redundant constraints, the central path CP(m) stays inside a tube T_δ(m). Moreover, we proved that it takes at least 2^{m-1} line segments to reach a point in R^L_δ, close to the optimal solution of (5), from a point in R^U_δ, close to the analytic center of KM(m, ρ(m)). However, path-following IPM algorithms, including the MTY predictor-corrector algorithm, use the neighborhood N(β) as opposed to the tube neighborhood T_δ(m) we used in Section 3. In this section we analyze the N(β) neighborhood for the cube KM(m, ρ(m)) and prove that for β = Ω(1/(m+1)), we have N(β) ⊆ T_δ(m). In other words, with appropriately chosen neighborhood parameters of KM(m, ρ(m)), all the iterates of the MTY predictor-corrector algorithm stay inside the tube T_δ(m). Hence, we can draw the conclusion that for KM(m, ρ(m)), the MTY predictor-corrector algorithm requires Ω(2^m) iterations with the neighborhood N(β), where β = Ω(1/(m+1)).

In order to find the largest β for which N(β) ⊆ T_δ(m), we will use weighted paths. The following lemma is essentially Lemma 4.1 in [14].

Lemma 4.1. Fix µ and let w > 0 be such that ‖w − e‖ ≤ ɛ. Let (x(w), y(w), s(w)) denote the point on the w-weighted path, which is the solution of (2).
Let ∆s_i = s_i(w) − s_i, where the s_i are the coordinates of the central path point, for i = 1, …, n. Then we have |∆s_i|/s_i ≤ 2ɛ for i = 1, …, n.

When we apply Lemma 4.1 to KM(m, ρ(m)), we obtain the following result.

Lemma 4.2. There exists a KM(m, ρ(m)) with n = O(m 2^{2m}) such that all the w-weighted paths with ‖w − e‖ ≤ β := δ/4 stay inside the tube T_δ(m), with δ ≤ 1/(4(m+1)).

Proof. Let δ ≤ 1/(4(m+1)). Then, from Proposition 3.1, we know that there exists a KM(m, ρ(m)) with n = O(m 2^{2m}) so that the central path stays inside the tube T_{δ/2}(m). Choose β = δ/4 for

KM(m, ρ(m)) so that ‖w − e‖ ≤ β. Since for all the slacks we have s_i ≤ 1 or s̄_i ≤ 1, Lemma 4.1 implies that s_i(w) ≤ s_i + δ/2 and s̄_i(w) ≤ s̄_i + δ/2. Then, whenever s_i ≤ δ/2 or s̄_i ≤ δ/2, we have s_i(w) ≤ δ and s̄_i(w) ≤ δ. Since a tube T_δ(m) with a general δ inside KM(m, ρ(m)) is determined by these slacks, it follows that all w-weighted paths stay inside the tube T_δ(m) with δ ≤ 1/(4(m+1)). This concludes the proof.

The next lemma proves a result analogous to Lemma 4.2, tailored to R^U_δ and R^L_δ.

Lemma 4.3. Let δ ≤ 1/(4(m+1)) and fix β := δ/4. Suppose that y(µ_0) ∈ R^U_{δ/2} for some µ_0. Then N(β, µ_0) ⊆ R^U_δ. Similarly, if y(µ_1) ∈ R^L_{δ/2} for some µ_1, then N(β, µ_1) ⊆ R^L_δ.

Proof. Suppose that for some µ_0, y(µ_0) ∈ R^U_{δ/2}, i.e., s_1 ≤ δ/2, s_2 ≤ δ/2, …, s_{m-1} ≤ δ/2, s̄_m ≤ δ/2. Let y ∈ N(β, µ_0). Then, for w := xs/µ_0, we have ‖w − e‖ ≤ β. Since for all the slacks in KM(m, ρ(m)) we have s_i ≤ 1 or s̄_i ≤ 1, Lemma 4.1 implies that s_i(w) ≤ s_i + δ/2 and s̄_i(w) ≤ s̄_i + δ/2. Then, whenever s_i ≤ δ/2 or s̄_i ≤ δ/2, we have s_i(w) ≤ δ and s̄_i(w) ≤ δ. This proves y ∈ R^U_δ, which implies N(β, µ_0) ⊆ R^U_δ. The proof of the rest of the claim is similar.

In the rest of this section, we aim to find an interval [µ_1, µ_0] and an upper bound for log(µ_0/µ_1) such that the neighborhoods satisfy N(β, µ_0) ⊆ R^U_δ and N(β, µ_1) ⊆ R^L_δ for some δ and β. Let δ ≤ 1/(4(m+1)), and let (y_1(µ_0), …, y_m(µ_0)) be a central path CP(m) point such that s_1 = δ/2, s_2 ≤ δ/2, …, s_{m-1} ≤ δ/2, s̄_m ≤ δ/2. Note that any point satisfying these inequalities is inside the δ/2-neighborhood of the vertex point (0, 0, …, 0, 1); hence Proposition 3.1 guarantees the existence of such a central path point (y_1(µ_0), …, y_m(µ_0)). Then, by using Theorem 3.7 in [9], one can show that µ_0 ≥ ρ^{m-1} δ/2. Let us fix µ_0 = ρ^{m-1} δ/2 and let β := δ/4. Then Lemma 4.3 implies that the neighborhood N(β, µ_0) stays inside the region R^U_δ; hence any point inside the neighborhood N(β, µ_0) also stays inside the region R^U_δ. Next, we will find a µ_1 such that the neighborhood N(β, µ_1) is within the region R^L_δ. Let (y_1(µ_1), …, y_m(µ_1)) be the central path point such that y_m(µ_1) = ρ^{m-1} δ/2.
Note that since the objective function in (5) is −y_m, a central path point satisfying y_m(µ) = ρ^{m-1} δ/2 exists and is unique. Since from (5) we have ρy_i ≤ y_{i+1} for i = 1, …, m−1, we obtain y_1(µ_1) ≤ δ/2, y_2(µ_1) ≤ δ/2, …, y_m(µ_1) ≤ δ/2, which in turn implies that s_1(µ_1) ≤ δ/2, s_2(µ_1) ≤ δ/2, …, s_m(µ_1) ≤ δ/2. Then, using Lemma 4.3 once again, we conclude that the neighborhood N(β, µ_1) stays inside the region R^L_δ for β = δ/4.

For the central path (2), the duality gap is c^T x(µ) − b^T y(µ) = nµ. It is well known, see e.g. [12], that b^T y(µ) is monotonically increasing and c^T x(µ) is monotonically decreasing along the central path. In our case, b^T y(µ) = −y_m(µ) is increasing to 0, and c^T x(µ) is monotonically decreasing to 0, i.e., c^T x(µ) > 0 for all µ > 0. Then nµ = c^T x(µ) − b^T y(µ) >

−b^T y(µ) = y_m(µ) implies that µ > y_m(µ)/n for any point on the central path. Hence, for the central path point for which y_m(µ_1) = ρ^{m-1} δ/2, it follows that µ_1 > ρ^{m-1} δ/(2n). Then, using the fact that n = O(m 2^{2m}), we have log(µ_0/µ_1) = O(m). The following corollary summarizes our findings.

Corollary 4.4. Let the neighborhood parameters be given as β_0 < β_1 = 1/(16(m+1)) for the MTY predictor-corrector algorithm. Then there exists a KM(m, ρ(m)) with n = O(m 2^{2m}) for which the MTY predictor-corrector algorithm requires at least Ω(2^m) predictor steps to reduce the barrier parameter from µ_0 to µ_1, where log(µ_0/µ_1) = O(m).

Proof. Let δ := 1/(4(m+1)) and β_1 = δ/4 = 1/(16(m+1)). We know from Lemma 4.2 that there exists a KM(m, ρ(m)) with n = O(m 2^{2m}) such that N(β_1) ⊆ T_δ(m). Lemma 4.3 shows that there is an interval [µ_1, µ_0] such that N(β_1, µ_0) ⊆ R^U_δ and N(β_1, µ_1) ⊆ R^L_δ. Hence, starting from an iterate (x, y, s) ∈ N(β_1, µ_0) ⊆ R^U_δ, in order to reach an iterate in N(β_1, µ_1) ⊆ R^L_δ, Proposition 3.2 and Lemma 4.2 imply that one needs Ω(2^m) steps. Since the number of corrector steps per predictor step is constant, it follows that the number of predictor steps is Ω(2^m). Moreover, the discussion after Lemma 4.3 proves that we can choose the interval [µ_1, µ_0] so that log(µ_0/µ_1) = O(m). This completes the proof.

5 A worst-case iteration-complexity lower bound for the Sonnevend curvature

In Section 4, we proved that the MTY predictor-corrector algorithm requires Ω(2^m) iterations using the larger neighborhood N(β_1) with β_1 = Ω(1/(m+1)). Our goal in this section is to derive a lower bound for the Sonnevend curvature using the tools from the previous section. To this end, we need to examine the constants in Theorem 2.1 more closely.

Lemma 5.1. Let the large neighborhood parameter β_1 satisfy β_1 ≤ 1/400, and let N be the number of iterations of the MTY predictor-corrector algorithm to reduce the barrier parameter from µ_0 to µ_1. Then

    N ≤ (4√2/√β_1) ∫_{µ_1}^{µ_0} κ(µ)/µ dµ + 2 log(µ_0/µ_1) / log(1 + √β_1/4).        (10)

Proof. See Theorem 2.4 and its proof in [14].

The next theorem shows that on the interval [µ_1, µ_0], the total Sonnevend curvature is of comparable order to the number of sharp turns of the central path.
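To see how the iteration bound of Lemma 5.1 converts the exponential step count of Corollary 4.4 into a curvature lower bound, one can rearrange it. A derivation sketch, assuming the bound in the form shown in the first line together with the estimates N ≥ 2^{m-1}, β_1 = 1/(16(m+1)), n = Θ(m 2^{2m}), and m = Θ(log n):

```latex
\begin{align*}
N &\le \frac{4\sqrt{2}}{\sqrt{\beta_1}}\int_{\mu_1}^{\mu_0}\frac{\kappa(\mu)}{\mu}\,d\mu
     + \frac{2\log(\mu_0/\mu_1)}{\log\!\left(1+\sqrt{\beta_1}/4\right)}
\;\Longrightarrow\;
\int_{\mu_1}^{\mu_0}\frac{\kappa(\mu)}{\mu}\,d\mu
  \ge \frac{\sqrt{\beta_1}}{4\sqrt{2}}
      \left(N - \frac{2\log(\mu_0/\mu_1)}{\log\!\left(1+\sqrt{\beta_1}/4\right)}\right).
\end{align*}
With $N \ge 2^{m-1}$, $\sqrt{\beta_1} = \Theta(1/\sqrt{m})$, and a correction term of
order $O(m^{3/2})$, this gives
\begin{align*}
\int_{\mu_1}^{\mu_0}\frac{\kappa(\mu)}{\mu}\,d\mu
  = \Omega\!\left(\frac{2^{m}}{\sqrt{m}}\right)
  = \Omega\!\left(\frac{\sqrt{n/m}}{\sqrt{m}}\right)
  = \Omega\!\left(\frac{\sqrt{n}}{\log n}\right).
\end{align*}
```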

Theorem 5.2. There is an integer m_0 > 0 such that for any m ≥ m_0, there exist a KM(m, ρ(m)) and an interval [µ_1, µ_0] such that the Sonnevend curvature satisfies

    ∫_{µ_1}^{µ_0} κ(µ)/µ dµ ≥ (1/8)√(n/log n) − (log(µ_0/µ_1) + log(n+1))/log 2.

Proof. Let β_1 = 1/(16(m+1)) and choose the parameters of KM(m, ρ(m)) as ρ = m/(2(m+1)) and δ = 1/(8(m+1)), so that n = O(m 2^{2m}). Write n = τ m 2^{2m} for some constant τ > 0; then log n = log τ + log m + 2m. This shows that for large enough m, log n = Θ(m). Since we can extend the interval [µ_1, µ_0] so that it still includes all the sharp turns, we will assume that log(µ_0/µ_1) = Θ(m). Then Corollary 4.4 applies, and we have N ≥ 2^{m-1}. Now, using the bound log(1 + ω) ≥ (log 2)ω for 0 ≤ ω ≤ 1, we get from (10)

    2 log(µ_0/µ_1) / log(1 + √β_1/4) ≤ 32√(m+1) log(µ_0/µ_1) / log 2 = O(m^{3/2}).

Using the fact that m = Θ(log n), a straightforward calculation shows that

    ∫_{µ_1}^{µ_0} κ(µ)/µ dµ ≥ (1/8)√(n/log n) − (log(µ_0/µ_1) + log(n+1))/log 2.

The proof is complete.

Corollary 5.3. For any ɛ > 0, there is an integer m_0 > 0 such that for any m ≥ m_0, there exist a KM(m, ρ(m)) and an interval [µ_1, µ_0] such that ∫_{µ_1}^{µ_0} κ(µ)/µ dµ ≥ n^{1/2-ɛ} log(µ_0/µ_1), where log(µ_0/µ_1) = O(m).

Proof. The claim follows from Theorem 5.2 for large m.

Remark 5.4. Corollary 5.3 yields a negative answer to the question raised by [17], i.e., whether there exists an α < 1/2 with log(µ_0/µ_1) = Ω(1) such that ∫_{µ_1}^{µ_0} κ(µ)/µ dµ = O(n^α log(µ_0/µ_1)) for the class of LO problems.

6 An iteration-complexity lower bound for the MTY predictor-corrector algorithm with constant neighborhood opening

In practice, the MTY predictor-corrector algorithm operates in a larger neighborhood where β_1 is a constant. In order to conclude an iteration-complexity lower bound for the MTY predictor-corrector algorithm with constant neighborhood opening β_1 by using Theorem 2.1, we need

to show that there is a constant ν > 0 with κ(µ) ≥ ν for µ ∈ [µ_1, µ_0] for KM(m, ρ(m)). While this appears to hold numerically, proving it is much more difficult. To get around this difficulty, we exploit a trick introduced by [13]. The idea is to use one-dimensional LO problems, where it is easier to calculate the central path and its corresponding κ(µ), and to use LO problems with scaled objectives and block-diagonal constraints. For the details, we refer the reader to the Appendix, Section 8.

Recall that by Corollary 5.3, we know there exist a KM(m, ρ(m)) and an interval [µ_1, µ_0] such that ∫_{µ_1}^{µ_0} κ(µ)/µ dµ ≥ n^{1/2-ɛ} log(µ_0/µ_1). Here n = O(m 2^{2m}) and log(µ_0/µ_1) = O(log n). Now, by using Lemma 8.4 and Proposition 8.2, we can embed KM(m, ρ(m)) in a block-diagonal LO problem at the expense of increasing the size of the problem to at most ñ := n + O(m + log m). Denote by KM(m) this hybrid construction with KM(m, ρ(m)) embedded in it. Since ñ = O(n), we have the following.

Theorem 6.1. For any ɛ > 0, there exists a positive integer m_0 such that for any m ≥ m_0, there exist an LO problem KM(m) and an interval [µ_1, µ_0] with the following properties: µ_0/µ_1 = O(m 2^{2m}), and, letting β_0 < β_1 ≤ 1/400 be the constant neighborhood N(β) parameters, the MTY predictor-corrector algorithm in this neighborhood requires Ω(n^{1/2-ɛ} log(µ_0/µ_1)) predictor steps.

Proof. Consider the KM(m, ρ(m)) cube from Corollary 5.3. Then, by using Lemma 8.4 and Proposition 8.2, we can embed KM(m, ρ(m)) in a block-diagonal LO problem of size ñ := n + O(m + log m) with m̃ = O(m). Note that since the interval [µ_1, µ_0] comes from KM(m, ρ(m)), the first claim in the theorem follows from Corollary 5.3. Also, since for KM(m) there exists a constant ν > 0 with the corresponding κ(µ) ≥ ν for all µ ∈ [µ_1, µ_0], Theorem 2.1 implies the second claim. This completes the proof.

7 Conclusion and future work

It is an open question whether there is an interior-point algorithm for LO problems with an O(n^α log(µ_0/µ_1)) iteration-complexity upper bound for α < 1/2 to reduce the barrier parameter from µ_0 to µ_1.
In this regard, a related open question, raised by Stoer et al. [14], was whether there is an α < 1/2 with ∫_{µ_1}^{µ_0} κ(µ)/µ dµ = O(n^α log(µ_0/µ_1)) for all LO problems. This paper provides a negative answer to the latter question. We also show that for the MTY

predictor-corrector algorithm, the classical iteration-complexity upper bound is tight. Future work would be to investigate whether an analogous result can be derived for long-step IPMs. In this paper we establish that for the central path of the carefully constructed redundant Klee-Minty cubes, both the geometric curvature and the Sonnevend curvature of the central path are essentially of order Ω(√n). In a recent work, Mut and Terlaky [7] show the existence of another class of LO problems where a large geometric curvature of the central path implies a large Sonnevend curvature. These two important cases suggest that it might be possible to prove this implication in a more general setting.

8 Appendix

Lemma 8.1. For large enough r, there is a 1-dimensional LO problem with r + 1 constraints for which τ_1 √r ≤ κ(µ) ≤ τ_2 √r for any µ ∈ [α_1, α_2], where α_1 = (r−1)/r^4 and α_2 = (r−1)/r, for some constants τ_1, τ_2 > 0.

Proof. Consider the problem min{−y : y ≤ 1, and −y ≤ 0 counted r times}. The construction is given in [13], p. 551. Consider the interval [α_1, α_2], where α_1 = (r−1)/r^4 and α_2 = (r−1)/r. Let s_0(µ) = 1 − y(µ). Then it is shown in [13], p. 551, that ṡ_0(µ)/s_0(µ) ≥ √r/(3µ) on [α_1, α_2]. This implies µṡ_0(µ)/s_0(µ) = Ω(√r) on [α_1, α_2]. Then, from Proposition 2.2, part 1, we have κ(µ) = Ω(√r) for all µ ∈ [α_1, α_2]. The proof is complete.

Proposition 8.2. Consider the LO problems

    min (c^1)^T x^1        min (c^2)^T x^2
    s.t. A^1 x^1 = b^1     s.t. A^2 x^2 = b^2        (12)
         x^1 ≥ 0,               x^2 ≥ 0,

with the corresponding κ_1(µ) and κ_2(µ) on the interval [µ_1, µ_0]. Then for the problem

    min c^T x
    s.t. Ax = b        (13)
         x ≥ 0,

with the corresponding κ(µ), where c = [c^1; c^2], b = [b^1; b^2] and A = [A^1 0; 0 A^2], we have κ(µ) ≥ κ_i(µ) for i = 1, 2, on [µ_1, µ_0].

Proof. Let (x^1(µ), y^1(µ), s^1(µ)) and (x^2(µ), y^2(µ), s^2(µ)) be the central paths in (12). Then the term κ(µ) for the combined problem (13) becomes κ(µ) = ‖[µẋ^1ṡ^1, µẋ^2ṡ^2]‖^{1/2} ≥ κ_i(µ) for i = 1, 2.

Proposition 8.3. Let η > 0, and consider the central path (2) and its κ(µ). Let (Â, b̂, ĉ) be another problem instance, where (Â, b̂, ĉ) = (A, b/η, c), with its corresponding κ̂(µ). Then we have

    κ̂(µ) = κ(ηµ),  µ ∈ [µ_1/η, µ_0/η].        (14)

Proof. Using (2), it is straightforward to verify that the central path (x̂(µ), ŷ(µ), ŝ(µ)) of the new problem satisfies x̂(µ) = x(ηµ)/η, ŷ(µ) = y(ηµ), and ŝ(µ) = s(ηµ). Using the definition of κ(µ), we get κ̂(µ) = κ(ηµ). Hence the claim follows.

Lemma 8.4. Given an interval [µ_1, µ_0] and a constant ν > 0, there exists an LO problem of size n = Θ(log(µ_0/µ_1)) such that κ(µ) ≥ ν for all µ ∈ [µ_1, µ_0]. The hidden constant in n = Θ(log(µ_0/µ_1)) depends on ν.

Proof. Let a constant ν > 0 and an interval [µ_1, µ_0] be given. For the given ν > 0, by Lemma 8.1, there exists an LO problem with κ(µ) ≥ ν on an interval µ ∈ [α_1, α_2]. By applying Proposition 8.3 with η_i := (α_2/µ_0)(α_2/α_1)^i for i = 0, 1, …, k−1, we find k scaled LO problems with their corresponding κ_i(µ), i = 0, 1, …, k−1, such that κ_i(µ) = κ(η_i µ) ≥ ν on [µ_0 (α_1/α_2)^{i+1}, µ_0 (α_1/α_2)^i]. Then, by using Proposition 8.2, we can obtain a block-diagonal LO problem with κ(µ) ≥ κ_i(µ) ≥ ν for i = 0, 1, …, k−1 for any µ ∈ [µ_0 (α_1/α_2)^k, µ_0]. In order to have κ(µ) ≥ ν for any µ ∈ [µ_1, µ_0], it is then enough to have µ_0 (α_1/α_2)^k ≤ µ_1. This is true if and only if k ≥ log(µ_0/µ_1)/log(α_2/α_1). Since, by Lemma 8.1, the ratio α_2/α_1 is a constant depending only on the given ν, the number of blocks k needed is Θ(log(µ_0/µ_1)/log(α_2/α_1)). Also, since the size of the LO problem with its κ(µ) ≥ ν is a constant determined only by ν, the size of the combined problem is n = Θ(log(µ_0/µ_1)) to achieve κ(µ) ≥ ν for all µ ∈ [µ_1, µ_0]. This completes the proof.

References

[1] Antoine Deza, Eissa Nematollahi, Reza Peyghami, and Tamás Terlaky. The central path visits all the vertices of the Klee-Minty cube.
Optimization Methods and Software, 21(5):851–865, 2006.

[2] Antoine Deza, Eissa Nematollahi, and Tamás Terlaky. How good are interior point methods? Klee-Minty cubes tighten iteration-complexity bounds. Mathematical Programming, 113(1):1–14, 2008.

[3] Petra Huhn and Karl Heinz Borgwardt. Interior-point methods: worst case and average case analysis of a phase-I algorithm and a termination procedure. Journal of Complexity, 18(3):833–910, 2002.

[4] B. Jansen, C. Roos, and T. Terlaky. A short survey on ten years interior point methods. Technical Report 95-45, Delft University of Technology, Delft, The Netherlands, 1995.

[5] N. Karmarkar. A new polynomial-time algorithm for linear programming. Combinatorica, 4(4):373–395, 1984.

[6] S. Mizuno, M.J. Todd, and Y. Ye. Anticipated behavior of path-following algorithms for linear programming. Technical Report 878, School of Operations Research and Industrial Engineering, Cornell University, Ithaca, New York, 1989.

[7] Murat Mut and Tamás Terlaky. An analogue of the Klee-Walkup result for Sonnevend's curvature of the central path. Technical report, Lehigh University, Department of Industrial and Systems Engineering, 2013.

[8] Eissa Nematollahi and Tamás Terlaky. A redundant Klee-Minty construction with all the redundant constraints touching the feasible region. Operations Research Letters, 36(4):414–418, 2008.

[9] Eissa Nematollahi and Tamás Terlaky. A simpler and tighter redundant Klee-Minty construction. Optimization Letters, 2(3):403–414, 2008.

[10] Yurii Nesterov and Arkadii Nemirovskii. Interior-Point Polynomial Algorithms in Convex Programming, volume 13. SIAM, 1994.

[11] Florian A. Potra. A quadratically convergent predictor-corrector method for solving linear programs from infeasible starting points. Mathematical Programming, 67(1–3), 1994.

[12] Cornelis Roos, Tamás Terlaky, and Jean-Philippe Vial. Interior Point Methods for Linear Optimization. Springer, New York, 2005.

[13] György Sonnevend, Joseph Stoer, and Gongyun Zhao. On the complexity of following the central path of linear programs by linear extrapolation II. Mathematical Programming, 52:527–553, 1991.

[14] Joseph Stoer and Gongyun Zhao. Estimating the complexity of a class of path-following methods for solving linear programs by curvature integrals. Applied Mathematics and Optimization, 27:85–103, 1993.

[15] Michael J. Todd. A lower bound on the number of iterations of primal-dual interior-point methods for linear programming.
Technical report, Cornell University, Operations Research and Industrial Engineering, 1993.

[16] Michael J. Todd and Yinyu Ye. A lower bound on the number of iterations of long-step primal-dual linear programming algorithms. Annals of Operations Research, 62:233–252, 1996.

[17] Gongyun Zhao. On the relationship between the curvature integral and the complexity of path-following methods in linear programming. SIAM Journal on Optimization, 6(1):57–73, 1996.


More information

3. Linear Programming and Polyhedral Combinatorics

3. Linear Programming and Polyhedral Combinatorics Massachusetts Institute of Technology 18.433: Combinatorial Optimization Michel X. Goemans February 28th, 2013 3. Linear Programming and Polyhedral Combinatorics Summary of what was seen in the introductory

More information

Convex optimization. Javier Peña Carnegie Mellon University. Universidad de los Andes Bogotá, Colombia September 2014

Convex optimization. Javier Peña Carnegie Mellon University. Universidad de los Andes Bogotá, Colombia September 2014 Convex optimization Javier Peña Carnegie Mellon University Universidad de los Andes Bogotá, Colombia September 2014 1 / 41 Convex optimization Problem of the form where Q R n convex set: min x f(x) x Q,

More information

Primal-Dual Interior-Point Methods for Linear Programming based on Newton s Method

Primal-Dual Interior-Point Methods for Linear Programming based on Newton s Method Primal-Dual Interior-Point Methods for Linear Programming based on Newton s Method Robert M. Freund March, 2004 2004 Massachusetts Institute of Technology. The Problem The logarithmic barrier approach

More information

Primal-dual relationship between Levenberg-Marquardt and central trajectories for linearly constrained convex optimization

Primal-dual relationship between Levenberg-Marquardt and central trajectories for linearly constrained convex optimization Primal-dual relationship between Levenberg-Marquardt and central trajectories for linearly constrained convex optimization Roger Behling a, Clovis Gonzaga b and Gabriel Haeser c March 21, 2013 a Department

More information

SCALE INVARIANT FOURIER RESTRICTION TO A HYPERBOLIC SURFACE

SCALE INVARIANT FOURIER RESTRICTION TO A HYPERBOLIC SURFACE SCALE INVARIANT FOURIER RESTRICTION TO A HYPERBOLIC SURFACE BETSY STOVALL Abstract. This result sharpens the bilinear to linear deduction of Lee and Vargas for extension estimates on the hyperbolic paraboloid

More information

A QUADRATIC CONE RELAXATION-BASED ALGORITHM FOR LINEAR PROGRAMMING

A QUADRATIC CONE RELAXATION-BASED ALGORITHM FOR LINEAR PROGRAMMING A QUADRATIC CONE RELAXATION-BASED ALGORITHM FOR LINEAR PROGRAMMING A Dissertation Presented to the Faculty of the Graduate School of Cornell University in Partial Fulfillment of the Requirements for the

More information

Lecture 15: October 15

Lecture 15: October 15 10-725: Optimization Fall 2012 Lecturer: Barnabas Poczos Lecture 15: October 15 Scribes: Christian Kroer, Fanyi Xiao Note: LaTeX template courtesy of UC Berkeley EECS dept. Disclaimer: These notes have

More information