A New Class of Polynomial Primal-Dual Methods for Linear and Semidefinite Optimization


Jiming Peng, Cornelis Roos, Tamás Terlaky

August 2000

Faculty of Information Technology and Systems, Delft University of Technology, P.O. Box 5031, 2600 GA Delft, The Netherlands
Department of Computing and Software, McMaster University, Hamilton, Ontario, Canada, L8S 4L7

Abstract

We propose a new class of primal-dual methods for linear optimization (LO). By using some new analysis tools, we prove that the large-update method for LO based on the new search direction has a polynomial complexity of $O\left(n^{\frac{4}{4+\rho}}\log\frac{n}{\varepsilon}\right)$ iterations, where $\rho \in [0,2]$ is a parameter used in the system defining the search direction. If $\rho = 0$, our results reproduce the well-known complexity of the standard primal-dual Newton method for LO. At each iteration, our algorithm needs only to solve one linear equation system. An extension of the algorithms to semidefinite optimization is also presented.

Keywords: Linear Optimization, Semidefinite Optimization, Interior Point Method, Primal-Dual Newton Method, Polynomial Complexity.

AMS Subject Classification: 90C05

1 Introduction

Interior point methods (IPMs) are among the most effective methods for solving wide classes of optimization problems. Since the seminal work of Karmarkar [7], many researchers have proposed and analyzed various IPMs for Linear and Semidefinite Optimization (LO and SDO), and a large number of results have been reported. For a survey we refer to recent books on the subject [17], [22], [24]. An interesting fact is that almost all known polynomial-time variants of IPMs use the so-called central path [18] as a guideline to the optimal set, and some variant of Newton's method to follow the central path approximately. Therefore, the theoretical analysis of IPMs consists to a large extent of analyzing Newton's method. At present there is still a

gap between the practical behavior of the algorithms and the theoretical performance results, in favor of the practical behavior. This is especially true for so-called primal-dual large-update methods, which are the most efficient methods in practice (see, e.g., Andersen et al. [1]). The aim of this paper is to present a new class of primal-dual Newton-type algorithms for LO and SDO. To be more concrete, we need to go into more detail at this stage.

We consider first the following linear optimization problem:

(P)  $\min\{c^Tx : Ax = b,\ x \ge 0\}$,

where $A \in \mathbb{R}^{m\times n}$ satisfies $\mathrm{rank}(A) = m$, $b \in \mathbb{R}^m$, $c \in \mathbb{R}^n$, and its dual problem

(D)  $\max\{b^Ty : A^Ty + s = c,\ s \ge 0\}$.

We assume that both (P) and (D) satisfy the interior point condition (IPC), i.e., there exists $(x^0, s^0, y^0)$ such that $Ax^0 = b$, $x^0 > 0$, $A^Ty^0 + s^0 = c$, $s^0 > 0$. It is well known that the IPC can be assumed without loss of generality. For this and some other properties mentioned below, see, e.g., [17]. Finding an optimal solution of (P) and (D) is equivalent to solving the following system:

$Ax = b, \quad x \ge 0,$
$A^Ty + s = c, \quad s \ge 0,$ (1)
$xs = 0.$

Here $xs$ denotes the coordinatewise product of the vectors $x$ and $s$. The basic idea of primal-dual IPMs is to replace the third equation in (1), the so-called complementarity condition for (P) and (D), by the parameterized equation $xs = \mu e$, where $e$ denotes the all-one vector and $\mu > 0$. Thus we consider the system

$Ax = b, \quad x \ge 0,$
$A^Ty + s = c, \quad s \ge 0,$ (2)
$xs = \mu e.$

If the IPC holds, then for each $\mu > 0$ the parameterized system (2) has a unique solution. This solution is denoted as $(x(\mu), y(\mu), s(\mu))$; we call $x(\mu)$ the $\mu$-center of (P) and $(y(\mu), s(\mu))$ the $\mu$-center of (D). The set of $\mu$-centers, with $\mu$ running through all positive real numbers, gives a homotopy path, which is called the central path of (P) and (D). The relevance of the central path for LO was recognized first by Sonnevend [18] and Megiddo [10]. If $\mu \to 0$ then the limit of the central path exists, and since the limit points satisfy the complementarity condition $xs = 0$, the limit yields optimal solutions for (P) and (D).

IPMs follow the central path approximately. Let us briefly indicate how this goes. Without loss of generality we assume that $(x(\mu), y(\mu), s(\mu))$ is known for some positive $\mu$. We first update $\mu$ to $\mu := (1-\theta)\mu$ for some $\theta \in (0,1)$. Then we solve the following well-defined Newton system:

$A\Delta x = 0,$
$A^T\Delta y + \Delta s = 0,$ (3)
$s\Delta x + x\Delta s = \mu e - xs,$

and get a unique search direction $(\Delta x, \Delta y, \Delta s)$. By taking a step along the search direction, where the step size is defined by some line search rule, one constructs a new triple $(x, y, s)$ that is closer to $(x(\mu), y(\mu), s(\mu))$. This process is repeated until the point $(x, y, s)$ is in a certain neighborhood of the central path. Then $\mu$ is again reduced by the factor $1-\theta$ and we apply Newton's method targeting the new $\mu$-centers, and so on. We continue this process until $\mu$ is small enough. Most practical algorithms then construct a basic solution and produce an optimal basic solution by crossing over to the Simplex method. An alternative way is to apply a rounding procedure as described by Ye [23] (see also Mehrotra and Ye [12] and [17]).

The choice of the parameter $\theta$ plays an important role both in the theory and practice of IPMs. Usually, if $\theta$ is a constant independent of $n$, for instance $\theta = \frac{1}{2}$, then we call the algorithm a large-update (or long-step) method. If $\theta$ depends on $n$, such as $\theta = \frac{1}{\sqrt{n}}$, then the algorithm is called a small-update (or short-step) method. It is now known that small-update methods have an $O\left(\sqrt{n}\log\frac{n}{\varepsilon}\right)$ iteration bound, while the large-update ones have a worst-case iteration bound of $O\left(n\log\frac{n}{\varepsilon}\right)$ [17, 22, 24]. The reason for the worse bound for large-update methods is that, in both the analysis and implementation of large-update IPMs, one usually uses some proximity or potential function to control the iterates, and up to now one can only prove that the proximity or potential function decreases by at least a constant after one step. For instance, considering the primal-dual Newton method, after one step the proximity $\delta$ used in this paper satisfies $\delta_+^2 \le \delta^2 - \beta$ for some constant $\beta$ [16].¹ On the other hand, contrary to the theoretical results, large-update IPMs work much more efficiently in practice than small-update methods [1]. Several authors have suggested using so-called higher-order methods to improve the complexity of large-update IPMs [4, 6, 14, 25, 26]. Then, at each iteration, one solves some additional equations based on higher-order approximations to the system (2).

The motivation of this work is to improve the complexity of large-update IPMs. Different from the higher-order approach, we reconsider the Newton system for (2), keeping in mind that our target is to get closer to the $\mu$-center. Now let us focus on the Newton step. For notational convenience, we introduce the following notations:

$v := \sqrt{\frac{xs}{\mu}}, \qquad v^{-1} := \sqrt{\frac{\mu}{xs}};$ (4)

$d_x := \frac{v\Delta x}{x}, \qquad d_s := \frac{v\Delta s}{s};$ (5)

$\bar d_x := \frac{\Delta x}{x}, \qquad \bar d_s := \frac{\Delta s}{s}.$ (6)

Using the above notations, one can state the centering condition in (2) as $v = v^{-1} = e$. Denoting $d_v = d_x + d_s$, the last equation in (3) is equivalent to $d_v = v^{-1} - v$. Observe that we can also decompose the direction into two parts: the predictor direction, which is obtained by solving the system with $d_v^{\mathrm{Pred}} = -v$, and the corrector direction, obtained with $d_v^{\mathrm{Corr}} = v^{-1}$.

¹ Here $\delta$ and $\delta_+$ denote the proximity before and after one step, respectively.
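To make the preceding description concrete, the following minimal NumPy sketch solves the Newton system (3) by the standard normal-equations elimination and forms the predictor and corrector parts of the scaled direction. The function names and the elimination route are illustrative choices, not prescribed above; any solver for the linear system (3) would do.

```python
import numpy as np

def newton_direction(A, x, s, mu):
    """Solve the Newton system (3):
        A dx = 0,  A^T dy + ds = 0,  s*dx + x*ds = mu*e - x*s.
    Eliminating dx and ds yields the normal equations
        (A D^2 A^T) dy = -A r,  with D^2 = diag(x/s), r = mu/s - x."""
    d2 = x / s
    r = mu / s - x
    dy = np.linalg.solve((A * d2) @ A.T, -A @ r)
    ds = -A.T @ dy
    dx = r - d2 * ds          # from s*dx + x*ds = mu*e - x*s
    return dx, dy, ds

def predictor_corrector_split(x, s, mu):
    """Scaled direction d_v = v^{-1} - v decomposed into the predictor
    part -v and the corrector part v^{-1}."""
    v = np.sqrt(x * s / mu)
    return -v, 1.0 / v
```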

The corrector direction serves the purpose of centering; it points towards the analytic center of the feasible set, while the predictor direction aims to decrease the duality gap. It is straightforward to verify that $(d_v)_i \le 0$ for all components with $v_i \ge 1$ and $(d_v)_i > 0$ for the components with $v_i < 1$. This means that the Newton step increases $v_i$ if $v_i < 1$ and decreases $v_i$ whenever $v_i > 1$, to get closer to the $\mu$-center. It is reasonable to expect that if we can increase the small components and decrease the large components of $v$ more, we might arrive at our target (the $\mu$-center) faster. Motivated by this observation, we reconsider the right-hand side of the equation defining the corrector direction according to the current point $v$. The new corrector direction is defined by $d_v^{\mathrm{Corr}} = v^{-\rho-1}$, $\rho \ge 0$, thus yielding a new system as follows:

$\bar A d_x = 0,$
$\bar A^T\Delta y + d_s = 0,$ (7)
$d_x + d_s = v^{-\rho-1} - v,$

where $\bar A = AV^{-1}X$, $V = \mathrm{diag}(v)$, $X = \mathrm{diag}(x)$, and $\rho \ge 0$ is a parameter. In this work we consider only the case where the parameter $\rho$ is restricted to the interval $[0, 2]$. Note that if $\rho = 0$, the new system is identical to the standard Newton system.

It may be clear from the above description that in the analysis of IPMs we need to keep control of the distance from the current iterates to the current $\mu$-centers. In other words, we need to quantify the distance from the vector $xs$ to the vector $\mu e$ in some proximity measure. In fact, the choice of the proximity measure is crucial for both the quality and the elegance of the analysis. The proximity measure we use here is defined as follows:

$\delta(xs, \mu) := \left\|v^{-1} - v\right\|.$ (8)

Note that the measure vanishes if $xs = \mu e$ and is positive otherwise. An interesting observation is that in the special case $\rho = 2$, the right-hand side of the third equation in the system (7) represents the negative gradient direction of the proximity measure $\delta^2$ in the $v$-space. When solving this system with $\rho = 2$, we get the steepest descent direction for the proximity measure, along which the proximity can be driven to zero. As we will see later, after one step using the new search direction with $\rho = 2$, the proximity decreases by at least $\beta\delta^{\frac{2}{3}}$, where $\beta$ is a constant. Consequently we get an improvement of the complexity of the algorithm. We also mention that in [16], the authors have shown that $\delta_+^2 \le \delta^2 - \beta\delta$ after one feasible standard Newton step if $v_{\min} \ge 1$, which is exactly the same as we will state in Lemma 3.8 of this work. However, we failed to prove a similar inequality for the case $v_{\min} < 1$ and hence could not improve the complexity of the large-update IPM in [16].

The measure $\delta$, up to a constant factor, was introduced by Jansen et al. [5], and thoroughly used in [17], in Zhao [26] and, more recently, in [16]. Its SDO analogue was also used in the analysis of interior point methods for semidefinite optimization [3]. We note that variants of the proximity $\delta(xs,\mu)$ had been used by Kojima et al. in [9] and Mizuno et al. in [13].

The paper is organized as follows. First, in Section 2, we present some technical results which will be used in our analysis later. In Section 3 we analyze the method with damped steps and show that it has a polynomial iteration bound of $O\left(n^{\frac{4}{4+\rho}}\log\frac{n}{\varepsilon}\right)$. In Section 4 we discuss an extension of the new primal-dual algorithm to SDO and study its complexity. Finally we close the paper with some concluding remarks in Section 5.
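As an illustration of the new right-hand side, a small sketch (assuming the definitions (4)-(8) above; the function name is ours) computes $v$, the right-hand side of (7) and the proximity (8); $\rho = 0$ recovers the standard Newton right-hand side $v^{-1} - v$.

```python
import numpy as np

def scaled_quantities(x, s, mu, rho):
    """Scaled point v, the modified right-hand side v^{-rho-1} - v of
    system (7), and the proximity measure delta of (8)."""
    v = np.sqrt(x * s / mu)
    rhs = v**(-rho - 1) - v
    delta = np.linalg.norm(1.0 / v - v)
    return v, rhs, delta
```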

A few words about the notation. Throughout the paper, $\|\cdot\|$ denotes the 2-norm for vectors and the Frobenius norm for matrices, while $\|\cdot\|_\infty$ denotes the infinity norm. For any $x = (x_1, x_2, \ldots, x_n)^T \in \mathbb{R}^n$, $x_{\min} = \min(x_1, x_2, \ldots, x_n)$ (or $x_{\max}$) is the component of $x$ which takes the minimal (or maximal) value. For any symmetric matrix $G$, we define $\lambda_{\min}(G)$ (or $\lambda_{\max}(G)$) as the minimal (or maximal) eigenvalue of $G$. Furthermore, we assume that the eigenvalues of $G$ are listed according to the order of their absolute values, so that $|\lambda_1(G)| \ge |\lambda_2(G)| \ge \cdots \ge |\lambda_n(G)|$. If $G$ is positive semidefinite, then it holds that $0 \le \lambda_{\min}(G) = \lambda_n(G)$ and $\lambda_{\max}(G) = \lambda_1(G)$. For any symmetric matrix $G$, we also denote $|G| = (GG)^{1/2}$. For two symmetric matrices $G, H$, the relation $G \preceq H$ means that $H - G$ is positive semidefinite, or equivalently $H - G \succeq 0$.

2 Technical Results

As we stated in the introduction, a key issue in the analysis of an interior point method, particularly of a large-update IPM, is the decrease of a positive sequence of proximity measure values; this is crucial for the complexity of the algorithm. In this section we consider a general positive decreasing sequence. First we give a technical result which will be used in our later analysis.

Lemma 2.1 Suppose that $\alpha \ge 1$. Then

$(1-t)^\alpha \ge 1 - \alpha t, \quad t \in [0, 1];$ (9)
$t^\alpha \ge 1 + \alpha(t-1), \quad t \ge 0.$ (10)

If $\alpha_1 \ge \alpha_2 > 0$, then

$\left(t - t^{-\alpha_1}\right)\left(t - t^{-\alpha_2}\right) \ge \left(t - t^{-\alpha_2}\right)^2, \quad t > 0.$ (11)

Proof: The inequality (9) follows since, for any fixed $\alpha \ge 1$, the function $(1-t)^\alpha - 1 + \alpha t$ is increasing for $t \in [0,1]$ and zero at $t = 0$. When $\alpha \ge 1$, the function $t^\alpha - \alpha(t-1) - 1$ is convex with respect to $t \ge 0$ and has a global minimum zero at $t = 1$, which gives (10). One can easily verify (11). □

An equivalent statement of (9) is

$(1-t)^\alpha \le 1 - \alpha t, \quad t \in [0, 1], \ 0 < \alpha \le 1.$ (12)

Now we can state our main result in this section.

Proposition 2.2 Suppose that $t_0 > 1$ is a constant and that $\{t_k,\ k = 0, 1, \ldots\}$ is a sequence satisfying the inequalities

$t_{k+1} \le t_k - \beta t_k^{\gamma}, \quad k = 0, 1, \ldots,$ (13)

with $\gamma \in [0, 1)$ and $\beta > 0$. Then $t_{k+1} \le 0$ for all

$k \ge \frac{t_0^{1-\gamma}}{\beta(1-\gamma)}.$

Proof: First we note that if $\beta \ge t_0^{1-\gamma}$, then one step is sufficient. Hence we can assume without loss of generality that $0 < \beta < t_0^{1-\gamma}$. We begin the proof by considering the simple case $\beta = 1$. It follows from (13) that

$t_{k+1} \le t_k - t_k^{\gamma}.$ (14)

For the current point $t_k$ we get

$t_k^{1-\gamma} - t_{k+1}^{1-\gamma} \ge t_k^{1-\gamma} - \left(t_k - t_k^{\gamma}\right)^{1-\gamma} = t_k^{1-\gamma}\left(1 - \left(1 - t_k^{\gamma-1}\right)^{1-\gamma}\right) \ge t_k^{1-\gamma}(1-\gamma)\,t_k^{\gamma-1} = 1-\gamma,$

where the last inequality follows from (12). The above inequality further implies

$t_{k+1}^{1-\gamma} \le t_k^{1-\gamma} - (1-\gamma).$

Hence, since $\beta = 1$, after at most $\frac{t_0^{1-\gamma}}{1-\gamma}$ steps we have $t_{k+1} \le 0$. Now we turn to the general case $\beta \ne 1$. Let us define a new sequence by

$\tilde t_k = t_k\,\beta^{\frac{1}{\gamma-1}}, \quad k = 0, 1, \ldots$

Then the inequality (13) gives $\tilde t_{k+1} \le \tilde t_k - \tilde t_k^{\gamma}$. By our discussion of the special case $\beta = 1$, we know that after at most $\frac{\tilde t_0^{1-\gamma}}{1-\gamma}$ steps it holds that $\tilde t_{k+1} \le 0$, which is equivalent to saying that after at most

$\frac{t_0^{1-\gamma}}{\beta(1-\gamma)}$

steps we get $t_{k+1} \le 0$. This completes the proof of the proposition. □
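A quick numerical illustration of Proposition 2.2 (the sequence, parameters and sample values below are arbitrary choices for the demonstration):

```python
import math

def steps_to_nonpositive(t0, beta, gamma):
    """Iterate the extremal case of (13), t_{k+1} = t_k - beta*t_k**gamma,
    and count steps until t_k <= 0; Proposition 2.2 bounds this count by
    t0**(1-gamma) / (beta*(1-gamma))."""
    t, k = t0, 0
    while t > 0:
        t -= beta * t**gamma
        k += 1
    return k

t0, beta, gamma = 100.0, 0.5, 0.25
bound = t0**(1 - gamma) / (beta * (1 - gamma))
print(steps_to_nonpositive(t0, beta, gamma), "<=", math.ceil(bound))
```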

3 New Primal-Dual Methods for LO and Their Complexity

In the present section we discuss the new primal-dual Newton methods for LO and study the complexity of the large-update algorithm. The section consists of four parts. In the first subsection we describe the new algorithm. In the second subsection we estimate the magnitude of the search direction and the maximal value of a feasible step size. The third subsection is devoted to estimating the proximity measure after one step. The complexity of the algorithm is given in the last subsection.

3.1 The algorithm

Let $(\Delta x, \Delta y, \Delta s)$ denote the solution of the following modified Newton equation system for the parameterized system (2):

$A\Delta x = 0,$
$A^T\Delta y + \Delta s = 0,$ (15)
$s\Delta x + x\Delta s = \mu^{\frac{2+\rho}{2}}\,\frac{e}{(xs)^{\rho/2}} - xs.$

This is the Newton system for the equation $xs = \mu\left(\frac{\mu e}{xs}\right)^{\rho/2}$. Note that on the central path it holds that $\frac{xs}{\mu} = e = \left(\frac{\mu e}{xs}\right)^{\rho/2}$. From the definitions of $v, d_x, d_s$ one can easily check that the above system is equivalent to (7). Recall (cf. Chapter 7 in [17]) that since $\mathrm{rank}(A) = m$, for any $\mu > 0$ the above equation system has a unique solution $(\Delta x, \Delta y, \Delta s)$. The result of a damped Newton step with damping factor $\alpha$ is denoted as

$x_+ = x + \alpha\Delta x, \quad y_+ = y + \alpha\Delta y, \quad s_+ = s + \alpha\Delta s.$ (16)

In the algorithm we use a threshold value $\tau$ for the proximity, and we assume that we are given a triple $(x^0, y^0, s^0)$ such that $\delta(x^0s^0, \mu^0) \le \tau$ for $\mu^0 = 1$. This can be assumed without loss of generality (cf. [17]). If, for the current iterates $(x, y, s)$ and barrier parameter value $\mu$, the proximity $\delta(xs, \mu)$ exceeds $\tau$, then we use one or more damped Newton steps to recenter while keeping $\mu$ temporarily fixed; otherwise $\mu$ is reduced by the factor $1-\theta$. This is repeated until $n\mu < \varepsilon$. Thus the algorithm can be stated as follows.

Large Update Primal-Dual Algorithm for LO

Input:
  a proximity parameter $\tau$;
  an accuracy parameter $\varepsilon > 0$;
  a variable damping factor $\alpha$;
  a fixed barrier update parameter $\theta$, $0 < \theta < 1$;
  $(x^0, s^0)$ and $\mu^0 = 1$ such that $\delta(x^0s^0; \mu^0) \le \tau$.
begin
  $x := x^0$; $s := s^0$; $\mu := \mu^0$;
  while $n\mu \ge \varepsilon$ do
  begin
    $\mu := (1-\theta)\mu$;
    while $\delta(xs; \mu) \ge \tau$ do
    begin
      solve the system (15);
      $x := x + \alpha\Delta x$; $s := s + \alpha\Delta s$; $y := y + \alpha\Delta y$
    end
  end
end

Remark 3.1 The damping parameter $\alpha$ has to be taken such that the proximity measure function $\delta$ decreases sufficiently. In the next subsection we determine a default value for $\alpha$.
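A hedged end-to-end sketch of the algorithm in NumPy follows. It assumes a strictly feasible starting triple and substitutes a simple halving line search for the default damping factor derived in Subsection 3.3; initializing $\mu$ as $x^Ts/n$ is likewise a practical choice, not part of the statement above.

```python
import numpy as np

def large_update_ipm(A, x, y, s, rho=2.0, theta=0.5, tau=2.0,
                     eps=1e-6, max_inner=500):
    """Sketch of the Large Update Primal-Dual Algorithm for LO,
    with the modified Newton system (15) as inner solver."""
    n = len(x)
    mu = (x @ s) / n

    def delta(x, s, mu):
        v = np.sqrt(x * s / mu)
        return np.linalg.norm(1.0 / v - v)

    while n * mu >= eps:
        mu *= (1.0 - theta)                    # barrier update
        inner = 0
        while delta(x, s, mu) >= tau and inner < max_inner:
            # system (15): s*dx + x*ds = mu^{(2+rho)/2} (xs)^{-rho/2} - xs
            rhs = mu**((2 + rho) / 2) * (x * s)**(-rho / 2) - x * s
            d2 = x / s
            dy = np.linalg.solve((A * d2) @ A.T, -A @ (rhs / s))
            ds = -A.T @ dy
            dx = rhs / s - d2 * ds
            alpha = 1.0                        # damp until feasible and decreasing
            while np.any(x + alpha * dx <= 0) or np.any(s + alpha * ds <= 0) \
                    or delta(x + alpha * dx, s + alpha * ds, mu) > delta(x, s, mu):
                alpha *= 0.5
            x, s, y = x + alpha * dx, s + alpha * ds, y + alpha * dy
            inner += 1
    return x, y, s
```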

3.2 Magnitude of the search direction and feasible step size

First recall that the proximity $\delta(xs, \mu)$ before the step satisfies

$\delta(xs, \mu)^2 = \left\|v^{-1} - v\right\|^2 = e^T\left(v^{-1} - v\right)^2 = e^T\left(v^2 + v^{-2} - 2e\right).$ (17)

We have, using (15) and (16),

$x_+s_+ = (x + \alpha\Delta x)(s + \alpha\Delta s) = xs + \alpha(x\Delta s + s\Delta x) + \alpha^2\Delta x\Delta s = xs + \alpha\left(\mu^{\frac{2+\rho}{2}}\frac{e}{(xs)^{\rho/2}} - xs\right) + \alpha^2\Delta x\Delta s.$

Defining

$v_+ = \sqrt{\frac{x_+s_+}{\mu}},$

and using that $xs = \mu v^2$ and $\Delta x\Delta s = \mu\,d_xd_s$, we obtain

$v_+^2 = \frac{x_+s_+}{\mu} = v^2 + \alpha\left(v^{-\rho} - v^2\right) + \alpha^2 d_xd_s.$ (18)

Since the displacements $\Delta x$ and $\Delta s$ are orthogonal, the scaled displacements $d_x$ and $d_s$ are orthogonal as well. Hence we have

$d_x^Td_s = 0.$ (19)

Thus we get the following expression for the proximity after the step:

$\delta(x_+s_+, \mu)^2 = e^Tv_+^2 + e^Tv_+^{-2} - 2n = e^T\left(v^2 + \alpha\left(v^{-\rho} - v^2\right) + \alpha^2 d_xd_s\right) + e^T\left(v^2 + \alpha\left(v^{-\rho} - v^2\right) + \alpha^2 d_xd_s\right)^{-1} - 2n.$ (20)

In the sequel we will denote $\delta(xs, \mu)$ simply as $\delta$ and $\delta(x_+s_+, \mu)$ simply as $\delta_+$. Recall that the term $d_xd_s$ represents the second-order effect in the Newton step. It may be worthwhile to consider the case where this term is zero, i.e., when the Newton process is exact. In that case, after a step of size $\alpha$, $x_+s_+/\mu$ is given by

$w(\alpha)^2 := v^2 + \alpha\left(v^{-\rho} - v^2\right) = \alpha v^{-\rho} + (1-\alpha)v^2.$

We have the following result.

Lemma 3.2 If $d_xd_s = 0$ then

$\delta_+^2 \le (1-\alpha)\,\delta^2 + \alpha\left\|v^{\rho/2} - v^{-\rho/2}\right\|^2, \quad \alpha \in [0, 1].$ (21)

In particular, if $\alpha = \frac{2}{\rho+2}$, then $\delta_+^2 \le \frac{2\rho\,\delta^2}{\rho+2}$.

Proof: Defining

$\chi(t) := t + \frac{1}{t} - 2, \quad t > 0,$ (22)

one may easily verify that $\chi(t)$ is strictly convex on its domain and minimal at $t = 1$, where $\chi(1) = 0$. Moreover it holds that $\chi(t) = \chi(1/t)$. It follows that

$\delta^2 = \sum_{i=1}^n \chi\left(v_i^2\right) = \sum_{i=1}^n \chi\left(v_i^{-2}\right), \qquad \delta_+^2 = \sum_{i=1}^n \chi\left(w_i(\alpha)^2\right).$ (23)

Therefore, since $w_i(\alpha)^2 = (1-\alpha)v_i^2 + \alpha v_i^{-\rho}$, we may write

$\delta_+^2 = \sum_{i=1}^n \chi\left(w_i(\alpha)^2\right) \le \sum_{i=1}^n\left((1-\alpha)\,\chi(v_i^2) + \alpha\,\chi(v_i^{-\rho})\right) = (1-\alpha)\,\delta^2 + \alpha\left\|v^{\rho/2} - v^{-\rho/2}\right\|^2.$

This proves the first statement of the lemma. To prove the second conclusion of the lemma, we observe first that, when the Newton process is exact, the step size $\alpha = \frac{2}{\rho+2}$ is feasible. We need only to consider the case $\rho > 0$. It follows from (10) in Lemma 2.1 (applied with exponent $\frac{2}{\rho} \ge 1$) that $v_i^{\rho} \le 1 + \frac{\rho}{2}\left(v_i^2 - 1\right)$ and $v_i^{-\rho} \le 1 + \frac{\rho}{2}\left(v_i^{-2} - 1\right)$, whence

$\chi\left(v_i^{-\rho}\right) = v_i^{\rho} + v_i^{-\rho} - 2 \le \frac{\rho}{2}\left(v_i^2 + v_i^{-2} - 2\right) = \frac{\rho}{2}\,\chi\left(v_i^2\right).$

Substituting this into (21) with $\alpha = \frac{2}{\rho+2}$ yields

$\delta_+^2 \le \left(1 - \alpha\left(1 - \frac{\rho}{2}\right)\right)\delta^2 = \left(1 - \frac{2-\rho}{\rho+2}\right)\delta^2 = \frac{2\rho}{\rho+2}\,\delta^2.$

The proof of the lemma is finished. □

Now we consider the practical case where the Newton step is not exact, i.e., where the vector $d_xd_s$ is nonzero. According to (18), after a step of size $\alpha$ we then have (see (4)-(6))

$v_+^2 = \frac{x_+s_+}{\mu} = v^2\left(e + \alpha\bar d_x\right)\left(e + \alpha\bar d_s\right).$ (24)

Hence the maximal feasible step size is determined by the vector $(\bar d_x, \bar d_s)$. For notational convenience, we also define

$\sigma^2 := \sum_{i=1}^n \left(v_i^{-3} - v_i\right)\left(v_i^{-\rho-1} - v_i\right).$ (25)

Lemma 3.3 Let $\delta$ and $\sigma$ be defined by (8) and (25), respectively. It holds that $\sigma \ge \delta$ for all $\rho \ge 0$, and if $\rho \in [0, 2]$ then $\sigma \ge \|(d_x, d_s)\| \ge \delta$.

Proof: Combining (7) with (19) we obtain

$\|(d_x, d_s)\|^2 = d_x^Td_x + d_s^Td_s = (d_x + d_s)^T(d_x + d_s) = \left\|v^{-\rho-1} - v\right\|^2.$

Since $\rho \in [0, 2]$, it follows from (11) in Lemma 2.1 that

$\sigma^2 = \sum_{i=1}^n\left(v_i^{-3} - v_i\right)\left(v_i^{-\rho-1} - v_i\right) \ge \sum_{i=1}^n\left(v_i^{-\rho-1} - v_i\right)^2 = \|(d_x, d_s)\|^2 \ge \sum_{i=1}^n\left(v_i^{-1} - v_i\right)^2 = \delta^2,$

which yields the second statement of the lemma. Skipping the first inequality and the second equality in the above chain gives the first conclusion of the lemma. □

Our next lemma estimates the norm $\|v^{-1}\|_\infty$ in terms of $\sigma$.

Lemma 3.4 Let $\sigma$ be defined by (25). Then $\left\|v^{-1}\right\|_\infty \le (1+\sigma)^{\frac{2}{4+\rho}}$.

Proof: First note that $\left\|v^{-1}\right\|_\infty = v_{\min}^{-1}$. Hence it suffices to show that $v_{\min}^{-1} \le (1+\sigma)^{\frac{2}{4+\rho}}$. This is trivial if $v_{\min} \ge 1$. Now we consider the case $v_{\min} < 1$. From (25) we derive

$\sigma^2 \ge \left(v_i^{-3} - v_i\right)\left(v_i^{-\rho-1} - v_i\right), \quad i = 1, \ldots, n,$

which further implies, since $\rho \in [0, 2]$,

$\sigma^2 \ge \left(v_{\min}^{-3} - v_{\min}\right)\left(v_{\min}^{-\rho-1} - v_{\min}\right) = v_{\min}^{-4-\rho}\left(1 - v_{\min}^{4}\right)\left(1 - v_{\min}^{2+\rho}\right) \ge \left(v_{\min}^{-\frac{4+\rho}{2}} - 1\right)^2.$

The statement of the lemma follows directly from the above inequality. □

Now we are ready to state our main result in this subsection.

Lemma 3.5 Let $\bar d_x, \bar d_s$ be defined by (6). Then it holds that

$\|(\bar d_x, \bar d_s)\| \le \sigma\,(1+\sigma)^{\frac{2}{4+\rho}}.$

Furthermore, the maximal feasible step size $\alpha_{\max}$ satisfies

$\alpha_{\max} \ge \frac{1}{\sigma(1+\sigma)^{\frac{2}{4+\rho}}}.$

Proof: Since the current point $(x, y, s)$ is strictly feasible, from (24) one can easily see that the step size $\alpha$ is feasible if and only if both the vectors $e + \alpha\bar d_x$ and $e + \alpha\bar d_s$ are positive. It follows that

$\|(\bar d_x, \bar d_s)\| = \left\|\left(\frac{d_x}{v}, \frac{d_s}{v}\right)\right\| \le \frac{\|(d_x, d_s)\|}{v_{\min}} \le \frac{\sigma}{v_{\min}} \le \sigma(1+\sigma)^{\frac{2}{4+\rho}},$

where the last two inequalities follow from Lemma 3.3 and Lemma 3.4. This implies both statements of the lemma. □

3.3 Estimate of the proximity after a step

In this subsection we estimate the decrease of the proximity after one step. Let $\bar d_x = (\bar d_x^1, \bar d_x^2, \ldots, \bar d_x^n)^T$ and similarly for $\bar d_s$. We define the difference between the proximity before and after one step as a function of $\alpha$, i.e., $f(\alpha) = \delta_+^2 - \delta^2$. From (24) and the definitions (8) of $\delta$ and (20) of $\delta_+$, we derive

$f(\alpha) = \sum_{i=1}^n\left(\alpha\left(v_i^{-\rho} - v_i^2\right) + \alpha^2 d_x^id_s^i\right) + \sum_{i=1}^n\left(\frac{1}{v_i^2\left(1 + \alpha\bar d_x^i\right)\left(1 + \alpha\bar d_s^i\right)} - \frac{1}{v_i^2}\right).$ (26)

Obviously $f(\alpha)$ is a twice continuously differentiable function of $\alpha$ if the step size $\alpha$ is feasible. Our next result says that on the interval $[0, \alpha_{\max})$ the function $f(\alpha)$ is convex, where $\alpha_{\max}$ is the maximal feasible step size.

Lemma 3.6 Let the function $f(\alpha)$ be defined by (26) and let $\alpha \in [0, \alpha_{\max})$. Then $f(\alpha)$ is convex. Furthermore, it holds that

$f''(\alpha) \ge \sum_{i=1}^n \frac{1}{v_i^2}\left(\frac{(\bar d_x^i)^2}{(1+\alpha\bar d_x^i)^3(1+\alpha\bar d_s^i)} + \frac{(\bar d_s^i)^2}{(1+\alpha\bar d_s^i)^3(1+\alpha\bar d_x^i)}\right)$ (27)

and that

$f''(\alpha) \le 3\sum_{i=1}^n \frac{1}{v_i^2}\left(\frac{(\bar d_x^i)^2}{(1+\alpha\bar d_x^i)^3(1+\alpha\bar d_s^i)} + \frac{(\bar d_s^i)^2}{(1+\alpha\bar d_s^i)^3(1+\alpha\bar d_x^i)}\right).$ (28)

Proof: By direct calculation we have

$f''(\alpha) = \sum_{i=1}^n \frac{2}{v_i^2}\left(\frac{(\bar d_x^i)^2}{(1+\alpha\bar d_x^i)^3(1+\alpha\bar d_s^i)} + \frac{\bar d_x^i\,\bar d_s^i}{(1+\alpha\bar d_x^i)^2(1+\alpha\bar d_s^i)^2} + \frac{(\bar d_s^i)^2}{(1+\alpha\bar d_s^i)^3(1+\alpha\bar d_x^i)}\right).$

By using the well-known inequality $t_1t_2 \le \frac{1}{2}\left(t_1^2 + t_2^2\right)$, we get

$\left|\frac{\bar d_x^i\,\bar d_s^i}{(1+\alpha\bar d_x^i)^2(1+\alpha\bar d_s^i)^2}\right| \le \frac{1}{2}\left(\frac{(\bar d_x^i)^2}{(1+\alpha\bar d_x^i)^3(1+\alpha\bar d_s^i)} + \frac{(\bar d_s^i)^2}{(1+\alpha\bar d_s^i)^3(1+\alpha\bar d_x^i)}\right),$

which implies the statements of the lemma. □

Denote $\omega_i = \sqrt{(\bar d_x^i)^2 + (\bar d_s^i)^2}$ and $\omega = (\omega_1, \ldots, \omega_n)$. Obviously it holds that $\|\omega\| = \|(\bar d_x, \bar d_s)\|$. From Lemma 3.5,

$\|\omega\| \le \frac{\sigma}{v_{\min}} \le \sigma(1+\sigma)^{\frac{2}{4+\rho}}.$ (29)

Now, recalling (27) and (28), we can conclude that for any $\alpha \in [0, \alpha_{\max})$,

$\sum_{i=1}^n \frac{\omega_i^2}{v_i^2\left(1 + \alpha\|\omega\|\right)^4} \le f''(\alpha) \le 3\sum_{i=1}^n \frac{\omega_i^2}{v_i^2\left(1 - \alpha\|\omega\|\right)^4} \le \frac{3\|\omega\|^2}{v_{\min}^2\left(1 - \alpha\|\omega\|\right)^4}.$ (30)

A direct calculation gives $f'(0) = -\sigma^2$. It follows from (30) and the convexity of $f(\alpha)$ that

$f(\alpha) \le f'(0)\,\alpha + \int_0^\alpha\!\!\int_0^\xi \frac{3\|\omega\|^2}{v_{\min}^2\left(1 - \zeta\|\omega\|\right)^4}\,d\zeta\,d\xi =: f_1(\alpha),$ (31)

where evaluating the inner integral gives

$f_1(\alpha) = -\sigma^2\alpha + \frac{\|\omega\|}{v_{\min}^2}\int_0^\alpha\left(\frac{1}{(1-\xi\|\omega\|)^3} - 1\right)d\xi.$ (32)

It is easy to see that $f_1(\alpha)$ is also convex and twice differentiable on the interval $[0, 1/\|\omega\|)$. We are interested in a step size $\alpha^* > 0$ near the point at which $f_1'(\alpha)$ has the value zero. By direct calculus, one has

$f_1'(\alpha) = -\sigma^2 + \frac{\|\omega\|}{v_{\min}^2}\left(\frac{1}{(1-\alpha\|\omega\|)^3} - 1\right).$

Setting

$\eta = \frac{\sigma^2 v_{\min}^2}{2\|\omega\|},$ (33)

elementary calculus leads to the choice

$\alpha^* = \frac{2\eta}{\|\omega\|\left(3 + 2\eta + \sqrt{4\eta + 9}\right)}.$ (34)

For this $\alpha^*$ we have

$f(\alpha^*) \le \frac{1}{2}f_1'(0)\,\alpha^* = -\frac{\eta\,\sigma^2}{\|\omega\|\left(3 + 2\eta + \sqrt{4\eta + 9}\right)}.$

Now we are in a position to state our main result in this section.

Theorem 3.7 Let the function $f(\alpha)$ be defined by (26) with $\delta \ge 1$. Then the step size $\alpha^*$ defined by (34) is feasible. Moreover, it holds that

$f(\alpha^*) \le \frac{1}{2}f_1'(0)\,\alpha^* \le -\frac{1}{30}\,\delta^{\frac{2\rho}{4+\rho}}.$

Proof: The first part of the theorem follows directly from (31) and the choice of $\alpha^*$. The second conclusion of the theorem depends on several technical results, which will be derived below. We first discuss the case $v_{\min} \ge 1$.

Lemma 3.8 Let the function $f(\alpha)$ be defined by (26) with $\delta \ge 1$ and let the step size $\alpha^*$ be defined by (34). If $v_{\min} \ge 1$, then

$f(\alpha^*) < -\frac{\delta}{15} < -\frac{1}{30}\,\delta^{\frac{2\rho}{4+\rho}}.$

Proof: Since $v_{\min} \ge 1$, it follows from (29) that $\|\omega\| \le \sigma$. By the choice (33) of $\eta$ we have that

$\eta = \frac{\sigma^2 v_{\min}^2}{2\|\omega\|} \ge \frac{\sigma}{2}.$ (35)

Now recalling the fact that $\delta \ge 1$, it follows from Lemma 3.3 that $\eta \ge \frac{\sigma}{2} \ge \frac{\delta}{2} \ge \frac{1}{2}$. The above inequality gives $\alpha^* \ge \frac{2}{15\|\omega\|}$, and hence

$f(\alpha^*) \le \frac{1}{2}f_1'(0)\,\alpha^* \le -\frac{\sigma^2}{15\|\omega\|} \le -\frac{\sigma}{15} \le -\frac{\delta}{15}.$ (36)

Since $\delta \ge 1$ and $\frac{2\rho}{4+\rho} \le 1$, it is easy to verify the second inequality in the lemma. This completes the proof of the lemma. □

In what follows we consider the case $v_{\min} < 1$. First we estimate the constant $\eta$ and the step size $\alpha^*$ in Theorem 3.7.

Lemma 3.9 Let the constant $\eta$ be defined by (33) and $\alpha^*$ by (34). If $\delta \ge 1$, then it holds that

$\eta \ge \frac{1}{2\left(1 + \sigma^{\frac{2-\rho}{4+\rho}}\right)}$ (37)

and

$\alpha^* \ge \frac{2}{15\|\omega\|\left(1 + \sigma^{\frac{2-\rho}{4+\rho}}\right)}.$ (38)

Proof: From Lemma 3.4, (29) and (33) we obtain

$\eta = \frac{\sigma^2 v_{\min}^2}{2\|\omega\|} \ge \frac{\sigma\,v_{\min}^3}{2} \ge \frac{\sigma}{2(1+\sigma)^{\frac{6}{4+\rho}}} \ge \frac{1}{2\left(1 + \sigma^{\frac{2-\rho}{4+\rho}}\right)},$

where the last two inequalities are implied by the fact that $\sigma \ge \delta \ge 1$ and because the function $g(t) = \frac{t}{1+t}$ is increasing with respect to $t \ge 0$. This proves the first inequality of the lemma. Now we consider the second inequality. From (34) we derive

$\alpha^* = \frac{2\eta}{\|\omega\|\left(3 + 2\eta + \sqrt{4\eta+9}\right)} \ge \frac{2}{\|\omega\|\left(\frac{6}{\eta} + 3\right)} \ge \frac{2}{15\|\omega\|\left(1 + \sigma^{\frac{2-\rho}{4+\rho}}\right)},$

where the first inequality follows by direct calculus, using $\sqrt{4\eta+9} \le 3 + \eta$, and the second from (37) together with $15 + 12\,\sigma^{\frac{2-\rho}{4+\rho}} \le 15\left(1 + \sigma^{\frac{2-\rho}{4+\rho}}\right)$. The proof of the lemma is finished. □

Now we can prove the following lemma.

Lemma 3.10 Let the function $f(\alpha)$ be defined by (26) with $\delta \ge 1$, let the step size $\alpha^*$ be defined by (34), and let $v_{\min} < 1$. Then it holds that

$f(\alpha^*) \le \frac{1}{2}f_1'(0)\,\alpha^* \le -\frac{1}{30}\,\delta^{\frac{2\rho}{4+\rho}}.$

Proof: From (38) we obtain

$f(\alpha^*) \le \frac{1}{2}f_1'(0)\,\alpha^* = -\frac{\sigma^2\alpha^*}{2} \le -\frac{\sigma^2}{15\|\omega\|\left(1 + \sigma^{\frac{2-\rho}{4+\rho}}\right)}.$

Combining this with (29) and with $\sigma \ge \delta \ge 1$ (Lemma 3.3), a direct calculation yields

$f(\alpha^*) \le -\frac{1}{30}\,\delta^{\frac{2\rho}{4+\rho}}.$

This completes the proof of the lemma. □

Theorem 3.7 follows from Lemma 3.8 and Lemma 3.10. □
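The default damping factor can be computed directly from the current iterate; the sketch below implements the quantities (25), (29), (33) and (34), using the upper bound $\sigma/v_{\min}$ of (29) in place of $\|\omega\|$ (a conservative, illustrative substitution):

```python
import numpy as np

def default_damping(x, s, mu, rho):
    """Default step size of Subsection 3.3: sigma from (25),
    eta from (33) and alpha* from (34)."""
    v = np.sqrt(x * s / mu)
    sigma = np.sqrt(np.sum((v**-3 - v) * (v**(-rho - 1) - v)))
    omega = sigma / v.min()          # upper bound (29) used for ||omega||
    eta = sigma**2 * v.min()**2 / (2.0 * omega)
    return 2.0 * eta / (omega * (3.0 + 2.0 * eta + np.sqrt(4.0 * eta + 9.0)))
```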

3.4 Complexity of the algorithm

In this subsection we derive an upper bound for the number of iterations of the algorithm if in each step the damping factor $\alpha$ is taken as in Theorem 3.7, namely $\alpha^* = \frac{2\eta}{\|\omega\|\left(3+2\eta+\sqrt{4\eta+9}\right)}$. Then each damped Newton step reduces the squared proximity by at least $\frac{1}{30}\delta^{\frac{2\rho}{4+\rho}}$, which depends on the current proximity. We first recall a lemma that estimates the proximity after a barrier parameter update.

Lemma 3.11 Let $(x, y, s)$ be strictly feasible and $\mu > 0$. If $\mu_+ = (1-\theta)\mu$, then

$\delta(xs, \mu_+) \le \frac{\delta(xs, \mu) + \theta\sqrt{n}}{\sqrt{1-\theta}}.$

Proof: The lemma is a slight modification of Lemma IV.36 (page 359) in [17]. The only difference lies in the definition of $\delta$, which is $\delta = \|v^{-1} - v\|$ in this work, while $\delta = \frac{1}{2}\|v^{-1} - v\|$ in [17]; hence the details are omitted here. □

Lemma 3.12 Let $\delta(xs, \mu) \le \tau$ and $\tau \ge 1$. Then after an update of the barrier parameter no more than

$\frac{15(4+\rho)}{2}\left(\frac{(\tau + \theta\sqrt{n})^2}{1-\theta}\right)^{\frac{4}{4+\rho}}$

iterations are needed to recenter, namely to reach $\delta(xs, \mu_+) \le \tau$ again.

Proof: By Lemma 3.11, after the update,

$\delta(xs, \mu_+) \le \frac{\tau + \theta\sqrt{n}}{\sqrt{1-\theta}}.$

Each damped Newton step decreases $\delta^2$ by at least $\frac{1}{30}\delta^{\frac{2\rho}{4+\rho}} = \frac{1}{30}\left(\delta^2\right)^{\frac{\rho}{4+\rho}}$. It follows from Proposition 2.2, applied with $t_k = \delta_k^2$, $\beta = \frac{1}{30}$ and $\gamma = \frac{\rho}{4+\rho}$, that after at most

$\frac{30\left(\frac{(\tau+\theta\sqrt{n})^2}{1-\theta}\right)^{\frac{4}{4+\rho}}}{1 - \frac{\rho}{4+\rho}} = \frac{15(4+\rho)}{2}\left(\frac{(\tau + \theta\sqrt{n})^2}{1-\theta}\right)^{\frac{4}{4+\rho}}$

inner iterations, the proximity will have passed the threshold value $\tau$. This implies the lemma. □

Theorem 3.13 If $\tau \ge 1$, the total number of iterations required by the primal-dual Newton algorithm is no more than

$\frac{15(4+\rho)}{2}\,\frac{(\tau + \theta\sqrt{n})^{\frac{8}{4+\rho}}}{(1-\theta)^{\frac{4}{4+\rho}}}\cdot\frac{1}{\theta}\log\frac{n}{\varepsilon}.$

Proof: It can easily be shown that the number of barrier parameter updates is given by (cf. Lemma II.17 in [17])

$\frac{1}{\theta}\log\frac{n}{\varepsilon}.$

Multiplication of this number by the bound in Lemma 3.12 yields the theorem. □

For large-update IPMs, omitting the rounding brackets in Theorem 3.13 does not change the order of magnitude of the iteration bound. Hence we may safely consider the above expression as an upper bound for the number of iterations. In the case $\tau = O(\sqrt{n})$, $\theta \in (0,1)$ independent of $n$, and $\rho = 2$, this gives

$O\left(n^{\frac{2}{3}}\log\frac{n}{\varepsilon}\right).$

This is the best bound known for large-update methods with large neighborhoods. Note that if $\rho = 0$, we obtain the to-date best-known complexity bounds for both small- and large-update methods. Moreover, our analysis allows extremely aggressive updates of $\mu$ while preserving $O\left(n\log\frac{n}{\varepsilon}\right)$ complexity: for instance, we may take $\theta = 1 - \frac{1}{\sqrt{n}}$, $\tau = O(\sqrt{n})$ and $\rho = 2$, resulting in an $O\left(n\log\frac{n}{\varepsilon}\right)$ complexity bound.
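The bound of Theorem 3.13 is easy to evaluate numerically; the following sketch tabulates it for a given parameter choice (the sample numbers are arbitrary):

```python
import math

def iteration_bound(n, rho, theta, tau, eps):
    """Total iteration bound of Theorem 3.13:
    (15(4+rho)/2) * (tau + theta*sqrt(n))^(8/(4+rho))
                  / (1-theta)^(4/(4+rho)) * (1/theta) * log(n/eps)."""
    inner = 7.5 * (4 + rho) * (tau + theta * math.sqrt(n))**(8 / (4 + rho)) \
            / (1 - theta)**(4 / (4 + rho))
    return inner * math.log(n / eps) / theta

# large update: theta = 1/2, tau = sqrt(n), rho = 2  ->  O(n^(2/3) log(n/eps))
print(iteration_bound(10**4, 2.0, 0.5, math.sqrt(10**4), 1e-6))
```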

4 New Primal-Dual Algorithms for Semidefinite Optimization

In this section we consider an extension of the algorithms proposed in the previous section to the case of SDO. We consider the SDO problem given in the following standard form:

(SDO)  $\min\ \mathrm{Tr}(CX)$ s.t. $\mathrm{Tr}(A_iX) = b_i\ (1 \le i \le m),\ X \succeq 0,$

and its dual problem

(SDD)  $\max\ b^Ty$ s.t. $\sum_{i=1}^m y_iA_i + S = C,\ S \succeq 0.$

Here $C$ and $A_i$ $(1 \le i \le m)$ are symmetric $n\times n$ matrices, and $b, y \in \mathbb{R}^m$. Furthermore, $X \succeq 0$ means that $X$ is symmetric positive semidefinite. The matrices $A_i$ are assumed to be linearly independent. SDO is a generalization of LO: when all the matrices $A_i$ and $C$ are diagonal, $S$ is automatically diagonal and $X$ might also be assumed to be diagonal. The concept of the central path can also be extended to SDO. We assume that both (SDO) and its dual (SDD) are strictly feasible. The central path for SDO is defined by the solution set $\{(X(\mu), y(\mu), S(\mu)) : \mu > 0\}$ of the following system:

$\mathrm{Tr}(A_iX) = b_i, \quad i = 1, \ldots, m,$
$\sum_{i=1}^m y_iA_i + S = C,$ (39)
$XS = \mu E, \quad X, S \succeq 0,$

where $E$ denotes the $n\times n$ identity matrix and $\mu > 0$. Suppose the point $(X, y, S)$ is strictly feasible, so $X \succ 0$ and $S \succ 0$. Newton's method amounts to linearizing the system (39), thus

yielding the following equation system:

$\mathrm{Tr}(A_i\Delta X) = 0, \quad i = 1, \ldots, m,$
$\sum_{i=1}^m \Delta y_iA_i + \Delta S = 0,$ (40)
$X\Delta S + \Delta X S = \mu E - XS.$

A crucial observation for SDO is that the above Newton system might have no symmetric solution $\Delta X$. Many researchers have proposed different ways of symmetrizing the third equation in the Newton system so that the new system has a unique symmetric solution [20, 21]. In this paper we consider the symmetrization scheme that yields the NT direction [15, 21]. Let us define the matrix

$P := X^{\frac{1}{2}}\left(X^{\frac{1}{2}}SX^{\frac{1}{2}}\right)^{-\frac{1}{2}}X^{\frac{1}{2}} = S^{-\frac{1}{2}}\left(S^{\frac{1}{2}}XS^{\frac{1}{2}}\right)^{\frac{1}{2}}S^{-\frac{1}{2}},$ (41)

and $D = P^{\frac{1}{2}}$. The matrix $D$ can be used to rescale $X$ and $S$ to the same matrix $V$, defined by [3, 15, 20, 19]

$V := \frac{1}{\sqrt{\mu}}\,D^{-1}XD^{-1} = \frac{1}{\sqrt{\mu}}\,DSD.$ (42)

Obviously the matrices $D$ and $V$ are symmetric and positive definite. Also defining

$\bar A_i := DA_iD, \quad i = 1, \ldots, m; \qquad D_X := \frac{1}{\sqrt{\mu}}\,D^{-1}\Delta XD^{-1}, \quad D_S := \frac{1}{\sqrt{\mu}}\,D\Delta SD,$ (43)

the NT search direction can be written as the solution of the following system:

$\mathrm{Tr}(\bar A_iD_X) = 0, \quad i = 1, \ldots, m,$
$\sum_{i=1}^m \Delta y_i\bar A_i + D_S = 0,$ (44)
$D_X + D_S = V^{-1} - V.$

Similarly to the case of LO, the new search direction we suggest for SDO is a slight modification of the NT direction, defined by the solution of the following system:

$\mathrm{Tr}(\bar A_iD_X) = 0, \quad i = 1, \ldots, m,$
$\sum_{i=1}^m \Delta y_i\bar A_i + D_S = 0,$ (45)
$D_X + D_S = V^{-\rho-1} - V,$

where we choose $\rho \in [0, 2]$. Then $\Delta X$ and $\Delta S$ can be calculated from (43). Due to the orthogonality of $\Delta X$ and $\Delta S$, it is trivial to see that

$\mathrm{Tr}(D_XD_S) = \mathrm{Tr}(D_SD_X) = 0.$ (46)

The proximity measure we use here is

$\delta(XS, \mu) := \left\|V^{-1} - V\right\|,$ (47)

where $\|\cdot\|$ is the Frobenius norm.
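The NT scaling is straightforward to compute via eigendecompositions; the following sketch implements (41)-(42) and checks that the two expressions for $V$ coincide. The helper mat_power is our own utility and assumes symmetric positive definite input.

```python
import numpy as np

def mat_power(M, p):
    """Symmetric positive definite matrix power via eigendecomposition."""
    w, Q = np.linalg.eigh(M)
    return (Q * w**p) @ Q.T

def nt_scaling(X, S, mu):
    """NT scaling (41)-(42): P = X^{1/2}(X^{1/2} S X^{1/2})^{-1/2} X^{1/2},
    D = P^{1/2}, V = D^{-1} X D^{-1} / sqrt(mu) = D S D / sqrt(mu)."""
    Xh = mat_power(X, 0.5)
    P = Xh @ mat_power(Xh @ S @ Xh, -0.5) @ Xh
    D = mat_power(P, 0.5)
    Dinv = mat_power(P, -0.5)
    V = Dinv @ X @ Dinv / np.sqrt(mu)
    # sanity check of the identity in (42)
    assert np.allclose(V, D @ S @ D / np.sqrt(mu), atol=1e-8)
    return D, V
```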

The algorithm can be stated as follows.

Large Update Primal-Dual Algorithm for SDO

Input:
  a proximity parameter $\tau$;
  an accuracy parameter $\varepsilon > 0$;
  a variable damping factor $\alpha$;
  a fixed barrier update parameter $\theta \in (0, 1)$;
  a strictly feasible $(X^0, S^0)$ and $\mu^0 > 0$ such that $\delta(X^0S^0; \mu^0) \le \tau$.
begin
  $X := X^0$; $S := S^0$; $\mu := \mu^0$;
  while $n\mu \ge \varepsilon$ do
  begin
    $\mu := (1-\theta)\mu$;
    while $\delta(XS; \mu) \ge \tau$ do
    begin
      solve the system (45);
      $X := X + \alpha\Delta X$; $S := S + \alpha\Delta S$; $y := y + \alpha\Delta y$
    end
  end
end

Now we begin to estimate the decrease of the proximity after one step. Let us define

$\sigma^2 = \mathrm{Tr}\left(\left(V^{-3} - V\right)\left(V^{-\rho-1} - V\right)\right).$ (48)

Note that the matrices $V^{-3} - V$ and $V^{-\rho-1} - V$ commute. Hence they admit a similarity transformation that simultaneously diagonalizes both matrices. Then, by using similar arguments as in the LO case, one can easily derive the following result.

Lemma 4.1 Let $\delta$ and $\sigma$ be defined by (47) and (48), respectively. It holds that

$\sigma \ge \|(D_X, D_S)\| \ge \delta, \qquad \left\|V^{-1}\right\|_\infty = \lambda_{\max}\left(V^{-1}\right) = \frac{1}{\lambda_{\min}(V)} \le (1+\sigma)^{\frac{2}{4+\rho}}.$ (49)

Defining $\delta_+$ as the proximity measure after a feasible step, we have

$\delta_+^2 = \mathrm{Tr}\left((V + \alpha D_X)(V + \alpha D_S)\right) + \mathrm{Tr}\left((V + \alpha D_X)^{-1}(V + \alpha D_S)^{-1}\right) - 2n$
$\qquad = \mathrm{Tr}\left(V^2\right) + \alpha\,\mathrm{Tr}\left(V(D_X + D_S)\right) + \mathrm{Tr}\left((V + \alpha D_X)^{-1}(V + \alpha D_S)^{-1}\right) - 2n,$ (50)

where the second equality follows from (46). Our goal is to estimate the decrease

$f(\alpha) = \delta_+^2 - \delta^2$ (51)

for a feasible step size $\alpha$. The main difficulty in the estimation of the function $f(\alpha)$ is the evaluation of its first and second derivatives. For this we need some facts about matrix functions [2]. Suppose the matrix functions $G(t), H(t)$ are differentiable and nonsingular at the point $t$. Then we have

$\frac{d}{dt}G^{-1}(t) = -G^{-1}(t)\left[\frac{d}{dt}G(t)\right]G^{-1}(t);$ (52)
$\frac{d}{dt}\mathrm{Tr}(G(t)) = \mathrm{Tr}\left(\frac{d}{dt}G(t)\right);$ (53)
$\frac{d}{dt}\left(G(t)H(t)\right) = \left[\frac{d}{dt}G(t)\right]H(t) + G(t)\left[\frac{d}{dt}H(t)\right].$ (54)

The following inequalities about matrix eigenvalues are also necessary for our analysis [2]. Suppose that the matrices $G, H$ are symmetric. Then

$\mathrm{Tr}(GH) \le \sum_{i=1}^n \left|\lambda_i(G)\,\lambda_i(H)\right|,$ (55)
$\left|\lambda_i(GH)\right| \le \min\left\{\left|\lambda_1(G)\,\lambda_i(H)\right|,\ \left|\lambda_1(H)\,\lambda_i(G)\right|\right\}.$ (56)

The inequality (56) also implies that

$\left|\mathrm{Tr}(GH)\right| \le \min\left\{\left|\lambda_1(G)\right|\sum_{i=1}^n\left|\lambda_i(H)\right|,\ \left|\lambda_1(H)\right|\sum_{i=1}^n\left|\lambda_i(G)\right|\right\}.$ (57)

In particular, if $G_1 \preceq G_2$ and $H \succeq 0$, then it holds that

$\mathrm{Tr}(G_1H) \le \mathrm{Tr}(G_2H).$ (58)

In fact, for symmetric matrices $G, H$ it is not difficult to prove the following result, which is a refinement of (55).

Lemma 4.2 Suppose both $G$ and $H$ are symmetric. Then it holds that

$\mathrm{Tr}(GH) \le \mathrm{Tr}\left(|G|\,|H|\right).$ (59)

Proof: First we observe that for any orthogonal matrix $Q$ it holds that $\mathrm{Tr}(QGHQ^T) = \mathrm{Tr}(GH)$. Premultiplying by an orthogonal matrix $Q$ and postmultiplying by its transpose $Q^T$ if necessary, we can assume without loss of generality that $G$ is diagonal. Now, recalling the definition of the operator $|\cdot|$ for matrices, we can claim that $H \preceq |H|$, which implies $|H_{ii}| \le |H|_{ii}$ for all $i = 1, 2, \ldots, n$. It follows that

$\mathrm{Tr}(GH) = \sum_{i=1}^n G_{ii}H_{ii} \le \sum_{i=1}^n \left|G_{ii}\right|\left|H_{ii}\right| \le \sum_{i=1}^n |G|_{ii}\,|H|_{ii} = \mathrm{Tr}\left(|G|\,|H|\right).$

This completes the proof of the lemma. □
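Lemma 4.2 is easy to probe numerically; the sketch below checks (59) on random symmetric matrices (the matrix size, sample count and tolerance are arbitrary choices):

```python
import numpy as np

def mat_abs(G):
    """|G| = (G G)^{1/2} for symmetric G, via eigendecomposition."""
    w, Q = np.linalg.eigh(G)
    return (Q * np.abs(w)) @ Q.T

rng = np.random.default_rng(0)
for _ in range(1000):
    G = rng.standard_normal((4, 4)); G = (G + G.T) / 2
    H = rng.standard_normal((4, 4)); H = (H + H.T) / 2
    assert np.trace(G @ H) <= np.trace(mat_abs(G) @ mat_abs(H)) + 1e-12
```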

Using (52), (53) and (54) we obtain that $f'(0) = -\sigma^2$. Now we are going to estimate the second derivative $f''(\alpha)$ of $f(\alpha)$. For notational convenience we also define

$\tilde D_x = V^{-\frac{1}{2}}D_XV^{-\frac{1}{2}}, \qquad \tilde D_s = V^{-\frac{1}{2}}D_SV^{-\frac{1}{2}}.$ (60)

We insert here a technical result about the norms of the matrices $\tilde D_x$ and $\tilde D_s$.

Lemma 4.3 Let the matrices $\tilde D_x$ and $\tilde D_s$ be defined by (60). Then it holds that

$\|\tilde D_x\|^2 + \|\tilde D_s\|^2 \le \sigma^2(1+\sigma)^{\frac{4}{4+\rho}}.$

Proof: From (56) we conclude that

$\left|\lambda_i(\tilde D_x)\right| \le \frac{\left|\lambda_i(D_X)\right|}{\lambda_{\min}(V)}, \qquad \left|\lambda_i(\tilde D_s)\right| \le \frac{\left|\lambda_i(D_S)\right|}{\lambda_{\min}(V)}.$

It follows that

$\|\tilde D_x\|^2 + \|\tilde D_s\|^2 = \sum_{i=1}^n\left(\lambda_i(\tilde D_x)^2 + \lambda_i(\tilde D_s)^2\right) \le \frac{1}{\lambda_{\min}(V)^2}\sum_{i=1}^n\left(\lambda_i(D_X)^2 + \lambda_i(D_S)^2\right) = \frac{\left\|D_X + D_S\right\|^2}{\lambda_{\min}(V)^2} \le \sigma^2(1+\sigma)^{\frac{4}{4+\rho}},$

where the equality is given by (46), and the last inequality follows from the definition of $\sigma$ and (49). □

Note that the last term in (50) can be written as

$\mathrm{Tr}\left(V^{-1}\left(E + \alpha\tilde D_x\right)^{-1}V^{-1}\left(E + \alpha\tilde D_s\right)^{-1}\right).$

A useful observation is that the matrices $\tilde D_x$ and $\left(E + \alpha\tilde D_x\right)^{-1}$ commute, and similarly $\tilde D_s$ and $\left(E + \alpha\tilde D_s\right)^{-1}$. Now we are ready to state one of our main results in this section.

Lemma 4.4 Suppose the step size $\alpha$ is strictly feasible. Then it holds that

$f''(\alpha) \le \frac{3}{\lambda_{\min}(V)^2}\sum_{i=1}^n\left(\frac{\lambda_i(\tilde D_x)^2}{\left(1 - \alpha\left|\lambda_1(\tilde D_x)\right|\right)^3\left(1 - \alpha\left|\lambda_1(\tilde D_s)\right|\right)} + \frac{\lambda_i(\tilde D_s)^2}{\left(1 - \alpha\left|\lambda_1(\tilde D_s)\right|\right)^3\left(1 - \alpha\left|\lambda_1(\tilde D_x)\right|\right)}\right).$

Proof: By applying (52), (53) and (54) to the function $f(\alpha)$, we obtain

$f''(\alpha) = 2\,\mathrm{Tr}\left(V^{-1}(E+\alpha\tilde D_x)^{-3}\tilde D_x^2\,V^{-1}(E+\alpha\tilde D_s)^{-1}\right) + 2\,\mathrm{Tr}\left(V^{-1}(E+\alpha\tilde D_x)^{-1}V^{-1}(E+\alpha\tilde D_s)^{-3}\tilde D_s^2\right)$
$\qquad + 2\,\mathrm{Tr}\left(V^{-1}(E+\alpha\tilde D_x)^{-2}\tilde D_x\,V^{-1}(E+\alpha\tilde D_s)^{-2}\tilde D_s\right).$ (61)

We proceed by considering the first term in the above formula. By the definition of the operator $|\cdot|$ for a matrix, we conclude that

$E + \alpha\tilde D_x \succeq E - \alpha|\tilde D_x|, \qquad E + \alpha\tilde D_s \succeq E - \alpha|\tilde D_s|,$

which is equivalent to

$(E + \alpha\tilde D_x)^{-1} \preceq (E - \alpha|\tilde D_x|)^{-1}, \qquad (E + \alpha\tilde D_s)^{-1} \preceq (E - \alpha|\tilde D_s|)^{-1}.$

Hence

$(E+\alpha\tilde D_x)^{-3}\tilde D_x^2 = \tilde D_x(E+\alpha\tilde D_x)^{-3}\tilde D_x \preceq \tilde D_x(E-\alpha|\tilde D_x|)^{-3}\tilde D_x = |\tilde D_x|\left(E-\alpha|\tilde D_x|\right)^{-3}|\tilde D_x|,$

since $\tilde D_x^2 = |\tilde D_x|^2$ and $\tilde D_x$ commutes with $\left(E \pm \alpha|\tilde D_x|\right)^{-1}$. It follows that

$\mathrm{Tr}\left(V^{-1}(E+\alpha\tilde D_x)^{-3}\tilde D_x^2\,V^{-1}(E+\alpha\tilde D_s)^{-1}\right) \le \mathrm{Tr}\left(V^{-1}(E-\alpha|\tilde D_x|)^{-3}\tilde D_x^2\,V^{-1}(E-\alpha|\tilde D_s|)^{-1}\right),$

where the inequality is given by (58). Now, recalling (56), we can claim that

$\lambda_i\left(V^{-1}(E-\alpha|\tilde D_x|)^{-3}\tilde D_x^2\,V^{-1}\right) \le \frac{\lambda_i(\tilde D_x)^2}{\lambda_{\min}(V)^2\left(1-\alpha|\lambda_1(\tilde D_x)|\right)^3}, \quad i = 1, \ldots, n,$

and

$\lambda_1\left((E-\alpha|\tilde D_s|)^{-1}\right) \le \frac{1}{1-\alpha|\lambda_1(\tilde D_s)|}.$

The above two inequalities, combined with (55), yield

$\mathrm{Tr}\left(V^{-1}(E+\alpha\tilde D_x)^{-3}\tilde D_x^2\,V^{-1}(E+\alpha\tilde D_s)^{-1}\right) \le \frac{1}{\lambda_{\min}(V)^2}\sum_{i=1}^n\frac{\lambda_i(\tilde D_x)^2}{\left(1-\alpha|\lambda_1(\tilde D_x)|\right)^3\left(1-\alpha|\lambda_1(\tilde D_s)|\right)}.$

Similarly, we have

$\mathrm{Tr}\left(V^{-1}(E+\alpha\tilde D_s)^{-3}\tilde D_s^2\,V^{-1}(E+\alpha\tilde D_x)^{-1}\right) \le \frac{1}{\lambda_{\min}(V)^2}\sum_{i=1}^n\frac{\lambda_i(\tilde D_s)^2}{\left(1-\alpha|\lambda_1(\tilde D_s)|\right)^3\left(1-\alpha|\lambda_1(\tilde D_x)|\right)}.$

The proof of the lemma will be finished if we can show that the last term in (61) satisfies an analogous bound. Using the definition of the operator $|\cdot|$ again, together with (55) and Lemma 4.2, one obtains

$\mathrm{Tr}\left(V^{-1}(E+\alpha\tilde D_x)^{-2}\tilde D_x\,V^{-1}(E+\alpha\tilde D_s)^{-2}\tilde D_s\right) \le \frac{1}{\lambda_{\min}(V)^2}\sum_{i=1}^n\frac{\left|\lambda_i(\tilde D_x)\right|\left|\lambda_i(\tilde D_s)\right|}{\left(1-\alpha|\lambda_1(\tilde D_x)|\right)^2\left(1-\alpha|\lambda_1(\tilde D_s)|\right)^2}$
$\qquad \le \frac{1}{2\lambda_{\min}(V)^2}\sum_{i=1}^n\left(\frac{\lambda_i(\tilde D_x)^2}{\left(1-\alpha|\lambda_1(\tilde D_x)|\right)^3\left(1-\alpha|\lambda_1(\tilde D_s)|\right)} + \frac{\lambda_i(\tilde D_s)^2}{\left(1-\alpha|\lambda_1(\tilde D_s)|\right)^3\left(1-\alpha|\lambda_1(\tilde D_x)|\right)}\right),$

where the last inequality follows from the Cauchy inequality. This completes the proof of Lemma 4.4. □

Similarly to the LO case, we also define $\omega_i = \sqrt{\lambda_i(\tilde D_x)^2 + \lambda_i(\tilde D_s)^2}$ and $\omega = (\omega_1, \omega_2, \ldots, \omega_n)$. A direct consequence of Lemma 4.4 is

$f''(\alpha) \le \frac{3\|\omega\|^2}{\lambda_{\min}(V)^2\left(1 - \alpha\|\omega\|\right)^4},$ (62)

which is the same as in the case of LO, except that $v_{\min}$ is replaced by $\lambda_{\min}(V)$. Because $f'(0) = -\sigma^2$, we can use the same arguments as in the LO case to get

$f(\alpha) \le -\sigma^2\alpha + \ldots =: f_1(\alpha),$ (63)

where

$f_1(\alpha) = -\sigma^2\alpha + \frac{\|\omega\|}{\lambda_{\min}(V)^2}\int_0^\alpha\left(\frac{1}{(1-\xi\|\omega\|)^3} - 1\right)d\xi.$ (64)

Setting

$\eta = \frac{\sigma^2\,\lambda_{\min}(V)^2}{2\|\omega\|},$ (65)

the same calculation as in the LO case leads to the step size

$\alpha^* = \frac{2\eta}{\|\omega\|\left(3 + 2\eta + \sqrt{4\eta + 9}\right)},$

for which we have

$f(\alpha^*) \le \frac{1}{2}f_1'(0)\,\alpha^* = -\frac{\eta\,\sigma^2}{\|\omega\|\left(3 + 2\eta + \sqrt{4\eta + 9}\right)}.$

Our next result estimates the above-defined constant $\eta$ and the step size $\alpha^*$ in terms of $\sigma$.

Lemma 4.5 Let the constant $\eta$ be defined by (65) and $\alpha^*$ as above. If $\delta \ge 1$, then it holds that

$\eta \ge \frac{1}{2\left(1 + \sigma^{\frac{2-\rho}{4+\rho}}\right)}$ (66)

and

$\alpha^* \ge \frac{2}{15\|\omega\|\left(1 + \sigma^{\frac{2-\rho}{4+\rho}}\right)}.$ (67)

Proof: First note that from Lemma 4.3 and its proof we obtain

$\|\omega\| \le \frac{\sigma}{\lambda_{\min}(V)} \le \sigma(1+\sigma)^{\frac{2}{4+\rho}}.$ (68)

The above relation means that

$\eta = \frac{\sigma^2\,\lambda_{\min}(V)^2}{2\|\omega\|} \ge \frac{\sigma\,\lambda_{\min}(V)^3}{2} \ge \frac{\sigma}{2}$

if $\lambda_{\min}(V) \ge 1$. Hence (66) is true whenever $\lambda_{\min}(V) \ge 1$. Now we consider the case $\lambda_{\min}(V) < 1$. Using (68), together with Lemma 4.1, (64) and (65), and following an analogous process as in the proof of (37) in Lemma 3.9, one gets the desired inequality (66). By using (66) and (68), and following similar arguments as in the proof of Lemma 3.9, one can easily prove (67). This completes the proof of the lemma. □

Now we can state another main result in this section, which is a direct consequence of (63) and Lemma 4.5.

Theorem 4.6 Let the function $f(\alpha)$ be defined by (51) with $\delta \ge 1$. Then the step size $\alpha^*$ defined above is feasible. Moreover, it holds that

$f(\alpha^*) \le \frac{1}{2}f_1'(0)\,\alpha^* \le -\frac{1}{30}\,\delta^{\frac{2\rho}{4+\rho}}.$

We proceed to estimate the complexity of the algorithm. If the damping factor $\alpha$ is defined as in Theorem 4.6, namely $\alpha^* = \frac{2\eta}{\|\omega\|\left(3+2\eta+\sqrt{4\eta+9}\right)}$, then each damped Newton step reduces the squared proximity by at least $\frac{1}{30}\delta^{\frac{2\rho}{4+\rho}}$. From Lemma 3.11 we know that the proximity satisfies $\delta(XS, \mu_+) \le \frac{\delta(XS,\mu) + \theta\sqrt{n}}{\sqrt{1-\theta}}$ after the update of $\mu$. It follows from Lemma 3.12 that after an update of the barrier parameter no more than

$\frac{15(4+\rho)}{2}\left(\frac{(\tau + \theta\sqrt{n})^2}{1-\theta}\right)^{\frac{4}{4+\rho}}$

iterations are needed to reach $\delta(XS, \mu) \le \tau$ again. This means that the algorithm has the polynomial complexity bound

$\frac{15(4+\rho)}{2}\,\frac{(\tau + \theta\sqrt{n})^{\frac{8}{4+\rho}}}{(1-\theta)^{\frac{4}{4+\rho}}}\cdot\frac{1}{\theta}\log\frac{n}{\varepsilon}.$

For large-update IPMs, omitting the rounding brackets in the above estimate, one can see that the complexity of our algorithm for SDO is of the order $O\left(n^{\frac{4}{4+\rho}}\log\frac{n}{\varepsilon}\right)$.

5 Concluding Remarks

A new class of search directions was proposed for solving LO and SDO problems. The new directions are a slight modification of the classical Newton direction. By using some new analysis tools, we proved that the large-update method based on the new direction has a complexity of order $O\left(n^{\frac{4}{4+\rho}}\log\frac{n}{\varepsilon}\right)$. It is worthwhile to note that a simple idea, changing the corrector direction slightly, improves the complexity of the algorithm. This gives rise to some interesting issues.

The first issue is whether the new algorithm works well in practice and how to incorporate the idea of this paper into the implementation of IPMs. For instance, in the Mehrotra-type predictor-corrector algorithm [11], the target used is $\mu e$ (or $\mu E$ for SDO); what will happen if we replace this by the new target corresponding to $v^{-\rho-1}$? We also mention that we have implemented a simple version of our algorithm and tested it on a few problems. Our preliminary numerical results show that the algorithm is promising. Nevertheless, much more work is needed to test the new approach.

The second question is related to the proximity and the search direction. As we claimed in the introduction, the proximity is crucial for both the quality and the elegance of the analysis. In the preparation of this work, we tried to give a proof of the complexity of the algorithm based on the logarithmic barrier approach. However, the complexity we obtained is not as good as the one presented here. As we observed in the introduction, when $\rho = 2$, the right-hand side of the equation system defining the new search direction is the negative gradient of the proximity we used. It is also of interest to note that the right-hand side defining the classical Newton direction is the negative gradient of the proximity based on the logarithmic barrier approach. This indicates some interrelation between the search direction and the proximity used in the analysis. Will a new analysis based on a new proximity give a better complexity for the standard Newton method? Are there new IPMs with large updates whose complexity is equal to or less than the best known complexity for IPMs? The complexity of our algorithm depends on the parameter $\rho$. In this work we were forced to restrict ourselves to the case $\rho \in [0, 2]$. If we could remove this restriction, we might be able to approach the best known complexity bound as closely as we wish by letting $\rho$ go to infinity. This will be a topic for further research.

Our third question is about the algorithm for SDO. The new search direction in this paper is based on the NT symmetrization scheme: is it possible to design similar algorithms using other schemes? What is the complexity of these new algorithms? This is a topic deserving more research. Lastly, we would like to mention that there are also other ways to extend our results, for instance by studying the local convergence properties of these algorithms. This is particularly important since, usually, when the iterate is close to the solution set, the performance of an algorithm is determined by its local convergence properties. It may also be worthwhile to develop similar algorithms for classes of linear complementarity problems and for convex programming.

References

[1] E.D. Andersen, J. Gondzio, Cs. Mészáros, and X. Xu. Implementation of interior point methods for large scale linear programming. In T. Terlaky, editor, Interior Point Methods of Mathematical Programming. Kluwer Academic Publishers, Dordrecht, The Netherlands, 1996.

[2] R.A. Horn and C.R. Johnson. Topics in Matrix Analysis. Cambridge University Press, 1991.

[3] E. de Klerk. Interior Point Methods for Semidefinite Programming. Ph.D. Thesis, Faculty of ITS/TWI, Delft University of Technology, The Netherlands, 1997.

[4] P. Hung and Y. Ye. An asymptotically O(√nL)-iteration path-following linear programming algorithm that uses long steps. SIAM J. on Optimization, 6, 1996.

[5] B. Jansen, C. Roos, T. Terlaky, and J.-Ph. Vial. Primal-dual algorithms for linear programming based on the logarithmic barrier method. Journal of Optimization Theory and Applications, 83:1-26, 1994.

[6] B. Jansen, C. Roos, T. Terlaky, and Y. Ye. Improved complexity using higher-order correctors for primal-dual Dikin affine scaling. Mathematical Programming, Series B, 76, 1997.

[7] N.K. Karmarkar. A new polynomial-time algorithm for linear programming. Combinatorica, 4:373-395, 1984.

[8] M. Kojima, S. Mizuno, and A. Yoshise. A primal-dual interior point algorithm for linear programming. In N. Megiddo, editor, Progress in Mathematical Programming: Interior Point and Related Methods, pages 29-47. Springer Verlag, New York, 1989.

[9] M. Kojima, N. Megiddo, T. Noma, and A. Yoshise. A Unified Approach to Interior Point Algorithms for Linear Complementarity Problems, volume 538 of Lecture Notes in Computer Science. Springer Verlag, Berlin, Germany, 1991.

[10] N. Megiddo. Pathways to the optimal set in linear programming. In N. Megiddo, editor, Progress in Mathematical Programming: Interior Point and Related Methods, pages 131-158. Springer Verlag, New York, 1989. Identical version in: Proceedings of the 6th Mathematical Programming Symposium of Japan, Nagoya, Japan, pages 1-35, 1986.

25 this restriction, we might be able to approach the best known complexity bound as close as we wish by letting ρ goes to infinity. This will be a topic for further research. Our third question is about the algorithm for SDO. The new search direction in this paper is based on the NT symmetrizing scheme: is it possible to design similar algorithms using other schemes? What is the complexity of these new algorithms? This is a topic deserving more research. Lastly we would like to mention that there are also other ways to extend our results here, for instance to study the local convergence properties of these algorithms. This is particularly important since usually, when the point is close to the solution set, then the performance of a algorithm will be determined by its local convergence properties. It may be also worthwhile to build similar algorithms for classes of linear complementarity problems and convex programming. References [] E.D. Andersen, J. Gondzio, Cs. Mészáros, and X. Xu. Implementation of interior point methods for large scale linear programming. In T. Terlaky, editor, Interior Point Methods of Mathematical Programming, pages Kluwer Academic Publishers, Dordrecht, The Netherlands, 996. [] R.A. Horn, C.R. Johnson. Topics in Matrix Analysis. Cambridge University Press, 99. [3] E. de Klerk. Interior Point Methods for Semidefinite Programming. Ph.D. Thesis, Faculty of ITS/TWI, Delft University of Technology, The Netherlands, 997. [4] P. Hung and Y. Ye. An asymptotically O nl-iteration path-following linear programming algorithm that uses long steps. Siam J. on Optimization, 6: ,996. [5] B. Jansen, C. Roos, T. Terlaky, and J.-Ph. Vial. Primal-dual algorithms for linear programming based on the logarithmic barrier method. Journal of Optimization Theory and Applications, 83: 6, 994. [6] B. Jansen, C. Roos, T. Terlaky, and Y. Ye. Improved complexity using higher order correctors for primal-dual Dikin affine scaling, Mathematical Programming, Series B, 76:7 30, 997. [7] N.K. Karmarkar. A new polynomial-time algorithm for linear programming. Combinatorica, 4: , 984. [8] M. Kojima, S. Mizuno, and A. Yoshise. A primal-dual interior point algorithm for linear programming. In N. Megiddo, editor, Progress in Mathematical Programming: Interior Point and Related Methods, pages Springer Verlag, New York, 989. [9] M. Kojima, N. Megiddo, T. Noma and A. Yoshise. A unified approach to interior point algorithms for linear complementarity problems, volume 538 of Lecture Notes in Computer Science. Springer Verlag, Berlin, Germany, 99. [0] N. Megiddo. Pathways to the optimal set in linear programming. In N. Megiddo, editor, Progress in Mathematical Programming: Interior Point and Related Methods, pages Springer Verlag, New York, 989. Identical version in : Proceedings of the 6th Mathematical Programming Symposium of Japan, Nagoya, Japan, pages 35,

26 [] S. Mehrotra. On the implementation of a primal-dual interior point method. SIAM Journal on Optimization, 4:575 60, 99. [] S. Mehrotra and Y. Ye. On finding the optimal facet of linear programs. Mathematical Programming, 6:497 55, 993. [3] S. Mizuno and A. Nagasawa. A primal-dual affine scaling potential reduction algorithm for linear programming. Mathematical Programming, 6:9-3, 993. [4] R.D.C. Monteiro, I. Adler and M.G.C. Resende. A polynomial-time primal-dual affine scaling algorithm for linear and convex quadratic programming and its power series. Mathematics of Operations Research, 5:9 4, 990. [5] Y.E. Nesterov and M.J. Todd. Self-scaled barriers and interior-point methods for convex programming. Mathematics of Operations Research, : 4, 997. [6] J. Peng, C. Roos and T. Terlaky. New complexity analysis of the primal-dual Newton method for linear optimization. Technical Report No , Faculty of Technical Mathematics and Information, Delft University of Technology, The Netherlands, 998. To appear in Annals of Operations Research [7] C. Roos, T. Terlaky, and J.-Ph.Vial. Theory and Algorithms for Linear Optimization. An Interior Approach. John Wiley & Sons, Chichester, UK, 997. [8] G. Sonnevend. An analytic center for polyhedrons and new classes of global algorithms for linear smooth, convex programming. In A. Prekopa, J. Szelezsan, and B. Strazicky, editors, System Modelling and Optimization : Proceedings of the th IFIP-Conference held in Budapest, Hungary, September 985, volume 84 of Lecture Notes in Control and Information Sciences, pages Springer Verlag, Berlin, West Germany, 986. [9] J.F. Sturm, S. Zhang, Symmetric primal-dual path following algorithms for semidefinite programming, Technical Report 9554/A, Tinbergen Institute, Erasmus University, Rotterdam, The Netherlands, 995. [0] M.J. Todd, A study of search directions in primal-dual interior-point methods for semidefinite programming, Technical Report 05, School of Operations Research and Industrial Engineering, Cornell University, Ithaca, NY 4853, October 997. [] M.J. Todd, K.C. Toh and R.H. Tütüncü, On the Nesterov-Todd direction in semidefinite programming, SIAM J. Optimization, 8998, pp [] S. J. Wright. Primal-Dual Interior-Point Methods. SIAM, Philadelphia, USA, 997. [3] Y. Ye. On the finite convergence of interior-point algorithms for linear programming. Mathematical Programming, 57:35 335, 99. [4] Y. Ye. Interior Point Algorithms, Theory and Analysis. John Wiley & Sons, Chichester, UK, 997. [5] L.L. Zhang and Y. Zhang. On polynomiality of the Mehrotra-type predictor-corrector interior-point algorithms. Mathematical Programming, 68:303-38, 995. [6] G.Y. Zhao. Interior point algorithms for linear complementarity problems based on large neighborhoods of the central path. SIAM J. on Optimization, 8:397-43,


Primal-dual relationship between Levenberg-Marquardt and central trajectories for linearly constrained convex optimization Primal-dual relationship between Levenberg-Marquardt and central trajectories for linearly constrained convex optimization Roger Behling a, Clovis Gonzaga b and Gabriel Haeser c March 21, 2013 a Department

More information

An O(nL) Infeasible-Interior-Point Algorithm for Linear Programming arxiv: v2 [math.oc] 29 Jun 2015

An O(nL) Infeasible-Interior-Point Algorithm for Linear Programming arxiv: v2 [math.oc] 29 Jun 2015 An O(nL) Infeasible-Interior-Point Algorithm for Linear Programming arxiv:1506.06365v [math.oc] 9 Jun 015 Yuagang Yang and Makoto Yamashita September 8, 018 Abstract In this paper, we propose an arc-search

More information

IMPLEMENTING THE NEW SELF-REGULAR PROXIMITY BASED IPMS

IMPLEMENTING THE NEW SELF-REGULAR PROXIMITY BASED IPMS IMPLEMENTING THE NEW SELF-REGULAR PROXIMITY BASED IPMS IMPLEMENTING THE NEW SELF-REGULAR PROXIMITY BASED IPMS By Xiaohang Zhu A thesis submitted to the School of Graduate Studies in Partial Fulfillment

More information

Semidefinite Programming

Semidefinite Programming Chapter 2 Semidefinite Programming 2.0.1 Semi-definite programming (SDP) Given C M n, A i M n, i = 1, 2,..., m, and b R m, the semi-definite programming problem is to find a matrix X M n for the optimization

More information

PRIMAL-DUAL ALGORITHMS FOR SEMIDEFINIT OPTIMIZATION PROBLEMS BASED ON GENERALIZED TRIGONOMETRIC BARRIER FUNCTION

PRIMAL-DUAL ALGORITHMS FOR SEMIDEFINIT OPTIMIZATION PROBLEMS BASED ON GENERALIZED TRIGONOMETRIC BARRIER FUNCTION International Journal of Pure and Applied Mathematics Volume 4 No. 4 07, 797-88 ISSN: 3-8080 printed version); ISSN: 34-3395 on-line version) url: http://www.ijpam.eu doi: 0.73/ijpam.v4i4.0 PAijpam.eu

More information

Optimization: Then and Now

Optimization: Then and Now Optimization: Then and Now Optimization: Then and Now Optimization: Then and Now Why would a dynamicist be interested in linear programming? Linear Programming (LP) max c T x s.t. Ax b αi T x b i for i

More information

Limiting behavior of the central path in semidefinite optimization

Limiting behavior of the central path in semidefinite optimization Limiting behavior of the central path in semidefinite optimization M. Halická E. de Klerk C. Roos June 11, 2002 Abstract It was recently shown in [4] that, unlike in linear optimization, the central path

More information

Lecture 1. 1 Conic programming. MA 796S: Convex Optimization and Interior Point Methods October 8, Consider the conic program. min.

Lecture 1. 1 Conic programming. MA 796S: Convex Optimization and Interior Point Methods October 8, Consider the conic program. min. MA 796S: Convex Optimization and Interior Point Methods October 8, 2007 Lecture 1 Lecturer: Kartik Sivaramakrishnan Scribe: Kartik Sivaramakrishnan 1 Conic programming Consider the conic program min s.t.

More information

4TE3/6TE3. Algorithms for. Continuous Optimization

4TE3/6TE3. Algorithms for. Continuous Optimization 4TE3/6TE3 Algorithms for Continuous Optimization (Algorithms for Constrained Nonlinear Optimization Problems) Tamás TERLAKY Computing and Software McMaster University Hamilton, November 2005 terlaky@mcmaster.ca

More information

An EP theorem for dual linear complementarity problems

An EP theorem for dual linear complementarity problems An EP theorem for dual linear complementarity problems Tibor Illés, Marianna Nagy and Tamás Terlaky Abstract The linear complementarity problem (LCP ) belongs to the class of NP-complete problems. Therefore

More information

New stopping criteria for detecting infeasibility in conic optimization

New stopping criteria for detecting infeasibility in conic optimization Optimization Letters manuscript No. (will be inserted by the editor) New stopping criteria for detecting infeasibility in conic optimization Imre Pólik Tamás Terlaky Received: March 21, 2008/ Accepted:

More information

McMaster University. Advanced Optimization Laboratory. Title: Computational Experience with Self-Regular Based Interior Point Methods

McMaster University. Advanced Optimization Laboratory. Title: Computational Experience with Self-Regular Based Interior Point Methods McMaster University Advanced Optimization Laboratory Title: Computational Experience with Self-Regular Based Interior Point Methods Authors: Guoqing Zhang, Jiming Peng, Tamás Terlaky, Lois Zhu AdvOl-Report

More information

Interior Point Methods for Mathematical Programming

Interior Point Methods for Mathematical Programming Interior Point Methods for Mathematical Programming Clóvis C. Gonzaga Federal University of Santa Catarina, Florianópolis, Brazil EURO - 2013 Roma Our heroes Cauchy Newton Lagrange Early results Unconstrained

More information

IMPLEMENTATION OF INTERIOR POINT METHODS

IMPLEMENTATION OF INTERIOR POINT METHODS IMPLEMENTATION OF INTERIOR POINT METHODS IMPLEMENTATION OF INTERIOR POINT METHODS FOR SECOND ORDER CONIC OPTIMIZATION By Bixiang Wang, Ph.D. A Thesis Submitted to the School of Graduate Studies in Partial

More information

Using Schur Complement Theorem to prove convexity of some SOC-functions

Using Schur Complement Theorem to prove convexity of some SOC-functions Journal of Nonlinear and Convex Analysis, vol. 13, no. 3, pp. 41-431, 01 Using Schur Complement Theorem to prove convexity of some SOC-functions Jein-Shan Chen 1 Department of Mathematics National Taiwan

More information

Interior Point Methods. We ll discuss linear programming first, followed by three nonlinear problems. Algorithms for Linear Programming Problems

Interior Point Methods. We ll discuss linear programming first, followed by three nonlinear problems. Algorithms for Linear Programming Problems AMSC 607 / CMSC 764 Advanced Numerical Optimization Fall 2008 UNIT 3: Constrained Optimization PART 4: Introduction to Interior Point Methods Dianne P. O Leary c 2008 Interior Point Methods We ll discuss

More information

A strongly polynomial algorithm for linear systems having a binary solution

A strongly polynomial algorithm for linear systems having a binary solution A strongly polynomial algorithm for linear systems having a binary solution Sergei Chubanov Institute of Information Systems at the University of Siegen, Germany e-mail: sergei.chubanov@uni-siegen.de 7th

More information

Largest dual ellipsoids inscribed in dual cones

Largest dual ellipsoids inscribed in dual cones Largest dual ellipsoids inscribed in dual cones M. J. Todd June 23, 2005 Abstract Suppose x and s lie in the interiors of a cone K and its dual K respectively. We seek dual ellipsoidal norms such that

More information

Convergence Analysis of Inexact Infeasible Interior Point Method. for Linear Optimization

Convergence Analysis of Inexact Infeasible Interior Point Method. for Linear Optimization Convergence Analysis of Inexact Infeasible Interior Point Method for Linear Optimization Ghussoun Al-Jeiroudi Jacek Gondzio School of Mathematics The University of Edinburgh Mayfield Road, Edinburgh EH9

More information

On Generalized Primal-Dual Interior-Point Methods with Non-uniform Complementarity Perturbations for Quadratic Programming

On Generalized Primal-Dual Interior-Point Methods with Non-uniform Complementarity Perturbations for Quadratic Programming On Generalized Primal-Dual Interior-Point Methods with Non-uniform Complementarity Perturbations for Quadratic Programming Altuğ Bitlislioğlu and Colin N. Jones Abstract This technical note discusses convergence

More information

18. Primal-dual interior-point methods

18. Primal-dual interior-point methods L. Vandenberghe EE236C (Spring 213-14) 18. Primal-dual interior-point methods primal-dual central path equations infeasible primal-dual method primal-dual method for self-dual embedding 18-1 Symmetric

More information

Interior-Point Methods

Interior-Point Methods Interior-Point Methods Stephen Wright University of Wisconsin-Madison Simons, Berkeley, August, 2017 Wright (UW-Madison) Interior-Point Methods August 2017 1 / 48 Outline Introduction: Problems and Fundamentals

More information

Curvature as a Complexity Bound in Interior-Point Methods

Curvature as a Complexity Bound in Interior-Point Methods Lehigh University Lehigh Preserve Theses and Dissertations 2014 Curvature as a Complexity Bound in Interior-Point Methods Murat Mut Lehigh University Follow this and additional works at: http://preserve.lehigh.edu/etd

More information

Interior Point Methods for Linear Programming: Motivation & Theory

Interior Point Methods for Linear Programming: Motivation & Theory School of Mathematics T H E U N I V E R S I T Y O H F E D I N B U R G Interior Point Methods for Linear Programming: Motivation & Theory Jacek Gondzio Email: J.Gondzio@ed.ac.uk URL: http://www.maths.ed.ac.uk/~gondzio

More information

Lecture 15 Newton Method and Self-Concordance. October 23, 2008

Lecture 15 Newton Method and Self-Concordance. October 23, 2008 Newton Method and Self-Concordance October 23, 2008 Outline Lecture 15 Self-concordance Notion Self-concordant Functions Operations Preserving Self-concordance Properties of Self-concordant Functions Implications

More information

c 2005 Society for Industrial and Applied Mathematics

c 2005 Society for Industrial and Applied Mathematics SIAM J. OPTIM. Vol. 15, No. 4, pp. 1147 1154 c 2005 Society for Industrial and Applied Mathematics A NOTE ON THE LOCAL CONVERGENCE OF A PREDICTOR-CORRECTOR INTERIOR-POINT ALGORITHM FOR THE SEMIDEFINITE

More information

A semidefinite relaxation scheme for quadratically constrained quadratic problems with an additional linear constraint

A semidefinite relaxation scheme for quadratically constrained quadratic problems with an additional linear constraint Iranian Journal of Operations Research Vol. 2, No. 2, 20, pp. 29-34 A semidefinite relaxation scheme for quadratically constrained quadratic problems with an additional linear constraint M. Salahi Semidefinite

More information

Self-Concordant Barrier Functions for Convex Optimization

Self-Concordant Barrier Functions for Convex Optimization Appendix F Self-Concordant Barrier Functions for Convex Optimization F.1 Introduction In this Appendix we present a framework for developing polynomial-time algorithms for the solution of convex optimization

More information

Nonsymmetric potential-reduction methods for general cones

Nonsymmetric potential-reduction methods for general cones CORE DISCUSSION PAPER 2006/34 Nonsymmetric potential-reduction methods for general cones Yu. Nesterov March 28, 2006 Abstract In this paper we propose two new nonsymmetric primal-dual potential-reduction

More information

On implementing a primal-dual interior-point method for conic quadratic optimization

On implementing a primal-dual interior-point method for conic quadratic optimization On implementing a primal-dual interior-point method for conic quadratic optimization E. D. Andersen, C. Roos, and T. Terlaky December 18, 2000 Abstract Conic quadratic optimization is the problem of minimizing

More information

א K ٢٠٠٢ א א א א א א E٤

א K ٢٠٠٢ א א א א א א E٤ المراجع References المراجع العربية K ١٩٩٠ א א א א א E١ K ١٩٩٨ א א א E٢ א א א א א E٣ א K ٢٠٠٢ א א א א א א E٤ K ٢٠٠١ K ١٩٨٠ א א א א א E٥ المراجع الا جنبية [AW] [AF] [Alh] [Ali1] [Ali2] S. Al-Homidan and

More information

PRIMAL-DUAL INTERIOR-POINT METHODS FOR SELF-SCALED CONES

PRIMAL-DUAL INTERIOR-POINT METHODS FOR SELF-SCALED CONES PRIMAL-DUAL INTERIOR-POINT METHODS FOR SELF-SCALED CONES YU. E. NESTEROV AND M. J. TODD Abstract. In this paper we continue the development of a theoretical foundation for efficient primal-dual interior-point

More information

On Superlinear Convergence of Infeasible Interior-Point Algorithms for Linearly Constrained Convex Programs *

On Superlinear Convergence of Infeasible Interior-Point Algorithms for Linearly Constrained Convex Programs * Computational Optimization and Applications, 8, 245 262 (1997) c 1997 Kluwer Academic Publishers. Manufactured in The Netherlands. On Superlinear Convergence of Infeasible Interior-Point Algorithms for

More information

Primal-dual IPM with Asymmetric Barrier

Primal-dual IPM with Asymmetric Barrier Primal-dual IPM with Asymmetric Barrier Yurii Nesterov, CORE/INMA (UCL) September 29, 2008 (IFOR, ETHZ) Yu. Nesterov Primal-dual IPM with Asymmetric Barrier 1/28 Outline 1 Symmetric and asymmetric barriers

More information

On self-concordant barriers for generalized power cones

On self-concordant barriers for generalized power cones On self-concordant barriers for generalized power cones Scott Roy Lin Xiao January 30, 2018 Abstract In the study of interior-point methods for nonsymmetric conic optimization and their applications, Nesterov

More information

Primal-Dual Interior-Point Methods for Linear Programming based on Newton s Method

Primal-Dual Interior-Point Methods for Linear Programming based on Newton s Method Primal-Dual Interior-Point Methods for Linear Programming based on Newton s Method Robert M. Freund March, 2004 2004 Massachusetts Institute of Technology. The Problem The logarithmic barrier approach

More information

A Weighted-Path-Following Interior-Point Algorithm for Second-Order Cone Optimization

A Weighted-Path-Following Interior-Point Algorithm for Second-Order Cone Optimization Appl Math Inf Sci 9, o, 973-980 (015) 973 Applied Mathematics & Information Sciences An International Journal http://dxdoiorg/101785/amis/0908 A Weighted-Path-Following Interior-Point Algorithm for Second-Order

More information

Applications of the Inverse Theta Number in Stable Set Problems

Applications of the Inverse Theta Number in Stable Set Problems Acta Cybernetica 21 (2014) 481 494. Applications of the Inverse Theta Number in Stable Set Problems Miklós Ujvári Abstract In the paper we introduce a semidefinite upper bound on the square of the stability

More information

A Polynomial Column-wise Rescaling von Neumann Algorithm

A Polynomial Column-wise Rescaling von Neumann Algorithm A Polynomial Column-wise Rescaling von Neumann Algorithm Dan Li Department of Industrial and Systems Engineering, Lehigh University, USA Cornelis Roos Department of Information Systems and Algorithms,

More information

10 Numerical methods for constrained problems

10 Numerical methods for constrained problems 10 Numerical methods for constrained problems min s.t. f(x) h(x) = 0 (l), g(x) 0 (m), x X The algorithms can be roughly divided the following way: ˆ primal methods: find descent direction keeping inside

More information

Second-order cone programming

Second-order cone programming Outline Second-order cone programming, PhD Lehigh University Department of Industrial and Systems Engineering February 10, 2009 Outline 1 Basic properties Spectral decomposition The cone of squares The

More information

On the Sandwich Theorem and a approximation algorithm for MAX CUT

On the Sandwich Theorem and a approximation algorithm for MAX CUT On the Sandwich Theorem and a 0.878-approximation algorithm for MAX CUT Kees Roos Technische Universiteit Delft Faculteit Electrotechniek. Wiskunde en Informatica e-mail: C.Roos@its.tudelft.nl URL: http://ssor.twi.tudelft.nl/

More information

Interior-Point Methods for Linear Optimization

Interior-Point Methods for Linear Optimization Interior-Point Methods for Linear Optimization Robert M. Freund and Jorge Vera March, 204 c 204 Robert M. Freund and Jorge Vera. All rights reserved. Linear Optimization with a Logarithmic Barrier Function

More information

A Distributed Newton Method for Network Utility Maximization, II: Convergence

A Distributed Newton Method for Network Utility Maximization, II: Convergence A Distributed Newton Method for Network Utility Maximization, II: Convergence Ermin Wei, Asuman Ozdaglar, and Ali Jadbabaie October 31, 2012 Abstract The existing distributed algorithms for Network Utility

More information

We describe the generalization of Hazan s algorithm for symmetric programming

We describe the generalization of Hazan s algorithm for symmetric programming ON HAZAN S ALGORITHM FOR SYMMETRIC PROGRAMMING PROBLEMS L. FAYBUSOVICH Abstract. problems We describe the generalization of Hazan s algorithm for symmetric programming Key words. Symmetric programming,

More information

On the Local Quadratic Convergence of the Primal-Dual Augmented Lagrangian Method

On the Local Quadratic Convergence of the Primal-Dual Augmented Lagrangian Method Optimization Methods and Software Vol. 00, No. 00, Month 200x, 1 11 On the Local Quadratic Convergence of the Primal-Dual Augmented Lagrangian Method ROMAN A. POLYAK Department of SEOR and Mathematical

More information

IBM Almaden Research Center,650 Harry Road Sun Jose, Calijornia and School of Mathematical Sciences Tel Aviv University Tel Aviv, Israel

IBM Almaden Research Center,650 Harry Road Sun Jose, Calijornia and School of Mathematical Sciences Tel Aviv University Tel Aviv, Israel and Nimrod Megiddo IBM Almaden Research Center,650 Harry Road Sun Jose, Calijornia 95120-6099 and School of Mathematical Sciences Tel Aviv University Tel Aviv, Israel Submitted by Richard Tapia ABSTRACT

More information

Improved Full-Newton-Step Infeasible Interior- Point Method for Linear Complementarity Problems

Improved Full-Newton-Step Infeasible Interior- Point Method for Linear Complementarity Problems Georgia Southern University Digital Commons@Georgia Southern Mathematical Sciences Faculty Publications Mathematical Sciences, Department of 4-2016 Improved Full-Newton-Step Infeasible Interior- Point

More information

McMaster University. Advanced Optimization Laboratory. Title: The Central Path Visits all the Vertices of the Klee-Minty Cube.

McMaster University. Advanced Optimization Laboratory. Title: The Central Path Visits all the Vertices of the Klee-Minty Cube. McMaster University Avance Optimization Laboratory Title: The Central Path Visits all the Vertices of the Klee-Minty Cube Authors: Antoine Deza, Eissa Nematollahi, Reza Peyghami an Tamás Terlaky AvOl-Report

More information

Stability of Feedback Solutions for Infinite Horizon Noncooperative Differential Games

Stability of Feedback Solutions for Infinite Horizon Noncooperative Differential Games Stability of Feedback Solutions for Infinite Horizon Noncooperative Differential Games Alberto Bressan ) and Khai T. Nguyen ) *) Department of Mathematics, Penn State University **) Department of Mathematics,

More information

A Smoothing Newton Method for Solving Absolute Value Equations

A Smoothing Newton Method for Solving Absolute Value Equations A Smoothing Newton Method for Solving Absolute Value Equations Xiaoqin Jiang Department of public basic, Wuhan Yangtze Business University, Wuhan 430065, P.R. China 392875220@qq.com Abstract: In this paper,

More information

On Mehrotra-Type Predictor-Corrector Algorithms

On Mehrotra-Type Predictor-Corrector Algorithms On Mehrotra-Type Predictor-Corrector Algorithms M. Salahi, J. Peng, T. Terlaky October 10, 006 (Revised) Abstract In this paper we discuss the polynomiality of a feasible version of Mehrotra s predictor-corrector

More information

New Primal-dual Interior-point Methods Based on Kernel Functions

New Primal-dual Interior-point Methods Based on Kernel Functions New Primal-dual Interior-point Methods Based on Kernel Functions New Primal-dual Interior-point Methods Based on Kernel Functions PROEFSCHRIFT ter verkrijging van de graad van doctor aan de Technische

More information

Lecture 14: Primal-Dual Interior-Point Method

Lecture 14: Primal-Dual Interior-Point Method CSE 599: Interplay between Convex Optimization and Geometry Winter 218 Lecturer: Yin Tat Lee Lecture 14: Primal-Dual Interior-Point Method Disclaimer: Please tell me any mistake you noticed. In this and

More information

A New Self-Dual Embedding Method for Convex Programming

A New Self-Dual Embedding Method for Convex Programming A New Self-Dual Embedding Method for Convex Programming Shuzhong Zhang October 2001; revised October 2002 Abstract In this paper we introduce a conic optimization formulation to solve constrained convex

More information

2 The SDP problem and preliminary discussion

2 The SDP problem and preliminary discussion Int. J. Contemp. Math. Sciences, Vol. 6, 2011, no. 25, 1231-1236 Polynomial Convergence of Predictor-Corrector Algorithms for SDP Based on the KSH Family of Directions Feixiang Chen College of Mathematics

More information

A sensitivity result for quadratic semidefinite programs with an application to a sequential quadratic semidefinite programming algorithm

A sensitivity result for quadratic semidefinite programs with an application to a sequential quadratic semidefinite programming algorithm Volume 31, N. 1, pp. 205 218, 2012 Copyright 2012 SBMAC ISSN 0101-8205 / ISSN 1807-0302 (Online) www.scielo.br/cam A sensitivity result for quadratic semidefinite programs with an application to a sequential

More information

Lecture 18 Oct. 30, 2014

Lecture 18 Oct. 30, 2014 CS 224: Advanced Algorithms Fall 214 Lecture 18 Oct. 3, 214 Prof. Jelani Nelson Scribe: Xiaoyu He 1 Overview In this lecture we will describe a path-following implementation of the Interior Point Method

More information

Convex Optimization. Newton s method. ENSAE: Optimisation 1/44

Convex Optimization. Newton s method. ENSAE: Optimisation 1/44 Convex Optimization Newton s method ENSAE: Optimisation 1/44 Unconstrained minimization minimize f(x) f convex, twice continuously differentiable (hence dom f open) we assume optimal value p = inf x f(x)

More information

Math 350 Fall 2011 Notes about inner product spaces. In this notes we state and prove some important properties of inner product spaces.

Math 350 Fall 2011 Notes about inner product spaces. In this notes we state and prove some important properties of inner product spaces. Math 350 Fall 2011 Notes about inner product spaces In this notes we state and prove some important properties of inner product spaces. First, recall the dot product on R n : if x, y R n, say x = (x 1,...,

More information

On the interior of the simplex, we have the Hessian of d(x), Hd(x) is diagonal with ith. µd(w) + w T c. minimize. subject to w T 1 = 1,

On the interior of the simplex, we have the Hessian of d(x), Hd(x) is diagonal with ith. µd(w) + w T c. minimize. subject to w T 1 = 1, Math 30 Winter 05 Solution to Homework 3. Recognizing the convexity of g(x) := x log x, from Jensen s inequality we get d(x) n x + + x n n log x + + x n n where the equality is attained only at x = (/n,...,

More information