A Full-Newton Step O(n) Infeasible Interior-Point Algorithm for Linear Optimization


A Full-Newton Step O(n) Infeasible Interior-Point Algorithm for Linear Optimization

C. Roos

March 4, 2005 / February 19, 2005 / February 5, 2005

Faculty of Electrical Engineering, Computer Science and Mathematics
Delft University of Technology, P.O. Box 5031, 2600 GA Delft, The Netherlands

Abstract

We present a full-Newton step infeasible interior-point algorithm. It is shown that at most O(n) (inner) iterations suffice to reduce the duality gap and the residuals by the factor 1/e. The bound coincides with the best known bound for infeasible interior-point algorithms. It is conjectured that further investigation will improve the above bound to O(√n).

Keywords: Linear optimization, infeasible interior-point method, primal-dual method, polynomial complexity.

AMS Subject Classification: 90C05, 90C51

1 Introduction

Interior-Point Methods (IPMs) for solving Linear Optimization (LO) problems were initiated by Karmarkar [6]. They not only have polynomial complexity, but are also highly efficient in practice. One may distinguish between feasible IPMs and infeasible IPMs (IIPMs). Feasible IPMs start with a strictly feasible interior point and maintain feasibility during the solution process. An elegant and theoretically sound method to find a strictly feasible starting point is to use a self-dual embedding model, by introducing artificial variables. This technique was first presented by Ye et al. [29]. Subsequent references are [1, 16, 27]. Well-known commercial software packages are based on this approach; for example, MOSEK [2] and SeDuMi [19] are based on the use of the self-dual model. Also the leading commercial linear optimization package CPLEX includes the self-dual embedding model as a possible option.

Most of the existing software packages use an IIPM. IIPMs start with an arbitrary positive point, and feasibility is reached as optimality is approached. The first IIPMs were proposed by Lustig [9] and Tanabe [20]. Global convergence was shown by Kojima et al. [7], whereas Zhang [30] proved an O(n²L) iteration bound for IIPMs under certain conditions. Mizuno [10] introduced a primal-dual IIPM and proved global convergence of the algorithm. Other relevant references are [4, 5, 8, 11, 12, 14, 15, 18, 21, 24, 25]. A detailed discussion and analysis of IIPMs can be found in the book by Wright [26] and, with less detail, in the books by Ye [28] and Vanderbei [22]. The performance of existing IIPMs highly depends on the choice of the starting point, which makes these methods less robust than methods using the self-dual embedding technique.

As usual, we consider the linear optimization (LO) problem in standard form,

(P)  min { c^T x : Ax = b, x ≥ 0 },

with its dual problem

(D)  max { b^T y : A^T y + s = c, s ≥ 0 }.

Here A ∈ R^{m×n}, b, y ∈ R^m and c, x, s ∈ R^n. Without loss of generality we assume that rank(A) = m. The vectors x, y and s are the vectors of variables.

The best known iteration bound for IIPMs is

O( n log ( max { (x⁰)^T s⁰, ‖b − Ax⁰‖, ‖c − A^T y⁰ − s⁰‖ } / ε ) ).    (1)

Here x⁰ > 0, y⁰ and s⁰ > 0 denote the starting points, and b − Ax⁰ and c − A^T y⁰ − s⁰ are the initial primal and dual residue vectors, respectively, whereas ε is an upper bound for the duality gap and the norms of the residual vectors upon termination of the algorithm. It is assumed in this result that there exists an optimal solution (x*, y*, s*) such that ‖(x*; s*)‖_∞ ≤ ζ, and that the initial iterates are (x⁰, y⁰, s⁰) = (ζe, 0, ζe).

Up till 2003, the search directions used in all primal-dual IIPMs were computed from the linear system

A Δx = b − Ax,
A^T Δy + Δs = c − A^T y − s,    (2)
s Δx + x Δs = µe − xs,

which yields the so-called primal-dual Newton search directions Δx, Δy and Δs. Recently, Salahi et al. used a so-called self-regular proximity function to define a new search direction for IIPMs [17]. Their modification only involves the third equation in the above system. The iteration bound of their method does not improve the bound in (1).

To introduce the idea underlying the algorithm presented in this paper we make some remarks with a historical flavor. In feasible IPMs, feasibility of the iterates is given; the ultimate goal is to get iterates that are optimal. There is a well-known IPM that aims to reach optimality in one step, namely the affine-scaling method. But everybody who is familiar with IPMs knows that this does not yield a polynomial-time method. The last two decades have made it very clear that to get a polynomial-time method one should be less greedy, and work with a search direction that moves the iterates only slowly in the direction of optimality. The reason is that

only then can one take full profit of the efficiency of Newton's method, which is the workhorse in all IPMs. In IIPMs the iterates are not feasible, and apart from reaching optimality one needs to strive for feasibility. This is reflected by the choice of the search direction, as defined by (2): when moving from x to x⁺ := x + Δx, the new iterate x⁺ satisfies the primal feasibility constraints, except possibly the nonnegativity constraint. In fact, in general x⁺ will have negative components, and, to keep the iterate positive, one is forced to take a damped step of the form x⁺ := x + αΔx, where α < 1 denotes the step size. But this should ring a bell. The same phenomenon occurred with the affine-scaling method in feasible IPMs. There it has become clear that the best complexity results hold for methods that are much less greedy and that use full Newton steps (with α = 1). Striving to reach feasibility in one step might be too optimistic, and may deteriorate the overall behavior of a method. One may better exercise a little patience and move more slowly in the direction of feasibility. Therefore, in our approach the search directions are designed in such a way that a full Newton step reduces the sizes of the residual vectors at the same speed as the duality gap. The outcome of the analysis in this paper shows that this is a good strategy.

The paper is organized as follows. As a preparation for the rest of the paper, in Section 2 we first recall some basic tools in the analysis of a feasible IPM. These tools will also be used in the analysis of the IIPM proposed in this paper. Section 3 describes our algorithm in more detail. One characteristic of the algorithm is that it uses intermediate problems. The intermediate problems are suitable perturbations of the given problems (P) and (D), so that at any stage the iterates are feasible for the current perturbed problems; the size of the perturbation decreases at the same speed as the barrier parameter µ. When µ changes to a smaller value, the perturbed problem corresponding to µ changes, and hence also the current central path. The algorithm keeps the iterates close to the µ-center on the central path of the current perturbed problem. To get the iterates feasible for the new perturbed problem and close to its central path, we use a so-called feasibility step. The largest, and hardest, part of the analysis, which is presented in Section 4, concerns this step. It turns out that to keep control over this step, the iterates need to be very well centered before taking the step. Some concluding remarks can be found in Section 5.

Some notations used throughout the paper are as follows. ‖·‖ denotes the 2-norm of a vector. For any x = (x_1, x_2, ..., x_n)^T ∈ R^n, min(x) denotes the smallest and max(x) the largest value of the components of x. If x, s ∈ R^n, then xs denotes the componentwise (or Hadamard) product of the vectors x and s. Furthermore, e denotes the all-one vector of length n. If z ∈ R^n_+ and f : R_+ → R_+, then f(z) denotes the vector in R^n_+ whose i-th component is f(z_i), with 1 ≤ i ≤ n. We write f(x) = O(g(x)) if f(x) ≤ γ g(x) for some positive constant γ.

2 Feasible full-Newton step IPMs

In preparation for dealing with infeasible IPMs, in this section we briefly recall the classical way to obtain a polynomial-time path-following feasible IPM for solving (P) and (D). To solve these problems one needs to find a solution of the following system of equations:

Ax = b, x ≥ 0,
A^T y + s = c, s ≥ 0,
xs = 0.

In these so-called optimality conditions the first two constraints represent primal and dual feasibility, whereas the last equation is the so-called complementarity condition. The nonnegativity constraints in the feasibility conditions already make the problem nontrivial: only iterative methods can find solutions of linear systems involving inequality constraints. The complementarity condition is nonlinear, which makes it extra hard to solve this system.

2.1 Central path

IPMs replace the complementarity condition by the so-called centering condition xs = µe, where µ may be any positive number. This yields the system

Ax = b, x ≥ 0,
A^T y + s = c, s ≥ 0,    (3)
xs = µe.

Surprisingly enough, if this system has a solution for some µ > 0, then a solution exists for every µ > 0, and this solution is unique. This happens if and only if (P) and (D) satisfy the interior-point condition (IPC), i.e., if (P) has a feasible solution x > 0 and (D) has a solution (y, s) with s > 0 (see, e.g., [16]). If the IPC is satisfied, then the solution of (3) is denoted as (x(µ), y(µ), s(µ)), and called the µ-center of (P) and (D). The set of all µ-centers is called the central path of (P) and (D). As µ goes to zero, x(µ), y(µ) and s(µ) converge to optimal solutions of (P) and (D). Of course, system (3) is still hard to solve, but by applying Newton's method one can easily find approximate solutions.

2.2 Definition and properties of the Newton step

We proceed by describing Newton's method for solving (3), with µ fixed. Given any primal feasible x > 0, and dual feasible y and s > 0, we want to find displacements Δx, Δy and Δs such that

A(x + Δx) = b,
A^T(y + Δy) + s + Δs = c,
(x + Δx)(s + Δs) = µe.

Neglecting the quadratic term ΔxΔs in the third equation, we obtain the following linear system of equations in the search directions Δx, Δy and Δs:

A Δx = b − Ax,    (4)
A^T Δy + Δs = c − A^T y − s,    (5)
s Δx + x Δs = µe − xs.    (6)

Since A has full rank, and the vectors x and s are positive, one may easily verify that the coefficient matrix in the linear system (4)-(6) is nonsingular. Hence this system uniquely defines the search directions Δx, Δy and Δs. These search directions are used in all existing primal-dual (feasible and infeasible) IPMs and are named after Newton.
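For a dense matrix A, the standard way to solve (4)-(6) is by elimination: express Δs from (6), substitute into (5) to obtain Δx in terms of Δy, and solve the resulting normal equations with the scaling matrix diag(x/s). The following minimal NumPy sketch (our illustration, not part of the paper; all names are ours) shows one possible implementation of this reduction.

import numpy as np

def newton_direction(A, b, c, x, y, s, mu):
    """Solve (4)-(6) for (dx, dy, ds) via the normal equations.
    From (6): ds = (rc - s*dx)/x; substituting into (5) gives
    dx = d*(A^T dy - rd) + rc/s with d = x/s, and then (4) yields
    (A diag(d) A^T) dy = rp + A(d*rd - rc/s)."""
    rp = b - A @ x                      # primal residual, rhs of (4)
    rd = c - A.T @ y - s                # dual residual, rhs of (5)
    rc = mu * np.ones_like(x) - x * s   # centering residual, rhs of (6)
    d = x / s
    M = A @ (d[:, None] * A.T)          # A diag(d) A^T
    dy = np.linalg.solve(M, rp + A @ (d * rd - rc / s))
    dx = d * (A.T @ dy - rd) + rc / s
    ds = (rc - s * dx) / x
    return dx, dy, ds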

If x is primal feasible and (y, s) dual feasible, then b − Ax = 0 and c − A^T y − s = 0, whence the above system reduces to

A Δx = 0,    (7)
A^T Δy + Δs = 0,    (8)
s Δx + x Δs = µe − xs,    (9)

which gives the usual search directions for feasible primal-dual IPMs. The new iterates are given by

x⁺ = x + Δx,  y⁺ = y + Δy,  s⁺ = s + Δs.

An important observation is that Δx lies in the null space of A, whereas Δs belongs to the row space of A. This implies that Δx and Δs are orthogonal, i.e., (Δx)^T Δs = 0. As a consequence we have the important property that after a full Newton step the duality gap assumes the same value as at the µ-centers, namely nµ.

Lemma 2.1 (Lemma II.46 in [16]) After a primal-dual Newton step one has (x⁺)^T s⁺ = nµ.

Assuming that a primal feasible x⁰ > 0 and a dual feasible pair (y⁰, s⁰) with s⁰ > 0 are given that are close to x(µ) and (y(µ), s(µ)) for some µ = µ⁰, one can find an ε-solution in O(√n log(n/ε)) iterations of the algorithm in Figure 1. In this algorithm δ(x, s; µ) is a quantity that measures proximity of the feasible triple (x, y, s) to the µ-center (x(µ), y(µ), s(µ)). Following [16], this quantity is defined as follows:

δ(x, s; µ) := δ(v) := (1/2) ‖v − v⁻¹‖,  where v := √(xs/µ).    (10)

The following two lemmas are crucial in the analysis of the algorithm. We recall them without proof. The first lemma describes the effect on δ(x, s; µ) of a µ-update, and the second lemma the effect of a Newton step.

Lemma 2.2 (Lemma II.53 in [16]) Let (x, s) be a positive primal-dual pair and µ > 0 such that x^T s = nµ. Moreover, let δ := δ(x, s; µ) and let µ⁺ = (1 − θ)µ. Then

δ(x, s; µ⁺)² = (1 − θ)δ² + θ²n / (4(1 − θ)).

Lemma 2.3 (Theorem II.49 in [16]) If δ := δ(x, s; µ) ≤ 1, then the primal-dual Newton step is feasible, i.e., x⁺ and s⁺ are nonnegative. Moreover, if δ < 1, then x⁺ and s⁺ are positive and

δ(x⁺, s⁺; µ) ≤ δ² / √(2(1 − δ⁴)).

Corollary 2.4 If δ := δ(x, s; µ) ≤ 1/√2, then δ(x⁺, s⁺; µ) ≤ δ².
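The proximity measure (10) and the update formula of Lemma 2.2 are cheap to evaluate, so both are easy to check numerically. A small sketch (ours; names are illustrative):

import numpy as np

def delta(x, s, mu):
    """Proximity measure (10): delta(v) = (1/2)*||v - v^{-1}||
    with v = sqrt(xs/mu)."""
    v = np.sqrt(x * s / mu)
    return 0.5 * np.linalg.norm(v - 1.0 / v)

def delta_after_mu_update(d, n, theta):
    """Lemma 2.2: after the update mu+ = (1-theta)*mu, assuming
    x^T s = n*mu, the squared proximity equals
    (1-theta)*d**2 + theta**2*n/(4*(1-theta)); return its square root."""
    return np.sqrt((1 - theta) * d**2 + theta**2 * n / (4 * (1 - theta)))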

Primal-Dual Feasible IPM

Input:
  Accuracy parameter ε > 0;
  barrier update parameter θ, 0 < θ < 1;
  feasible (x⁰, y⁰, s⁰) with (x⁰)^T s⁰ = nµ⁰ and δ(x⁰, s⁰, µ⁰) ≤ 1/2.
begin
  x := x⁰; y := y⁰; s := s⁰; µ := µ⁰;
  while x^T s ≥ ε do
  begin
    µ-update: µ := (1 − θ)µ;
    centering step: (x, y, s) := (x, y, s) + (Δx, Δy, Δs);
  end
end

Figure 1: Feasible full-Newton-step algorithm.

2.3 Complexity analysis

We have the following theorem, whose simple proof we include, because it slightly improves the complexity result in [16, Theorem II.52].

Theorem 2.5 If θ = 1/√(2n), then the algorithm requires at most

√(2n) log( nµ⁰/ε )

iterations. The output is a primal-dual pair (x, s) such that x^T s ≤ ε.

Proof: At the start of the algorithm we have δ(x, s; µ) ≤ 1/2. After the update of the barrier parameter to µ⁺ = (1 − θ)µ, with θ = 1/√(2n), we have, by Lemma 2.2, the following upper bound for δ(x, s; µ⁺)²:

δ(x, s; µ⁺)² ≤ (1 − θ)/4 + 1/(8(1 − θ)) ≤ 3/8.

Assuming n ≥ 2, the last inequality follows since the expression at the left is a convex function of θ, whose value is 3/8 both in θ = 0 and θ = 1/2. Since θ ∈ [0, 1/2], the left-hand side does not exceed 3/8. Since 3/8 < 1/2, we obtain δ(x, s; µ⁺) ≤ 1/√2. After the primal-dual Newton step to the µ⁺-center we have, by Corollary 2.4, δ(x⁺, s⁺; µ⁺) ≤ 1/2. Also, from Lemma 2.1, (x⁺)^T s⁺ = nµ⁺. Thus, after each iteration of the algorithm the properties

x^T s = nµ,  δ(x, s; µ) ≤ 1/2

are maintained, and hence the algorithm is well defined. The iteration bound in the theorem now easily follows from the fact that in each iteration the value of x^T s is reduced by the factor 1 − θ. This proves the theorem.

3 Infeasible full-Newton step IPM

We are now ready to present an infeasible-start algorithm that generates an ε-solution of (P) and (D), if it exists, or establishes that no such solution exists.

3.1 Perturbed problems

We start by choosing arbitrarily x⁰ > 0 and y⁰, s⁰ > 0 such that x⁰s⁰ = µ⁰e for some (positive) number µ⁰. For any ν with 0 < ν ≤ 1 we consider the perturbed problem (P_ν), defined by

(P_ν)  min { (c − ν(c − A^T y⁰ − s⁰))^T x : Ax = b − ν(b − Ax⁰), x ≥ 0 },

and its dual problem (D_ν), which is given by

(D_ν)  max { (b − ν(b − Ax⁰))^T y : A^T y + s = c − ν(c − A^T y⁰ − s⁰), s ≥ 0 }.

Note that if ν = 1 then x = x⁰ yields a strictly feasible solution of (P_ν), and (y, s) = (y⁰, s⁰) a strictly feasible solution of (D_ν). We conclude that if ν = 1 then (P_ν) and (D_ν) satisfy the IPC.

Lemma 3.1 (cf. Theorem 5.13 in [8]) The original problems (P) and (D) are feasible if and only if for each ν satisfying 0 < ν ≤ 1 the perturbed problems (P_ν) and (D_ν) satisfy the IPC.

Proof: Suppose that (P) and (D) are feasible. Let x̄ be a feasible solution of (P) and (ȳ, s̄) a feasible solution of (D). Then Ax̄ = b and A^T ȳ + s̄ = c, with x̄ ≥ 0 and s̄ ≥ 0. Now let 0 < ν ≤ 1, and consider

x = (1 − ν) x̄ + ν x⁰,  y = (1 − ν) ȳ + ν y⁰,  s = (1 − ν) s̄ + ν s⁰.

One has

Ax = A((1 − ν) x̄ + ν x⁰) = (1 − ν)Ax̄ + νAx⁰ = (1 − ν)b + νAx⁰ = b − ν(b − Ax⁰),

showing that x is feasible for (P_ν). Similarly,

A^T y + s = (1 − ν)(A^T ȳ + s̄) + ν(A^T y⁰ + s⁰) = (1 − ν)c + ν(A^T y⁰ + s⁰) = c − ν(c − A^T y⁰ − s⁰),

showing that (y, s) is feasible for (D_ν). Since ν > 0, x and s are positive, thus proving that (P_ν) and (D_ν) satisfy the IPC.

To prove the converse implication, suppose that (P_ν) and (D_ν) satisfy the IPC for each ν satisfying 0 < ν ≤ 1. Obviously, then (P_ν) and (D_ν) are feasible for these values of ν. Letting ν go to zero it follows that (P) and (D) are feasible.
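The perturbation only changes the data of the problems, not the matrix A: it shifts b and c toward the residuals of the chosen starting triple. A minimal sketch of this construction (ours, with illustrative names):

import numpy as np

def perturbed_data(A, b, c, x0, y0, s0, nu):
    """Right-hand sides of (P_nu) and (D_nu): Ax = b - nu*rb0 and
    A^T y + s = c - nu*rc0, where rb0 and rc0 are the residuals of
    the starting point. For nu = 1, x0 and (y0, s0) are strictly
    feasible by construction."""
    rb0 = b - A @ x0
    rc0 = c - A.T @ y0 - s0
    return b - nu * rb0, c - nu * rc0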

3.2 Central path of the perturbed problems

We assume that (P) and (D) are feasible. Letting 0 < ν ≤ 1, Lemma 3.1 implies that the problems (P_ν) and (D_ν) satisfy the IPC, and hence their central paths exist. This means that the system

b − Ax = ν(b − Ax⁰), x ≥ 0,    (11)
c − A^T y − s = ν(c − A^T y⁰ − s⁰), s ≥ 0,    (12)
xs = µe    (13)

has a unique solution for every µ > 0. In the sequel this unique solution is denoted as (x(µ, ν), y(µ, ν), s(µ, ν)). These are the µ-centers of the perturbed problems (P_ν) and (D_ν). Note that since x⁰s⁰ = µ⁰e, x⁰ is the µ⁰-center of the perturbed problem (P_1) and (y⁰, s⁰) the µ⁰-center of (D_1). In other words, (x(µ⁰, 1), y(µ⁰, 1), s(µ⁰, 1)) = (x⁰, y⁰, s⁰). In the sequel we will always have µ = νµ⁰.

3.3 Idea underlying the algorithm

We just established that if ν = 1 and µ = µ⁰, then x = x⁰ is the µ-center of the perturbed problem (P_ν) and (y, s) = (y⁰, s⁰) the µ-center of (D_ν). These are our initial iterates. We measure proximity to the µ-center of the perturbed problems by the quantity δ(x, s; µ) as defined in (10). Initially we thus have δ(x, s; µ) = 0. In the sequel we assume that at the start of each iteration, just before the µ-update, δ(x, s; µ) is smaller than or equal to a (small) threshold value τ > 0. So this is certainly true at the start of the first iteration.

Now we describe one iteration of our algorithm. Suppose that for some µ ∈ (0, µ⁰] we have x, y and s satisfying the feasibility conditions (11) and (12) for ν = µ/µ⁰, and such that x^T s = nµ and δ(x, s; µ) ≤ τ. Roughly speaking, one iteration of the algorithm can be described as follows: reduce µ to µ⁺ = (1 − θ)µ, with θ ∈ (0, 1), and find new iterates x⁺, y⁺ and s⁺ that satisfy (11) and (12), with µ replaced by µ⁺ and ν by ν⁺ = µ⁺/µ⁰, and such that (x⁺)^T s⁺ = nµ⁺ and δ(x⁺, s⁺; µ⁺) ≤ τ. Note that ν⁺ = (1 − θ)ν.

To be more precise, this will be achieved as follows. Our first aim is to get iterates (x, y, s) that are feasible for ν⁺ = µ⁺/µ⁰, and close to the µ-centers (x(µ, ν⁺), y(µ, ν⁺), s(µ, ν⁺)). If we can do this in such a way that the new iterates satisfy δ(x, s; µ⁺) ≤ 1/√2, then we can easily get iterates (x, y, s) that are feasible for (P_{ν⁺}) and (D_{ν⁺}) and such that δ(x, s; µ⁺) ≤ τ, by performing Newton steps targeting the µ⁺-centers of (P_{ν⁺}) and (D_{ν⁺}).

Before proceeding it will be convenient to introduce some new notation. We denote the initial values of the primal and dual residuals as r⁰_b and r⁰_c, respectively:

r⁰_b = b − Ax⁰,  r⁰_c = c − A^T y⁰ − s⁰.

Then the feasibility equations for (P_ν) and (D_ν) are given by

Ax = b − ν r⁰_b, x ≥ 0,    (14)
A^T y + s = c − ν r⁰_c, s ≥ 0,    (15)

and those of (P_{ν⁺}) and (D_{ν⁺}) by

Ax = b − ν⁺ r⁰_b, x ≥ 0,    (16)
A^T y + s = c − ν⁺ r⁰_c, s ≥ 0.    (17)

To get iterates that are feasible for (P_{ν⁺}) and (D_{ν⁺}) we need search directions Δᶠx, Δᶠy and Δᶠs such that

A(x + Δᶠx) = b − ν⁺ r⁰_b,
A^T(y + Δᶠy) + (s + Δᶠs) = c − ν⁺ r⁰_c.

Since x is feasible for (P_ν) and (y, s) is feasible for (D_ν), it follows that Δᶠx, Δᶠy and Δᶠs should satisfy

A Δᶠx = (b − Ax) − ν⁺ r⁰_b = ν r⁰_b − ν⁺ r⁰_b = θν r⁰_b,
A^T Δᶠy + Δᶠs = (c − A^T y − s) − ν⁺ r⁰_c = ν r⁰_c − ν⁺ r⁰_c = θν r⁰_c.

Therefore, the following system is used to define Δᶠx, Δᶠy and Δᶠs:

A Δᶠx = θν r⁰_b,    (18)
A^T Δᶠy + Δᶠs = θν r⁰_c,    (19)
s Δᶠx + x Δᶠs = µe − xs,    (20)

and the new iterates are given by

x⁺ = x + Δᶠx,    (21)
y⁺ = y + Δᶠy,    (22)
s⁺ = s + Δᶠs.    (23)

We conclude that after the feasibility step the iterates satisfy the affine equations in (11) and (12), with ν = ν⁺. The hard part in the analysis will be to guarantee that x⁺ and s⁺ are positive after the feasibility step and satisfy δ(x⁺, s⁺; µ⁺) ≤ 1/√2.

After the feasibility step we perform centering steps in order to get iterates that moreover satisfy x^T s = nµ⁺ and δ(x, s; µ⁺) ≤ τ. By using Corollary 2.4, the required number of centering steps can easily be obtained: assuming δ = δ(x⁺, s⁺; µ⁺) ≤ 1/√2, after k centering steps we will have iterates (x, y, s) that are still feasible for (P_{ν⁺}) and (D_{ν⁺}) and such that

δ(x, s; µ⁺) ≤ (1/√2)^{2^k}.

So k should satisfy

(1/√2)^{2^k} ≤ τ,

which gives

k ≥ log₂( log₂ (1/τ²) ).    (24)
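Putting the ingredients of this subsection together, one main iteration consists of the feasibility step (18)-(20), the µ-update, and at most a few centering steps. The sketch below is a self-contained dense-linear-algebra illustration of such an iteration; it is our reading of the scheme, not the author's code, and all names are ours.

import numpy as np

def solve_kkt(A, x, s, r1, r2, r3):
    """Generic solver for systems of the form
    A dx = r1, A^T dy + ds = r2, s dx + x ds = r3,
    which covers both (4)-(6) and (18)-(20)."""
    d = x / s
    M = A @ (d[:, None] * A.T)
    dy = np.linalg.solve(M, r1 + A @ (d * r2 - r3 / s))
    dx = d * (A.T @ dy - r2) + r3 / s
    ds = (r3 - s * dx) / x
    return dx, dy, ds

def delta(x, s, mu):
    v = np.sqrt(x * s / mu)
    return 0.5 * np.linalg.norm(v - 1.0 / v)

def main_iteration(A, x, y, s, mu, nu, theta, rb0, rc0, tau=0.125):
    """One iteration: feasibility step toward (P_{nu+}), (D_{nu+}),
    then the mu-update, then full Newton centering steps until
    delta(x, s; mu) <= tau (at most 3 of them, by (24))."""
    n = x.size
    e = np.ones(n)
    # feasibility step, system (18)-(20)
    dfx, dfy, dfs = solve_kkt(A, x, s, theta * nu * rb0,
                              theta * nu * rc0, mu * e - x * s)
    x, y, s = x + dfx, y + dfy, s + dfs
    # mu-update (and hence nu-update)
    mu, nu = (1 - theta) * mu, (1 - theta) * nu
    # centering steps: the perturbed-problem residuals are now zero
    while delta(x, s, mu) > tau:
        dx, dy, ds = solve_kkt(A, x, s, np.zeros(A.shape[0]),
                               np.zeros(n), mu * e - x * s)
        x, y, s = x + dx, y + dy, s + ds
    return x, y, s, mu, nu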

Primal-Dual Infeasible IPM

Input:
  Accuracy parameter ε > 0;
  barrier update parameter θ, 0 < θ < 1;
  threshold parameter τ > 0.
begin
  x := x⁰ > 0; y := y⁰; s := s⁰ > 0; x⁰s⁰ = µ⁰e; µ := µ⁰;
  while max( x^T s, ‖b − Ax‖, ‖c − A^T y − s‖ ) ≥ ε do
  begin
    feasibility step: (x, y, s) := (x, y, s) + (Δᶠx, Δᶠy, Δᶠs);
    µ-update: µ := (1 − θ)µ;
    centering steps:
      while δ(x, s; µ) ≥ τ do
        (x, y, s) := (x, y, s) + (Δx, Δy, Δs);
      endwhile
  end
end

Figure 2: Algorithm.

3.4 Algorithm

A more formal description of the algorithm is given in Figure 2. Note that after each iteration the residuals and the duality gap are reduced by a factor 1 − θ. The algorithm stops if the norms of the residuals and the duality gap are less than the accuracy parameter ε.

4 Analysis of the algorithm

Let x, y and s denote the iterates at the start of an iteration, and assume δ(x, s; µ) ≤ τ. Recall that at the start of the first iteration this is certainly true, because then δ(x, s; µ) = 0.

4.1 Effect of the feasibility step; choice of θ

As we established in Section 3.3, the feasibility step generates new iterates x⁺, y⁺ and s⁺ that satisfy the feasibility conditions for (P_{ν⁺}) and (D_{ν⁺}). A crucial element in the analysis is to show that after the feasibility step δ(x⁺, s⁺; µ⁺) ≤ 1/√2, i.e., that the new iterates are within the

region where the Newton process targeting the µ⁺-centers of (P_{ν⁺}) and (D_{ν⁺}) is quadratically convergent. Defining

v = √(xs/µ),  d_x := vΔᶠx/x,  d_s := vΔᶠs/s,    (25)

we have, using (20) and (25),

x⁺s⁺ = xs + (sΔᶠx + xΔᶠs) + ΔᶠxΔᶠs = µe + ΔᶠxΔᶠs = µe + (xs/v²) d_x d_s = µ(e + d_x d_s).    (26)

Lemma 4.1 The new iterates are certainly strictly feasible if ‖d_x d_s‖_∞ < 1.

Proof: Note that if x⁺ and s⁺ are positive then (26) makes clear that e + d_x d_s > 0. By Lemma II.45 in [16] the converse is also true. Thus we have that x⁺ and s⁺ are positive if and only if e + d_x d_s > 0. Since the last inequality holds if ‖d_x d_s‖_∞ < 1, the lemma follows.

In the sequel we denote

ω(v) := (1/2) √( ‖d_x‖² + ‖d_s‖² ).    (27)

This implies ‖d_x‖ ≤ 2ω(v) and ‖d_s‖ ≤ 2ω(v), and moreover

|d_x^T d_s| ≤ ‖d_x‖ ‖d_s‖ ≤ (1/2)( ‖d_x‖² + ‖d_s‖² ) ≤ 2ω(v)²,    (28)
|d_{xi} d_{si}| ≤ ‖d_x‖ ‖d_s‖ ≤ 2ω(v)²,  1 ≤ i ≤ n.    (29)

Lemma 4.2 If ω(v) < 1/√2 then the new iterates are strictly feasible.

Proof: This is an immediate consequence of (29) and Lemma 4.1.

Lemma 4.3 One has

4δ(v⁺)² ≤ θ²n/(1 − θ) + 2ω(v)²/(1 − θ) + 4(1 − θ)ω(v)⁴/(1 − 2ω(v)²).

Proof: By definition (10),

δ(x⁺, s⁺; µ⁺) = δ(v⁺) = (1/2) ‖v⁺ − (v⁺)⁻¹‖,  where v⁺ = √(x⁺s⁺/µ⁺).

Using (26), after division of both sides by µ⁺ we get

(v⁺)² = µ(e + d_x d_s)/µ⁺ = (e + d_x d_s)/(1 − θ).

Defining for the moment u = √(e + d_x d_s), we have v⁺ = u/√(1 − θ). Hence we may write

2δ(v⁺) = ‖ u/√(1 − θ) − √(1 − θ) u⁻¹ ‖ = ‖ θu/√(1 − θ) + √(1 − θ)(u − u⁻¹) ‖.

Therefore we have

4δ(v⁺)² = θ²‖u‖²/(1 − θ) + 2θ u^T(u − u⁻¹) + (1 − θ)‖u − u⁻¹‖².

Since ‖u‖² = e^T(e + d_x d_s) = n + d_x^T d_s and u^T(u − u⁻¹) = ‖u‖² − n = d_x^T d_s, this can be written as

4δ(v⁺)² = θ²n/(1 − θ) + (1 − (1 − θ)²) d_x^T d_s/(1 − θ) + (1 − θ)‖u − u⁻¹‖².

The last term can be rewritten as follows:

‖u − u⁻¹‖² = Σᵢ ( uᵢ² − 2 + uᵢ⁻² ) = Σᵢ ( d_{xi}d_{si} − d_{xi}d_{si}/(1 + d_{xi}d_{si}) ) = Σᵢ d_{xi}²d_{si}²/(1 + d_{xi}d_{si}).

Hence, using (28) and (29), we arrive at

4δ(v⁺)² ≤ θ²n/(1 − θ) + 2ω(v)²/(1 − θ) + (1 − θ) Σᵢ d_{xi}²d_{si}²/(1 − 2ω(v)²)
        ≤ θ²n/(1 − θ) + 2ω(v)²/(1 − θ) + ( (1 − θ)/(1 − 2ω(v)²) ) ( Σᵢ (d_{xi}² + d_{si}²)/2 )²
        = θ²n/(1 − θ) + 2ω(v)²/(1 − θ) + 4(1 − θ)ω(v)⁴/(1 − 2ω(v)²),

which completes the proof.

We conclude this section by presenting a value that we do not allow ω(v) to exceed. Because we need to have δ(v⁺) ≤ 1/√2, it follows from Lemma 4.3 that it suffices if

θ²n/(1 − θ) + 2ω(v)²/(1 − θ) + 4(1 − θ)ω(v)⁴/(1 − 2ω(v)²) ≤ 2.

At this stage we decide to choose

θ = α/√(2n),  α ≤ 1.    (30)

Then, assuming n ≥ 2, one may easily verify that

ω(v) ≤ 1/2  ⟹  δ(v⁺) ≤ 1/√2.    (31)

Note that if ω(v) ≤ 1/2 then the iterates are certainly feasible after the feasibility step, due to Lemma 4.2. We proceed by considering the vectors d_x and d_s in more detail.

4.2 The scaled search directions d_x and d_s

One may easily check that the system (18)-(20), which defines the search directions Δᶠx, Δᶠy and Δᶠs, can be expressed in terms of the scaled search directions d_x and d_s as follows:

Ā d_x = θν r⁰_b,    (32)
Ā^T Δᶠy/µ + d_s = θν v s⁻¹ r⁰_c,    (33)
d_x + d_s = v⁻¹ − v,    (34)

where

Ā = A V⁻¹ X,    (35)

with V = diag(v) and X = diag(x). If r⁰_b and r⁰_c are zero, i.e., if the initial iterates are feasible, then d_x and d_s are orthogonal vectors, since then the vector d_x belongs to the null space and d_s to the row space of the matrix Ā. It follows that d_x and d_s then form an orthogonal decomposition of the vector v⁻¹ − v. As a consequence we then have obvious upper bounds for the norms of d_x and d_s, namely ‖d_x‖ ≤ 2δ(v) and ‖d_s‖ ≤ 2δ(v), and moreover ω(v) = δ(v), with ω(v) as defined in (27). In the infeasible case orthogonality of d_x and d_s may not be assumed, however, and the situation may be quite different. This is illustrated in Figure 3. As a consequence, it becomes much harder to get upper bounds for ‖d_x‖ and ‖d_s‖, thus complicating the analysis of the algorithm in comparison with feasible IPMs. To obtain an upper bound for ω(v) is the subject of the sections that follow.

4.3 Upper bound for ω(v)

Let us denote the null space of the matrix Ā as L. So,

L := { ξ ∈ R^n : Āξ = 0 }.

Obviously, the affine space { ξ ∈ R^n : Āξ = θν r⁰_b } equals d_x + L. The row space of Ā equals the orthogonal complement L⊥ of L, and d_s ∈ θν v s⁻¹ r⁰_c + L⊥.

Lemma 4.4 Let q be the (unique) point in the intersection of the affine spaces d_x + L and d_s + L⊥. Then

ω(v) ≤ (1/2) √( ‖q‖² + ( ‖q‖ + 2δ(v) )² ).

[Figure 3: Geometric interpretation of ω(v). The figure shows the affine spaces { ξ : Āξ = θν r⁰_b } and θν v s⁻¹ r⁰_c + { Ā^T η : η ∈ R^m }, the vectors d_x, d_s, q and r, and the quantities ω(v) and δ(v).]

Proof: To simplify the notation in this proof we denote r = v⁻¹ − v. Since L + L⊥ = R^n, there exist q_1, r_1 ∈ L and q_2, r_2 ∈ L⊥ such that

q = q_1 + q_2,  r = r_1 + r_2.

On the other hand, since d_x − q ∈ L and d_s − q ∈ L⊥, there must exist l_1 ∈ L and l_2 ∈ L⊥ such that

d_x = q + l_1,  d_s = q + l_2.

Due to (34) it follows that r = 2q + l_1 + l_2, which implies

(2q_1 + l_1) + (2q_2 + l_2) = r_1 + r_2,

from which we conclude that

l_1 = r_1 − 2q_1,  l_2 = r_2 − 2q_2.

Hence we obtain

d_x = q + r_1 − 2q_1 = (r_1 − q_1) + q_2,
d_s = q + r_2 − 2q_2 = q_1 + (r_2 − q_2).

Since the spaces L and L⊥ are orthogonal we conclude from this that

4ω(v)² = ‖d_x‖² + ‖d_s‖² = ‖r_1 − q_1‖² + ‖q_2‖² + ‖q_1‖² + ‖r_2 − q_2‖² = ‖q‖² + ‖r − q‖².

Assuming q ≠ 0, since ‖r‖ = 2δ(v) the right-hand side is maximal if r = −2δ(v) q/‖q‖, and thus we obtain

4ω(v)² ≤ ‖q‖² + ( ‖q‖ + 2δ(v) )²,

which implies the inequality in the lemma if q ≠ 0. Since the inequality in the lemma holds with equality if q = 0, this completes the proof.

In the sequel we denote δ(v) simply as δ. Recall from (31) that in order to guarantee that δ(v⁺) ≤ 1/√2 we want to have ω(v) ≤ 1/2. Due to Lemma 4.4 this will certainly hold if q satisfies

‖q‖² + ( ‖q‖ + 2δ )² ≤ 1.    (36)

4.4 Upper bound for ‖q‖

Recall from Lemma 4.4 that q is the (unique) solution of the system

Āq = θν r⁰_b,
Ā^T ξ + q = θν v s⁻¹ r⁰_c.

We proceed by deriving an upper bound for ‖q‖. We have the following result.

Lemma 4.5 One has

µ ‖q‖² ≤ θ²ν²ζ² e^T( x/s + s/x ).    (37)

Proof: From the definition (35) of Ā we deduce that Ā = √µ AD, where

D = diag( xv⁻¹/√µ ) = diag( √(x/s) ) = diag( √µ v s⁻¹ ).

For the moment, let us write

r_b = θν r⁰_b,  r_c = θν r⁰_c.

Then the system defining q is equivalent to

√µ AD q = r_b,
√µ DA^T ξ + q = (1/√µ) D r_c.

This implies

µ AD²A^T ξ = AD² r_c − r_b,

whence

ξ = (1/µ) (AD²A^T)⁻¹ ( AD² r_c − r_b ).

Substitution gives

q = (1/√µ) ( D r_c − DA^T (AD²A^T)⁻¹ ( AD² r_c − r_b ) ).

Observe that

q_1 := D r_c − DA^T(AD²A^T)⁻¹ AD² r_c = ( I − DA^T(AD²A^T)⁻¹AD ) D r_c

is the orthogonal projection of D r_c onto the null space of AD. Let (ȳ, s̄) be such that A^T ȳ + s̄ = c. Then we may write

r_c = θν r⁰_c = θν ( c − A^T y⁰ − s⁰ ) = θν ( A^T(ȳ − y⁰) + s̄ − s⁰ ).

Since θν DA^T(ȳ − y⁰) belongs to the row space of AD, which is orthogonal to the null space of AD, we obtain

‖q_1‖ ≤ θν ‖D(s̄ − s⁰)‖.

On the other hand, let x̄ be such that Ax̄ = b. Then

r_b = θν r⁰_b = θν ( b − Ax⁰ ) = θν A(x̄ − x⁰),

and the vector

q_2 := DA^T(AD²A^T)⁻¹ r_b = θν DA^T(AD²A^T)⁻¹ AD ( D⁻¹(x̄ − x⁰) )

is the orthogonal projection of θν D⁻¹(x̄ − x⁰) onto the row space of AD. Hence it follows that

‖q_2‖ ≤ θν ‖D⁻¹(x̄ − x⁰)‖.

Since √µ q = q_1 + q_2 and q_1 and q_2 are orthogonal, we may conclude that

µ ‖q‖² = ‖q_1‖² + ‖q_2‖² ≤ θ²ν² ( ‖D(s̄ − s⁰)‖² + ‖D⁻¹(x̄ − x⁰)‖² ),    (38)

where, as always, µ = µ⁰ν. We are still free to choose x̄ and s̄, subject to the constraints Ax̄ = b and A^T ȳ + s̄ = c. Let x̄ be an optimal solution of (P) and (ȳ, s̄) of (D) (assuming that these exist). Then x̄ and s̄ are complementary, i.e., x̄s̄ = 0. Let ζ be such that ‖x̄ + s̄‖_∞ ≤ ζ. Starting the algorithm with

x⁰ = s⁰ = ζe,  y⁰ = 0,  µ⁰ = ζ²,

the entries of the vectors x⁰ − x̄ and s⁰ − s̄ satisfy

0 ≤ x⁰ − x̄ ≤ ζe,  0 ≤ s⁰ − s̄ ≤ ζe.

Thus it follows that

‖D(s̄ − s⁰)‖² + ‖D⁻¹(x̄ − x⁰)‖² ≤ ζ² ( ‖De‖² + ‖D⁻¹e‖² ) = ζ² e^T( x/s + s/x ).

Substitution into (38) gives

µ ‖q‖² ≤ θ²ν²ζ² e^T( x/s + s/x ),

proving the lemma.

To proceed we need upper and lower bounds for the elements of the vectors x/s and s/x.
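Before turning to those bounds, we note that q itself is numerically computable from the two orthogonal projections used in the proof, or directly from least-squares problems with the matrix Ā. A dense sketch (ours; rb_term and rc_term stand for the right-hand sides θν r⁰_b and θν v s⁻¹ r⁰_c):

import numpy as np

def q_vector(Abar, rb_term, rc_term):
    """The unique point q in (d_x + L) ∩ (d_s + L⊥), i.e. the solution
    of Abar q = rb_term together with Abar^T xi + q = rc_term.
    q1: projection of rc_term onto the null space of Abar;
    q2: minimum-norm (row-space) solution of Abar q2 = rb_term."""
    xi = np.linalg.lstsq(Abar.T, rc_term, rcond=None)[0]
    q1 = rc_term - Abar.T @ xi
    q2 = np.linalg.lstsq(Abar, rb_term, rcond=None)[0]
    return q1 + q2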

4.5 Bounds for x/s and s/x; choice of τ and α

Recall that x is feasible for (P_ν) and (y, s) for (D_ν) and, moreover, δ(x, s; µ) ≤ τ, i.e., these iterates are close to the µ-centers of (P_ν) and (D_ν). Based on this information we need to estimate the sizes of the entries of the vectors x/s and s/x. For this we use Theorem A.7, which gives

x/s ≤ ( (1 + √(2τ̄)) / χ(τ̄) )² x(µ, ν)²/µ,
s/x ≤ ( (1 + √(2τ̄)) / χ(τ̄) )² s(µ, ν)²/µ,

where τ̄ := ψ(ρ(τ)), ψ(t) = (t² − 1)/2 − log t, and where χ : [0, ∞) → (0, 1] is the inverse function of ψ(t) for 0 < t ≤ 1, as defined in (43). We choose

τ = 1/8.    (39)

Then τ̄ = 0.0169, 1 + √(2τ̄) = 1.1839 and χ(τ̄) = 0.8729, whence

( (1 + √(2τ̄)) / χ(τ̄) )² = 1.3564² = 1.8398 < 2.

It follows that

x/s ≤ 2 x(µ, ν)²/µ,  s/x ≤ 2 s(µ, ν)²/µ.

Substitution into (37) gives

µ ‖q‖² ≤ 2 θ²ν²ζ² ( ‖x(µ, ν)‖² + ‖s(µ, ν)‖² ) / µ.

This implies

µ ‖q‖ ≤ √2 θνζ √( ‖x(µ, ν)‖² + ‖s(µ, ν)‖² ).

Therefore, also using µ = µ⁰ν = ζ²ν and θ = α/√(2n), we obtain the following upper bound for the norm of q:

‖q‖ ≤ ( √2 α / (ζ √(2n)) ) √( ‖x(µ, ν)‖² + ‖s(µ, ν)‖² ).

We define

κ(ζ, ν) = √( ‖x(µ, ν)‖² + ‖s(µ, ν)‖² ) / ( ζ √(2n) ),  0 < ν ≤ 1,  µ = µ⁰ν.

Note that since x(ζ², 1) = s(ζ², 1) = ζe, we have κ(ζ, 1) = 1. Now we may write

‖q‖ ≤ √2 α κ(ζ),  where κ(ζ) = max_{0 < ν ≤ 1} κ(ζ, ν).
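The numerical constants just used are easy to reproduce. The following self-contained sketch (ours) evaluates ψ, ρ, and the inverse χ by bisection, and confirms that the factor ((1 + √(2τ̄))/χ(τ̄))² stays below 2 for τ = 1/8:

import math

def psi(t):
    """Kernel function: psi(t) = (t^2 - 1)/2 - log t."""
    return 0.5 * (t * t - 1.0) - math.log(t)

def rho(d):
    """rho(delta) = delta + sqrt(1 + delta^2), cf. (41)."""
    return d + math.sqrt(1.0 + d * d)

def chi(s, lo=1e-12, hi=1.0, iters=100):
    """Inverse of psi on (0, 1], cf. (43); psi is decreasing there,
    so plain bisection suffices."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if psi(mid) > s:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

tau = 1.0 / 8.0
tau_bar = psi(rho(tau))                                    # ~ 0.0169
factor = ((1.0 + math.sqrt(2.0 * tau_bar)) / chi(tau_bar)) ** 2
print(tau_bar, factor)                                     # factor ~ 1.84 < 2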

We found in (36) that in order to have δ(v⁺) ≤ 1/√2, we should have

‖q‖² + ( ‖q‖ + 2δ )² ≤ 1.

Therefore, since δ ≤ τ = 1/8, it suffices if q satisfies

‖q‖² + ( ‖q‖ + 1/4 )² ≤ 1.

This holds if and only if ‖q‖ ≤ 0.5710. Since √2/3 = 0.4714 < 0.5710, we conclude that if we take

α = 1/(3κ(ζ)),    (40)

then we will certainly have δ(v⁺) ≤ 1/√2.

There is some computational evidence that if ζ is large enough, then κ(ζ) = 1. This requires further investigation. We can prove that κ(ζ) ≤ √(2n). This is shown in the next section.

4.6 Bound for κ(ζ)

Due to the choice of the vectors x̄, ȳ, s̄ and the number ζ (cf. Section 4.4) we have

Ax̄ = b,  A^T ȳ + s̄ = c,  x̄s̄ = 0,  0 ≤ x̄ ≤ ζe,  0 ≤ s̄ ≤ ζe.

To simplify notation in the rest of this section, we denote x = x(µ, ν), y = y(µ, ν) and s = s(µ, ν). Then x, y and s are uniquely determined by the system

b − Ax = ν( b − Aζe ), x ≥ 0,
c − A^T y − s = ν( c − ζe ), s ≥ 0,
xs = νζ²e.

We rewrite this system as

Ax̄ − Ax = ν( Ax̄ − Aζe ), x ≥ 0,
A^T ȳ + s̄ − A^T y − s = ν( A^T ȳ + s̄ − ζe ), s ≥ 0,
xs = νζ²e,

or, equivalently, as

A( x̄ − x − ν(x̄ − ζe) ) = 0, x ≥ 0,
A^T( ȳ − y − νȳ ) = s − s̄ + ν( s̄ − ζe ), s ≥ 0,
xs = νζ²e.

Using again that the row space of a matrix and its null space are orthogonal, we obtain

( x̄ − x − ν(x̄ − ζe) )^T ( s − s̄ + ν(s̄ − ζe) ) = 0.

Defining

a := (1 − ν) x̄ + νζe,  b := (1 − ν) s̄ + νζe,

we have (a − x)^T (b − s) = 0. This gives

a^T b + x^T s = a^T s + b^T x.

Since x̄^T s̄ = 0, x̄ + s̄ ≤ ζe and xs = νζ²e, we may write

a^T b + x^T s = ( (1 − ν)x̄ + νζe )^T ( (1 − ν)s̄ + νζe ) + νζ²n
             = ν(1 − ν) ζ e^T(x̄ + s̄) + ν²ζ²n + νζ²n
             ≤ ν(1 − ν) (ζe)^T(ζe) + ν²ζ²n + νζ²n
             = ν(1 − ν)ζ²n + ν²ζ²n + νζ²n = 2νζ²n.

Moreover, also using a ≥ νζe and b ≥ νζe, we get

a^T s + b^T x = ( (1 − ν)x̄ + νζe )^T s + ( (1 − ν)s̄ + νζe )^T x
             = (1 − ν)( x̄^T s + s̄^T x ) + νζ e^T(x + s)
             ≥ νζ e^T(x + s) = νζ ( ‖s‖₁ + ‖x‖₁ ).

Hence we obtain

‖s‖₁ + ‖x‖₁ ≤ 2ζn.

Since ‖x‖² + ‖s‖² ≤ ( ‖s‖₁ + ‖x‖₁ )², it follows that

κ(ζ, ν) = √( ‖x‖² + ‖s‖² ) / ( ζ √(2n) ) ≤ ( ‖s‖₁ + ‖x‖₁ ) / ( ζ √(2n) ) ≤ 2ζn / ( ζ √(2n) ) = √(2n),

thus proving κ(ζ) ≤ √(2n).

4.7 Complexity analysis

In the previous sections we have found that if at the start of an iteration the iterates satisfy δ(x, s; µ) ≤ τ, with τ as defined in (39), then after the feasibility step, with θ as defined in (30) and α as in (40), the iterates satisfy δ(x, s; µ⁺) ≤ 1/√2. According to (24), at most

⌈ log₂( log₂ (1/τ²) ) ⌉ = ⌈ log₂( log₂ 64 ) ⌉ = 3

centering steps suffice to get iterates that satisfy δ(x, s; µ⁺) ≤ τ. So each iteration consists of at most 4 so-called inner iterations, in each of which we need to compute a new search direction. It has become a custom to measure the complexity of an IPM by the required number of inner iterations. In each iteration both the duality gap and the norms of the residual vectors are reduced by the factor 1 − θ. Hence, using (x⁰)^T s⁰ = nζ², the total number of iterations is bounded above by

(1/θ) log( max { nζ², ‖r⁰_b‖, ‖r⁰_c‖ } / ε ).

Since

θ = α/√(2n) = 1/( 3κ(ζ)√(2n) ),

the total number of inner iterations is therefore bounded above by

12 κ(ζ) √(2n) log( max { nζ², ‖r⁰_b‖, ‖r⁰_c‖ } / ε ),

where κ(ζ) ≤ √(2n).
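As an arithmetic illustration (ours, not from the paper): with the worst-case value κ(ζ) = √(2n) the bound reduces to 24 n log( max{nζ², ‖r⁰_b‖, ‖r⁰_c‖}/ε ), i.e., O(n) inner iterations per factor e of reduction. A small sketch evaluating it for sample data:

import math

def inner_iteration_bound(n, zeta, rb0_norm, rc0_norm, eps, kappa=None):
    """Evaluate 12*kappa*sqrt(2n)*log(max{n*zeta^2, ||rb0||, ||rc0||}/eps);
    kappa defaults to its worst-case value sqrt(2n) from Section 4.6."""
    if kappa is None:
        kappa = math.sqrt(2.0 * n)
    top = max(n * zeta ** 2, rb0_norm, rc0_norm)
    return 12.0 * kappa * math.sqrt(2.0 * n) * math.log(top / eps)

# example: n = 1000 variables, zeta = 100, unit initial residuals
print(inner_iteration_bound(1000, 100.0, 1.0, 1.0, 1e-8))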

5 Concluding remarks

The current paper shows that the techniques that have been developed in the field of feasible full-Newton step IPMs, and which have now been known for almost twenty years, are sufficient to get a full-Newton step IIPM whose performance is at least as good as the currently best known performance of IIPMs. Following a well-known metaphor of Isaac Newton (4), it looks as if a smoother pebble or prettier shell on the sea-shore of IPMs has been overlooked for a surprisingly long time.

It may be clear that the full-Newton step method presented in this paper is not efficient from a practical point of view, just as the feasible IPMs with the best theoretical performance are far from practical. But just as in the case of feasible IPMs one might expect that computationally efficient large-update methods for IIPMs can be designed whose theoretical complexity is not worse than √n times the iteration bound in this paper. Even better results for large-update methods might be obtained by changing the search direction, by using methods that are based on kernel functions, as presented in [3, 13]. This requires further investigation. Also extensions to second-order cone optimization, semidefinite optimization, linear complementarity problems, etc. seem to be within reach.

Acknowledgements

Hossein Mansouri, PhD student in Delft, forced the author to become more familiar with IIPMs when, in September 2004, he showed him a draft of a paper on large-update IIPMs based on kernel functions. Thank you, Hossein! I am also much indebted to Jean-Philippe Vial. He found a serious flaw in an earlier version of this paper. Thanks are also due to Kurt Anstreicher, Florian Potra, Shinji Mizuno and other colleagues for some useful discussions during the Oberwolfach meeting Optimization and Applications in January 2005. I also thank Yanqin Bai for pointing out some typos in an earlier version of the paper.

(4) "I do not know what I may appear to the world; but to myself I seem to have been only like a boy playing on the sea-shore, and diverting myself in now and then finding a smoother pebble or a prettier shell than ordinary, whilst the great ocean of truth lay all undiscovered before me" [23, p. 863].

References

[1] E. D. Andersen and Y. Ye. A computational study of the homogeneous algorithm for large-scale convex optimization. Computational Optimization and Applications, 10:243–269, 1998.

[2] E. D. Andersen and K. D. Andersen. The MOSEK interior-point optimizer for linear programming: an implementation of the homogeneous algorithm. In S. Zhang, H. Frenk, C. Roos, and T. Terlaky, editors, High Performance Optimization Techniques. Kluwer Academic Publishers, Dordrecht, The Netherlands, 2000.

[3] Y. Q. Bai, M. El Ghami, and C. Roos. A comparative study of kernel functions for primal-dual interior-point algorithms in linear optimization. SIAM Journal on Optimization, 15(1):101–128 (electronic), 2004.

[4] J. F. Bonnans and F. A. Potra. On the convergence of the iteration sequence of infeasible path following algorithms for linear complementarity problems. Mathematics of Operations Research, 22(2):378–407, 1997.

[5] R. M. Freund. A potential-function reduction algorithm for solving a linear program directly from an infeasible warm start. Mathematical Programming, 52:441–466, 1991.

[6] N. Karmarkar. A new polynomial-time algorithm for linear programming. Combinatorica, 4:373–395, 1984.

[7] M. Kojima, N. Megiddo, and S. Mizuno. A primal-dual infeasible-interior-point algorithm for linear programming. Mathematical Programming, 61:263–280, 1993.

[8] M. Kojima, T. Noma, and A. Yoshise. Global convergence in infeasible-interior-point algorithms. Mathematical Programming, Series A, 65:43–72, 1994.

[9] I. J. Lustig. Feasibility issues in a primal-dual interior-point method for linear programming. Mathematical Programming, 49:145–162, 1990/91.

[10] S. Mizuno. Polynomiality of infeasible-interior-point algorithms for linear programming. Mathematical Programming, 67:109–119, 1994.

[11] S. Mizuno, M. J. Todd, and Y. Ye. A surface of analytic centers and infeasible-interior-point algorithms for linear programming. Mathematics of Operations Research, 20:135–162, 1995.

[12] Yu. Nesterov, M. J. Todd, and Y. Ye. Infeasible-start primal-dual methods and infeasibility detectors for nonlinear programming problems. Mathematical Programming, 84(2, Ser. A):227–267, 1999.

[13] J. Peng, C. Roos, and T. Terlaky. Self-Regularity. A New Paradigm for Primal-Dual Interior-Point Algorithms. Princeton University Press, 2002.

[14] F. A. Potra. A quadratically convergent predictor-corrector method for solving linear programs from infeasible starting points. Mathematical Programming, 67(3, Ser. A), 1994.

[15] F. A. Potra. An infeasible-interior-point predictor-corrector algorithm for linear programming. SIAM Journal on Optimization, 6(1):19–32, 1996.

[16] C. Roos, T. Terlaky, and J.-Ph. Vial. Theory and Algorithms for Linear Optimization. An Interior-Point Approach. John Wiley & Sons, Chichester, UK, 1997.

[17] M. Salahi, T. Terlaky, and G. Zhang. The complexity of self-regular proximity based infeasible IPMs. Technical Report 2003/3, Advanced Optimization Laboratory, McMaster University, Hamilton, Ontario, Canada, 2003.

[18] R. Sheng and F. A. Potra. A quadratically convergent infeasible-interior-point algorithm for LCP with polynomial complexity. SIAM Journal on Optimization, 7(2), 1997.

[19] J. F. Sturm. Using SeDuMi 1.02, a MATLAB toolbox for optimization over symmetric cones. Optimization Methods & Software, 11–12:625–653, 1999.

[20] K. Tanabe. Centered Newton method for linear programming: Interior and exterior point method (in Japanese). In K. Tone, editor, New Methods for Linear Programming 3, 1990.

[21] M. J. Todd and Y. Ye. Approximate Farkas lemmas and stopping rules for iterative infeasible-point algorithms for linear programming. Mathematical Programming, 81(1, Ser. A):1–21, 1998.

[22] R. J. Vanderbei. Linear Programming: Foundations and Extensions. Kluwer Academic Publishers, Boston, USA, 1996.

[23] R. S. Westfall. Never at Rest. A Biography of Isaac Newton. Cambridge University Press, Cambridge, UK, 1980.

[24] S. J. Wright. An infeasible-interior-point algorithm for linear complementarity problems. Mathematical Programming, 67(1):29–52, 1994.

[25] S. J. Wright. A path-following infeasible-interior-point algorithm for linear complementarity problems. Optimization Methods & Software, 2:79–106, 1993.

[26] S. J. Wright. Primal-Dual Interior-Point Methods. SIAM, Philadelphia, 1997.

[27] F. Wu, S. Wu, and Y. Ye. On quadratic convergence of the O(√nL)-iteration homogeneous and self-dual linear programming algorithm. Annals of Operations Research, 87, 1999.

[28] Y. Ye. Interior Point Algorithms, Theory and Analysis. John Wiley & Sons, Chichester, UK, 1997.

[29] Y. Ye, M. J. Todd, and S. Mizuno. An O(√nL)-iteration homogeneous and self-dual linear programming algorithm. Mathematics of Operations Research, 19:53–67, 1994.

[30] Y. Zhang. On the convergence of a class of infeasible-interior-point methods for the horizontal linear complementarity problem. SIAM Journal on Optimization, 4:208–227, 1994.

A Some technical lemmas

Given a strictly primal feasible solution x of (P), a strictly dual feasible solution (y, s) of (D), and µ > 0, let

Φ(xs; µ) := Ψ(v) := Σ_{i=1}^n ψ(v_i),  v_i := √( x_i s_i / µ ),  ψ(t) := (t² − 1)/2 − log t.

It is well known that ψ(t) is the kernel function of the primal-dual logarithmic barrier function, which, up to some constant, is the function Φ(xs; µ) (see, e.g., [3]).

Lemma A.1 One has

Φ(xs; µ) = Φ(xs(µ); µ) + Φ(x(µ)s; µ).

Proof: The equality in the lemma is equivalent to

Σᵢ ( x_i s_i/µ − 1 − log( x_i s_i/µ ) ) = Σᵢ ( x_i s_i(µ)/µ − 1 − log( x_i s_i(µ)/µ ) ) + Σᵢ ( x_i(µ) s_i/µ − 1 − log( x_i(µ) s_i/µ ) ).

Since

log( x_i s_i/µ ) = log( x_i s_i / ( x_i(µ) s_i(µ) ) ) = log( x_i/x_i(µ) ) + log( s_i/s_i(µ) ) = log( x_i s_i(µ)/µ ) + log( x_i(µ) s_i/µ ),

the lemma holds if and only if

x^T s − nµ = ( x^T s(µ) − nµ ) + ( s^T x(µ) − nµ ).

Using that x(µ)s(µ) = µe, whence x(µ)^T s(µ) = nµ, this can be written as

( x − x(µ) )^T ( s − s(µ) ) = 0,

which holds if the vectors x − x(µ) and s − s(µ) are orthogonal. This is indeed the case, because x − x(µ) belongs to the null space and s − s(µ) to the row space of A. This proves the lemma.

Theorem A.2 Let δ(v) be as defined in (10) and ρ(δ) as in (41). Then Ψ(v) ≤ ψ( ρ(δ(v)) ).

Proof: The statement in the theorem is obvious if v = e, since then δ(v) = Ψ(v) = 0, and since ρ(0) = 1 and ψ(1) = 0. Otherwise we have δ(v) > 0 and Ψ(v) > 0. To deal with the nontrivial case we consider, for τ > 0, the problem

z_τ = max_v { Ψ(v) = Σᵢ ψ(v_i) : δ(v)² = (1/4) Σᵢ ψ'(v_i)² = τ² }.

The first-order optimality conditions are

ψ'(v_i) = λ ψ'(v_i) ψ''(v_i),  i = 1, ..., n,

where λ ∈ R. From this we conclude that we have either ψ'(v_i) = 0 or λψ''(v_i) = 1, for each i. Since ψ''(t) is monotonically decreasing, this implies that all v_i's for which λψ''(v_i) = 1 have the same value. Denoting this value as t, and observing that all other coordinates have value 1 (since ψ'(v_i) = 0 for these coordinates), we conclude that for some k, and after reordering the coordinates, v has the form

v = ( t, ..., t, 1, ..., 1 )  (k times t, n − k times 1).

Since ψ'(1) = 0, δ(v) = τ implies k ψ'(t)² = 4τ². Since ψ'(t) = t − 1/t, it follows that

t − 1/t = ± 2τ/√k,

which gives t = ρ(τ/√k) or t = 1/ρ(τ/√k), where ρ : R₊ → [1, ∞) is defined by

ρ(δ) := δ + √(1 + δ²).    (41)

The first value, which is greater than 1, gives the largest value of ψ(t). This follows because

ψ(t) − ψ(1/t) = (1/2)( t² − t⁻² ) − 2 log t ≥ 0,  t ≥ 1.

Since we are maximizing Ψ(v), we conclude that t = ρ(τ/√k), whence we have

Ψ(v) = k ψ( ρ(τ/√k) ).

The question remains which value of k maximizes Ψ(v). To investigate this we take the derivative of Ψ(v) with respect to k. To simplify the notation we write

Ψ(v) = k ψ(t),  t = ρ(s),  s = τ/√k.

The definition of t implies t = s + √(1 + s²). This gives (t − s)² = 1 + s², or t² − 1 = 2st, whence we have

s = (t² − 1)/(2t) = (1/2) ψ'(t).

Some straightforward computations now yield

dΨ(v)/dk = ψ(t) − s² ρ(s)/√(1 + s²) =: f(τ).

We consider this derivative as a function of τ, as indicated. One may easily verify that f(0) = 0. We proceed by computing the derivative with respect to τ. This gives

f'(τ) = − (1/√k) s² / ( (1 + s²) √(1 + s²) ).

This proves that f'(τ) ≤ 0. Since f(0) = 0, it follows that f(τ) ≤ 0, for each τ ≥ 0. Hence we conclude that Ψ(v) is decreasing in k. So Ψ(v) is maximal when k = 1, which gives the result in the theorem.

Corollary A.3 Let τ ≥ 0 and δ(v) ≤ τ. Then Ψ(v) ≤ τ̄, where

τ̄ := ψ( ρ(τ) ).    (42)

Proof: Since ρ(s) is monotonically increasing in s, and ρ(s) ≥ 1 for all s ≥ 0, and, moreover, ψ(t) is monotonically increasing for t ≥ 1, the function ψ(ρ(δ)) is increasing in δ, for δ ≥ 0. Thus the result is immediate from Theorem A.2.

Lemma A.4 Let δ(v) ≤ τ and let τ̄ be as defined in (42). Then

ψ( √( x_i/x_i(µ) ) ) ≤ τ̄,  ψ( √( s_i/s_i(µ) ) ) ≤ τ̄,  i = 1, ..., n.

Proof: By Lemma A.1 we have Φ(xs; µ) = Φ(xs(µ); µ) + Φ(x(µ)s; µ). Since Φ(xs; µ) is always nonnegative, also Φ(xs(µ); µ) and Φ(x(µ)s; µ) are nonnegative. By Corollary A.3 we have Φ(xs; µ) ≤ τ̄. Thus it follows that Φ(xs(µ); µ) ≤ τ̄ and Φ(x(µ)s; µ) ≤ τ̄. The first of these two inequalities gives

Φ(xs(µ); µ) = Σᵢ ψ( √( x_i s_i(µ)/µ ) ) ≤ τ̄.

Since ψ(t) ≥ 0, for every t > 0, it follows that

ψ( √( x_i s_i(µ)/µ ) ) ≤ τ̄,  i = 1, ..., n.

Due to x(µ)s(µ) = µe, we have

x_i s_i(µ)/µ = x_i s_i(µ) / ( x_i(µ) s_i(µ) ) = x_i/x_i(µ),

whence we obtain the first inequality in the lemma. The second inequality follows in the same way.

In the sequel we use the inverse function of ψ(t) for 0 < t ≤ 1, which is denoted as χ(s). So χ : [0, ∞) → (0, 1], and we have

χ(s) = t  ⟺  s = ψ(t),  s ≥ 0,  0 < t ≤ 1.    (43)

Lemma A.5 For each t > 0 one has

χ( ψ(t) ) ≤ t ≤ 1 + √( 2ψ(t) ).

Proof: Suppose ψ(t) = s ≥ 0. Since ψ(t) is convex, minimal at t = 1 with ψ(1) = 0, and since ψ(t) goes to infinity both if t goes to zero and if t goes to infinity, there exist precisely two values a and b such that ψ(a) = ψ(b) = s, with 0 < a ≤ 1 ≤ b. Since a ≤ 1, we have a = χ(s) = χ(ψ(t)) ≤ t, proving the left inequality in the lemma. For b we may write

s = (1/2)( b² − 1 ) − log b ≥ (1/2)( b² − 1 ) − (b − 1) = (1/2)( b − 1 )²,

which implies b ≤ 1 + √(2s), proving the right inequality.

Corollary A.6 If δ(v) ≤ τ then

χ(τ̄) ≤ √( x_i/x_i(µ) ) ≤ 1 + √(2τ̄),  χ(τ̄) ≤ √( s_i/s_i(µ) ) ≤ 1 + √(2τ̄).

Proof: This is immediate from Lemma A.5 and Lemma A.4.

Theorem A.7 If δ(v) ≤ τ then

x/s ≤ ( (1 + √(2τ̄)) / χ(τ̄) )² x(µ)²/µ,  s/x ≤ ( (1 + √(2τ̄)) / χ(τ̄) )² s(µ)²/µ.

Proof: Using that x_i(µ) s_i(µ) = µ, for each i, Corollary A.6 implies

x_i/s_i = ( x_i/x_i(µ) ) ( s_i(µ)/s_i ) ( x_i(µ)/s_i(µ) ) ≤ ( (1 + √(2τ̄))² / χ(τ̄)² ) x_i(µ)/s_i(µ) = ( (1 + √(2τ̄)) / χ(τ̄) )² x_i(µ)²/µ,

which implies the first inequality. The second inequality is obtained in the same way.
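A quick numerical spot-check of Lemma A.5 (our sketch, not part of the paper): for sample values of t, the upper bound t ≤ 1 + √(2ψ(t)) indeed holds.

import math

def psi(t):
    return 0.5 * (t * t - 1.0) - math.log(t)

for t in (0.25, 0.5, 0.9, 1.0, 1.5, 3.0):
    assert t <= 1.0 + math.sqrt(2.0 * psi(t)) + 1e-12
print("upper bound of Lemma A.5 verified on the sample points")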


Interior Point Methods in Mathematical Programming Interior Point Methods in Mathematical Programming Clóvis C. Gonzaga Federal University of Santa Catarina, Brazil Journées en l honneur de Pierre Huard Paris, novembre 2008 01 00 11 00 000 000 000 000

More information

א K ٢٠٠٢ א א א א א א E٤

א K ٢٠٠٢ א א א א א א E٤ المراجع References المراجع العربية K ١٩٩٠ א א א א א E١ K ١٩٩٨ א א א E٢ א א א א א E٣ א K ٢٠٠٢ א א א א א א E٤ K ٢٠٠١ K ١٩٨٠ א א א א א E٥ المراجع الا جنبية [AW] [AF] [Alh] [Ali1] [Ali2] S. Al-Homidan and

More information

Self-Concordant Barrier Functions for Convex Optimization

Self-Concordant Barrier Functions for Convex Optimization Appendix F Self-Concordant Barrier Functions for Convex Optimization F.1 Introduction In this Appendix we present a framework for developing polynomial-time algorithms for the solution of convex optimization

More information

Primal-dual IPM with Asymmetric Barrier

Primal-dual IPM with Asymmetric Barrier Primal-dual IPM with Asymmetric Barrier Yurii Nesterov, CORE/INMA (UCL) September 29, 2008 (IFOR, ETHZ) Yu. Nesterov Primal-dual IPM with Asymmetric Barrier 1/28 Outline 1 Symmetric and asymmetric barriers

More information

A New Self-Dual Embedding Method for Convex Programming

A New Self-Dual Embedding Method for Convex Programming A New Self-Dual Embedding Method for Convex Programming Shuzhong Zhang October 2001; revised October 2002 Abstract In this paper we introduce a conic optimization formulation to solve constrained convex

More information

Barrier Method. Javier Peña Convex Optimization /36-725

Barrier Method. Javier Peña Convex Optimization /36-725 Barrier Method Javier Peña Convex Optimization 10-725/36-725 1 Last time: Newton s method For root-finding F (x) = 0 x + = x F (x) 1 F (x) For optimization x f(x) x + = x 2 f(x) 1 f(x) Assume f strongly

More information

Lecture 5. The Dual Cone and Dual Problem

Lecture 5. The Dual Cone and Dual Problem IE 8534 1 Lecture 5. The Dual Cone and Dual Problem IE 8534 2 For a convex cone K, its dual cone is defined as K = {y x, y 0, x K}. The inner-product can be replaced by x T y if the coordinates of the

More information

Predictor-corrector methods for sufficient linear complementarity problems in a wide neighborhood of the central path

Predictor-corrector methods for sufficient linear complementarity problems in a wide neighborhood of the central path Copyright information to be inserted by the Publishers Predictor-corrector methods for sufficient linear complementarity problems in a wide neighborhood of the central path Florian A. Potra and Xing Liu

More information

Lecture 17: Primal-dual interior-point methods part II

Lecture 17: Primal-dual interior-point methods part II 10-725/36-725: Convex Optimization Spring 2015 Lecture 17: Primal-dual interior-point methods part II Lecturer: Javier Pena Scribes: Pinchao Zhang, Wei Ma Note: LaTeX template courtesy of UC Berkeley EECS

More information

McMaster University. Advanced Optimization Laboratory. Title: Computational Experience with Self-Regular Based Interior Point Methods

McMaster University. Advanced Optimization Laboratory. Title: Computational Experience with Self-Regular Based Interior Point Methods McMaster University Advanced Optimization Laboratory Title: Computational Experience with Self-Regular Based Interior Point Methods Authors: Guoqing Zhang, Jiming Peng, Tamás Terlaky, Lois Zhu AdvOl-Report

More information

18. Primal-dual interior-point methods

18. Primal-dual interior-point methods L. Vandenberghe EE236C (Spring 213-14) 18. Primal-dual interior-point methods primal-dual central path equations infeasible primal-dual method primal-dual method for self-dual embedding 18-1 Symmetric

More information

IMPLEMENTATION OF INTERIOR POINT METHODS

IMPLEMENTATION OF INTERIOR POINT METHODS IMPLEMENTATION OF INTERIOR POINT METHODS IMPLEMENTATION OF INTERIOR POINT METHODS FOR SECOND ORDER CONIC OPTIMIZATION By Bixiang Wang, Ph.D. A Thesis Submitted to the School of Graduate Studies in Partial

More information

An O(nL) Infeasible-Interior-Point Algorithm for Linear Programming arxiv: v2 [math.oc] 29 Jun 2015

An O(nL) Infeasible-Interior-Point Algorithm for Linear Programming arxiv: v2 [math.oc] 29 Jun 2015 An O(nL) Infeasible-Interior-Point Algorithm for Linear Programming arxiv:1506.06365v [math.oc] 9 Jun 015 Yuagang Yang and Makoto Yamashita September 8, 018 Abstract In this paper, we propose an arc-search

More information

A Redundant Klee-Minty Construction with All the Redundant Constraints Touching the Feasible Region

A Redundant Klee-Minty Construction with All the Redundant Constraints Touching the Feasible Region A Redundant Klee-Minty Construction with All the Redundant Constraints Touching the Feasible Region Eissa Nematollahi Tamás Terlaky January 5, 2008 Abstract By introducing some redundant Klee-Minty constructions,

More information

Interior Point Methods. We ll discuss linear programming first, followed by three nonlinear problems. Algorithms for Linear Programming Problems

Interior Point Methods. We ll discuss linear programming first, followed by three nonlinear problems. Algorithms for Linear Programming Problems AMSC 607 / CMSC 764 Advanced Numerical Optimization Fall 2008 UNIT 3: Constrained Optimization PART 4: Introduction to Interior Point Methods Dianne P. O Leary c 2008 Interior Point Methods We ll discuss

More information

A Simpler and Tighter Redundant Klee-Minty Construction

A Simpler and Tighter Redundant Klee-Minty Construction A Simpler and Tighter Redundant Klee-Minty Construction Eissa Nematollahi Tamás Terlaky October 19, 2006 Abstract By introducing redundant Klee-Minty examples, we have previously shown that the central

More information

Full-Newton-Step Interior-Point Method for the Linear Complementarity Problems

Full-Newton-Step Interior-Point Method for the Linear Complementarity Problems Georgia Southern University Digital Commons@Georgia Southern Electronic Theses and Dissertations Graduate Studies, Jack N. Averitt College of Summer 2011 Full-Newton-Step Interior-Point Method for the

More information

An EP theorem for dual linear complementarity problems

An EP theorem for dual linear complementarity problems An EP theorem for dual linear complementarity problems Tibor Illés, Marianna Nagy and Tamás Terlaky Abstract The linear complementarity problem (LCP ) belongs to the class of NP-complete problems. Therefore

More information

10 Numerical methods for constrained problems

10 Numerical methods for constrained problems 10 Numerical methods for constrained problems min s.t. f(x) h(x) = 0 (l), g(x) 0 (m), x X The algorithms can be roughly divided the following way: ˆ primal methods: find descent direction keeping inside

More information

DEPARTMENT OF MATHEMATICS

DEPARTMENT OF MATHEMATICS A ISRN KTH/OPT SYST/FR 02/12 SE Coden: TRITA/MAT-02-OS12 ISSN 1401-2294 Characterization of the limit point of the central path in semidefinite programming by Göran Sporre and Anders Forsgren Optimization

More information

On implementing a primal-dual interior-point method for conic quadratic optimization

On implementing a primal-dual interior-point method for conic quadratic optimization On implementing a primal-dual interior-point method for conic quadratic optimization E. D. Andersen, C. Roos, and T. Terlaky December 18, 2000 Abstract Conic quadratic optimization is the problem of minimizing

More information

1. Introduction and background. Consider the primal-dual linear programs (LPs)

1. Introduction and background. Consider the primal-dual linear programs (LPs) SIAM J. OPIM. Vol. 9, No. 1, pp. 207 216 c 1998 Society for Industrial and Applied Mathematics ON HE DIMENSION OF HE SE OF RIM PERURBAIONS FOR OPIMAL PARIION INVARIANCE HARVEY J. REENBER, ALLEN. HOLDER,

More information

Second-order cone programming

Second-order cone programming Outline Second-order cone programming, PhD Lehigh University Department of Industrial and Systems Engineering February 10, 2009 Outline 1 Basic properties Spectral decomposition The cone of squares The

More information

Interior Point Methods for Mathematical Programming

Interior Point Methods for Mathematical Programming Interior Point Methods for Mathematical Programming Clóvis C. Gonzaga Federal University of Santa Catarina, Florianópolis, Brazil EURO - 2013 Roma Our heroes Cauchy Newton Lagrange Early results Unconstrained

More information

Primal-Dual Interior-Point Methods for Linear Programming based on Newton s Method

Primal-Dual Interior-Point Methods for Linear Programming based on Newton s Method Primal-Dual Interior-Point Methods for Linear Programming based on Newton s Method Robert M. Freund March, 2004 2004 Massachusetts Institute of Technology. The Problem The logarithmic barrier approach

More information

AN INTERIOR POINT METHOD, BASED ON RANK-ONE UPDATES, Jos F. Sturm 1 and Shuzhong Zhang 2. Erasmus University Rotterdam ABSTRACT

AN INTERIOR POINT METHOD, BASED ON RANK-ONE UPDATES, Jos F. Sturm 1 and Shuzhong Zhang 2. Erasmus University Rotterdam ABSTRACT October 13, 1995. Revised November 1996. AN INTERIOR POINT METHOD, BASED ON RANK-ONE UPDATES, FOR LINEAR PROGRAMMING Jos F. Sturm 1 Shuzhong Zhang Report 9546/A, Econometric Institute Erasmus University

More information

A Polynomial Column-wise Rescaling von Neumann Algorithm

A Polynomial Column-wise Rescaling von Neumann Algorithm A Polynomial Column-wise Rescaling von Neumann Algorithm Dan Li Department of Industrial and Systems Engineering, Lehigh University, USA Cornelis Roos Department of Information Systems and Algorithms,

More information

PRIMAL-DUAL ALGORITHMS FOR SEMIDEFINIT OPTIMIZATION PROBLEMS BASED ON GENERALIZED TRIGONOMETRIC BARRIER FUNCTION

PRIMAL-DUAL ALGORITHMS FOR SEMIDEFINIT OPTIMIZATION PROBLEMS BASED ON GENERALIZED TRIGONOMETRIC BARRIER FUNCTION International Journal of Pure and Applied Mathematics Volume 4 No. 4 07, 797-88 ISSN: 3-8080 printed version); ISSN: 34-3395 on-line version) url: http://www.ijpam.eu doi: 0.73/ijpam.v4i4.0 PAijpam.eu

More information

Advances in Convex Optimization: Theory, Algorithms, and Applications

Advances in Convex Optimization: Theory, Algorithms, and Applications Advances in Convex Optimization: Theory, Algorithms, and Applications Stephen Boyd Electrical Engineering Department Stanford University (joint work with Lieven Vandenberghe, UCLA) ISIT 02 ISIT 02 Lausanne

More information

POLYNOMIAL OPTIMIZATION WITH SUMS-OF-SQUARES INTERPOLANTS

POLYNOMIAL OPTIMIZATION WITH SUMS-OF-SQUARES INTERPOLANTS POLYNOMIAL OPTIMIZATION WITH SUMS-OF-SQUARES INTERPOLANTS Sercan Yıldız syildiz@samsi.info in collaboration with Dávid Papp (NCSU) OPT Transition Workshop May 02, 2017 OUTLINE Polynomial optimization and

More information

Corrector-predictor methods for monotone linear complementarity problems in a wide neighborhood of the central path

Corrector-predictor methods for monotone linear complementarity problems in a wide neighborhood of the central path Mathematical Programming manuscript No. will be inserted by the editor) Florian A. Potra Corrector-predictor methods for monotone linear complementarity problems in a wide neighborhood of the central path

More information

Nonsymmetric potential-reduction methods for general cones

Nonsymmetric potential-reduction methods for general cones CORE DISCUSSION PAPER 2006/34 Nonsymmetric potential-reduction methods for general cones Yu. Nesterov March 28, 2006 Abstract In this paper we propose two new nonsymmetric primal-dual potential-reduction

More information

Lecture 1. 1 Conic programming. MA 796S: Convex Optimization and Interior Point Methods October 8, Consider the conic program. min.

Lecture 1. 1 Conic programming. MA 796S: Convex Optimization and Interior Point Methods October 8, Consider the conic program. min. MA 796S: Convex Optimization and Interior Point Methods October 8, 2007 Lecture 1 Lecturer: Kartik Sivaramakrishnan Scribe: Kartik Sivaramakrishnan 1 Conic programming Consider the conic program min s.t.

More information

Infeasible Full-Newton-Step Interior-Point Method for the Linear Complementarity Problems

Infeasible Full-Newton-Step Interior-Point Method for the Linear Complementarity Problems Georgia Southern University Digital Commons@Georgia Southern Electronic Theses & Dissertations Graduate Studies, Jack N. Averitt College of Fall 2012 Infeasible Full-Newton-Step Interior-Point Method for

More information

ON IMPLEMENTATION OF A SELF-DUAL EMBEDDING METHOD FOR CONVEX PROGRAMMING

ON IMPLEMENTATION OF A SELF-DUAL EMBEDDING METHOD FOR CONVEX PROGRAMMING ON IMPLEMENTATION OF A SELF-DUAL EMBEDDING METHOD FOR CONVEX PROGRAMMING JOHNNY TAK WAI CHENG and SHUZHONG ZHANG Department of Systems Engineering and Engineering Management, The Chinese University of

More information

Limiting behavior of the central path in semidefinite optimization

Limiting behavior of the central path in semidefinite optimization Limiting behavior of the central path in semidefinite optimization M. Halická E. de Klerk C. Roos June 11, 2002 Abstract It was recently shown in [4] that, unlike in linear optimization, the central path

More information

A Second-Order Path-Following Algorithm for Unconstrained Convex Optimization

A Second-Order Path-Following Algorithm for Unconstrained Convex Optimization A Second-Order Path-Following Algorithm for Unconstrained Convex Optimization Yinyu Ye Department is Management Science & Engineering and Institute of Computational & Mathematical Engineering Stanford

More information

Primal-dual relationship between Levenberg-Marquardt and central trajectories for linearly constrained convex optimization

Primal-dual relationship between Levenberg-Marquardt and central trajectories for linearly constrained convex optimization Primal-dual relationship between Levenberg-Marquardt and central trajectories for linearly constrained convex optimization Roger Behling a, Clovis Gonzaga b and Gabriel Haeser c March 21, 2013 a Department

More information

Interior Point Methods for Linear Programming: Motivation & Theory

Interior Point Methods for Linear Programming: Motivation & Theory School of Mathematics T H E U N I V E R S I T Y O H F E D I N B U R G Interior Point Methods for Linear Programming: Motivation & Theory Jacek Gondzio Email: J.Gondzio@ed.ac.uk URL: http://www.maths.ed.ac.uk/~gondzio

More information

4TE3/6TE3. Algorithms for. Continuous Optimization

4TE3/6TE3. Algorithms for. Continuous Optimization 4TE3/6TE3 Algorithms for Continuous Optimization (Algorithms for Constrained Nonlinear Optimization Problems) Tamás TERLAKY Computing and Software McMaster University Hamilton, November 2005 terlaky@mcmaster.ca

More information

Convergence Analysis of Inexact Infeasible Interior Point Method. for Linear Optimization

Convergence Analysis of Inexact Infeasible Interior Point Method. for Linear Optimization Convergence Analysis of Inexact Infeasible Interior Point Method for Linear Optimization Ghussoun Al-Jeiroudi Jacek Gondzio School of Mathematics The University of Edinburgh Mayfield Road, Edinburgh EH9

More information

Optimization: Then and Now

Optimization: Then and Now Optimization: Then and Now Optimization: Then and Now Optimization: Then and Now Why would a dynamicist be interested in linear programming? Linear Programming (LP) max c T x s.t. Ax b αi T x b i for i

More information

On Superlinear Convergence of Infeasible Interior-Point Algorithms for Linearly Constrained Convex Programs *

On Superlinear Convergence of Infeasible Interior-Point Algorithms for Linearly Constrained Convex Programs * Computational Optimization and Applications, 8, 245 262 (1997) c 1997 Kluwer Academic Publishers. Manufactured in The Netherlands. On Superlinear Convergence of Infeasible Interior-Point Algorithms for

More information

On Two Measures of Problem Instance Complexity and their Correlation with the Performance of SeDuMi on Second-Order Cone Problems

On Two Measures of Problem Instance Complexity and their Correlation with the Performance of SeDuMi on Second-Order Cone Problems 2016 Springer International Publishing AG. Part of Springer Nature. http://dx.doi.org/10.1007/s10589-005-3911-0 On Two Measures of Problem Instance Complexity and their Correlation with the Performance

More information

Lecture 15 Newton Method and Self-Concordance. October 23, 2008

Lecture 15 Newton Method and Self-Concordance. October 23, 2008 Newton Method and Self-Concordance October 23, 2008 Outline Lecture 15 Self-concordance Notion Self-concordant Functions Operations Preserving Self-concordance Properties of Self-concordant Functions Implications

More information

1 Introduction Semidenite programming (SDP) has been an active research area following the seminal work of Nesterov and Nemirovski [9] see also Alizad

1 Introduction Semidenite programming (SDP) has been an active research area following the seminal work of Nesterov and Nemirovski [9] see also Alizad Quadratic Maximization and Semidenite Relaxation Shuzhong Zhang Econometric Institute Erasmus University P.O. Box 1738 3000 DR Rotterdam The Netherlands email: zhang@few.eur.nl fax: +31-10-408916 August,

More information

LP. Kap. 17: Interior-point methods

LP. Kap. 17: Interior-point methods LP. Kap. 17: Interior-point methods the simplex algorithm moves along the boundary of the polyhedron P of feasible solutions an alternative is interior-point methods they find a path in the interior of

More information

Penalty and Barrier Methods General classical constrained minimization problem minimize f(x) subject to g(x) 0 h(x) =0 Penalty methods are motivated by the desire to use unconstrained optimization techniques

More information

CONVERGENCE OF A SHORT-STEP PRIMAL-DUAL ALGORITHM BASED ON THE GAUSS-NEWTON DIRECTION

CONVERGENCE OF A SHORT-STEP PRIMAL-DUAL ALGORITHM BASED ON THE GAUSS-NEWTON DIRECTION CONVERGENCE OF A SHORT-STEP PRIMAL-DUAL ALGORITHM BASED ON THE GAUSS-NEWTON DIRECTION SERGE KRUK AND HENRY WOLKOWICZ Received 3 January 3 and in revised form 9 April 3 We prove the theoretical convergence

More information

A sensitivity result for quadratic semidefinite programs with an application to a sequential quadratic semidefinite programming algorithm

A sensitivity result for quadratic semidefinite programs with an application to a sequential quadratic semidefinite programming algorithm Volume 31, N. 1, pp. 205 218, 2012 Copyright 2012 SBMAC ISSN 0101-8205 / ISSN 1807-0302 (Online) www.scielo.br/cam A sensitivity result for quadratic semidefinite programs with an application to a sequential

More information

Largest dual ellipsoids inscribed in dual cones

Largest dual ellipsoids inscribed in dual cones Largest dual ellipsoids inscribed in dual cones M. J. Todd June 23, 2005 Abstract Suppose x and s lie in the interiors of a cone K and its dual K respectively. We seek dual ellipsoidal norms such that

More information

Interior-Point Methods for Linear Optimization

Interior-Point Methods for Linear Optimization Interior-Point Methods for Linear Optimization Robert M. Freund and Jorge Vera March, 204 c 204 Robert M. Freund and Jorge Vera. All rights reserved. Linear Optimization with a Logarithmic Barrier Function

More information

Selected Examples of CONIC DUALITY AT WORK Robust Linear Optimization Synthesis of Linear Controllers Matrix Cube Theorem A.

Selected Examples of CONIC DUALITY AT WORK Robust Linear Optimization Synthesis of Linear Controllers Matrix Cube Theorem A. . Selected Examples of CONIC DUALITY AT WORK Robust Linear Optimization Synthesis of Linear Controllers Matrix Cube Theorem A. Nemirovski Arkadi.Nemirovski@isye.gatech.edu Linear Optimization Problem,

More information

A primal-simplex based Tardos algorithm

A primal-simplex based Tardos algorithm A primal-simplex based Tardos algorithm Shinji Mizuno a, Noriyoshi Sukegawa a, and Antoine Deza b a Graduate School of Decision Science and Technology, Tokyo Institute of Technology, 2-12-1-W9-58, Oo-Okayama,

More information

Interior Point Methods: Second-Order Cone Programming and Semidefinite Programming

Interior Point Methods: Second-Order Cone Programming and Semidefinite Programming School of Mathematics T H E U N I V E R S I T Y O H F E D I N B U R G Interior Point Methods: Second-Order Cone Programming and Semidefinite Programming Jacek Gondzio Email: J.Gondzio@ed.ac.uk URL: http://www.maths.ed.ac.uk/~gondzio

More information

Detecting Infeasibility in Infeasible-Interior-Point Methods for Optimization

Detecting Infeasibility in Infeasible-Interior-Point Methods for Optimization Detecting Infeasibility in Infeasible-Interior-Point Methods for Optimization M. J. Todd January 16, 2003 Abstract We study interior-point methods for optimization problems in the case of infeasibility

More information

Finding a point in the relative interior of a polyhedron

Finding a point in the relative interior of a polyhedron Report no. NA-07/01 Finding a point in the relative interior of a polyhedron Coralia Cartis Rutherford Appleton Laboratory, Numerical Analysis Group Nicholas I. M. Gould Oxford University, Numerical Analysis

More information

A strongly polynomial algorithm for linear systems having a binary solution

A strongly polynomial algorithm for linear systems having a binary solution A strongly polynomial algorithm for linear systems having a binary solution Sergei Chubanov Institute of Information Systems at the University of Siegen, Germany e-mail: sergei.chubanov@uni-siegen.de 7th

More information

Midterm Review. Yinyu Ye Department of Management Science and Engineering Stanford University Stanford, CA 94305, U.S.A.

Midterm Review. Yinyu Ye Department of Management Science and Engineering Stanford University Stanford, CA 94305, U.S.A. Midterm Review Yinyu Ye Department of Management Science and Engineering Stanford University Stanford, CA 94305, U.S.A. http://www.stanford.edu/ yyye (LY, Chapter 1-4, Appendices) 1 Separating hyperplane

More information