A full-Newton step infeasible interior-point algorithm for linear programming based on a kernel function


A full-Newton step infeasible interior-point algorithm for linear programming based on a kernel function

Zhongyi Liu, Wenyu Sun

Abstract. This paper proposes an infeasible interior-point algorithm with full-Newton steps for linear programming, extending the work of Roos (SIAM J. Optim., 16(4), 2006). We introduce a kernel function into the algorithm. For $p \in [0, 1)$ the polynomial complexity can be proved, and the result coincides with the best known result for infeasible interior-point methods, namely $O\left(n \log \frac{n}{\varepsilon}\right)$.

Keywords: linear programming, infeasible interior-point method, full-Newton step, polynomial complexity, kernel function

AMS subject classification: 65K05, 90C51

1 Introduction

For a comprehensive treatment of interior-point methods (IPMs), we refer to Roos et al. [7]. In Roos [8], a full-Newton step infeasible interior-point algorithm for linear programming (LP) was presented, and it was also proved that the complexity of the algorithm coincides with the best known iteration bound for infeasible

College of Science, Hohai University, Nanjing, China. zhyi@hhu.edu.cn
School of Mathematics and Computer Science, Nanjing Normal University, Nanjing, China. wysun@njnu.edu.cn

IPMs. Some extensions, still based on LP, were carried out by Liu and Sun [2] and Mansouri and Roos [4]. Recently Peng et al. [6] introduced a new class of primal-dual IPMs based on self-regular proximities. These methods do not use the classical Newton direction. Instead they use a direction that can be characterized as a steepest descent direction (in a scaled space) for a so-called self-regular barrier function. Each such barrier function is determined by a simple univariate self-regular function, called its kernel function. In this paper we introduce a kernel function into the algorithm, and the polynomial complexity can be proved in the case $p \in [0, 1)$.

We are concerned with the LP problem in the following standard form:
$$(P)\qquad \min\ c^T x\quad \text{s.t.}\ Ax = b,\ x \ge 0,$$
and its associated dual problem:
$$(D)\qquad \max\ b^T y\quad \text{s.t.}\ A^T y + s = c,\ s \ge 0,$$
where $c, x, s \in \mathbb{R}^n$, $b, y \in \mathbb{R}^m$, and $A \in \mathbb{R}^{m \times n}$ has full row rank.

As usual for infeasible IPMs, we assume that the initial iterates $(x^0, y^0, s^0)$ are as follows:
$$x^0 = s^0 = \zeta e,\qquad y^0 = 0,\qquad \mu^0 = \zeta^2,$$
where $e$ is the all-one vector of length $n$, $\mu^0$ is the initial (normalized) duality gap, and $\zeta > 0$ is such that
$$\|x^* + s^*\|_\infty \le \zeta$$
for some optimal solution $(x^*, y^*, s^*)$ of $(P)$ and $(D)$. It is not trivial to find such an initial feasible interior point. One method to overcome this difficulty is the homogeneous self-dual embedding model, which introduces artificial variables. The embedding technique was first presented by Ye et al. [9] and is described in detail in Part I of Roos et al. [7].

It is generally agreed that the total number of inner iterations required by an algorithm is an appropriate measure of its efficiency, and this number is referred to as the iteration complexity of the algorithm. Using $(x^0)^T s^0 = n\zeta^2$, the total number of iterations in the algorithm of Roos [8] is bounded above by
$$24n \log \frac{\max\{n\zeta^2, \|r_b^0\|, \|r_c^0\|\}}{\varepsilon}, \tag{1.1}$$

where $r_b^0$ and $r_c^0$ are the initial residual vectors:
$$r_b^0 = b - Ax^0,\qquad r_c^0 = c - A^T y^0 - s^0.$$
Up to a constant factor, the iteration bound (1.1) was first obtained by Mizuno [5], and it is the best known iteration bound for infeasible IPMs.

Now we recall the main ideas underlying the algorithm in Roos [8]. For any $\nu$ with $0 < \nu \le 1$ we consider the perturbed problem $(P_\nu)$, defined by
$$(P_\nu)\qquad \min\{(c - \nu r_c^0)^T x : Ax = b - \nu r_b^0,\ x \ge 0\},$$
and its dual problem $(D_\nu)$, which is given by
$$(D_\nu)\qquad \max\{(b - \nu r_b^0)^T y : A^T y + s = c - \nu r_c^0,\ s \ge 0\}.$$
Note that if $\nu = 1$ then $x = x^0$ yields a strictly feasible solution of $(P_\nu)$, and $(y, s) = (y^0, s^0)$ a strictly feasible solution of $(D_\nu)$. Due to the choice of the initial iterates we may conclude that if $\nu = 1$ then $(P_\nu)$ and $(D_\nu)$ each have a strictly feasible solution, which means that both perturbed problems then satisfy the well-known interior-point condition (IPC).

Lemma 1.1. ([8, Lemma 1.1]) The perturbed problems $(P_\nu)$ and $(D_\nu)$ satisfy the IPC for each $\nu \in (0, 1]$ if and only if the original problems $(P)$ and $(D)$ are feasible.

Assuming that $(P)$ and $(D)$ are feasible, it follows from Lemma 1.1 that the problems $(P_\nu)$ and $(D_\nu)$ satisfy the IPC for each $\nu \in (0, 1]$, and hence their central paths exist. This means that the system
$$b - Ax = \nu r_b^0,\quad x \ge 0,\qquad c - A^T y - s = \nu r_c^0,\quad s \ge 0,\qquad xs = \mu e$$
has a unique solution for every $\mu > 0$, where $xs$ denotes the Hadamard (componentwise) product of the vectors $x$ and $s$. If $\nu \in (0, 1]$ and $\mu = \nu\zeta^2$, we denote this unique solution in the sequel as $(x(\nu), y(\nu), s(\nu))$. As a consequence, $x(\nu)$ is the $\mu$-center of $(P_\nu)$ and $(y(\nu), s(\nu))$ the $\mu$-center of $(D_\nu)$. With this notation we have, by taking $\nu = 1$,
$$(x(1), y(1), s(1)) = (x^0, y^0, s^0) = (\zeta e, 0, \zeta e).$$
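To make the setup concrete, here is a minimal NumPy sketch, not taken from the paper, of the initial iterates and the perturbed-problem data; all function and variable names (`init_iterates`, `perturbed_data`, and so on) are our own choices.

```python
import numpy as np

def init_iterates(n, m, zeta):
    """Initial iterates x0 = s0 = zeta*e, y0 = 0, mu0 = zeta^2."""
    return zeta * np.ones(n), np.zeros(m), zeta * np.ones(n), zeta**2

def perturbed_data(A, b, c, x0, y0, s0, nu):
    """Right-hand sides of (P_nu) and (D_nu):
    Ax = b - nu*rb0 and A^T y + s = c - nu*rc0."""
    rb0 = b - A @ x0             # initial primal residual r_b^0
    rc0 = c - A.T @ y0 - s0      # initial dual residual r_c^0
    return b - nu * rb0, c - nu * rc0

# For nu = 1 the initial triple (x0, y0, s0) is strictly feasible for
# (P_1) and (D_1) by construction, which is the observation made above.
```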

We measure the proximity of iterates $(x, y, s)$ to the $\mu$-center of the perturbed problems $(P_\nu)$ and $(D_\nu)$ by the quantity $\delta(x, s; \mu)$, defined as follows:
$$\delta(x, s; \mu) := \delta(v) := \frac{1}{2}\left\|v - v^{-1}\right\|,\qquad \text{where } v := \sqrt{\frac{xs}{\mu}}. \tag{1.2}$$
Initially we have $x = s = \zeta e$ and $\mu = \zeta^2$, whence $v = e$ and $\delta(x, s; \mu) = 0$. In the sequel we assume that at the start of each iteration $\delta(x, s; \mu)$ is smaller than or equal to a (small) threshold value $\tau > 0$. This is certainly true at the start of the first iteration.

For the feasibility step, Roos [8] uses search directions $\Delta^f x$, $\Delta^f y$ and $\Delta^f s$ that are (uniquely) defined by the system
$$A\Delta^f x = \theta\nu r_b^0, \tag{1.3}$$
$$A^T\Delta^f y + \Delta^f s = \theta\nu r_c^0, \tag{1.4}$$
$$s\Delta^f x + x\Delta^f s = \mu e - xs. \tag{1.5}$$

Now we describe one main iteration of the algorithm in Roos [8]. Suppose that for some $\nu \in (0, 1]$ we have $x$, $y$ and $s$ satisfying the feasibility conditions of $(P_\nu)$ and $(D_\nu)$, and such that $x^T s = n\mu$ and $\delta(x, s; \mu) \le \tau$, where $\mu = \nu\zeta^2$. Each main iteration consists of a so-called feasibility step, a $\mu$-update, and a few centering steps. First, we find new iterates $x^f$, $y^f$ and $s^f$ that satisfy the feasibility conditions with $\nu$ replaced by $\nu^+ = (1-\theta)\nu$, where $\theta \in (0, 1)$. As we will see, by taking $\theta$ small enough this can be realized by one feasibility step, to be described below. So, as a result of the feasibility step, we obtain iterates that are feasible for $(P_{\nu^+})$ and $(D_{\nu^+})$. Then we reduce $\mu$ to $\mu^+ = (1-\theta)\mu$ and apply a limited number of centering steps with respect to the $\mu^+$-centers of $(P_{\nu^+})$ and $(D_{\nu^+})$. The centering steps keep the iterates feasible for $(P_{\nu^+})$ and $(D_{\nu^+})$; their purpose is to produce iterates $x^+$, $y^+$ and $s^+$ such that $(x^+)^T s^+ = n\mu^+$, where $\mu^+ = \nu^+\zeta^2$, and $\delta(x^+, s^+; \mu^+) \le \tau$. This process is repeated until the duality gap and the norms of the residual vectors are less than some prescribed accuracy parameter $\varepsilon$. A more formal description of the algorithm is given in Figure 1.
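Before turning to Figure 1, note that the proximity (1.2) and the linear system (1.3)-(1.5) translate directly into code. The following sketch is ours, with self-chosen names: it eliminates $\Delta^f s$ and $\Delta^f x$ from the system and solves the usual normal equations for $\Delta^f y$.

```python
import numpy as np

def delta(x, s, mu):
    """Proximity measure (1.2): (1/2)*||v - v^{-1}|| with v = sqrt(xs/mu)."""
    v = np.sqrt(x * s / mu)
    return 0.5 * np.linalg.norm(v - 1.0 / v)

def feasibility_step(A, x, y, s, mu, rb0, rc0, theta, nu):
    """Solve (1.3)-(1.5): A dx = theta*nu*rb0, A^T dy + ds = theta*nu*rc0,
    s*dx + x*ds = mu*e - x*s."""
    rb_t, rc_t = theta * nu * rb0, theta * nu * rc0
    r3 = mu - x * s                  # right-hand side of (1.5), mu*e - xs
    d = x / s                        # scaling vector, D = diag(d)
    # Eliminating ds and then dx yields the normal equations A D A^T dy = ...
    dy = np.linalg.solve((A * d) @ A.T, rb_t + (A * d) @ (rc_t - r3 / x))
    dx = d * (A.T @ dy - rc_t + r3 / x)
    ds = (r3 - s * dx) / x
    return dx, dy, ds
```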

Primal-Dual Infeasible IPM

Input:
  accuracy parameter $\varepsilon > 0$;
  barrier update parameter $\theta$, $0 < \theta < 1$;
  threshold parameter $\tau > 0$.
begin
  $x := \zeta e$; $y := 0$; $s := \zeta e$; $\nu := 1$; $\mu := \zeta^2$;
  while $\max\{x^T s, \|b - Ax\|, \|c - A^T y - s\|\} \ge \varepsilon$ do
  begin
    feasibility step:
      $(x, y, s) := (x, y, s) + (\Delta^f x, \Delta^f y, \Delta^f s)$;
    $\mu$-update:
      $\mu := (1 - \theta)\mu$;
    centering steps:
      while $\delta(x, s; \mu) \ge \tau$ do
        $(x, y, s) := (x, y, s) + (\Delta x, \Delta y, \Delta s)$;
      end while
  end
end

Figure 1: Algorithm

It can easily be understood that if $(x, y, s)$ is feasible for the perturbed problems $(P_\nu)$ and $(D_\nu)$, then after the feasibility step the iterates satisfy the feasibility conditions for $(P_{\nu^+})$ and $(D_{\nu^+})$, provided that they satisfy the nonnegativity conditions. Assuming that $\delta(x, s; \mu) \le \tau$ holds before the step, and taking $\theta$ small enough, it can be guaranteed that after the step the iterates
$$x^f = x + \Delta^f x,\qquad y^f = y + \Delta^f y,\qquad s^f = s + \Delta^f s$$
are nonnegative and, moreover, $\delta(x^f, s^f; \mu^+) \le 1/\sqrt{2}$, where $\mu^+ = (1-\theta)\mu$. So, after the $\mu$-update the iterates are feasible for $(P_{\nu^+})$ and $(D_{\nu^+})$, and $\mu$ is such that $\delta(x^f, s^f; \mu) \le 1/\sqrt{2}$.
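As a sketch, the algorithm of Figure 1 transcribes directly into code. The loop below is our own transcription, reusing `delta` and `feasibility_step` from the sketch above; the centering step is obtained by calling `feasibility_step` with $\theta = 0$, which makes the residual terms vanish and reduces (1.3)-(1.5) to the usual Newton system. Full steps are taken throughout, with no line search.

```python
import numpy as np

def infeasible_ipm(A, b, c, zeta, eps, theta, tau):
    """Transcription of the algorithm in Figure 1 (full-Newton steps only)."""
    m, n = A.shape
    x, y, s = zeta * np.ones(n), np.zeros(m), zeta * np.ones(n)
    mu, nu = zeta**2, 1.0
    rb0, rc0 = b - A @ x, c - A.T @ y - s        # initial residuals
    while max(x @ s,
              np.linalg.norm(b - A @ x),
              np.linalg.norm(c - A.T @ y - s)) >= eps:
        # feasibility step: targets iterates feasible for (P_{nu+}), (D_{nu+})
        dx, dy, ds = feasibility_step(A, x, y, s, mu, rb0, rc0, theta, nu)
        x, y, s = x + dx, y + dy, s + ds
        # mu-update
        mu, nu = (1 - theta) * mu, (1 - theta) * nu
        # centering steps: theta = 0 gives the plain Newton centering system
        while delta(x, s, mu) >= tau:
            dx, dy, ds = feasibility_step(A, x, y, s, mu, rb0, rc0, 0.0, nu)
            x, y, s = x + dx, y + dy, s + ds
    return x, y, s
```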

In the centering steps, starting at the iterates $(x, y, s) = (x^f, y^f, s^f)$ and targeting the $\mu$-centers, the search directions $\Delta x$, $\Delta y$, $\Delta s$ are the usual primal-dual Newton directions, (uniquely) defined by
$$A\Delta x = 0,\qquad A^T\Delta y + \Delta s = 0,\qquad s\Delta x + x\Delta s = \mu e - xs.$$
Denoting the iterates after a centering step by $x^+$, $y^+$ and $s^+$, we recall the following results from Roos et al. [7].

Lemma 1.2. If $\delta := \delta(x, s; \mu) \le 1$, then the primal-dual Newton step is feasible, i.e., $x^+$ and $s^+$ are nonnegative, and $(x^+)^T s^+ = n\mu$. Moreover, if $\delta \le 1/\sqrt{2}$, then $\delta(x^+, s^+; \mu) \le \delta^2$.

The centering steps serve to get iterates that satisfy $x^T s = n\mu^+$ and $\delta(x, s; \mu^+) \le \tau$, where $\tau$ is (much) smaller than $1/\sqrt{2}$. By using Lemma 1.2, the required number of centering steps can easily be obtained: after the $\mu$-update we have $\delta = \delta(x^f, s^f; \mu^+) \le 1/\sqrt{2}$, and hence after $k$ centering steps the iterates $(x, y, s)$ satisfy
$$\delta(x, s; \mu^+) \le \left(\frac{1}{\sqrt{2}}\right)^{2^k}.$$
From this one easily deduces that no more than
$$\left\lceil \log_2\left(\log_2\frac{1}{\tau^2}\right)\right\rceil \tag{1.6}$$
centering steps are needed.

Before describing the idea of our algorithm, we give the definition of a kernel function.

Definition 1.3. A twice differentiable function $\psi : (0, \infty) \to [0, \infty)$ is called a kernel function if
$$\psi(1) = 0,\qquad \psi'(1) = 0,\qquad \psi''(t) > 0 \text{ for all } t > 0.$$

Define
$$d_x^f := \frac{v\Delta^f x}{x},\qquad d_s^f := \frac{v\Delta^f s}{s}, \tag{1.7}$$
with $v$ as defined in (1.2). The system defining the search directions $\Delta^f x$, $\Delta^f y$ and $\Delta^f s$ can then be expressed in terms of the scaled search directions

$d_x^f$ and $d_s^f$ as follows:
$$\bar A d_x^f = \theta\nu r_b^0,\qquad \bar A^T\frac{\Delta^f y}{\mu} + d_s^f = \theta\nu v s^{-1} r_c^0,\qquad d_x^f + d_s^f = v^{-1} - v,$$
where $\bar A = AV^{-1}X$, $V = \mathrm{diag}(v)$, $X = \mathrm{diag}(x)$. Note that the right-hand side of the third equation is the negative gradient direction of the logarithmic barrier function
$$\Psi(v) := \sum_{i=1}^n \psi(v_i),\qquad v_i = \sqrt{\frac{x_i s_i}{\mu}},$$
whose kernel function is
$$\psi(t) = \frac{1}{2}(t^2 - 1) - \log t.$$
In this paper the feasibility step is a slight modification of the standard Newton direction, defined by the new system
$$\bar A d_x^f = \theta\nu r_b^0,\qquad \bar A^T\frac{\Delta^f y}{\mu} + d_s^f = \theta\nu v s^{-1} r_c^0,\qquad d_x^f + d_s^f = -\nabla\Phi(v),$$
where the kernel function of $\Phi(v) := \sum_{i=1}^n \phi(v_i)$ is
$$\phi(t) := \frac{1}{2}(t^2 - 1) + \frac{t^{1-p} - 1}{p - 1}.$$
From the definition of a kernel function, $\phi(t)$ is indeed a kernel function for $p \ge 0$. Here we limit $p \in [0, 1)$. Since $\phi'(t) = t - t^{-p}$, the third equation of the system can be written as
$$d_x^f + d_s^f = v^{-p} - v. \tag{1.8}$$
We define
$$\sigma(x, s; \mu) := \sigma(v) := \frac{1}{2}\left\|\nabla\Phi(v)\right\| = \frac{1}{2}\left\|v^{-p} - v\right\|. \tag{1.9}$$
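For concreteness, the kernel function $\phi$, its derivative, and the proximity (1.9) translate directly into code. This is our own sketch, with the parameter `p` assumed to lie in $[0, 1)$.

```python
import numpy as np

def phi(t, p):
    """Kernel function: (1/2)(t^2 - 1) + (t^(1-p) - 1)/(p - 1), p in [0, 1)."""
    return 0.5 * (t**2 - 1.0) + (t**(1.0 - p) - 1.0) / (p - 1.0)

def phi_prime(t, p):
    """phi'(t) = t - t^{-p}; phi(1) = 0, phi'(1) = 0 and
    phi''(t) = 1 + p*t^{-p-1} > 0, so phi satisfies Definition 1.3."""
    return t - t**(-p)

def sigma(x, s, mu, p):
    """Proximity (1.9): (1/2)*||v^{-p} - v|| = (1/2)*||grad Phi(v)||."""
    v = np.sqrt(x * s / mu)
    return 0.5 * np.linalg.norm(v**(-p) - v)

# As p -> 1 the second term of phi tends to -log(t), recovering the
# logarithmic kernel psi(t) = (1/2)(t^2 - 1) - log(t) given above.
```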

Note that $\sigma(v) = 0$ if and only if $v = e$; thus $\sigma(v)$ is also a suitable proximity measure. The following result shows that the proximity $\sigma(v)$ is always smaller than the proximity $\delta(v)$ for $p \in [0, 1)$. In particular, note that we do not abandon the logarithmic barrier function in this paper, even though we use a new kernel function to define the feasibility step; thus Corollary A.10 in Roos [8] still holds true.

Lemma 1.4. For any $t > 0$ one has
$$\left|t^{-p} - t\right| \le \left|t^{-1} - t\right|,\qquad p \in [0, 1).$$

Proof. Since
$$\left(t - t^{-1}\right)^2 - \left(t - t^{-p}\right)^2 = \frac{(1 - t^{1-p})(1 + t^{1-p} - 2t^2)}{t^2},$$
and both factors in the numerator are nonnegative for $0 < t \le 1$ and nonpositive for $t > 1$, the expression is always nonnegative. Hence the lemma follows.

The following result will be used in Lemma 1.7.

Lemma 1.5. For any $t > 0$ one has
$$\left|t^{\frac{1-p}{2}} - t^{-\frac{1-p}{2}}\right| \le \left|t - \frac{1}{t}\right|,\qquad p \in [0, 1).$$

Proof. Since
$$\left(t^2 + \frac{1}{t^2}\right) - \left(t^{1-p} + \frac{1}{t^{1-p}}\right) = \frac{(t^{3-p} - 1)(t^{1+p} - 1)}{t^2},$$
and both factors in the numerator are nonpositive for $0 < t \le 1$ and nonnegative for $t > 1$, the expression is always nonnegative. Hence the lemma follows.

Lemma 1.6. If $\|v\|^2 = n$, then
$$\left\|v^{\frac{1-p}{2}}\right\|^2 \le n,\qquad p \in [0, 1).$$

Proof. Note that $\|e^\gamma\|^2 = n$ for any $\gamma \in \mathbb{R}$; the proof can then easily be obtained by the Hölder inequality.
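The pointwise inequalities of Lemmas 1.4 and 1.5 and the norm bound of Lemma 1.6 are easy to spot-check numerically. The following throwaway script (ours, not part of the paper) does so on random data.

```python
import numpy as np

rng = np.random.default_rng(0)
p = 0.5
t = rng.uniform(0.1, 5.0, size=1000)

# Lemma 1.4: |t^{-p} - t| <= |t^{-1} - t|
assert np.all(np.abs(t**(-p) - t) <= np.abs(1.0 / t - t) + 1e-12)

# Lemma 1.5: |t^{(1-p)/2} - t^{-(1-p)/2}| <= |t - 1/t|
q = 0.5 * (1.0 - p)
assert np.all(np.abs(t**q - t**(-q)) <= np.abs(t - 1.0 / t) + 1e-12)

# Lemma 1.6: if ||v||^2 = n then ||v^{(1-p)/2}||^2 <= n
n = 50
v = rng.uniform(0.5, 2.0, size=n)
v *= np.sqrt(n) / np.linalg.norm(v)       # normalize so that ||v||^2 = n
assert np.linalg.norm(v**q)**2 <= n + 1e-12
```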

The following lemma focuses on the behavior of feasible full-Newton steps.

Lemma 1.7. Let $(x, s)$ be a positive primal-dual pair and $\mu > 0$ such that $x^T s = n\mu$. Moreover, let $\delta(v) = \delta(x, s; \mu)$ and $\tilde v := v^{\frac{1-p}{2}}/\sqrt{1-\theta}$. Then
$$\delta(\tilde v)^2 \le (1-\theta)\delta(v)^2 + \frac{\theta^2 n}{4(1-\theta)}.$$

Proof.
$$\begin{aligned}
\delta(\tilde v)^2 &= \frac{1}{4}\left\|\frac{v^{\frac{1-p}{2}}}{\sqrt{1-\theta}} - \sqrt{1-\theta}\,v^{-\frac{1-p}{2}}\right\|^2
= \frac{1}{4}\left\|\sqrt{1-\theta}\left(v^{\frac{1-p}{2}} - v^{-\frac{1-p}{2}}\right) + \frac{\theta}{\sqrt{1-\theta}}\,v^{\frac{1-p}{2}}\right\|^2\\
&= \frac{1-\theta}{4}\left\|v^{\frac{1-p}{2}} - v^{-\frac{1-p}{2}}\right\|^2 + \frac{\theta^2}{4(1-\theta)}\left\|v^{\frac{1-p}{2}}\right\|^2 + \frac{\theta}{2}\left(v^{\frac{1-p}{2}} - v^{-\frac{1-p}{2}}\right)^T v^{\frac{1-p}{2}}\\
&\le (1-\theta)\delta(v)^2 + \frac{\theta^2 n}{4(1-\theta)},
\end{aligned}$$
where we have used Lemma 1.5 componentwise in the first term, and, since $\|v\|^2 = x^T s/\mu = n$, Lemma 1.6 twice: once to bound $\|v^{\frac{1-p}{2}}\|^2 \le n$ in the second term, and once to see that the last term, which equals $\frac{\theta}{2}(\|v^{\frac{1-p}{2}}\|^2 - n)$, is nonpositive.

Lemma 1.8. ([4, Lemma A.1]) For $i = 1, \ldots, n$, let $f_i : \mathbb{R}_+ \to \mathbb{R}$ denote a convex function. Then, for any nonzero vector $z \in \mathbb{R}^n_+$, the following inequality holds:
$$\sum_{i=1}^n f_i(z_i) \le \frac{1}{e^T z}\sum_{j=1}^n z_j\left(f_j(e^T z) + \sum_{i \ne j} f_i(0)\right).$$

2 Analysis of the feasibility step

Let $x$, $y$ and $s$ denote the iterates at the start of an iteration, and assume that $\delta(v) \le \tau$. Recall that in the first iteration we have $\delta(v) = 0$.

2.1 Effect of the feasibility step

As we established in Section 1, the feasibility step generates new iterates $x^f$, $y^f$ and $s^f$ that satisfy the feasibility conditions for $(P_{\nu^+})$ and $(D_{\nu^+})$, except possibly the nonnegativity conditions. A crucial element in the analysis is to show that after the feasibility step $\delta(x^f, s^f; \mu^+) \le 1/\sqrt{2}$, i.e., that the new iterates are positive and lie within the region where the Newton process targeting the $\mu^+$-centers of $(P_{\nu^+})$ and $(D_{\nu^+})$ is quadratically convergent.

Note that (1.8) can be rewritten as
$$s\Delta^f x + x\Delta^f s = \mu^{\frac{p+1}{2}}(xs)^{\frac{1-p}{2}} - xs.$$

Then, using $xs = \mu v^2$ and $\Delta^f x\,\Delta^f s = \mu d_x^f d_s^f$, we may write
$$x^f s^f = xs + (s\Delta^f x + x\Delta^f s) + \Delta^f x\,\Delta^f s = \mu^{\frac{p+1}{2}}(xs)^{\frac{1-p}{2}} + \Delta^f x\,\Delta^f s = \mu\left(v^{1-p} + d_x^f d_s^f\right). \tag{2.1}$$

Lemma 2.1. The new iterates are strictly feasible if and only if $v^{1-p} + d_x^f d_s^f > 0$.

Proof. Note that if $x^f$ and $s^f$ are positive, then (2.1) makes clear that $v^{1-p} + d_x^f d_s^f > 0$. The converse can be proved by following Lemma 4.1 in Mansouri and Roos [4]. Thus the lemma follows.

Using (1.7) we may also write
$$x^f = x + \Delta^f x = x + \frac{x d_x^f}{v} = \frac{x}{v}\left(v + d_x^f\right), \tag{2.2}$$
$$s^f = s + \Delta^f s = s + \frac{s d_s^f}{v} = \frac{s}{v}\left(v + d_s^f\right). \tag{2.3}$$

Lemma 2.2. ([2, Lemma 6]) The new iterates are certainly strictly feasible if
$$\|d_x^f\|_\infty < \frac{1}{\rho(\delta)}\qquad\text{and}\qquad \|d_s^f\|_\infty < \frac{1}{\rho(\delta)}, \tag{2.4}$$
where
$$\rho(\delta) := \delta + \sqrt{1 + \delta^2}.$$

The proof of Lemma 2.2 makes clear that the elements of the vector $v$ satisfy
$$\frac{1}{\rho(\delta)} \le v_i^{1-p} \le \rho(\delta),\qquad i = 1, \ldots, n. \tag{2.5}$$
In the sequel we denote
$$\omega_i := \omega_i(v) := \frac{1}{2}\sqrt{\left|d_{xi}^f\right|^2 + \left|d_{si}^f\right|^2},\qquad \omega := \omega(v) := \|(\omega_1, \ldots, \omega_n)\|.$$
This implies
$$\left|(d_x^f)^T d_s^f\right| \le \|d_x^f\|\,\|d_s^f\| \le \frac{1}{2}\left(\|d_x^f\|^2 + \|d_s^f\|^2\right) = 2\omega^2,$$
$$\left|d_{xi}^f d_{si}^f\right| \le \frac{1}{2}\left(\left|d_{xi}^f\right|^2 + \left|d_{si}^f\right|^2\right) = 2\omega_i^2 \le 2\omega^2,\qquad 1 \le i \le n.$$
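The feasibility step of this paper differs from the classical one only in the third equation. Given the unscaled form of (1.8) derived above, a sketch (ours, reusing the elimination scheme of `feasibility_step`) is:

```python
import numpy as np

def kernel_feasibility_step(A, x, y, s, mu, rb0, rc0, theta, nu, p):
    """Variant of feasibility_step with the right-hand side of (1.5) replaced
    by the unscaled form of (1.8): mu^((p+1)/2)*(xs)^((1-p)/2) - xs."""
    rb_t, rc_t = theta * nu * rb0, theta * nu * rc0
    r3 = mu**(0.5 * (p + 1)) * (x * s)**(0.5 * (1 - p)) - x * s
    d = x / s
    dy = np.linalg.solve((A * d) @ A.T, rb_t + (A * d) @ (rc_t - r3 / x))
    dx = d * (A.T @ dy - rc_t + r3 / x)
    ds = (r3 - s * dx) / x
    return dx, dy, ds

def strictly_feasible_after_step(x, s, mu, dx, ds, p):
    """Check of Lemma 2.1: v^(1-p) + dxf*dsf > 0 componentwise."""
    v = np.sqrt(x * s / mu)
    dxf, dsf = v * dx / x, v * ds / s          # scaled directions (1.7)
    return np.all(v**(1.0 - p) + dxf * dsf > 0)

def omega(x, s, mu, dx, ds):
    """omega(v) = sqrt(sum_i omega_i^2), omega_i = 0.5*sqrt(dxf_i^2+dsf_i^2)."""
    v = np.sqrt(x * s / mu)
    dxf, dsf = v * dx / x, v * ds / s
    return 0.5 * np.sqrt(np.sum(dxf**2 + dsf**2))
```

In the main loop sketched in Section 1, `kernel_feasibility_step` would simply replace `feasibility_step` in the feasibility step, while the centering steps are unchanged.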

Lemma 2.3. Assuming $v^{1-p} + d_x^f d_s^f > 0$, one has
$$4\delta(v^f)^2 \le 4(1-\theta)\delta(v)^2 + \frac{\theta^2 n}{1-\theta} + \frac{2\omega^2}{1-\theta} + \frac{2(1-\theta)\rho(\delta)^2\omega^2}{1 - 2\rho(\delta)\omega^2}.$$

Proof. Since
$$(v^f)^2 = \frac{x^f s^f}{\mu^+} = \frac{\mu\left(v^{1-p} + d_x^f d_s^f\right)}{\mu^+} = \frac{v^{1-p} + d_x^f d_s^f}{1-\theta},$$
according to (1.2) we have
$$4\delta(v^f)^2 = \sum_{i=1}^n\left((v_i^f)^2 + \frac{1}{(v_i^f)^2} - 2\right) = \sum_{i=1}^n\left(\frac{v_i^{1-p} + d_{xi}^f d_{si}^f}{1-\theta} + \frac{1-\theta}{v_i^{1-p} + d_{xi}^f d_{si}^f} - 2\right) \le \sum_{i=1}^n\left(\frac{v_i^{1-p} + 2\omega_i^2}{1-\theta} + \frac{1-\theta}{v_i^{1-p} - 2\omega_i^2} - 2\right).$$
For each $i$ we define the function
$$f_i(z_i) = \frac{v_i^{1-p} + z_i}{1-\theta} + \frac{1-\theta}{v_i^{1-p} - z_i} - 2.$$
One can easily verify that if $v_i^{1-p} - z_i > 0$ then $f_i(z_i)$ is convex in $z_i$. Taking $z_i = 2\omega_i^2$, we require
$$v_i^{1-p} - 2\omega_i^2 > 0,\qquad i = 1, \ldots, n.$$
By using (2.5), this certainly holds if
$$2\omega^2 < \frac{1}{\rho(\delta)}. \tag{2.6}$$
We may therefore use Lemma 1.8, with $z_i = 2\omega_i^2$ (so that $e^T z = 2\omega^2$), and obtain
$$4\delta(v^f)^2 \le \sum_{j=1}^n f_j(2\omega_j^2) \le \frac{1}{2\omega^2}\sum_{j=1}^n 2\omega_j^2\left(f_j(2\omega^2) + \sum_{i \ne j} f_i(0)\right).$$

Using Lemma 1.7, we obtain
$$\sum_{i=1}^n f_i(0) = \sum_{i=1}^n\left(\frac{v_i^{1-p}}{1-\theta} + \frac{1-\theta}{v_i^{1-p}} - 2\right) = 4\delta(\tilde v)^2 \le 4(1-\theta)\delta(v)^2 + \frac{\theta^2 n}{1-\theta}.$$
Then
$$\begin{aligned}
4\delta(v^f)^2 &\le \sum_{i=1}^n f_i(0) + \frac{1}{2\omega^2}\sum_{j=1}^n 2\omega_j^2\left(f_j(2\omega^2) - f_j(0)\right)\\
&\le 4(1-\theta)\delta(v)^2 + \frac{\theta^2 n}{1-\theta} + \frac{1}{2\omega^2}\sum_{j=1}^n 2\omega_j^2\left(\frac{2\omega^2}{1-\theta} + \frac{2(1-\theta)\omega^2}{v_j^{1-p}\left(v_j^{1-p} - 2\omega^2\right)}\right)\\
&\le 4(1-\theta)\delta(v)^2 + \frac{\theta^2 n}{1-\theta} + \frac{2\omega^2}{1-\theta} + \frac{2(1-\theta)\omega^2}{\frac{1}{\rho(\delta)}\left(\frac{1}{\rho(\delta)} - 2\omega^2\right)}\\
&= 4(1-\theta)\delta(v)^2 + \frac{\theta^2 n}{1-\theta} + \frac{2\omega^2}{1-\theta} + \frac{2(1-\theta)\rho(\delta)^2\omega^2}{1 - 2\rho(\delta)\omega^2},
\end{aligned}$$
where the last inequality uses (2.5) again.

We conclude this section by presenting a value that we do not allow $\omega$ to exceed. Because we need $\delta(v^f) \le 1/\sqrt{2}$, it follows from Lemma 2.3 that it suffices if
$$4(1-\theta)\delta(v)^2 + \frac{\theta^2 n}{1-\theta} + \frac{2\omega^2}{1-\theta} + \frac{2(1-\theta)\rho(\delta)^2\omega^2}{1 - 2\rho(\delta)\omega^2} \le 2. \tag{2.7}$$
At this stage we decide to choose
$$\tau = \frac{1}{8},\qquad \theta = \frac{\alpha}{4\sqrt{n}},\qquad 0 < \alpha \le 1. \tag{2.8}$$
Since the left-hand side of (2.7) is monotonically increasing with respect to $\omega^2$, one can verify that, for $n \ge 1$ and $\delta(v) \le \tau$, together with (2.6),
$$\omega \le \frac{1}{4}\ \Longrightarrow\ \delta(v^f) \le \frac{1}{\sqrt{2}}. \tag{2.9}$$
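The threshold behavior in (2.7)-(2.9) can be examined numerically. The sketch below (ours, not from the paper) evaluates the right-hand side of Lemma 2.3 for the parameter choice (2.8).

```python
import numpy as np

def rho(d):
    """rho(delta) = delta + sqrt(1 + delta^2), as in Lemma 2.2."""
    return d + np.sqrt(1.0 + d * d)

def lemma23_rhs(delta_v, om, theta, n):
    """Right-hand side of Lemma 2.3 as a function of omega; condition (2.6),
    2*om^2 < 1/rho(delta), must hold for the last term to be valid."""
    r = rho(delta_v)
    return (4 * (1 - theta) * delta_v**2
            + theta**2 * n / (1 - theta)
            + 2 * om**2 / (1 - theta)
            + 2 * (1 - theta) * r**2 * om**2 / (1 - 2 * r * om**2))

# The choice (2.8): tau = 1/8, theta = alpha/(4*sqrt(n)), 0 < alpha <= 1.
# Even in the extreme case n = 1, alpha = 1, the bound stays below 2,
# illustrating (2.9): omega <= 1/4 implies delta(v^f) <= 1/sqrt(2).
n, alpha = 1, 1.0
tau, theta = 1.0 / 8.0, alpha / (4.0 * np.sqrt(n))
assert lemma23_rhs(tau, 0.25, theta, n) <= 2.0
```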

2.2 Upper bound for ω(v)

Let us denote the null space of the matrix $\bar A$ by $\mathcal{L}$. So
$$\mathcal{L} := \{\xi \in \mathbb{R}^n : \bar A\xi = 0\}.$$
Obviously, the affine space $\{\xi \in \mathbb{R}^n : \bar A\xi = \theta\nu r_b^0\}$ equals $d_x^f + \mathcal{L}$. The row space of $\bar A$ equals the orthogonal complement $\mathcal{L}^\perp$ of $\mathcal{L}$, and $d_s^f \in \theta\nu v s^{-1}r_c^0 + \mathcal{L}^\perp$. We recall a lemma from Roos [8].

Lemma 2.4. Let $q$ be the (unique) point in the intersection of the affine spaces $d_x^f + \mathcal{L}$ and $d_s^f + \mathcal{L}^\perp$. Then
$$2\omega(v) \le \sqrt{\|q\|^2 + \left(\|q\| + 2\sigma(v)\right)^2}.$$

Proof. The proof is easily obtained by using $v^{-p} - v$ instead of $v^{-1} - v$; the result then follows on replacing $\delta(v)$ by $\sigma(v)$.

Recall from (2.9) that in order to guarantee $\delta(v^f) \le 1/\sqrt{2}$ we want to have $\omega \le 1/4$. Due to Lemma 2.4 this will certainly hold if $q$ satisfies
$$\|q\|^2 + \left(\|q\| + 2\sigma(v)\right)^2 \le \frac{1}{4}. \tag{2.10}$$
Still following Roos [8], we can obtain
$$\sqrt{\mu}\,\|q\| \le \theta\nu\zeta\sqrt{e^T\left(\frac{x}{s} + \frac{s}{x}\right)}. \tag{2.11}$$
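The point $q$ of Lemma 2.4 is computable with orthogonal projections; the following sketch is our own construction, not from the paper, using the projector onto the row space of $\bar A$.

```python
import numpy as np

def q_point(A_bar, dxf, dsf):
    """The unique point in (dxf + L) ∩ (dsf + L^perp), L = null(A_bar):
    q = P dxf + (I - P) dsf, with P the projector onto the row space."""
    P = A_bar.T @ np.linalg.solve(A_bar @ A_bar.T, A_bar)
    q = P @ dxf + dsf - P @ dsf
    # q - dxf = (I - P)(dsf - dxf) lies in L, and
    # q - dsf = P(dxf - dsf) lies in L^perp, as required.
    return q
```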

To proceed further we need upper and lower bounds for the elements of the vectors $x/s$ and $s/x$.

2.3 Bounds for x/s and s/x

Recall that $x$ is feasible for $(P_\nu)$ and $(y, s)$ for $(D_\nu)$ and, moreover, $\delta(x, s; \mu) \le \tau$, i.e., these iterates are close to the $\mu$-centers of $(P_\nu)$ and $(D_\nu)$. Based on this information we need to estimate the sizes of the entries of the vectors $x/s$ and $s/x$. Since $\tau = 1/8$, we can again use a result from Roos [8], namely Corollary A.10, which gives
$$\sqrt{\frac{x}{s}} \le \sqrt{2}\,\frac{x(\mu, \nu)}{\sqrt{\mu}},\qquad \sqrt{\frac{s}{x}} \le \sqrt{2}\,\frac{s(\mu, \nu)}{\sqrt{\mu}}.$$
Substitution into (2.11) yields
$$\sqrt{\mu}\,\|q\| \le \theta\nu\zeta\sqrt{2e^T\left(\frac{x(\mu, \nu)^2}{\mu} + \frac{s(\mu, \nu)^2}{\mu}\right)}.$$
This implies
$$\mu\|q\| \le \sqrt{2}\,\theta\nu\zeta\sqrt{\|x(\mu, \nu)\|^2 + \|s(\mu, \nu)\|^2}.$$
By using $\mu = \mu^0\nu = \zeta^2\nu$ and $\theta = \alpha/(4\sqrt{n})$, we obtain the following upper bound for the norm of $q$:
$$\|q\| \le \frac{\alpha}{2\sqrt{2}\,\zeta\sqrt{n}}\sqrt{\|x(\mu, \nu)\|^2 + \|s(\mu, \nu)\|^2}.$$
Defining
$$\kappa(\zeta, \nu) = \frac{\sqrt{\|x(\mu, \nu)\|^2 + \|s(\mu, \nu)\|^2}}{\zeta\sqrt{n}},\qquad 0 < \nu \le 1,\quad \mu = \mu^0\nu,$$
and
$$\kappa(\zeta) = \max_{0 < \nu \le 1} \kappa(\zeta, \nu),$$
we have
$$\|q\| \le \frac{1}{2\sqrt{2}}\,\alpha\,\kappa(\zeta).$$
We found in (2.10) that in order to have $\delta(v^f) \le 1/\sqrt{2}$, we should have $\|q\|^2 + (\|q\| + 2\sigma(v))^2 \le 1/4$. Therefore, since $\sigma(v) \le \delta(v) \le \tau = 1/8$, it suffices if $q$ satisfies
$$\|q\|^2 + \left(\|q\| + \frac{1}{4}\right)^2 \le \frac{1}{4}.$$
This certainly holds if $\|q\| \le (\sqrt{3} - 1)/4$. We conclude that if we take
$$\alpha = \frac{1}{2\kappa(\zeta)}, \tag{2.12}$$
then $\|q\| \le 1/(4\sqrt{2}) \le (\sqrt{3} - 1)/4$, and we will certainly have $\delta(v^f) \le 1/\sqrt{2}$. Following Roos [8], we can prove that $\kappa(\zeta) \le \sqrt{2n}$.

3 Iteration bound

In the previous sections we have found that if at the start of an iteration the iterates satisfy $\delta(x, s; \mu) \le \tau$, with $\tau$ as defined in (2.8), then after the feasibility

step, with $\theta$ as in (2.8) and $\alpha$ as in (2.12), the iterates satisfy $\delta(x, s; \mu^+) \le 1/\sqrt{2}$. According to (1.6), at most
$$\left\lceil\log_2\left(\log_2\frac{1}{\tau^2}\right)\right\rceil = \left\lceil\log_2(\log_2 64)\right\rceil = 3$$
centering steps suffice to get iterates that satisfy $\delta(x, s; \mu^+) \le \tau$. So each main iteration consists of at most 4 so-called inner iterations, in each of which we need to compute a new search direction. In each main iteration both the duality gap and the norms of the residual vectors are reduced by the factor $1 - \theta$. Hence, using $(x^0)^T s^0 = n\zeta^2$, the total number of main iterations is bounded above by
$$\frac{1}{\theta}\log\frac{\max\{n\zeta^2, \|r_b^0\|, \|r_c^0\|\}}{\varepsilon}.$$
Since
$$\theta = \frac{\alpha}{4\sqrt{n}} = \frac{1}{8\sqrt{n}\,\kappa(\zeta)},$$
and $\kappa(\zeta) \le \sqrt{2n}$, the total number of inner iterations is therefore bounded above by
$$32\sqrt{2}\,n\log\frac{\max\{n\zeta^2, \|r_b^0\|, \|r_c^0\|\}}{\varepsilon}.$$

4 Concluding remarks

In this paper we introduced a kernel function into the full-Newton step infeasible interior-point algorithm for linear programming. For $p \in [0, 1)$ the polynomial complexity can be obtained. For $p = 1$ the kernel function will lose some barrier properties; we shall investigate it for one specific case. More importantly, we expect further research for a larger range of $p$.

References

[1] Y. Bai, M. El Ghami, and C. Roos. A new efficient large-update primal-dual interior-point method based on a finite barrier. SIAM Journal on Optimization, 13(3), 2003.

[2] Z. Liu and W. Sun. An infeasible interior-point algorithm with full-Newton step for linear optimization. Numerical Algorithms, 46(2), 2007.

[3] Z. Liu and W. Sun. A full-Newton step infeasible interior-point algorithm for linear programming based on a special kernel function. Working paper.

[4] H. Mansouri and C. Roos. Simplified O(nL) infeasible interior-point algorithm for linear optimization using full-Newton steps. Optimization Methods and Software, 22(3), 2007.

[5] S. Mizuno. Polynomiality of infeasible-interior-point algorithms for linear programming. Mathematical Programming, 67(1), 1994.

[6] J. Peng, C. Roos, and T. Terlaky. Self-regular functions and new search directions for linear and semidefinite optimization. Mathematical Programming, 93(1), 2002.

[7] C. Roos, T. Terlaky, and J.-Ph. Vial. Theory and Algorithms for Linear Optimization: An Interior Point Approach. John Wiley and Sons, Chichester, UK, 1997.

[8] C. Roos. A full-Newton step O(n) infeasible interior-point algorithm for linear optimization. SIAM Journal on Optimization, 16(4), 2006.

[9] Y. Ye, M.J. Todd, and S. Mizuno. An O(√nL)-iteration homogeneous and self-dual linear programming algorithm. Mathematics of Operations Research, 19(1):53-67, 1994.
