A Full Newton Step Infeasible Interior Point Algorithm for Linear Optimization
1 A Full Newton Step Infeasible Interior Point Algorithm for Linear Optimization
Kees Roos (URL: roos)
37th Annual Iranian Mathematics Conference, Tabriz, Iran, September 2-5, 2006
2 Outline
Milestones in the history of linear optimization
State-of-the-art in IIPMs
Usual search direction
Complexity results for feasible IPMs
Definition of perturbed problems
Idea underlying the algorithm
Complexity analysis
Conjecture
Some references
Announcement: Today there will be another lecture at 11:30 in Seminar 1, entitled "Introduction to Conic Optimization, with some applications".
Optimization Group 1
3 Milestones in the history of linear optimization
Standard linear optimization problem: (P) $\min\{c^T x : Ax = b,\ x \ge 0\}$, where $A$ is $m \times n$.
1947: Simplex Method (Dantzig)
1955: Logarithmic Barrier Method (Frisch)
1967: Affine Scaling Method (Dikin)
1967: Center Method (Huard)
1968: Barrier Methods (Fiacco, McCormick)
1972: Exponential example (Klee and Minty)
1979: Ellipsoid Method (Khachiyan)
1984: Projective Method (Karmarkar)
Interior Point Methods for Convex Optimization (1989)
Semidefinite Optimization (1994)
Second Order Cone Optimization (1994)
Discrete Optimization (1994)
4 Some key players
5 State-of-the-art in IIPMs
(P) $\min\{c^T x : Ax = b,\ x \ge 0\}$ and (D) $\max\{b^T y : A^T y + s = c,\ s \ge 0\}$, where $A$ is $m \times n$ and $\operatorname{rank}(A) = m$.
The best known iteration bound for IIPMs is
$$O\left(n \log \frac{\max\left\{(x^0)^T s^0,\ \|b - Ax^0\|,\ \|c - A^T y^0 - s^0\|\right\}}{\epsilon}\right),$$
where $x^0 > 0$, $y^0$ and $s^0 > 0$ denote the starting points, and $b - Ax^0$ and $c - A^T y^0 - s^0$ are the initial primal and dual residual vectors, respectively.
Assumption: the initial iterates are $(x^0, y^0, s^0) = \zeta(e, 0, e)$, where $\zeta$ satisfies $x^* + s^* \le \zeta e$ for some optimal triple $(x^*, y^*, s^*)$.
The best known iteration bound for solving these problems (with feasible IPMs) is
$$O\left(\sqrt{n}\,\log\frac{n}{\epsilon}\right).$$
6 Usual search directions
$$A\Delta x = b - Ax,\qquad A^T\Delta y + \Delta s = c - A^T y - s,\qquad s\Delta x + x\Delta s = \mu e - xs.$$
When moving from $x$ to $x^+ := x + \Delta x$, the new iterate satisfies $Ax^+ = b$, but not necessarily $x^+ \ge 0$. To keep the iterates positive, one uses a damped step $x^+ := x + \alpha\Delta x$, where $\alpha < 1$ denotes the step size.
Our search directions are designed in such a way that full Newton steps can be used; each iteration reduces the sizes of the residual vectors at exactly the same speed as the duality gap. Our aim is to show that this is a promising strategy.
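The damped-step safeguard mentioned above can be sketched in a few lines of NumPy (an illustration, not part of the talk; the function name and the back-off factor 0.9 are arbitrary choices): find the largest $\alpha \le 1$ for which $x + \alpha\Delta x$ stays strictly positive.

```python
import numpy as np

def damped_step(x, dx, fraction=0.9):
    """Largest alpha <= 1 keeping x + alpha*dx strictly positive,
    scaled back by `fraction` to stay safely inside the positive orthant."""
    neg = dx < 0
    if not neg.any():
        return 1.0                      # a full step never leaves x > 0
    alpha = fraction * np.min(-x[neg] / dx[neg])
    return min(1.0, alpha)
```

For a full Newton step method, in contrast, $\alpha = 1$ is always used and positivity must be guaranteed by the analysis.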
7 Feasible full-Newton step IPMs
Central path:
$$Ax = b,\ x \ge 0;\qquad A^T y + s = c,\ s \ge 0;\qquad xs = \mu e.$$
If this system has a solution for some $\mu > 0$, then a solution exists for every $\mu > 0$, and this solution is unique. This happens if and only if (P) and (D) satisfy the interior-point condition (IPC), i.e., if (P) has a feasible solution $x > 0$ and (D) has a solution $(y, s)$ with $s > 0$. If the IPC is satisfied, then the solution is denoted as $(x(\mu), y(\mu), s(\mu))$ and called the $\mu$-center of (P) and (D). The set of all $\mu$-centers is called the central path of (P) and (D). As $\mu$ goes to zero, $x(\mu)$, $y(\mu)$ and $s(\mu)$ converge to optimal solutions of (P) and (D). The system is hard to solve, but by applying Newton's method one can easily find approximate solutions.
8 Definition of the Newton step
Given a primal feasible $x > 0$ and dual feasible $y$ and $s > 0$, we want to find displacements $\Delta x$, $\Delta y$ and $\Delta s$ such that
$$A(x + \Delta x) = b,\qquad A^T(y + \Delta y) + s + \Delta s = c,\qquad (x + \Delta x)(s + \Delta s) = \mu e.$$
Neglecting the quadratic term $\Delta x\,\Delta s$ in the third equation, and using $b - Ax = 0$ and $c - A^T y - s = 0$, we obtain the following linear system of equations in the search directions $\Delta x$, $\Delta y$ and $\Delta s$:
$$A\Delta x = 0,\qquad A^T\Delta y + \Delta s = 0,\qquad s\Delta x + x\Delta s = \mu e - xs.$$
Since $A$ has full rank, and the vectors $x$ and $s$ are positive, the coefficient matrix in this linear system is nonsingular. Hence the system uniquely defines the search directions $\Delta x$, $\Delta y$ and $\Delta s$. These search directions are used in all existing primal-dual (feasible) IPMs and are named after Newton.
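As a concrete illustration (not part of the talk), the Newton system can be solved by assembling the three blocks into one linear system; the dense assembly below is for clarity only, since practical codes eliminate $\Delta s$ and solve the much smaller normal equations instead.

```python
import numpy as np

def newton_step(A, x, y, s, mu):
    """Solve the feasible primal-dual Newton system:
       A dx = 0,  A^T dy + ds = 0,  s*dx + x*ds = mu*e - x*s."""
    m, n = A.shape
    # Stack the three block equations into one (n+m+n) x (n+m+n) system.
    K = np.block([
        [A,                np.zeros((m, m)), np.zeros((m, n))],
        [np.zeros((n, n)), A.T,              np.eye(n)],
        [np.diag(s),       np.zeros((n, m)), np.diag(x)],
    ])
    rhs = np.concatenate([np.zeros(m), np.zeros(n), mu * np.ones(n) - x * s])
    d = np.linalg.solve(K, rhs)
    return d[:n], d[n:n + m], d[n + m:]
```

The returned $\Delta x$ lies in the null space of $A$ and $\Delta s$ in the row space of $A^T$, so they are orthogonal, exactly the property used on the next slide.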
9 Properties of the Newton step
The Newton system is
$$A\Delta x = 0,\qquad A^T\Delta y + \Delta s = 0,\qquad s\Delta x + x\Delta s = \mu e - xs,$$
and the new iterates are given by
$$x^+ = x + \Delta x,\qquad y^+ = y + \Delta y,\qquad s^+ = s + \Delta s.$$
An important observation is that $\Delta x$ lies in the null space of $A$, whereas $\Delta s$ belongs to the row space of $A$. This implies that $\Delta x$ and $\Delta s$ are orthogonal, i.e., $(\Delta x)^T \Delta s = 0$. As a consequence we have the important property that after a full Newton step the duality gap assumes the same value as at the $\mu$-centers, namely $n\mu$.
Lemma 1 After a (feasible!) primal-dual Newton step one has $(x^+)^T s^+ = n\mu$.
10 Fundamental results
We use the quantity $\delta(x, s; \mu)$ to measure the proximity of a feasible triple $(x, y, s)$ to the $\mu$-center $(x(\mu), y(\mu), s(\mu))$:
$$\delta(x, s; \mu) := \delta(v) := \tfrac{1}{2}\left\|v - v^{-1}\right\|,\qquad \text{where } v := \sqrt{\frac{xs}{\mu}}.$$
Lemma 2 Let $(x, s)$ be a positive primal-dual pair and $\mu > 0$ such that $x^T s = n\mu$. Moreover, let $\delta := \delta(x, s; \mu)$ and let $\mu^+ = (1 - \theta)\mu$. Then
$$\delta(x, s; \mu^+)^2 = (1 - \theta)\delta^2 + \frac{\theta^2 n}{4(1 - \theta)}.$$
Lemma 3 If $\delta := \delta(x, s; \mu) \le 1$, then the primal-dual Newton step is feasible, i.e., $x^+$ and $s^+$ are nonnegative. Moreover, if $\delta < 1$, then $x^+$ and $s^+$ are positive and
$$\delta(x^+, s^+; \mu) \le \frac{\delta^2}{\sqrt{2(1 - \delta^2)}}.$$
Corollary 1 If $\delta := \delta(x, s; \mu) \le 1/\sqrt{2}$, then $\delta(x^+, s^+; \mu) \le \delta^2$.
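The proximity measure is straightforward to compute; here is a small NumPy sketch (illustrative, not from the talk):

```python
import numpy as np

def delta(x, s, mu):
    """Proximity measure delta(x, s; mu) = 0.5 * ||v - v^{-1}||,
    where v = sqrt(x*s/mu); it vanishes exactly at the mu-center."""
    v = np.sqrt(x * s / mu)
    return 0.5 * np.linalg.norm(v - 1.0 / v)
```

At the $\mu$-center $xs = \mu e$, so $v = e$ and the measure is zero; the farther $xs$ deviates from $\mu e$, the larger it becomes.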
11 Primal-dual full-Newton step feasible IPM
Input: accuracy parameter $\epsilon > 0$; barrier update parameter $\theta$, $0 < \theta < 1$; feasible $(x^0, y^0, s^0)$ with $(x^0)^T s^0 = n\mu^0$ and $\delta(x^0, s^0; \mu^0) \le 1/\sqrt{2}$.
begin
  x := x^0; y := y^0; s := s^0; μ := μ^0;
  while x^T s ≥ ε do
  begin
    μ-update: μ := (1 − θ)μ;
    centering step: (x, y, s) := (x, y, s) + (Δx, Δy, Δs);
  end
end
Theorem 1 If $\theta = 1/\sqrt{2n}$, then the algorithm requires at most
$$\sqrt{2n}\,\log\frac{n\mu^0}{\epsilon}$$
iterations. The output is a primal-dual pair $(x, s)$ such that $x^T s \le \epsilon$.
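The loop above can be sketched directly in NumPy (a toy illustration under the stated assumptions: the input triple is strictly feasible, well centered, and $A$ has full row rank; the dense system solve is for clarity only):

```python
import numpy as np

def feasible_ipm(A, b, c, x, y, s, eps=1e-6):
    """Full-Newton-step feasible IPM sketch: mu-update mu := (1-theta)*mu
    with theta = 1/sqrt(2n), each followed by one full Newton centering step."""
    m, n = A.shape
    mu = x @ s / n                     # input is assumed to satisfy x^T s = n*mu
    theta = 1.0 / np.sqrt(2 * n)
    while x @ s > eps:
        mu *= 1 - theta                # barrier parameter update
        # Newton system: A dx = 0, A^T dy + ds = 0, s dx + x ds = mu e - x s
        K = np.block([
            [A,                np.zeros((m, m)), np.zeros((m, n))],
            [np.zeros((n, n)), A.T,              np.eye(n)],
            [np.diag(s),       np.zeros((n, m)), np.diag(x)],
        ])
        rhs = np.concatenate([np.zeros(m), np.zeros(n), mu - x * s])
        d = np.linalg.solve(K, rhs)
        x, y, s = x + d[:n], y + d[n:n + m], s + d[n + m:]
    return x, y, s
```

By Lemma 1 the gap after every full step equals $n\mu$, so the gap is reduced by the factor $1 - \theta$ per iteration, which is exactly the geometric decrease behind Theorem 1.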
12 Graphical illustration of the primal-dual full-Newton-step path-following method
[Figure: one iteration; the iterate $z^k = x^k s^k$ stays within proximity $\delta(v) \le \tau$ of the point $\mu e$ on the central path while the target is reduced from $\mu e$ to $(1 - \theta)\mu e$.]
13 Definition of perturbed problems
We start by choosing arbitrary $x^0 > 0$ and $y^0$, $s^0 > 0$ such that $x^0 s^0 = \mu^0 e$ for some (positive) number $\mu^0$. For any $\nu$ with $0 < \nu \le 1$ we use the perturbed problem $(P_\nu)$, defined by
$$(P_\nu)\qquad \min\left\{\left(c - \nu\left(c - A^T y^0 - s^0\right)\right)^T x : Ax = b - \nu\left(b - Ax^0\right),\ x \ge 0\right\},$$
and its dual problem $(D_\nu)$, which is given by
$$(D_\nu)\qquad \max\left\{\left(b - \nu\left(b - Ax^0\right)\right)^T y : A^T y + s = c - \nu\left(c - A^T y^0 - s^0\right),\ s \ge 0\right\}.$$
Note that if $\nu = 1$ then $x = x^0$ yields a strictly feasible solution of $(P_\nu)$, and $(y, s) = (y^0, s^0)$ a strictly feasible solution of $(D_\nu)$. Conclusion: if $\nu = 1$ then $(P_\nu)$ and $(D_\nu)$ satisfy the IPC.
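Constructing the perturbed data is a one-liner per vector; the following NumPy sketch (illustrative, the function name is an invention) also makes the $\nu = 1$ observation easy to check numerically:

```python
import numpy as np

def perturbed_data(A, b, c, x0, y0, s0, nu):
    """Data of (P_nu): min (c - nu*rc0)^T x s.t. Ax = b - nu*rb0, x >= 0,
    where rb0 and rc0 are the residuals of the chosen starting point."""
    rb0 = b - A @ x0            # initial primal residual
    rc0 = c - A.T @ y0 - s0     # initial dual residual
    return A, b - nu * rb0, c - nu * rc0
```

For $\nu = 1$ the right-hand side becomes $Ax^0$ and the cost vector becomes $A^T y^0 + s^0$, so $x^0$ and $(y^0, s^0)$ are strictly feasible by construction; for $\nu \to 0$ the original data are recovered.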
14 Perturbed problems satisfy the IPC
Lemma 4 The original problems (P) and (D) are feasible if and only if for each $\nu$ satisfying $0 < \nu \le 1$ the perturbed problems $(P_\nu)$ and $(D_\nu)$ satisfy the IPC.
Proof: Let $\bar{x}$ be feasible for (P) and $(\bar{y}, \bar{s})$ for (D). Then $A\bar{x} = b$ and $A^T\bar{y} + \bar{s} = c$, with $\bar{x} \ge 0$ and $\bar{s} \ge 0$. Now let $0 < \nu \le 1$, and consider
$$x = (1 - \nu)\bar{x} + \nu x^0,\qquad y = (1 - \nu)\bar{y} + \nu y^0,\qquad s = (1 - \nu)\bar{s} + \nu s^0.$$
One has
$$Ax = A\left((1 - \nu)\bar{x} + \nu x^0\right) = (1 - \nu)b + \nu Ax^0 = b - \nu\left(b - Ax^0\right),$$
showing that $x$ is feasible for $(P_\nu)$. Similarly,
$$A^T y + s = (1 - \nu)\left(A^T\bar{y} + \bar{s}\right) + \nu\left(A^T y^0 + s^0\right) = c - \nu\left(c - A^T y^0 - s^0\right),$$
showing that $(y, s)$ is feasible for $(D_\nu)$. Since $\nu > 0$, $x$ and $s$ are positive, thus proving that $(P_\nu)$ and $(D_\nu)$ satisfy the IPC.
To prove the converse implication, suppose that $(P_\nu)$ and $(D_\nu)$ satisfy the IPC for each $\nu$ satisfying $0 < \nu \le 1$. Obviously, then $(P_\nu)$ and $(D_\nu)$ are feasible for these values of $\nu$. Letting $\nu$ go to zero, it follows that (P) and (D) are feasible. ∎
15 Central path of the perturbed problems
We assume that (P) and (D) are feasible. Letting $0 < \nu \le 1$, the problems $(P_\nu)$ and $(D_\nu)$ satisfy the IPC, and hence their central paths exist. This means that the system
$$b - Ax = \nu\left(b - Ax^0\right),\ x \ge 0;\qquad c - A^T y - s = \nu\left(c - A^T y^0 - s^0\right),\ s \ge 0;\qquad xs = \mu e$$
has a unique solution for every $\mu > 0$. In the sequel this unique solution is denoted as $(x(\mu, \nu), y(\mu, \nu), s(\mu, \nu))$. These are the $\mu$-centers of the perturbed problems $(P_\nu)$ and $(D_\nu)$. Note that since $x^0 s^0 = \mu^0 e$, $x^0$ is the $\mu^0$-center of the perturbed problem $(P_1)$ and $(y^0, s^0)$ the $\mu^0$-center of $(D_1)$. In other words, $(x(\mu^0, 1), y(\mu^0, 1), s(\mu^0, 1)) = (x^0, y^0, s^0)$. In the sequel we will always take $\mu = \mu^0\nu$.
16 Idea underlying the algorithm
We just established that if $\nu = 1$ and $\mu = \mu^0$, then $x = x^0$ is the $\mu$-center of the perturbed problem $(P_\nu)$ and $(y, s) = (y^0, s^0)$ the $\mu$-center of $(D_\nu)$. These are our initial iterates.
We measure proximity to the $\mu$-center of the perturbed problems by the quantity $\delta(x, s; \mu)$. So, initially, $\delta(x, s; \mu) = 0$. We assume that at the start of each of the subsequent iterations $\delta(x, s; \mu) \le \tau$, where $\tau > 0$. This is certainly true for the first iteration.
Now we describe one iteration of our algorithm. Suppose that for some $\mu \in (0, \mu^0]$ we have $x$, $y$ and $s$ that are feasible for $(P_\nu)$ and $(D_\nu)$, with $\mu = \mu^0\nu$, and such that $x^T s = n\mu$ and $\delta(x, s; \mu) \le \tau$. Roughly speaking, one iteration of the algorithm does the following:
Reduce $\mu$ to $\mu^+ = (1 - \theta)\mu$ and $\nu$ to $\nu^+ = (1 - \theta)\nu$, with $\theta \in (0, 1)$.
Find new iterates $x^+$, $y^+$ and $s^+$ that are feasible for $(P_{\nu^+})$ and $(D_{\nu^+})$, and such that $(x^+)^T s^+ = n\mu^+$ and $\delta(x^+, s^+; \mu^+) \le \tau$. Note that $\mu^+ = \mu^0\nu^+$.
17 Graphical illustration of one iteration
[Figure: feasibility step and subsequent centering steps; starting near $\mu e$ on the central path for $\nu$ (within $\delta(v) \le \tau$), a feasibility step moves the iterate into the region $\delta(v) \le 1/\sqrt{2}$ around the central path for $\nu^+$, and centering steps bring it within $\delta(v) \le \tau$ of $\mu^+ e$, where $\mu^+ = (1 - \theta)\mu$ and $\nu^+ = (1 - \theta)\nu$.]
18 Notations
We denote the initial values of the primal and dual residuals as $r_b^0$ and $r_c^0$, respectively:
$$r_b^0 = b - Ax^0,\qquad r_c^0 = c - A^T y^0 - s^0.$$
Then the feasibility equations for $(P_\nu)$ and $(D_\nu)$ are given by
$$b - Ax = \nu r_b^0,\ x \ge 0;\qquad c - A^T y - s = \nu r_c^0,\ s \ge 0,$$
and those of $(P_{\nu^+})$ and $(D_{\nu^+})$ by
$$b - Ax = \nu^+ r_b^0,\ x \ge 0;\qquad c - A^T y - s = \nu^+ r_c^0,\ s \ge 0.$$
19 Feasibility step
Let $x$ be feasible for $(P_\nu)$ and $(y, s)$ for $(D_\nu)$. To get iterates feasible for $(P_{\nu^+})$ and $(D_{\nu^+})$ we need $\Delta^f x$, $\Delta^f y$ and $\Delta^f s$ such that
$$b - A\left(x + \Delta^f x\right) = \nu^+ r_b^0,\qquad c - A^T\left(y + \Delta^f y\right) - \left(s + \Delta^f s\right) = \nu^+ r_c^0.$$
Since $x$ is feasible for $(P_\nu)$ and $(y, s)$ for $(D_\nu)$, it follows that
$$A\Delta^f x = (b - Ax) - \nu^+ r_b^0 = \nu r_b^0 - \nu^+ r_b^0 = \theta\nu r_b^0,$$
$$A^T\Delta^f y + \Delta^f s = \left(c - A^T y - s\right) - \nu^+ r_c^0 = \nu r_c^0 - \nu^+ r_c^0 = \theta\nu r_c^0.$$
Therefore, the following system is used to define $\Delta^f x$, $\Delta^f y$ and $\Delta^f s$:
$$A\Delta^f x = \theta\nu r_b^0,\qquad A^T\Delta^f y + \Delta^f s = \theta\nu r_c^0,\qquad s\Delta^f x + x\Delta^f s = \mu e - xs.$$
The new iterates are given by
$$\left(x^f, y^f, s^f\right) = \left(x + \Delta^f x,\ y + \Delta^f y,\ s + \Delta^f s\right).$$
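The feasibility-step system differs from the centering system only in its right-hand side, so the same solver applies; the NumPy sketch below (illustrative only, dense assembly for clarity) verifies numerically that one step removes exactly a $\theta$-fraction of both residuals.

```python
import numpy as np

def feasibility_step(A, rb0, rc0, x, s, mu, theta, nu):
    """Solve the feasibility-step system:
       A dfx = theta*nu*rb0,  A^T dfy + dfs = theta*nu*rc0,
       s*dfx + x*dfs = mu*e - x*s."""
    m, n = A.shape
    K = np.block([
        [A,                np.zeros((m, m)), np.zeros((m, n))],
        [np.zeros((n, n)), A.T,              np.eye(n)],
        [np.diag(s),       np.zeros((n, m)), np.diag(x)],
    ])
    rhs = np.concatenate([theta * nu * rb0, theta * nu * rc0,
                          mu * np.ones(n) - x * s])
    d = np.linalg.solve(K, rhs)
    return d[:n], d[n:n + m], d[n + m:]
```

After the step the residuals equal $(1 - \theta)\nu r_b^0$ and $(1 - \theta)\nu r_c^0$, i.e., the iterates are feasible for $(P_{\nu^+})$ and $(D_{\nu^+})$.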
20 Centering steps
After the feasibility step the iterates are feasible for $(P_{\nu^+})$ and $(D_{\nu^+})$. The hard part in the analysis will be to guarantee that these iterates lie in the region where Newton's method is quadratically convergent, i.e., $\delta(x^f, s^f; \mu^+) \le 1/\sqrt{2}$.
After the feasibility step we perform centering steps in order to get iterates that moreover satisfy $x^T s = n\mu^+$ and $\delta(x, s; \mu^+) \le \tau$. By using the quadratic convergence of the Newton step, the required number of centering steps can easily be obtained. Assuming $\delta = \delta(x^f, s^f; \mu^+) \le 1/\sqrt{2}$, after $k$ centering steps we will have iterates $(x, y, s)$ that are still feasible for $(P_{\nu^+})$ and $(D_{\nu^+})$ and such that
$$\delta(x, s; \mu^+) \le \left(\frac{1}{\sqrt{2}}\right)^{2^k}.$$
So $k$ must satisfy
$$\left(\frac{1}{\sqrt{2}}\right)^{2^k} \le \tau,$$
which certainly holds if
$$k \ge \log_2\left(\log_2\frac{1}{\tau^2}\right).$$
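The required number of centering steps is a tiny computation; the sketch below (illustrative, the function name is an invention) evaluates the bound and doubles as a check that the inequality chain above is correct.

```python
import math

def centering_steps(tau):
    """Smallest integer k with (1/sqrt(2))^(2^k) <= tau,
    i.e. k >= log2(log2(1/tau^2))."""
    return math.ceil(math.log2(math.log2(1.0 / tau ** 2)))
```

For the value $\tau = 1/8$ used later, $\log_2(\log_2 64) = \log_2 6 \approx 2.585$, so $k = 3$ centering steps always suffice.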
21 Primal-Dual infeasible full-newton step IPM Input: Accuracy parameter ε > 0; barrier update parameter θ, 0 < θ < 1 threshold parameter τ > 0. begin x := x 0 > 0; y := y 0 ; s := s 0 > 0; x 0 s 0 = µ 0 e; µ = µ 0 ; while max ( x T s, b Ax, c A T y s ) ε do begin feasibility step: (x, y, s) := (x, y, s) + ( f x, f y, f s); µ-update: µ := (1 θ)µ; centering steps: end end while δ(x, s; µ) τ do (x, y, s) := (x, y, s) + ( x, y, s); endwhile Optimization Group 20
22 Analysis of the feasibility step
Defining
$$v = \sqrt{\frac{xs}{\mu}},\qquad d_x := \frac{v\,\Delta^f x}{x},\qquad d_s := \frac{v\,\Delta^f s}{s},$$
we have
$$x^f = x + \Delta^f x = x + \frac{x d_x}{v} = \frac{x}{v}\left(v + d_x\right),\qquad s^f = s + \Delta^f s = s + \frac{s d_s}{v} = \frac{s}{v}\left(v + d_s\right).$$
Hence
$$x^f s^f = xs + \left(s\Delta^f x + x\Delta^f s\right) + \Delta^f x\,\Delta^f s = \mu e + \Delta^f x\,\Delta^f s = \mu\left(e + d_x d_s\right).$$
Since $\mu v^2 = xs$, after division of both sides by $\mu^+$ we get
$$\left(v^f\right)^2 = \frac{x^f s^f}{\mu^+} = \frac{e + d_x d_s}{1 - \theta}.$$
Lemma 5 The iterates $\left(x^f, y^f, s^f\right)$ are strictly feasible if and only if $e + d_x d_s > 0$.
Corollary 2 The iterates $\left(x^f, y^f, s^f\right)$ are certainly strictly feasible if $\|d_x d_s\|_\infty < 1$.
23 Proximity after the feasibility step
In the sequel we denote
$$\delta\left(x^f, s^f; \mu^+\right) = \delta\left(v^f\right) = \tfrac{1}{2}\left\|v^f - \left(v^f\right)^{-1}\right\|,\qquad \text{where } \left(v^f\right)^2 = \frac{e + d_x d_s}{1 - \theta},$$
and
$$\omega(v) := \tfrac{1}{2}\sqrt{\|d_x\|^2 + \|d_s\|^2}.$$
This implies $\|d_x\| \le 2\omega(v)$ and $\|d_s\| \le 2\omega(v)$, and moreover
$$d_x^T d_s \le \|d_x\|\,\|d_s\| \le \tfrac{1}{2}\left(\|d_x\|^2 + \|d_s\|^2\right) \le 2\omega(v)^2,\qquad \|d_x d_s\|_\infty \le \|d_x\|\,\|d_s\| \le 2\omega(v)^2.$$
Lemma 6 If $\omega(v) < 1/\sqrt{2}$, then the iterates $\left(x^f, y^f, s^f\right)$ are strictly feasible.
Lemma 7 Let $\omega(v) < 1/\sqrt{2}$. Then one has
$$4\,\delta\left(v^f\right)^2 \le \frac{\theta^2 n}{1 - \theta} + \frac{2\omega(v)^2}{1 - \theta} + (1 - \theta)\,\frac{2\omega(v)^2}{1 - 2\omega(v)^2}.$$
24 Choice of θ
In order to have $\delta\left(v^f\right) \le 1/\sqrt{2}$, it follows from Lemma 7 that it suffices if
$$\frac{\theta^2 n}{1 - \theta} + \frac{2\omega(v)^2}{1 - \theta} + (1 - \theta)\,\frac{2\omega(v)^2}{1 - 2\omega(v)^2} \le 2. \qquad (1)$$
Lemma 8 Let $\omega(v) \le \tfrac{1}{2}$ and
$$\theta = \frac{\alpha}{\sqrt{2n}},\qquad 0 \le \alpha \le \sqrt{\frac{n}{n + 1}}. \qquad (2)$$
Then the iterates $\left(x^f, y^f, s^f\right)$ are strictly feasible and $\delta\left(v^f\right) \le 1/\sqrt{2}$.
At this stage we decide to choose $\theta := \alpha/\sqrt{2n}$ with $\alpha \le 1/\sqrt{2}$. Then we have
$$\omega(v) = \tfrac{1}{2}\sqrt{\|d_x\|^2 + \|d_s\|^2} \le \tfrac{1}{2}\ \Longrightarrow\ \delta\left(v^f\right) \le \frac{1}{\sqrt{2}}.$$
We proceed by considering the vectors $d_x$ and $d_s$ in more detail.
25 The scaled search directions $d_x$ and $d_s$
The system defining the search directions $\Delta^f x$, $\Delta^f y$ and $\Delta^f s$,
$$A\Delta^f x = \theta\nu r_b^0,\qquad A^T\Delta^f y + \Delta^f s = \theta\nu r_c^0,\qquad s\Delta^f x + x\Delta^f s = \mu e - xs,$$
with
$$v = \sqrt{\frac{xs}{\mu}},\qquad d_x := \frac{v\,\Delta^f x}{x},\qquad d_s := \frac{v\,\Delta^f s}{s},$$
can be expressed in terms of $d_x$ and $d_s$ as follows:
$$\bar{A}d_x = \theta\nu r_b^0,\qquad \bar{A}^T\frac{\Delta^f y}{\mu} + d_s = \theta\nu\, v s^{-1} r_c^0,\qquad d_x + d_s = v^{-1} - v,$$
where $\bar{A} = AV^{-1}X$.
26 Geometric interpretation of $\omega(v)$
[Figure: the vector $d_x$ lies in the affine space $\{\xi : \bar{A}\xi = \theta\nu r_b^0\}$ and $d_s$ in the affine space $\theta\nu v s^{-1} r_c^0 + \{\bar{A}^T\eta : \eta \in \mathbb{R}^m\}$; the two spaces meet at a right angle in the point $q$, and the lengths $2\omega(v)$ and $2\delta(v)$ are indicated.]
27 Upper bound for $\omega(v)$
Let us denote the null space of the matrix $\bar{A}$ as $\mathcal{L}$:
$$\mathcal{L} := \left\{\xi \in \mathbb{R}^n : \bar{A}\xi = 0\right\}.$$
Obviously, the affine space $\left\{\xi \in \mathbb{R}^n : \bar{A}\xi = \theta\nu r_b^0\right\}$ equals $d_x + \mathcal{L}$. The row space of $\bar{A}$ equals the orthogonal complement $\mathcal{L}^\perp$ of $\mathcal{L}$, and $d_s \in \theta\nu v s^{-1} r_c^0 + \mathcal{L}^\perp$.
Lemma 9 Let $q$ be the (unique) point in the intersection of the affine spaces $d_x + \mathcal{L}$ and $d_s + \mathcal{L}^\perp$. Then
$$2\omega(v) \le \sqrt{\|q\|^2 + \left(\|q\| + 2\delta(v)\right)^2}.$$
To simplify the presentation we will denote $\delta(x, s; \mu)$ below simply as $\delta$ (with $\delta \le \tau$). Recall that we need $2\omega(v) \le 1$. Thus we require $q$ to satisfy
$$\|q\|^2 + \left(\|q\| + 2\delta\right)^2 \le 1.$$
28 Upper bound for $\|q\|$ (1)
Recall that $q$ is the (unique) solution of the system
$$\bar{A}q = \theta\nu r_b^0,\qquad \bar{A}^T\xi + q = \theta\nu\, v s^{-1} r_c^0.$$
We proceed by deriving an upper bound for $\|q\|$. From the definition of $\bar{A}$ we deduce that $\bar{A} = \sqrt{\mu}\,AD$, where
$$D = \operatorname{diag}\left(\frac{x v^{-1}}{\sqrt{\mu}}\right) = \operatorname{diag}\left(\sqrt{\frac{x}{s}}\right) = \operatorname{diag}\left(\sqrt{\mu}\,v s^{-1}\right).$$
For the moment, let us write
$$r_b = \theta\nu r_b^0,\qquad r_c = \theta\nu r_c^0.$$
Then the system defining $q$ is equivalent to
$$\sqrt{\mu}\,AD\,q = r_b,\qquad \sqrt{\mu}\,DA^T\xi + q = \frac{1}{\sqrt{\mu}}\,D r_c.$$
29 Upper bound for $\|q\|$ (2)
This implies $\sqrt{\mu}\,q = q_1 + q_2$, where
$$q_1 := \left(I - DA^T\left(AD^2A^T\right)^{-1}AD\right)D r_c,\qquad q_2 := DA^T\left(AD^2A^T\right)^{-1}r_b.$$
Let $(\bar{y}, \bar{s})$ be such that $A^T\bar{y} + \bar{s} = c$ and $\bar{x}$ such that $A\bar{x} = b$. Then
$$r_b = \theta\nu r_b^0 = \theta\nu\left(b - Ax^0\right) = \theta\nu A\left(\bar{x} - x^0\right) = \theta\nu\, AD\left(D^{-1}\left(\bar{x} - x^0\right)\right),$$
$$r_c = \theta\nu r_c^0 = \theta\nu\left(c - A^T y^0 - s^0\right) = \theta\nu\left(A^T\left(\bar{y} - y^0\right) + \bar{s} - s^0\right).$$
Since $DA^T\left(\bar{y} - y^0\right)$ belongs to the row space of $AD$, we obtain
$$\|q_1\| \le \theta\nu\left\|D\left(\bar{s} - s^0\right)\right\|,\qquad \|q_2\| \le \theta\nu\left\|D^{-1}\left(\bar{x} - x^0\right)\right\|.$$
Since $\sqrt{\mu}\,q = q_1 + q_2$ and $q_1$ and $q_2$ are orthogonal, we may conclude that
$$\sqrt{\mu}\,\|q\| \le \theta\nu\sqrt{\left\|D\left(\bar{s} - s^0\right)\right\|^2 + \left\|D^{-1}\left(\bar{x} - x^0\right)\right\|^2},$$
where, as always, $\mu = \mu^0\nu$. We are still free to choose $\bar{x}$ and $\bar{s}$, subject to the constraints $A\bar{x} = b$ and $A^T\bar{y} + \bar{s} = c$.
30 Upper bound for $\|q\|$ (3)
$$\sqrt{\mu}\,\|q\| \le \theta\nu\sqrt{\left\|D\left(\bar{s} - s^0\right)\right\|^2 + \left\|D^{-1}\left(\bar{x} - x^0\right)\right\|^2}.$$
Let $x^*$ be an optimal solution of (P) and $(y^*, s^*)$ of (D) (assuming that these exist). Then $x^*$ and $s^*$ are complementary, i.e., $x^*s^* = 0$. Let $\zeta$ be such that $x^* + s^* \le \zeta e$. Starting the algorithm with
$$x^0 = s^0 = \zeta e,\qquad y^0 = 0,\qquad \mu^0 = \zeta^2,$$
and taking $\bar{x} = x^*$ and $\bar{s} = s^*$, the entries of the vectors $x^0 - x^*$ and $s^0 - s^*$ satisfy
$$0 \le x^0 - x^* \le \zeta e,\qquad 0 \le s^0 - s^* \le \zeta e.$$
Thus it follows that
$$\sqrt{\left\|D\left(s^* - s^0\right)\right\|^2 + \left\|D^{-1}\left(x^* - x^0\right)\right\|^2} \le \zeta\sqrt{\|De\|^2 + \left\|D^{-1}e\right\|^2} = \zeta\sqrt{e^T\left(\frac{x}{s} + \frac{s}{x}\right)}.$$
Substitution gives
$$\sqrt{\mu}\,\|q\| \le \theta\nu\zeta\sqrt{e^T\left(\frac{x}{s} + \frac{s}{x}\right)}.$$
31 Bounds for $x/s$ and $s/x$; choice of $\tau$
$$\sqrt{\mu}\,\|q\| \le \theta\nu\zeta\sqrt{e^T\left(\frac{x}{s} + \frac{s}{x}\right)}.$$
Recall that $x$ is feasible for $(P_\nu)$ and $(y, s)$ for $(D_\nu)$ and, moreover, $\delta(x, s; \mu) \le \tau$. Based on this information we need to estimate the sizes of the entries of the vectors $x/s$ and $s/x$. Since the concerning result does not belong to the mainstream of the analysis, we mention it without proof. If
$$\tau = \frac{1}{8}$$
and $\delta \le \tau$, then we have
$$\frac{x}{s} \le \frac{2\,x(\mu, \nu)^2}{\mu},\qquad \frac{s}{x} \le \frac{2\,s(\mu, \nu)^2}{\mu}.$$
32 Upper bound for $\|q\|$ (4)
Substitution of these bounds into
$$\sqrt{\mu}\,\|q\| \le \theta\nu\zeta\sqrt{e^T\left(\frac{x}{s} + \frac{s}{x}\right)}$$
gives
$$\sqrt{\mu}\,\|q\| \le \theta\nu\zeta\sqrt{2\,e^T\left(\frac{x(\mu, \nu)^2}{\mu} + \frac{s(\mu, \nu)^2}{\mu}\right)}.$$
This implies
$$\sqrt{\mu}\,\|q\| \le \theta\nu\zeta\sqrt{\frac{2}{\mu}}\sqrt{\|x(\mu, \nu)\|^2 + \|s(\mu, \nu)\|^2}.$$
Therefore, also using $\mu = \mu^0\nu = \zeta^2\nu$ and $\theta = \frac{\alpha}{\sqrt{2n}}$, we obtain the following upper bound for the norm of $q$:
$$\|q\| \le \frac{\alpha}{\zeta\sqrt{n}}\sqrt{\|x(\mu, \nu)\|^2 + \|s(\mu, \nu)\|^2}.$$
33 Choice of α
We have
$$\|q\| \le \frac{\alpha}{\zeta\sqrt{n}}\sqrt{\|x(\mu, \nu)\|^2 + \|s(\mu, \nu)\|^2}.$$
We define
$$\kappa(\zeta, \nu) = \frac{\sqrt{\|x(\mu, \nu)\|^2 + \|s(\mu, \nu)\|^2}}{\zeta\sqrt{2n}},\qquad 0 < \nu \le 1,\ \mu = \mu^0\nu,$$
and
$$\kappa(\zeta) = \max_{0 < \nu \le 1}\kappa(\zeta, \nu).$$
Then $\|q\|$ can be bounded above as follows:
$$\|q\| \le \alpha\,\kappa(\zeta)\sqrt{2}.$$
We found earlier that in order to have $\delta\left(v^f\right) \le 1/\sqrt{2}$, we should have $\|q\|^2 + \left(\|q\| + 2\delta\right)^2 \le 1$. Therefore, since $\delta \le \tau = \frac{1}{8}$, it suffices if $\|q\|$ satisfies
$$\|q\|^2 + \left(\|q\| + \tfrac{1}{4}\right)^2 \le 1.$$
So we certainly have $\delta\left(v^f\right) \le 1/\sqrt{2}$ if $\|q\| \le \frac{1}{2}$. Since $\|q\| \le \alpha\,\kappa(\zeta)\sqrt{2}$, the latter inequality is satisfied if we take
$$\alpha = \frac{1}{2\sqrt{2}\,\kappa(\zeta)}.$$
Note that since $x(\zeta^2, 1) = s(\zeta^2, 1) = \zeta e$, we have $\kappa(\zeta, 1) = 1$. As a consequence we obtain that $\kappa(\zeta) \ge 1$. We can prove that $\kappa(\zeta) \le \sqrt{2n}$.
34 Bound for $\kappa(\zeta)$ (1)
We have
$$Ax^* = b,\ 0 \le x^* \le \zeta e;\qquad A^T y^* + s^* = c,\ 0 \le s^* \le \zeta e;\qquad x^*s^* = 0.$$
To simplify notation, we denote $x = x(\mu, \nu)$, $y = y(\mu, \nu)$ and $s = s(\mu, \nu)$. Then
$$b - Ax = \nu\left(b - A\zeta e\right),\ x \ge 0;\qquad c - A^T y - s = \nu\left(c - \zeta e\right),\ s \ge 0;\qquad xs = \nu\zeta^2 e.$$
This implies
$$Ax^* - Ax = \nu\left(Ax^* - A\zeta e\right),\ x \ge 0;\qquad A^T y^* + s^* - A^T y - s = \nu\left(A^T y^* + s^* - \zeta e\right),\ s \ge 0;\qquad xs = \nu\zeta^2 e.$$
35 Bound for $\kappa(\zeta)$ (2)
$$Ax^* - Ax = \nu\left(Ax^* - A\zeta e\right),\ x \ge 0;\qquad A^T y^* + s^* - A^T y - s = \nu\left(A^T y^* + s^* - \zeta e\right),\ s \ge 0;\qquad xs = \nu\zeta^2 e.$$
We rewrite this system as
$$A\left(x^* - x - \nu x^* + \nu\zeta e\right) = 0,\qquad A^T\left(y^* - y - \nu y^*\right) = s - s^* + \nu s^* - \nu\zeta e,\qquad xs = \nu\zeta^2 e.$$
From this we deduce that
$$\left(x^* - x - \nu x^* + \nu\zeta e\right)^T\left(s^* - s - \nu s^* + \nu\zeta e\right) = 0.$$
Define
$$a := (1 - \nu)x^* + \nu\zeta e,\qquad b := (1 - \nu)s^* + \nu\zeta e.$$
We then have $a \ge \nu\zeta e$, $b \ge \nu\zeta e$ and $(a - x)^T(b - s) = 0$. The latter gives
$$a^T b + x^T s = a^T s + b^T x.$$
36 Bound for $\kappa(\zeta)$ (3)
Since $(x^*)^T s^* = 0$, $x^* + s^* \le \zeta e$ and $xs = \nu\zeta^2 e$, we may write
$$a^T b + x^T s = \left((1 - \nu)x^* + \nu\zeta e\right)^T\left((1 - \nu)s^* + \nu\zeta e\right) + \nu\zeta^2 n = \nu(1 - \nu)\left(x^* + s^*\right)^T\zeta e + \nu^2\zeta^2 n + \nu\zeta^2 n \le \nu(1 - \nu)\zeta^2 n + \nu^2\zeta^2 n + \nu\zeta^2 n = 2\nu\zeta^2 n.$$
Moreover, also using $a \ge \nu\zeta e$ and $b \ge \nu\zeta e$, we get
$$a^T s + b^T x = \left((1 - \nu)x^* + \nu\zeta e\right)^T s + \left((1 - \nu)s^* + \nu\zeta e\right)^T x = (1 - \nu)\left((x^*)^T s + (s^*)^T x\right) + \nu\zeta e^T(x + s) \ge \nu\zeta\left(\|s\|_1 + \|x\|_1\right).$$
Hence $\|s\|_1 + \|x\|_1 \le 2\zeta n$. Since $\|x\|^2 + \|s\|^2 \le \left(\|s\|_1 + \|x\|_1\right)^2$, it follows that
$$\frac{\sqrt{\|x\|^2 + \|s\|^2}}{\zeta\sqrt{2n}} \le \frac{\|s\|_1 + \|x\|_1}{\zeta\sqrt{2n}} \le \frac{2\zeta n}{\zeta\sqrt{2n}} = \sqrt{2n},$$
thus proving $\kappa(\zeta) \le \sqrt{2n}$.
37 Complexity analysis
We have found values for the parameters in the algorithm such that if at the start of an iteration the iterates satisfy $\delta(x, s; \mu) \le \tau = \frac{1}{8}$, then after the feasibility step the iterates satisfy $\delta(x, s; \mu^+) \le 1/\sqrt{2}$. At most
$$\left\lceil\log_2\left(\log_2\frac{1}{\tau^2}\right)\right\rceil = \left\lceil\log_2\left(\log_2 64\right)\right\rceil = 3$$
centering steps suffice to get iterates that satisfy $\delta(x, s; \mu^+) \le \tau$. So each (main) iteration consists of at most 4 so-called inner iterations, in each of which we need to compute a new search direction.
In each iteration both the duality gap and the residual vectors are reduced by the factor $1 - \theta$. Hence, using $(x^0)^T s^0 = n\zeta^2$ and defining
$$K := \max\left\{n\zeta^2,\ \left\|r_b^0\right\|,\ \left\|r_c^0\right\|\right\},$$
the total number of main iterations is bounded above by
$$\frac{1}{\theta}\log\frac{K}{\epsilon} = \frac{\sqrt{2n}}{\alpha}\log\frac{K}{\epsilon} = 2\sqrt{2}\,\kappa(\zeta)\sqrt{2n}\,\log\frac{K}{\epsilon} \le 4\,\kappa(\zeta)\sqrt{2n}\,\log\frac{K}{\epsilon}.$$
The total number of inner iterations is therefore bounded above by
$$16\,\kappa(\zeta)\sqrt{2n}\,\log\frac{\max\left\{n\zeta^2,\ \left\|r_b^0\right\|,\ \left\|r_c^0\right\|\right\}}{\epsilon},$$
where $\kappa(\zeta) \le \sqrt{2n}$.
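The final bound is easy to evaluate numerically; the sketch below (an illustration, with the hypothetical function name `iteration_bound`) plugs in problem data and falls back to the worst case $\kappa(\zeta) = \sqrt{2n}$ when no better value is known, in which case the bound becomes $32n\log(K/\epsilon)$, i.e., $O(n\log(K/\epsilon))$.

```python
import math

def iteration_bound(n, zeta, rb0_norm, rc0_norm, eps, kappa=None):
    """Evaluate the bound 16*kappa(zeta)*sqrt(2n)*log(K/eps) on the total
    number of inner iterations, K = max(n*zeta^2, ||rb0||, ||rc0||).
    Uses the worst case kappa = sqrt(2n) when kappa is not supplied."""
    if kappa is None:
        kappa = math.sqrt(2 * n)
    K = max(n * zeta ** 2, rb0_norm, rc0_norm)
    return 16 * kappa * math.sqrt(2 * n) * math.log(K / eps)
```

If Conjecture 1 below holds, $\kappa(\zeta) = 1$ could be substituted, reducing the bound by the factor $\sqrt{2n}$.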
38 Conjecture
Conjecture 1 If (P) and (D) are feasible and $x^* + s^* \le \zeta e$ for some pair of optimal solutions $x^*$ and $(y^*, s^*)$, then $\kappa(\zeta) = 1$.
Evidence was provided by a simple Matlab implementation of our algorithm. As input we used a primal-dual pair of randomly generated feasible problems with known optimal solutions $x^*$ and $(y^*, s^*)$, and we ran the algorithm with $\zeta = \left\|x^* + s^*\right\|_\infty$. This was done for various sizes of the problems and for at least $10^5$ instances. No counterexample to the conjecture was found.
[Figure: typical behavior of $\kappa(\zeta, \nu)$ as a function of $\nu$.]
The importance of the conjecture is evident: its truth would reduce the currently best iteration bound for IIPMs by a factor $\sqrt{2n}$.
39 Related conjecture
Conjecture 2 If (P) and (D) are such that $(x^0, y^0, s^0) = \zeta(e, 0, e)$ is a feasible triple, whereas $x^* + s^* \le \zeta e$ for some optimal triple $(x^*, y^*, s^*)$, then
$$\frac{\sqrt{\|x(\mu)\|^2 + \|s(\mu)\|^2}}{\zeta\sqrt{2n}} \le 1,\qquad 0 < \mu \le \zeta^2.$$
In other words,
$$\|x(\mu)\|^2 + \|s(\mu)\|^2 \le 2n\zeta^2,\qquad 0 < \mu \le \zeta^2.$$
40 Concluding remarks
The techniques that have been developed in the field of feasible IPMs, and which have now been known for almost twenty years, are sufficient to get an IIPM whose theoretical performance is as good as the currently best known theoretical performance of IIPMs. Following a well-known metaphor of Isaac Newton*, it looks as if a smooth pebble or pretty shell on the sea-shore of IPMs has been overlooked for a surprisingly long time.
The full Newton step method presented in this paper is not efficient from a practical point of view. But just as in the case of feasible IPMs, one might expect that large-update methods for IIPMs can be designed whose complexity is not worse than $\sqrt{n}$ times the iteration bound in this paper. Even better results for large-update methods might be obtained by changing the search direction, using methods that are based on kernel functions. This requires further investigation. Also extensions to second-order cone optimization, semidefinite optimization, linear complementarity problems, etc. seem to be within reach.
* "I do not know what I may appear to the world; but to myself I seem to have been only like a boy playing on the sea-shore, and diverting myself in now and then finding a smoother pebble or a prettier shell than ordinary, whilst the great ocean of truth lay all undiscovered before me" (Westfall).
41 Some references
M. Kojima, N. Megiddo, and S. Mizuno. A primal-dual infeasible-interior-point algorithm for linear programming. Mathematical Programming, 61, 1993.
M. Kojima, T. Noma, and A. Yoshise. Global convergence in infeasible-interior-point algorithms. Mathematical Programming, Series A, 65:43-72, 1994.
I. J. Lustig. Feasibility issues in a primal-dual interior-point method for linear programming. Mathematical Programming, 49:145-162, 1990/91.
S. Mizuno. Polynomiality of infeasible-interior-point algorithms for linear programming. Mathematical Programming, 67, 1994.
F. A. Potra. A quadratically convergent predictor-corrector method for solving linear programs from infeasible starting points. Mathematical Programming, 67(3), 1994.
F. A. Potra. An infeasible-interior-point predictor-corrector algorithm for linear programming. SIAM Journal on Optimization, 6(1):19-32, 1996.
C. Roos. A full-Newton step O(n) infeasible interior-point algorithm for linear optimization. SIAM Journal on Optimization, 16(4), 2006.
C. Roos, T. Terlaky, and J.-Ph. Vial. Interior-Point Methods for Linear Optimization. Springer (2nd edition of Theory and Algorithms for Linear Optimization. An Interior-Point Approach, John Wiley & Sons, Chichester, UK, 1997).
M. Salahi, T. Terlaky, and G. Zhang. The complexity of self-regular proximity based infeasible IPMs. Technical Report 2003/3, Advanced Optimization Laboratory, McMaster University, Hamilton, Ontario, Canada, 2003.
R. J. Vanderbei. Linear Programming: Foundations and Extensions. Kluwer Academic Publishers, Boston, USA, 1996.
S. J. Wright. Primal-Dual Interior-Point Methods. SIAM, Philadelphia, 1997.
Y. Ye. Interior Point Algorithms, Theory and Analysis. John Wiley and Sons, Chichester, UK, 1997.
More informationA semidefinite relaxation scheme for quadratically constrained quadratic problems with an additional linear constraint
Iranian Journal of Operations Research Vol. 2, No. 2, 20, pp. 29-34 A semidefinite relaxation scheme for quadratically constrained quadratic problems with an additional linear constraint M. Salahi Semidefinite
More informationInfeasible Full-Newton-Step Interior-Point Method for the Linear Complementarity Problems
Georgia Southern University Digital Commons@Georgia Southern Electronic Theses & Dissertations Graduate Studies, Jack N. Averitt College of Fall 2012 Infeasible Full-Newton-Step Interior-Point Method for
More informationInterior Point Methods for Linear Programming: Motivation & Theory
School of Mathematics T H E U N I V E R S I T Y O H F E D I N B U R G Interior Point Methods for Linear Programming: Motivation & Theory Jacek Gondzio Email: J.Gondzio@ed.ac.uk URL: http://www.maths.ed.ac.uk/~gondzio
More informationPrimal-Dual Interior-Point Methods. Javier Peña Convex Optimization /36-725
Primal-Dual Interior-Point Methods Javier Peña Convex Optimization 10-725/36-725 Last time: duality revisited Consider the problem min x subject to f(x) Ax = b h(x) 0 Lagrangian L(x, u, v) = f(x) + u T
More informationA Simpler and Tighter Redundant Klee-Minty Construction
A Simpler and Tighter Redundant Klee-Minty Construction Eissa Nematollahi Tamás Terlaky October 19, 2006 Abstract By introducing redundant Klee-Minty examples, we have previously shown that the central
More informationA tight iteration-complexity upper bound for the MTY predictor-corrector algorithm via redundant Klee-Minty cubes
A tight iteration-complexity upper bound for the MTY predictor-corrector algorithm via redundant Klee-Minty cubes Murat Mut Tamás Terlaky Department of Industrial and Systems Engineering Lehigh University
More informationInterior Point Methods for Mathematical Programming
Interior Point Methods for Mathematical Programming Clóvis C. Gonzaga Federal University of Santa Catarina, Florianópolis, Brazil EURO - 2013 Roma Our heroes Cauchy Newton Lagrange Early results Unconstrained
More informationNew Interior Point Algorithms in Linear Programming
AMO - Advanced Modeling and Optimization, Volume 5, Number 1, 2003 New Interior Point Algorithms in Linear Programming Zsolt Darvay Abstract In this paper the abstract of the thesis New Interior Point
More informationCS711008Z Algorithm Design and Analysis
CS711008Z Algorithm Design and Analysis Lecture 8 Linear programming: interior point method Dongbo Bu Institute of Computing Technology Chinese Academy of Sciences, Beijing, China 1 / 31 Outline Brief
More informationFull-Newton-Step Interior-Point Method for the Linear Complementarity Problems
Georgia Southern University Digital Commons@Georgia Southern Electronic Theses and Dissertations Graduate Studies, Jack N. Averitt College of Summer 2011 Full-Newton-Step Interior-Point Method for the
More informationAn O(nL) Infeasible-Interior-Point Algorithm for Linear Programming arxiv: v2 [math.oc] 29 Jun 2015
An O(nL) Infeasible-Interior-Point Algorithm for Linear Programming arxiv:1506.06365v [math.oc] 9 Jun 015 Yuagang Yang and Makoto Yamashita September 8, 018 Abstract In this paper, we propose an arc-search
More informationIMPLEMENTING THE NEW SELF-REGULAR PROXIMITY BASED IPMS
IMPLEMENTING THE NEW SELF-REGULAR PROXIMITY BASED IPMS IMPLEMENTING THE NEW SELF-REGULAR PROXIMITY BASED IPMS By Xiaohang Zhu A thesis submitted to the School of Graduate Studies in Partial Fulfillment
More informationOptimization: Then and Now
Optimization: Then and Now Optimization: Then and Now Optimization: Then and Now Why would a dynamicist be interested in linear programming? Linear Programming (LP) max c T x s.t. Ax b αi T x b i for i
More informationInterior-Point Methods
Interior-Point Methods Stephen Wright University of Wisconsin-Madison Simons, Berkeley, August, 2017 Wright (UW-Madison) Interior-Point Methods August 2017 1 / 48 Outline Introduction: Problems and Fundamentals
More informationPrimal-Dual Interior-Point Methods. Ryan Tibshirani Convex Optimization /36-725
Primal-Dual Interior-Point Methods Ryan Tibshirani Convex Optimization 10-725/36-725 Given the problem Last time: barrier method min x subject to f(x) h i (x) 0, i = 1,... m Ax = b where f, h i, i = 1,...
More informationAn Infeasible Interior Point Method for the Monotone Linear Complementarity Problem
Int. Journal of Math. Analysis, Vol. 1, 2007, no. 17, 841-849 An Infeasible Interior Point Method for the Monotone Linear Complementarity Problem Z. Kebbiche 1 and A. Keraghel Department of Mathematics,
More informationInterior Point Methods in Mathematical Programming
Interior Point Methods in Mathematical Programming Clóvis C. Gonzaga Federal University of Santa Catarina, Brazil Journées en l honneur de Pierre Huard Paris, novembre 2008 01 00 11 00 000 000 000 000
More informationA Generalized Homogeneous and Self-Dual Algorithm. for Linear Programming. February 1994 (revised December 1994)
A Generalized Homogeneous and Self-Dual Algorithm for Linear Programming Xiaojie Xu Yinyu Ye y February 994 (revised December 994) Abstract: A generalized homogeneous and self-dual (HSD) infeasible-interior-point
More informationBarrier Method. Javier Peña Convex Optimization /36-725
Barrier Method Javier Peña Convex Optimization 10-725/36-725 1 Last time: Newton s method For root-finding F (x) = 0 x + = x F (x) 1 F (x) For optimization x f(x) x + = x 2 f(x) 1 f(x) Assume f strongly
More informationA priori bounds on the condition numbers in interior-point methods
A priori bounds on the condition numbers in interior-point methods Florian Jarre, Mathematisches Institut, Heinrich-Heine Universität Düsseldorf, Germany. Abstract Interior-point methods are known to be
More informationOperations Research Lecture 4: Linear Programming Interior Point Method
Operations Research Lecture 4: Linear Programg Interior Point Method Notes taen by Kaiquan Xu@Business School, Nanjing University April 14th 2016 1 The affine scaling algorithm one of the most efficient
More information4TE3/6TE3. Algorithms for. Continuous Optimization
4TE3/6TE3 Algorithms for Continuous Optimization (Algorithms for Constrained Nonlinear Optimization Problems) Tamás TERLAKY Computing and Software McMaster University Hamilton, November 2005 terlaky@mcmaster.ca
More information10 Numerical methods for constrained problems
10 Numerical methods for constrained problems min s.t. f(x) h(x) = 0 (l), g(x) 0 (m), x X The algorithms can be roughly divided the following way: ˆ primal methods: find descent direction keeping inside
More informationInterior Point Methods. We ll discuss linear programming first, followed by three nonlinear problems. Algorithms for Linear Programming Problems
AMSC 607 / CMSC 764 Advanced Numerical Optimization Fall 2008 UNIT 3: Constrained Optimization PART 4: Introduction to Interior Point Methods Dianne P. O Leary c 2008 Interior Point Methods We ll discuss
More informationDuality revisited. Javier Peña Convex Optimization /36-725
Duality revisited Javier Peña Conve Optimization 10-725/36-725 1 Last time: barrier method Main idea: approimate the problem f() + I C () with the barrier problem f() + 1 t φ() tf() + φ() where t > 0 and
More informationPrimal-Dual Interior-Point Methods. Ryan Tibshirani Convex Optimization
Primal-Dual Interior-Point Methods Ryan Tibshirani Convex Optimization 10-725 Given the problem Last time: barrier method min x subject to f(x) h i (x) 0, i = 1,... m Ax = b where f, h i, i = 1,... m are
More informationConvex Optimization and l 1 -minimization
Convex Optimization and l 1 -minimization Sangwoon Yun Computational Sciences Korea Institute for Advanced Study December 11, 2009 2009 NIMS Thematic Winter School Outline I. Convex Optimization II. l
More informationLecture 17: Primal-dual interior-point methods part II
10-725/36-725: Convex Optimization Spring 2015 Lecture 17: Primal-dual interior-point methods part II Lecturer: Javier Pena Scribes: Pinchao Zhang, Wei Ma Note: LaTeX template courtesy of UC Berkeley EECS
More informationOn self-concordant barriers for generalized power cones
On self-concordant barriers for generalized power cones Scott Roy Lin Xiao January 30, 2018 Abstract In the study of interior-point methods for nonsymmetric conic optimization and their applications, Nesterov
More informationLecture 1. 1 Conic programming. MA 796S: Convex Optimization and Interior Point Methods October 8, Consider the conic program. min.
MA 796S: Convex Optimization and Interior Point Methods October 8, 2007 Lecture 1 Lecturer: Kartik Sivaramakrishnan Scribe: Kartik Sivaramakrishnan 1 Conic programming Consider the conic program min s.t.
More informationSecond-order cone programming
Outline Second-order cone programming, PhD Lehigh University Department of Industrial and Systems Engineering February 10, 2009 Outline 1 Basic properties Spectral decomposition The cone of squares The
More informationLecture 9 Sequential unconstrained minimization
S. Boyd EE364 Lecture 9 Sequential unconstrained minimization brief history of SUMT & IP methods logarithmic barrier function central path UMT & SUMT complexity analysis feasibility phase generalized inequalities
More informationA Second-Order Path-Following Algorithm for Unconstrained Convex Optimization
A Second-Order Path-Following Algorithm for Unconstrained Convex Optimization Yinyu Ye Department is Management Science & Engineering and Institute of Computational & Mathematical Engineering Stanford
More informationNumerical Optimization
Linear Programming - Interior Point Methods Computer Science and Automation Indian Institute of Science Bangalore 560 012, India. NPTEL Course on Example 1 Computational Complexity of Simplex Algorithm
More informationConvergence Analysis of Inexact Infeasible Interior Point Method. for Linear Optimization
Convergence Analysis of Inexact Infeasible Interior Point Method for Linear Optimization Ghussoun Al-Jeiroudi Jacek Gondzio School of Mathematics The University of Edinburgh Mayfield Road, Edinburgh EH9
More informationConvex Optimization. Newton s method. ENSAE: Optimisation 1/44
Convex Optimization Newton s method ENSAE: Optimisation 1/44 Unconstrained minimization minimize f(x) f convex, twice continuously differentiable (hence dom f open) we assume optimal value p = inf x f(x)
More informationA Polynomial Column-wise Rescaling von Neumann Algorithm
A Polynomial Column-wise Rescaling von Neumann Algorithm Dan Li Department of Industrial and Systems Engineering, Lehigh University, USA Cornelis Roos Department of Information Systems and Algorithms,
More informationA primal-simplex based Tardos algorithm
A primal-simplex based Tardos algorithm Shinji Mizuno a, Noriyoshi Sukegawa a, and Antoine Deza b a Graduate School of Decision Science and Technology, Tokyo Institute of Technology, 2-12-1-W9-58, Oo-Okayama,
More informationLecture 6: Conic Optimization September 8
IE 598: Big Data Optimization Fall 2016 Lecture 6: Conic Optimization September 8 Lecturer: Niao He Scriber: Juan Xu Overview In this lecture, we finish up our previous discussion on optimality conditions
More informationAn EP theorem for dual linear complementarity problems
An EP theorem for dual linear complementarity problems Tibor Illés, Marianna Nagy and Tamás Terlaky Abstract The linear complementarity problem (LCP ) belongs to the class of NP-complete problems. Therefore
More informationChapter 6 Interior-Point Approach to Linear Programming
Chapter 6 Interior-Point Approach to Linear Programming Objectives: Introduce Basic Ideas of Interior-Point Methods. Motivate further research and applications. Slide#1 Linear Programming Problem Minimize
More informationNonsymmetric potential-reduction methods for general cones
CORE DISCUSSION PAPER 2006/34 Nonsymmetric potential-reduction methods for general cones Yu. Nesterov March 28, 2006 Abstract In this paper we propose two new nonsymmetric primal-dual potential-reduction
More informationPrimal-dual IPM with Asymmetric Barrier
Primal-dual IPM with Asymmetric Barrier Yurii Nesterov, CORE/INMA (UCL) September 29, 2008 (IFOR, ETHZ) Yu. Nesterov Primal-dual IPM with Asymmetric Barrier 1/28 Outline 1 Symmetric and asymmetric barriers
More informationA new Primal-Dual Interior-Point Algorithm for Second-Order Cone Optimization
A new Primal-Dual Interior-Point Algorithm for Second-Order Cone Optimization Y Q Bai G Q Wang C Roos November 4, 004 Department of Mathematics, College Science, Shanghai University, Shanghai, 00436 Faculty
More informationLargest dual ellipsoids inscribed in dual cones
Largest dual ellipsoids inscribed in dual cones M. J. Todd June 23, 2005 Abstract Suppose x and s lie in the interiors of a cone K and its dual K respectively. We seek dual ellipsoidal norms such that
More informationLecture 15 Newton Method and Self-Concordance. October 23, 2008
Newton Method and Self-Concordance October 23, 2008 Outline Lecture 15 Self-concordance Notion Self-concordant Functions Operations Preserving Self-concordance Properties of Self-concordant Functions Implications
More informationinforms DOI /moor.xxxx.xxxx c 200x INFORMS
MATHEMATICS OF OPERATIONS RESEARCH Vol. xx, No. x, Xxxxxxx 2x, pp. xxx xxx ISSN 364-765X EISSN 526-547 x xxx xxx informs DOI.287/moor.xxxx.xxxx c 2x INFORMS A New Complexity Result on Solving the Markov
More informationLecture 5. The Dual Cone and Dual Problem
IE 8534 1 Lecture 5. The Dual Cone and Dual Problem IE 8534 2 For a convex cone K, its dual cone is defined as K = {y x, y 0, x K}. The inner-product can be replaced by x T y if the coordinates of the
More informationIntroduction to Nonlinear Stochastic Programming
School of Mathematics T H E U N I V E R S I T Y O H F R G E D I N B U Introduction to Nonlinear Stochastic Programming Jacek Gondzio Email: J.Gondzio@ed.ac.uk URL: http://www.maths.ed.ac.uk/~gondzio SPS
More informationCurvature as a Complexity Bound in Interior-Point Methods
Lehigh University Lehigh Preserve Theses and Dissertations 2014 Curvature as a Complexity Bound in Interior-Point Methods Murat Mut Lehigh University Follow this and additional works at: http://preserve.lehigh.edu/etd
More informationPrimal-Dual Interior-Point Methods for Linear Programming based on Newton s Method
Primal-Dual Interior-Point Methods for Linear Programming based on Newton s Method Robert M. Freund March, 2004 2004 Massachusetts Institute of Technology. The Problem The logarithmic barrier approach
More informationThe Q Method for Symmetric Cone Programmin
The Q Method for Symmetric Cone Programming The Q Method for Symmetric Cone Programmin Farid Alizadeh and Yu Xia alizadeh@rutcor.rutgers.edu, xiay@optlab.mcma Large Scale Nonlinear and Semidefinite Progra
More information1. Introduction and background. Consider the primal-dual linear programs (LPs)
SIAM J. OPIM. Vol. 9, No. 1, pp. 207 216 c 1998 Society for Industrial and Applied Mathematics ON HE DIMENSION OF HE SE OF RIM PERURBAIONS FOR OPIMAL PARIION INVARIANCE HARVEY J. REENBER, ALLEN. HOLDER,
More informationLP. Kap. 17: Interior-point methods
LP. Kap. 17: Interior-point methods the simplex algorithm moves along the boundary of the polyhedron P of feasible solutions an alternative is interior-point methods they find a path in the interior of
More informationOn the Sandwich Theorem and a approximation algorithm for MAX CUT
On the Sandwich Theorem and a 0.878-approximation algorithm for MAX CUT Kees Roos Technische Universiteit Delft Faculteit Electrotechniek. Wiskunde en Informatica e-mail: C.Roos@its.tudelft.nl URL: http://ssor.twi.tudelft.nl/
More information15. Conic optimization
L. Vandenberghe EE236C (Spring 216) 15. Conic optimization conic linear program examples modeling duality 15-1 Generalized (conic) inequalities Conic inequality: a constraint x K where K is a convex cone
More informationIMPLEMENTATION OF INTERIOR POINT METHODS
IMPLEMENTATION OF INTERIOR POINT METHODS IMPLEMENTATION OF INTERIOR POINT METHODS FOR SECOND ORDER CONIC OPTIMIZATION By Bixiang Wang, Ph.D. A Thesis Submitted to the School of Graduate Studies in Partial
More informationPrimal-dual relationship between Levenberg-Marquardt and central trajectories for linearly constrained convex optimization
Primal-dual relationship between Levenberg-Marquardt and central trajectories for linearly constrained convex optimization Roger Behling a, Clovis Gonzaga b and Gabriel Haeser c March 21, 2013 a Department
More information12. Interior-point methods
12. Interior-point methods Convex Optimization Boyd & Vandenberghe inequality constrained minimization logarithmic barrier function and central path barrier method feasibility and phase I methods complexity
More informationDetecting Infeasibility in Infeasible-Interior-Point Methods for Optimization
Detecting Infeasibility in Infeasible-Interior-Point Methods for Optimization M. J. Todd January 16, 2003 Abstract We study interior-point methods for optimization problems in the case of infeasibility
More informationAN INTERIOR POINT METHOD, BASED ON RANK-ONE UPDATES, Jos F. Sturm 1 and Shuzhong Zhang 2. Erasmus University Rotterdam ABSTRACT
October 13, 1995. Revised November 1996. AN INTERIOR POINT METHOD, BASED ON RANK-ONE UPDATES, FOR LINEAR PROGRAMMING Jos F. Sturm 1 Shuzhong Zhang Report 9546/A, Econometric Institute Erasmus University
More informationPrimal-Dual Interior-Point Methods
Primal-Dual Interior-Point Methods Lecturer: Aarti Singh Co-instructor: Pradeep Ravikumar Convex Optimization 10-725/36-725 Outline Today: Primal-dual interior-point method Special case: linear programming
More informationConic Linear Optimization and its Dual. yyye
Conic Linear Optimization and Appl. MS&E314 Lecture Note #04 1 Conic Linear Optimization and its Dual Yinyu Ye Department of Management Science and Engineering Stanford University Stanford, CA 94305, U.S.A.
More informationFinding a point in the relative interior of a polyhedron
Report no. NA-07/01 Finding a point in the relative interior of a polyhedron Coralia Cartis Rutherford Appleton Laboratory, Numerical Analysis Group Nicholas I. M. Gould Oxford University, Numerical Analysis
More informationMotivating examples Introduction to algorithms Simplex algorithm. On a particular example General algorithm. Duality An application to game theory
Instructor: Shengyu Zhang 1 LP Motivating examples Introduction to algorithms Simplex algorithm On a particular example General algorithm Duality An application to game theory 2 Example 1: profit maximization
More informationLecture: Algorithms for LP, SOCP and SDP
1/53 Lecture: Algorithms for LP, SOCP and SDP Zaiwen Wen Beijing International Center For Mathematical Research Peking University http://bicmr.pku.edu.cn/~wenzw/bigdata2018.html wenzw@pku.edu.cn Acknowledgement:
More informationLecture 14 Barrier method
L. Vandenberghe EE236A (Fall 2013-14) Lecture 14 Barrier method centering problem Newton decrement local convergence of Newton method short-step barrier method global convergence of Newton method predictor-corrector
More information