On Superlinear Convergence of Infeasible Interior-Point Algorithms for Linearly Constrained Convex Programs *


Computational Optimization and Applications, 8 (1997). © 1997 Kluwer Academic Publishers. Manufactured in The Netherlands.

RENATO D.C. MONTEIRO AND FANGJUN ZHOU
School of Industrial and Systems Engineering, Georgia Institute of Technology, Atlanta, GA

Received July 18, 1995; Revised April 18, 1996; Accepted July 12, 1996

Abstract. This note derives bounds on the length of the primal-dual affine scaling directions associated with a linearly constrained convex program satisfying the following conditions: (1) the problem has a solution satisfying strict complementarity, and (2) the Hessian of the objective function satisfies a certain invariance property. We illustrate the usefulness of these bounds by establishing the superlinear convergence of the algorithm presented in Wright and Ralph [22] for solving the optimality conditions associated with a linearly constrained convex program satisfying the above conditions.

Keywords: infeasible-interior-point algorithm, affine scaling, convex program, superlinear convergence

1. Introduction

During the past few years, we have seen the appearance of many papers dealing with primal-dual (feasible and infeasible) interior-point algorithms for linear programs (LP), convex quadratic programs (QP), monotone linear complementarity problems (LCP) and monotone nonlinear complementarity problems (NCP) that are superlinearly or quadratically convergent. For LP and QP, these works include [1, 4, 5, 19, 23, 25, 27, 28, 29]. For LCP, we mention the papers [10, 11, 12, 13, 21, 24], and for NCP, we cite [3, 14, 15, 22].
In this paper we are interested in the superlinear convergence analysis of infeasible-interior-point algorithms for solving the linearly constrained convex program

\[ \min_x \; f(x) \quad \text{subject to} \quad Ax = b, \; x \ge 0, \tag{1} \]

where $x \in \mathbb{R}^n$, $A \in \mathbb{R}^{m \times n}$, $b \in \mathbb{R}^m$, $f : \mathbb{R}^n \to \mathbb{R}$ is a sufficiently smooth convex function, the feasible set $\{x : Ax = b,\ x \ge 0\}$ is nonempty, and $m < n$. A key result used in the superlinear convergence analysis of several feasible and infeasible interior-point algorithms for convex QP and monotone LCP problems is the fact that the length of the primal-dual affine scaling direction at a given primal-dual infeasible interior point $(x, s, y) \in \mathbb{R}^{2n}_{++} \times \mathbb{R}^m$ satisfying some centrality condition is bounded above by

* The work was based on research supported by the Office of Naval Research under grants N and N.
$C x^T s$, for some constant $C > 0$, whenever the problem has a solution satisfying strict complementarity. The goal of this paper is to show that the primal-dual affine scaling directions associated with problem (1) also satisfy a similar bound whenever the Hessian $\nabla^2 f(\cdot)$ of the objective function $f(\cdot)$ satisfies a certain invariance property and the problem has a solution satisfying strict complementarity. The invariance property is satisfied by all functions of the form $f(x) = u(Ex) + c^T x$, where $E \in \mathbb{R}^{l \times n}$, $c \in \mathbb{R}^n$ and $u : \mathbb{R}^l \to \mathbb{R}$ is a twice continuously differentiable function such that $\nabla^2 u(y) > 0$ for all $y \in \mathbb{R}^l$. We illustrate the usefulness of these bounds by establishing the superlinear convergence of the algorithm presented in Wright and Ralph [22] for solving the (mixed) NCP (see relations (2)–(6) below) determined by the optimality conditions associated with a convex program (1) satisfying the following two conditions: the Hessian $\nabla^2 f(\cdot)$ satisfies the invariance property cited above, and (1) has a solution satisfying strict complementarity. We should mention that the bounds derived in this paper can also be used to establish the superlinear convergence of other algorithms for solving (2)–(6). For example, Monteiro and Wright [8] develop a primal-dual feasible-interior-point algorithm for solving (2)–(6) which can be shown to converge superlinearly with the help of these bounds. Since these bounds hold for both feasible and infeasible points, we chose an infeasible-interior-point algorithm such as the one developed by Wright and Ralph as the focus of our presentation. An interesting question, which we do not attempt to answer in this paper, is whether the bounds derived here can also be used to establish the superlinear convergence of the infeasible-interior-point method introduced by Kojima, Megiddo and Noma [3] without assuming the existence of a unique nondegenerate solution. The following notation is used throughout the paper.
$\mathbb{R}^p$, $\mathbb{R}^p_+$ and $\mathbb{R}^p_{++}$ denote the $p$-dimensional Euclidean space, the nonnegative orthant of $\mathbb{R}^p$ and the positive orthant of $\mathbb{R}^p$, respectively. The set of all $p \times q$ matrices with real entries is denoted by $\mathbb{R}^{p \times q}$. The diagonal matrix corresponding to a vector $u$ is denoted by $\mathrm{diag}(u)$. The $i$th component of a vector $u \in \mathbb{R}^p$ is denoted by $u_i$ and, for an index set $\alpha \subset \{1,\dots,p\}$, the subvector $[u_i]_{i \in \alpha}$ is denoted by $u_\alpha$. If $\alpha \subset \{1,\dots,p\}$, $\beta \subset \{1,\dots,q\}$ and $Q \in \mathbb{R}^{p \times q}$, we let $Q_{\alpha\beta}$ denote the submatrix $[Q_{ij}]_{i \in \alpha,\, j \in \beta}$; if $\beta = \{1,\dots,q\}$ we denote $Q_{\alpha\beta}$ simply by $Q_\alpha$, and if $\alpha = \{1,\dots,p\}$ we also denote $Q_{\alpha\beta}$ by $Q_\beta$. For a vector $u$, the Euclidean norm, the 1-norm and the $\infty$-norm are denoted by $\|\cdot\|$, $\|\cdot\|_1$ and $\|\cdot\|_\infty$, respectively. Given a matrix $Q \in \mathbb{R}^{p \times q}$, we let $\mathrm{Range}(Q) \equiv \{Qv : v \in \mathbb{R}^q\}$ and $\mathrm{Null}(Q) \equiv \{v \in \mathbb{R}^q : Qv = 0\}$. We say that $(B, N)$ is a partition of $\{1,\dots,p\}$ if $B \cup N = \{1,\dots,p\}$ and $B \cap N = \emptyset$. The superscript $T$ denotes transpose. For $u, v \in \mathbb{R}^n$, we let $[u, v] \equiv \{tu + (1-t)v : t \in [0,1]\}$ denote the line segment whose endpoints are $u$ and $v$.

2. Description of the problem and the main results

In this section we introduce the notation, terminology and assumptions to be used throughout the paper. We also state the main result of this paper on the existence of certain bounds on the length of the primal-dual affine scaling directions associated with (1) when this problem has a solution satisfying strict complementarity and the Hessian $\nabla^2 f(\cdot)$ of the objective function of (1) satisfies a certain invariance condition. Finally, we discuss the implication of
these bounds to the superlinear convergence analysis of infeasible-interior-point algorithms for solving (1). It is well-known that $x \in \mathbb{R}^n$ is an optimal solution of (1) if and only if there exists $(y, s) \in \mathbb{R}^m \times \mathbb{R}^n$ satisfying the following first-order optimality conditions for (1):

\[ A^T y + s = \nabla f(x), \tag{2} \]
\[ Ax = b, \tag{3} \]
\[ x \ge 0, \tag{4} \]
\[ s \ge 0, \tag{5} \]
\[ x^T s = 0. \tag{6} \]

We start by stating the assumptions that will be used throughout our presentation.

Assumption 1: $\mathrm{rank}(A) = m$.

Assumption 2: the function $f$ is convex and twice continuously differentiable.

Assumption 3: there exists a partition $(B, N)$ of $\{1,\dots,n\}$ and a solution $(x^*, s^*, y^*)$ of (2)–(6) such that $x^*_B > 0$ and $s^*_N > 0$.

Assumption 4: the subspace $\mathcal{A}(x) \equiv \mathrm{Null}(\nabla^2 f(x)) \cap \mathrm{Null}(A)$ is constant for every $x \in \mathbb{R}^n_+$.

Assumptions 1 and 2 are quite standard. It is well-known that Assumption 3 plays an important role in proving that a large class of primal-dual interior-point algorithms converges superlinearly. Monteiro and Wright [9] show that this assumption is indeed necessary for superlinear convergence of methods that behave like Newton's method near the solution, including the one discussed in this note. We next discuss Assumption 4, which looks unusual at first sight. First note that Assumption 4 clearly holds when $f$ is a convex quadratic function. More generally, it is easily seen that any function of the form $f(x) = u(Ex) + c^T x$, where $E \in \mathbb{R}^{l \times n}$, $c \in \mathbb{R}^n$ and $u : \mathbb{R}^l \to \mathbb{R}$ is a twice continuously differentiable function such that $\nabla^2 u(y) > 0$ for all $y \in \mathbb{R}^l$, satisfies Assumption 4. Conversely, under Assumption 2, it follows as a consequence of Lemma 5.1 of [7] that if the stronger condition that $\mathrm{Null}(\nabla^2 f(x))$ be constant on $\mathbb{R}^n$ holds, then $f$ has the above form. In view of Assumption 4, from now on we denote the constant subspace $\mathcal{A}(x)$, for $x \in \mathbb{R}^n_+$, simply by $\mathcal{A}$. Any vector $d \in \mathbb{R}^n$ can be written as

\[ d = d_{\mathcal{A}} + d^{\perp}, \qquad d_{\mathcal{A}} \in \mathcal{A}, \quad d^{\perp} \in \mathcal{A}^{\perp}, \tag{7} \]

where $\mathcal{A}^{\perp}$ denotes the orthogonal complement of $\mathcal{A}$.
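Assumption 4 can be checked numerically on instances of the form above. The sketch below is an illustration and is not code from the paper: it takes $u(y) = \sum_i e^{y_i}$, whose Hessian is a positive diagonal, so that $\nabla^2 f(x) = E^T \mathrm{diag}(e^{Ex}) E$ and hence $\mathrm{Null}(\nabla^2 f(x)) = \mathrm{Null}(E)$ at every $x$; the subspace $\mathcal{A}(x)$ is computed as the null space of the stacked matrix $[\nabla^2 f(x); A]$ and compared across sample points via orthogonal projectors. All dimensions and helper names are assumptions chosen for the example.

```python
import numpy as np

# Illustrative check of Assumption 4 (not code from the paper): take
# f(x) = u(Ex) + c^T x with u(y) = sum_i exp(y_i), so that
# Hess f(x) = E^T diag(exp(Ex)) E and Null(Hess f(x)) = Null(E) for all x.
# Since A(x) = Null(Hess f(x)) ∩ Null(A) = Null([Hess f(x); A]), we compare
# orthogonal projectors onto that null space at several sample points.
rng = np.random.default_rng(0)
m, l, n = 2, 2, 5
A = rng.standard_normal((m, n))
E = rng.standard_normal((l, n))

def null_projector(M, tol=1e-8):
    """Orthogonal projector onto Null(M), computed via the SVD."""
    _, sv, Vt = np.linalg.svd(M)
    rank = int(np.sum(sv > tol))
    V = Vt[rank:].T            # orthonormal basis of the null space
    return V @ V.T

projectors = []
for _ in range(5):
    x = rng.uniform(-1.0, 1.0, n)
    H = E.T @ np.diag(np.exp(E @ x)) @ E        # Hessian of f at x
    projectors.append(null_projector(np.vstack([H, A])))

# The projector, and hence the subspace A(x), is the same at every x.
assert all(np.allclose(P, projectors[0], atol=1e-6) for P in projectors[1:])
```

For a generic convex $f$ whose Hessian null space rotates with $x$, the projectors computed this way would differ from point to point, which is exactly what Assumption 4 rules out.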
The primal-dual affine scaling search direction $(\Delta x, \Delta s, \Delta y)$ at a given infeasible interior point $(x, s, y) \in \mathbb{R}^{2n}_{++} \times \mathbb{R}^m$ is computed by applying one step of Newton's method to the nonlinear system defined by (2), (3) and $XSe = 0$, where $X \equiv \mathrm{diag}(x)$, $S \equiv \mathrm{diag}(s)$ and $e = (1,\dots,1)^T \in \mathbb{R}^n$. Hence

\[ S\,\Delta x + X\,\Delta s = -XSe, \tag{8} \]
\[ A\,\Delta x = -(Ax - b), \tag{9} \]
\[ -\nabla^2 f(x)\,\Delta x + A^T \Delta y + \Delta s = -\left(s - \nabla f(x) + A^T y\right). \tag{10} \]

Similar to the case in which $f(\cdot)$ is a convex quadratic function, bounds on the affine scaling direction $(\Delta x, \Delta s, \Delta y)$ in terms of the duality gap $x^T s$ (or a certain power of it with an exponent close to 1) are usually obtained for points in a neighborhood of the central path. We next define a neighborhood of the central path which generalizes two other neighborhoods that have been considered in the literature (see for example [2, 10, 11, 18]). Given parameters $\delta_1, \delta_2 \ge 0$, $\eta_1 > 0$ and $\eta_2 \ge 0$, let

\[ \mathcal{N}(\delta_1, \eta_1, \delta_2, \eta_2) \equiv \left\{ (x,s,y) \in \mathbb{R}^{2n}_{++} \times \mathbb{R}^m \;:\; \|r\| \le \eta_2 (x^T s)^{1-\delta_2},\; x_i s_i \ge \eta_1 (x^T s)^{1+\delta_1},\ i = 1,\dots,n \right\}, \tag{11} \]

where

\[ r = \begin{pmatrix} r_P \\ r_D \end{pmatrix} \equiv \begin{pmatrix} Ax - b \\ s - \nabla f(x) + A^T y \end{pmatrix}. \tag{12} \]

Observe that when $\delta_2 = \delta_1/(1+\delta_1)$, the above neighborhood reduces to the neighborhood

\[ \left\{ (x,s,y) \in \mathbb{R}^{2n}_{++} \times \mathbb{R}^m \;:\; \|r\|^{1+\delta_1} \le \rho\, x^T s,\; x_i s_i \ge \eta_1 (x^T s)^{1+\delta_1},\ i = 1,\dots,n \right\}, \tag{13} \]

where $\rho \equiv \eta_2^{1+\delta_1}$. The neighborhood (13) is used in the algorithm presented in [11]; it is a generalization of the feasible neighborhood obtained by setting $\rho = 0$ in (13), which was independently introduced in [10, 18]. On the other hand, if $\delta_1 = \delta_2 = 0$, the neighborhood (11) reduces to a neighborhood which was introduced by Kojima, Megiddo and Mizuno [2] and subsequently used in several papers dealing with infeasible-interior-point algorithms (see for example [5, 16, 19, 20, 21, 22, 26, 30]). Wright and Ralph [22] have discussed an algorithm for solving a monotone NCP and analyzed its superlinear convergence properties. A slight modification of their algorithm can be used to solve the (mixed) NCP (2)–(6) as well, and hence the convex program (1).
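For concreteness, the Newton system (8)–(10) can be assembled and solved directly. The following numpy sketch does so at a possibly infeasible point of a toy convex QP; the problem instance and the helper name are illustrative assumptions, not from the paper.

```python
import numpy as np

def affine_scaling_direction(x, s, y, A, b, grad_f, hess_f):
    """Solve the Newton system (8)-(10) for (dx, ds, dy).

    Hypothetical helper for illustration only; a practical code would
    eliminate ds and dy and solve a smaller reduced system instead.
    """
    n, m = len(x), len(b)
    X, S, H = np.diag(x), np.diag(s), hess_f(x)
    r_P = A @ x - b                   # primal residual, as in (12)
    r_D = s - grad_f(x) + A.T @ y     # dual residual, as in (12)
    # Rows are equations (8), (9), (10); unknowns stacked as (dx, ds, dy).
    K = np.block([[S, X, np.zeros((n, m))],
                  [A, np.zeros((m, n + m))],
                  [-H, np.eye(n), A.T]])
    d = np.linalg.solve(K, np.concatenate([-x * s, -r_P, -r_D]))
    return d[:n], d[n:2 * n], d[2 * n:]

# Toy instance: f(x) = ||x||^2 / 2 subject to x1 + x2 = 1, x >= 0.
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
x, s, y = np.array([0.6, 0.6]), np.array([0.3, 0.2]), np.array([0.1])
dx, ds, dy = affine_scaling_direction(x, s, y, A, b,
                                      grad_f=lambda z: z,
                                      hess_f=lambda z: np.eye(2))
# By (9), a full step restores primal feasibility (up to round-off):
print(A @ (x + dx) - b)
```

Solving the full $(2n+m)\times(2n+m)$ block system keeps the correspondence with (8)–(10) transparent, at the cost of the structure a production implementation would exploit.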
It is not our intention to restate the algorithm of [22] here, since the description of the method, besides being lengthy, is not important for our presentation; only the necessary aspects of the algorithm will be discussed. For the purpose of summarizing the properties of the algorithm in [22], we let $\{(x^k, s^k, y^k)\}$ denote the sequence of iterates generated by this algorithm for solving (2)–(6), and let $\{r^k\}$ denote the corresponding sequence of residual vectors obtained by (12) with $(x,s,y) = (x^k, s^k, y^k)$. First, we observe that the sequence $\{r^k\}$ satisfies the property that $r^k \in [0, r^0]$ for all $k \ge 0$. Second, for some $\eta_1, \eta_2 > 0$, there holds

\[ \{(x^k, s^k, y^k)\} \subset \mathcal{N}(0, \eta_1, 0, \eta_2). \tag{14} \]

Third, it is shown in [22] that, for every $k$ sufficiently large, the search direction at the $k$th iteration is equal to the affine scaling direction computed by (8)–(10) with $(x,s,y) = (x^k, s^k, y^k)$; the iterations for which this property holds are called fast steps in [22]. On the other hand, the search direction used in a non-fast iteration, called a safe step in [22], is a linear combination of the affine scaling direction and the centering direction. The main result obtained in [22] can be summarized as follows: under Condition A stated below, every accumulation point of the sequence $\{(x^k, s^k, y^k)\}$ is a solution of (2)–(6), and the sequences $\{\|r^k\|\}$ and $\{x^{kT} s^k\}$ converge to zero R-superlinearly and Q-superlinearly, respectively.

Condition A: The algorithm generates a bounded sequence $\{(x^k, s^k, y^k)\}$ and there exists a constant $C > 0$ such that the affine scaling direction $(\Delta x^k, \Delta s^k, \Delta y^k)$ calculated via (8)–(10) by setting $(x,s,y) = (x^k, s^k, y^k)$ satisfies

\[ \|(\Delta x^k, \Delta s^k)\| \le C\, x^{kT} s^k \tag{15} \]

for all $k$ sufficiently large.

The main goal of this paper is to show that relation (15) of Condition A holds when (1) satisfies Assumptions 1, 2, 3 and 4 and the sequence $\{x^k\}$ is bounded. Specifically, we show in the next section that the following result holds.

Theorem 1. Suppose that Assumptions 1, 2, 3 and 4 hold and let $X$ be a bounded set. Let a point $(x^0, s^0, y^0) \in \mathbb{R}^{2n}_{++} \times \mathbb{R}^m$ and parameters $\delta_1, \delta_2, \eta_2 \ge 0$ and $\eta_1 > 0$ be given.
Then, there exists a constant $C \ge 0$ with the following property: for any $(x,s,y) \in \mathcal{N}(\delta_1, \eta_1, \delta_2, \eta_2)$ and $t \in [0, 1/2]$ satisfying

\[ Ax - b = t\,(Ax^0 - b), \tag{16} \]
\[ s - \nabla f(x) + A^T y = t\left(s^0 - \nabla f(x^0) + A^T y^0\right), \tag{17} \]

and the conditions $x^T s \le \min(1, x^{0T} s^0)$ and $x \in X$, the corresponding solution $(\Delta x, \Delta s, \Delta y)$ of (8)–(10) satisfies

\[ \max\{\|\Delta x\|, \|\Delta s\|\} \le C\,(x^T s)^{1-\delta}, \tag{18} \]

where $\delta \equiv \delta_1 + 2\delta_2$.

The proof of Theorem 1 follows as an immediate consequence of the more general result given in Theorem 2 (see the paragraph after the proof of Theorem 2). Using Theorem
1, it is now easy to see that Assumptions 1, 2, 3 and 4 imply relation (15) of Condition A whenever $\{x^k\}$ is bounded. Indeed, since $r^k \in [0, r^0]$, it follows that conditions (16) and (17) are automatically satisfied by every iterate $(x^k, s^k, y^k)$. Moreover, for every $k$ sufficiently large, the iterate $(x^k, s^k, y^k)$ satisfies the other conditions of Theorem 1, due to the observations preceding Condition A. Hence, it follows from (18) that (15) holds. (Here, $\delta = 0$ due to (14) and the fact that $\delta \equiv \delta_1 + 2\delta_2$.)

Before ending this section, we observe that the requirement that the sequence $(x^k, s^k, y^k)$ be bounded in Condition A is automatically satisfied in certain situations. We have the following result, whose proof is given in the appendix; this result in the context of convex QP and monotone LCP is well-known (see for example Lemma 2.1 of [9]).

Proposition 1. Suppose that Assumptions 1 and 2 hold. For some constant $\bar t \in (0,1)$, let $\{t_k\}_{k=1}^{\infty} \subset [0, \bar t]$ and $\{(x^k, s^k, y^k)\}_{k=0}^{\infty} \subset \mathbb{R}^{2n}_{++} \times \mathbb{R}^m$ be two sequences satisfying

\[ r^k \equiv \begin{pmatrix} Ax^k - b \\ s^k - \nabla f(x^k) + A^T y^k \end{pmatrix} = t_k \begin{pmatrix} Ax^0 - b \\ s^0 - \nabla f(x^0) + A^T y^0 \end{pmatrix}, \qquad k \ge 1. \]

Then the sequence $\{(x^k, s^k, y^k)\}$ is bounded whenever either one of the conditions below holds:

(a) the sequence $\{x^{kT} s^k\}$ is bounded and there exists a point $(\bar x, \bar s, \bar y) \in \mathbb{R}^{2n}_{++} \times \mathbb{R}^m$ satisfying relations (2) and (3);

(b) there exists a constant $\bar\rho > 0$ such that

\[ \frac{\|r^k\|}{x^{kT} s^k} \ge \bar\rho\, \frac{\|r^0\|}{x^{0T} s^0}, \qquad k \ge 1. \tag{19} \]

3. Proof of the main result

This section is devoted to the proof of Theorem 1. The main result of this section is Theorem 2, which is easily seen to imply Theorem 1. The following inequality is exploited in a number of proofs that follow.

Lemma 1. Suppose that Assumption 2 holds. Let $(x^0, s^0, y^0)$ and $(x, s, y)$ be points such that (16) and (17) are satisfied for some $t \in [0, 1]$, and let $(\bar x, \bar s, \bar y)$ be a point such that

\[ A\bar x - b = 0, \tag{20} \]
\[ \bar s - \nabla f(\bar x) + A^T \bar y = 0. \tag{21} \]

Then

\[ 0 \le t^2 x^{0T} s^0 + (1-t)^2\, \bar x^T \bar s + x^T s + t(1-t)\left(x^{0T}\bar s + \bar x^T s^0\right) - t\left(x^{0T} s + x^T s^0\right) - (1-t)\left(\bar x^T s + x^T \bar s\right) + t(1-t)\left(\nabla f(x^0) - \nabla f(\bar x)\right)^T (x^0 - \bar x). \tag{22} \]
Proof. By (16), (17), (20) and (21), we have

\[ A\left(x - t x^0 - (1-t)\bar x\right) = 0, \]
\[ \left(s - t s^0 - (1-t)\bar s\right) - \left(\nabla f(x) - t \nabla f(x^0) - (1-t)\nabla f(\bar x)\right) + A^T\left(y - t y^0 - (1-t)\bar y\right) = 0. \]

Multiplying the second relation by $[x - t x^0 - (1-t)\bar x]^T$ on the left and using the first relation, the fact that $t \in [0,1]$ and $[\nabla f(x_1) - \nabla f(x_2)]^T (x_1 - x_2) \ge 0$ for all $x_1, x_2 \in \mathbb{R}^n$, due to Assumption 2, we obtain

\[ \begin{aligned}
\left[s - ts^0 - (1-t)\bar s\right]^T \left[x - tx^0 - (1-t)\bar x\right]
&= \left[\nabla f(x) - t\nabla f(x^0) - (1-t)\nabla f(\bar x)\right]^T \left[x - tx^0 - (1-t)\bar x\right] \\
&= \left[t\left(\nabla f(x) - \nabla f(x^0)\right) + (1-t)\left(\nabla f(x) - \nabla f(\bar x)\right)\right]^T \left[t(x - x^0) + (1-t)(x - \bar x)\right] \\
&= t^2 \left(\nabla f(x) - \nabla f(x^0)\right)^T (x - x^0) + (1-t)^2 \left(\nabla f(x) - \nabla f(\bar x)\right)^T (x - \bar x) \\
&\qquad + t(1-t)\left[\left(\nabla f(x) - \nabla f(x^0)\right)^T (x - \bar x) + \left(\nabla f(x) - \nabla f(\bar x)\right)^T (x - x^0)\right] \\
&\ge t(1-t)\left[\left(\nabla f(x) - \nabla f(x^0)\right)^T \left((x - x^0) + (x^0 - \bar x)\right) + \left(\nabla f(x) - \nabla f(\bar x)\right)^T \left((x - \bar x) + (\bar x - x^0)\right)\right] \\
&= t(1-t)\left(\nabla f(x) - \nabla f(x^0)\right)^T (x - x^0) + t(1-t)\left(\nabla f(x) - \nabla f(\bar x)\right)^T (x - \bar x) \\
&\qquad + t(1-t)\left(\nabla f(x^0) - \nabla f(\bar x)\right)^T (\bar x - x^0) \\
&\ge t(1-t)\left(\nabla f(x^0) - \nabla f(\bar x)\right)^T (\bar x - x^0).
\end{aligned} \]

Expanding this inequality, we then obtain (22). □

The next result is an immediate consequence of Lemma 1. Throughout our presentation, we use the following convention: the constants $C_i$ are global ones, while the constants $L_i$ have meaning only locally within the proof of a result.

Lemma 2. Suppose that Assumption 2 holds. Let a point $w^0 \equiv (x^0, s^0, y^0) \in \mathbb{R}^{2n}_{++} \times \mathbb{R}^m$ be given. Then there exists a constant $C_0 \ge 0$ satisfying the following property: for any $(x,s,y) \in \mathbb{R}^{2n}_{++} \times \mathbb{R}^m$ and $t \in [0,1]$ such that (16) and (17) are satisfied, there holds

\[ \max\{t\|x\|,\; t\|s\|\} \le C_0\left(x^T s + t\right). \tag{23} \]

Proof. Let $(\bar x, \bar s, \bar y)$ be a solution of (2)–(6). Using inequality (22) and the facts that $t \in [0,1]$, $(x^0, s^0) > 0$, $(x, s) \ge 0$, $(\bar x, \bar s) \ge 0$, $\bar x^T \bar s = 0$ and $(\nabla f(x^0) - \nabla f(\bar x))^T (x^0 - \bar x) \ge 0$, we obtain

\[ x^{0T}(ts) + s^{0T}(tx) \le t\, x^{0T} s^0 + x^T s + t\left(x^{0T}\bar s + s^{0T}\bar x\right) + t\left(\nabla f(x^0) - \nabla f(\bar x)\right)^T (x^0 - \bar x) \le L_1\left(x^T s + t\right),
\]
where $L_1 \equiv 1 + x^{0T} s^0 + x^{0T}\bar s + s^{0T}\bar x + (\nabla f(x^0) - \nabla f(\bar x))^T (x^0 - \bar x)$. Relation (23) follows by letting

\[ C_0 = \frac{L_1}{\min_{i=1,\dots,n}\left\{\min\{x^0_i, s^0_i\}\right\}}. \]

□

The next result gives a preliminary bound on the primal-dual (scaled) affine scaling directions.

Lemma 3. Suppose that Assumption 2 holds and let $u^0 \in \mathbb{R}^n$, $r^0 \in \mathbb{R}^n$, $\eta_1 > 0$ and $\delta_1 \ge 0$ be given. Then, for every vector $(x,s,y) \in \mathbb{R}^{2n}_{++} \times \mathbb{R}^m$ and scalar $t \ge 0$ such that

\[ \begin{pmatrix} Ax - b \\ s - \nabla f(x) + A^T y \end{pmatrix} = t \begin{pmatrix} A u^0 \\ r^0 \end{pmatrix}, \tag{24} \]
\[ \min_{i=1,\dots,n} x_i s_i \ge \eta_1 (x^T s)^{1+\delta_1}, \tag{25} \]

the corresponding direction $(\Delta x, \Delta s, \Delta y)$ determined by (8)–(10) satisfies

\[ (\Delta x + t u^0)^T (\Delta s + t\, v(x)) = (\Delta x + t u^0)^T \nabla^2 f(x)\, (\Delta x + t u^0) \ge 0 \tag{26} \]

and

\[ \max\left(\|D^{-1}\Delta x\|, \|D\,\Delta s\|\right) \le (x^T s)^{1/2} + \frac{2t}{\eta_1^{1/2}(x^T s)^{(1+\delta_1)/2}}\left[\|S u^0\| + \|X v(x)\|\right], \tag{27} \]

where $X \equiv \mathrm{diag}(x)$, $S \equiv \mathrm{diag}(s)$, $D \equiv X^{1/2} S^{-1/2}$ and $v(x) \equiv \nabla^2 f(x)\, u^0 + r^0$.

Proof. It follows from (9), (10) and (24) that

\[ A(\Delta x + t u^0) = 0, \tag{28} \]
\[ \left(\Delta s + t \nabla^2 f(x)\, u^0 + t r^0\right) + A^T \Delta y = \nabla^2 f(x)\left(\Delta x + t u^0\right). \tag{29} \]

Multiplying (29) on the left by $(\Delta x + t u^0)^T$ and using (28) and Assumption 2, we obtain (26). To show (27), we first show that

\[ \max\left(\|D^{-1}\Delta x\|, \|D\,\Delta s\|\right) \le (x^T s)^{1/2} + 2t\left[\|D^{-1} u^0\| + \|D\,v(x)\|\right]. \tag{30} \]

Indeed, it follows from (26) that

\[ \Delta x^T \Delta s \ge -t\, u^{0T}\Delta s - t\, v(x)^T \Delta x - t^2\, u^{0T} v(x). \tag{31} \]

Multiplying (8) on the left by $(XS)^{-1/2}$ and squaring both sides, we obtain

\[ \|D^{-1}\Delta x\|^2 + \|D\,\Delta s\|^2 + 2\,\Delta x^T \Delta s = x^T s,
\]
which, in view of (31), implies

\[ \left\|D^{-1}\Delta x - t D\, v(x)\right\|^2 + \left\|D\,\Delta s - t D^{-1} u^0\right\|^2 \le t^2 \left\|D\, v(x) + D^{-1} u^0\right\|^2 + x^T s. \]

Hence, using the inequality $(\beta^2 + \gamma^2)^{1/2} \le \beta + \gamma$ for $\beta, \gamma \ge 0$ and the triangle inequality, we obtain

\[ \|D^{-1}\Delta x\| \le \left[x^T s + t^2\left\|D v(x) + D^{-1} u^0\right\|^2\right]^{1/2} + t\|D v(x)\| \le (x^T s)^{1/2} + 2t\|D v(x)\| + t\|D^{-1} u^0\| \le (x^T s)^{1/2} + 2t\left[\|D v(x)\| + \|D^{-1} u^0\|\right]. \]

The same bound for $\|D\,\Delta s\|$ can be derived similarly, and hence (30) follows. The proof of (27) is now immediate. Indeed, using (25), we obtain

\[ \|D^{-1} u^0\|^2 = \sum_{i=1}^n \frac{s_i (u^0_i)^2}{x_i} = \sum_{i=1}^n \frac{(s_i u^0_i)^2}{x_i s_i} \le \frac{\|S u^0\|^2}{\eta_1 (x^T s)^{1+\delta_1}}. \tag{32} \]

Similarly, we have

\[ \|D\, v(x)\|^2 \le \frac{\|X v(x)\|^2}{\eta_1 (x^T s)^{1+\delta_1}}. \tag{33} \]

Relation (27) follows by substituting (32) and (33) into (30). □

The following result yields bounds on the nonbasic components of $x$ and $s$.

Lemma 4. Suppose that Assumptions 2 and 3 hold. Let a point $(x^0, s^0, y^0) \in \mathbb{R}^{2n}_{++} \times \mathbb{R}^m$ and parameters $\delta_2, \eta_2 \ge 0$ be given. Then, there exists a constant $C_1 \ge 0$ with the following property: for any vector $(x,s,y) \in \mathbb{R}^{2n}_{++} \times \mathbb{R}^m$ and $t \in [0, \tfrac{1}{2}]$ satisfying (16) and (17) and the conditions

\[ x^T s \le x^{0T} s^0 \quad \text{and} \quad \|r\| \le \eta_2 (x^T s)^{1-\delta_2}, \tag{34} \]

there hold

\[ x_i \le C_1 (x^T s)^{1-\delta_2}, \quad i \in N, \tag{35} \]
\[ s_i \le C_1 (x^T s)^{1-\delta_2}, \quad i \in B. \tag{36} \]

Proof. Let

\[ r^0 \equiv \begin{pmatrix} Ax^0 - b \\ s^0 - \nabla f(x^0) + A^T y^0 \end{pmatrix}. \tag{37} \]

Here, we only consider the infeasible case: $r^0 \ne 0$. Let $(x^*, s^*, y^*)$ denote the point as in Assumption 3. Setting $(\bar x, \bar s) = (x^*, s^*)$ in inequality (22) and using the facts that $t \in [0, 1/2]$, $(x, s) > 0$, $(x^0, s^0) > 0$, $(x^*, s^*) \ge 0$ and $x^{*T} s^* = 0$, we obtain

\[ x^{*T} s + x^T s^* \le \frac{t}{1-t}\, x^{0T} s^0 + \frac{1}{1-t}\, x^T s + t\left(x^{0T} s^* + x^{*T} s^0\right) + t\left(\nabla f(x^0) - \nabla f(x^*)\right)^T (x^0 - x^*). \tag{38} \]
By (16), (17) and (34), we have

\[ t = \frac{\|r\|}{\|r^0\|} \le \frac{\eta_2 (x^T s)^{1-\delta_2}}{\|r^0\|}. \tag{39} \]

Using (34), (38), (39) and the fact that $1 - t \ge 1/2$, we obtain

\[ x^{*T} s + x^T s^* \le 2\, x^T s + \frac{\eta_2 (x^T s)^{1-\delta_2}}{\|r^0\|}\left(2\, x^{0T} s^0 + x^{0T} s^* + x^{*T} s^0 + \left(\nabla f(x^0) - \nabla f(x^*)\right)^T (x^0 - x^*)\right) \le L_1 (x^T s)^{1-\delta_2}, \]

where

\[ L_1 \equiv 2\left(x^{0T} s^0\right)^{\delta_2} + \frac{\eta_2}{\|r^0\|}\left(2\, x^{0T} s^0 + x^{0T} s^* + x^{*T} s^0 + \left(\nabla f(x^0) - \nabla f(x^*)\right)^T (x^0 - x^*)\right). \]

This last relation immediately implies (35) and (36) if we define

\[ C_1 \equiv L_1 \max\left\{\max_{i \in N} \frac{1}{s^*_i},\; \max_{i \in B} \frac{1}{x^*_i}\right\}. \]

□

We now use this lemma to bound the nonbasic components of $(\Delta x, \Delta s)$.

Lemma 5. Suppose that Assumptions 2 and 3 hold. Let a point $(x^0, s^0, y^0) \in \mathbb{R}^{2n}_{++} \times \mathbb{R}^m$ and parameters $\delta_1, \delta_2, \eta_2 \ge 0$ and $\eta_1 > 0$ be given. Then, there exist two constants $C_2, C_3 \ge 0$ with the following property: for any $(x,s,y) \in \mathcal{N}(\delta_1, \eta_1, \delta_2, \eta_2)$ and $t \in [0, 1/2]$ satisfying (16) and (17) and the condition

\[ x^T s \le \min(1,\; x^{0T} s^0), \tag{40} \]

there hold

\[ \|\Delta x_N\| \le \left(C_2 + C_3 \|\nabla^2 f(x)\|\right)(x^T s)^{1-\delta}, \tag{41} \]
\[ \|\Delta s_B\| \le \left(C_2 + C_3 \|\nabla^2 f(x)\|\right)(x^T s)^{1-\delta}, \tag{42} \]

where $\delta \equiv \delta_1 + 2\delta_2$.

Proof. We only give the proof for the case in which $r^0 \ne 0$, where $r^0$ is given by (37). We first show that there exist two constants $L_0, L_1 \ge 0$ such that

\[ \max\{\|D^{-1}\Delta x\|, \|D\,\Delta s\|\} \le \left(L_0 + L_1 \|\nabla^2 f(x)\|\right)(x^T s)^{(1-\delta)/2}. \tag{43} \]
The assumptions of the lemma imply that (23) and (39) hold. These two relations together with (40) then imply

\[ \max\{t\|x\|,\; t\|s\|\} \le C_0\left(x^T s + t\right) \le C_0\left[1 + \frac{\eta_2}{\|r^0\|}\right](x^T s)^{1-\delta_2} = L_2 (x^T s)^{1-\delta_2}, \tag{44} \]

where $L_2 \equiv C_0\left[1 + \eta_2/\|r^0\|\right]$. Recalling the definition of $v(x)$ given in Lemma 3 (here $u^0$ is a fixed vector with $Au^0 = Ax^0 - b$ and $r^0$ in $v(x)$ denotes the dual component of (37), so that (24) holds by (16) and (17)), we then have

\[ \|v(x)\| = \|\nabla^2 f(x)\, u^0 + r^0\| \le \|\nabla^2 f(x)\|\,\|u^0\| + \|r^0\|. \]

Since the assumptions of Lemma 3 are satisfied, inequality (27) together with (44) and (40) then yields

\[ \begin{aligned}
\max\{\|D^{-1}\Delta x\|, \|D\,\Delta s\|\}
&\le (x^T s)^{1/2} + \frac{2t}{\eta_1^{1/2}(x^T s)^{(1+\delta_1)/2}}\left[\|S u^0\| + \|X v(x)\|\right] \\
&\le (x^T s)^{1/2} + \frac{2\left(\|u^0\| + \|v(x)\|\right)}{\eta_1^{1/2}(x^T s)^{(1+\delta_1)/2}}\, \max\left(t\|x\|,\; t\|s\|\right) \\
&\le (x^T s)^{1/2} + \frac{2 L_2\left[\left(1 + \|\nabla^2 f(x)\|\right)\|u^0\| + \|r^0\|\right]}{\eta_1^{1/2}}\,(x^T s)^{1-\delta_2-(1+\delta_1)/2} \\
&\le \left\{1 + \frac{2 L_2\left[\left(1 + \|\nabla^2 f(x)\|\right)\|u^0\| + \|r^0\|\right]}{\eta_1^{1/2}}\right\}(x^T s)^{(1-\delta_1-2\delta_2)/2},
\end{aligned} \]

where $X \equiv \mathrm{diag}(x)$ and $S \equiv \mathrm{diag}(s)$. This inequality clearly implies (43) upon letting $L_0 \equiv 1 + 2 L_2 \eta_1^{-1/2}\left(\|u^0\| + \|r^0\|\right)$ and $L_1 \equiv 2 L_2 \eta_1^{-1/2}\|u^0\|$ and noting that $\delta \equiv \delta_1 + 2\delta_2$.

Considering $i \in N$ and using the definitions $D \equiv X^{1/2} S^{-1/2}$ and $\delta \equiv \delta_1 + 2\delta_2$, the formulae (43) and (35) and the inclusion $(x,s,y) \in \mathcal{N}(\delta_1, \eta_1, \delta_2, \eta_2)$, we conclude that

\[ |\Delta x_i| \le \left(\frac{x_i}{s_i}\right)^{1/2}\left(L_0 + L_1\|\nabla^2 f(x)\|\right)(x^T s)^{(1-\delta)/2} = \frac{x_i}{(x_i s_i)^{1/2}}\left(L_0 + L_1\|\nabla^2 f(x)\|\right)(x^T s)^{(1-\delta)/2} \le \left(L_0 + L_1\|\nabla^2 f(x)\|\right)\frac{C_1}{\eta_1^{1/2}}\,(x^T s)^{1-\delta}, \tag{45} \]

and hence $\|\Delta x_N\| \le \sqrt{n}\,\max_{i \in N}|\Delta x_i| \le \left(C_2 + C_3\|\nabla^2 f(x)\|\right)(x^T s)^{1-\delta}$, where $C_2 \equiv \sqrt{n}\, L_0 C_1 \eta_1^{-1/2}$ and $C_3 \equiv \sqrt{n}\, L_1 C_1 \eta_1^{-1/2}$. Hence, we have proved (41). The proof of (42) is identical. □

Bounding the remaining components of the search direction, namely $\Delta x_B$ and $\Delta s_N$, is more difficult; we first need to establish several preliminary lemmas. The proof of the first lemma can be found in Monteiro and Wright [10]. It unifies Theorem 2.5 and Lemma A.1 of Monteiro, Tsuchiya and Wang [6], which in turn are based on Theorem 2 of Tseng and Luo [17].
Lemma 6. Let $f \in \mathbb{R}^q$ and $H \in \mathbb{R}^{p \times q}$ be given. Then there exists a nonnegative constant $M = M(f, H)$ with the property that for any diagonal matrix $D > 0$ and any vector $h \in \mathrm{Range}(H)$, the (unique) optimal solution $\bar w = \bar w(D, h)$ of

\[ \min_w \; f^T w + \tfrac{1}{2}\|D w\|^2 \tag{46} \]
\[ \text{subject to} \; H w = h, \tag{47} \]

satisfies

\[ \|\bar w\| \le M\left\{|f^T \bar w| + \|h\|\right\}. \tag{48} \]

The next two results characterize the directions $\Delta x$ and $(\Delta y, \Delta s_N)$ as optimal solutions of certain convex QP problems.

Lemma 7. Suppose that Assumptions 2 and 3 hold. For $(x, s, y) \in \mathbb{R}^{2n}_{++} \times \mathbb{R}^m$, let $X \equiv \mathrm{diag}(x)$, $S \equiv \mathrm{diag}(s)$, $D \equiv X^{1/2} S^{-1/2}$ and let $(\Delta x, \Delta s, \Delta y)$ denote the solution of (8)–(10). Then $(u, v) = (\Delta y, \Delta s_N)$ solves the problem

\[ \min_{u, v} \; \tfrac{1}{2}\|D_N v\|^2 \tag{49} \]
\[ \text{s.t.} \; A_B^T u = -r^D_B + Q_B \Delta x - \Delta s_B, \tag{50} \]
\[ \phantom{\text{s.t.}} \; A_N^T u + v = -r^D_N + Q_N \Delta x, \tag{51} \]

where $Q \equiv \nabla^2 f(x)$ and $r^D \equiv s - \nabla f(x) + A^T y$.

Proof. By (10), we see that $(u, v) = (\Delta y, \Delta s_N)$ satisfies the constraints (50) and (51). The result follows once we verify that $(u, v) = (\Delta y, \Delta s_N)$ satisfies the KKT (Karush-Kuhn-Tucker) conditions for the above problem, namely:

\[ A_N D_N^2\, \Delta s_N \in \mathrm{Range}(A_B). \tag{52} \]

Indeed, by (8), we have

\[ D^2\, \Delta s = -(x + \Delta x). \tag{53} \]

By the definition of $(B, N)$, we have $x^*_N = 0$, where $x^*$ is as in Assumption 3. This implies that $b = A x^* = A_B x^*_B$. Using this fact together with (9) and (53), we obtain

\[ A_N D_N^2\, \Delta s_N = -A_N\left(x_N + \Delta x_N\right) = A_B\left(x_B + \Delta x_B\right) - b = A_B\left(x_B + \Delta x_B - x^*_B\right) \in \mathrm{Range}(A_B). \]

□
Lemma 8. Suppose that Assumption 2 holds. For $(x, s, y) \in \mathbb{R}^{2n}_{++} \times \mathbb{R}^m$, let $X \equiv \mathrm{diag}(x)$, $S \equiv \mathrm{diag}(s)$, $D \equiv X^{1/2} S^{-1/2}$ and let $(\Delta x, \Delta s, \Delta y)$ denote the solution of (8)–(10). Then $w = \Delta x$ solves the problem

\[ \min_w \; \nabla f(x)^T w + \tfrac{1}{2}\, w^T \nabla^2 f(x)\, w + \tfrac{1}{2}\|D^{-1} w\|^2 \quad \text{subject to} \quad Aw = b - Ax. \tag{54} \]

Proof. Clearly, $\Delta x$ is feasible for (54), due to (9). In view of Assumption 2, it remains to verify that $\Delta x$ satisfies the first-order necessary condition for optimality of (54), namely

\[ \nabla f(x) + \nabla^2 f(x)\,\Delta x + D^{-2}\Delta x \in \mathrm{Range}(A^T). \]

Indeed, by (8), we have $D^{-2}\Delta x = -(s + \Delta s)$, which together with (10) implies

\[ \nabla f(x) + \nabla^2 f(x)\,\Delta x + D^{-2}\Delta x = \nabla f(x) + \nabla^2 f(x)\,\Delta x - s - \Delta s = A^T (y + \Delta y) \in \mathrm{Range}(A^T). \]

□

The following result plays a crucial role in bounding the $\Delta x_B$ and $\Delta s_N$ components of the primal-dual affine scaling direction.

Lemma 9. Suppose that Assumptions 2, 3 and 4 hold. For $(x, s, y) \in \mathbb{R}^{2n}_{++} \times \mathbb{R}^m$, let $X \equiv \mathrm{diag}(x)$, $S \equiv \mathrm{diag}(s)$, $D \equiv X^{1/2} S^{-1/2}$ and let $(\Delta x, \Delta s, \Delta y)$ denote the solution of (8)–(10). Let $\Delta x = \Delta x_{\mathcal A} + \Delta x^{\perp}$ denote the decomposition of $\Delta x$ according to (7). Then there exists a constant $C_4 > 0$ such that

\[ \|\Delta x\| \le C_4\left(\|\Delta x^{\perp}\| + \|r\| + \|\Delta x_N\|\right). \tag{55} \]

Proof. Let $E$ be a matrix such that $\mathrm{Null}(E) = \mathcal{A}$. In view of Lemma 8 and the fact that $E\Delta x = E\Delta x^{\perp}$, it follows that $\Delta x$ solves the problem

\[ \min\left\{\nabla f(x)^T w + \tfrac{1}{2}\, w^T \nabla^2 f(x)\, w + \tfrac{1}{2}\|D^{-1} w\|^2 \;:\; Aw = b - Ax,\; Ew = E\Delta x^{\perp}\right\}. \tag{56} \]

We will now simplify the objective function of (56). Indeed, let $w$ be a feasible solution of (56). Using the fact that $w - \Delta x \in \mathrm{Null}(E) = \mathcal{A} \subset \mathrm{Null}(\nabla^2 f(x))$, we have

\[ w^T \nabla^2 f(x)\, w = (\Delta x^{\perp})^T \nabla^2 f(x)\, \Delta x^{\perp}, \]

so the quadratic term is the same for every feasible $w$. Let $(x^*, s^*, y^*)$ denote the point as in Assumption 3. For every $d \in \mathcal{A}$, applying the mean value theorem to the function $\lambda \mapsto \nabla f(x^* + \lambda(x - x^*))^T d$ and using Assumption 4, we conclude that

\[ \nabla f(x)^T d - \nabla f(x^*)^T d = (x - x^*)^T \nabla^2 f\left(x^* + \xi(x - x^*)\right) d = 0, \]

where $\xi \in [0, 1]$. Since $w - \Delta x \in \mathcal{A}$, we conclude that $\nabla f(x)^T (w - \Delta x) = \nabla f(x^*)^T (w - \Delta x)$, and hence that the quantity $\nabla f(x)^T w - \nabla f(x^*)^T w$ is independent of the feasible solution $w$ considered. The above observations yield
\[ \Delta x = \arg\min\left\{\nabla f(x^*)^T w + \tfrac{1}{2}\|D^{-1} w\|^2 \;:\; Aw = b - Ax,\; Ew = E\Delta x^{\perp}\right\}. \tag{57} \]

Observing that

\[ \nabla f(x^*)^T w = (A^T y^* + s^*)^T w = s^{*T} w + y^{*T} A w = s^{*T} w + y^{*T}(b - Ax) \tag{58} \]

for every feasible solution $w$ of (57), we obtain

\[ \Delta x = \arg\min\left\{s^{*T} w + \tfrac{1}{2}\|D^{-1} w\|^2 \;:\; Aw = b - Ax,\; Ew = E\Delta x^{\perp}\right\}. \tag{59} \]

Applying Lemma 6 to the above problem, we conclude that there exists $L_1 > 0$ such that

\[ \|\Delta x\| \le L_1\left(|s^{*T}\Delta x| + \|b - Ax\| + \|E\Delta x^{\perp}\|\right) = L_1\left(|s^{*T}_N \Delta x_N| + \|b - Ax\| + \|E\Delta x^{\perp}\|\right), \]

which, since $\|b - Ax\| \le \|r\|$, yields (55) with $C_4 \equiv L_1 \max\{\|s^*_N\|, 1, \|E\|\}$. □

We are now ready to state the main result of this paper.

Theorem 2. Suppose that Assumptions 1, 2, 3 and 4 hold and let $X \subset \mathbb{R}^n_+$ be a set such that

\[ \sup\left\{\|\nabla^2 f(x)\| : x \in X\right\} < \infty, \tag{60} \]
\[ \inf\left\{\frac{d^T \nabla^2 f(x)\, d}{\|d\|^2} \;:\; x \in X,\; d \in \mathrm{Null}(A) \cap \mathcal{A}^{\perp},\; d \ne 0\right\} > 0. \tag{61} \]

Let a point $(x^0, s^0, y^0) \in \mathbb{R}^{2n}_{++} \times \mathbb{R}^m$ and parameters $\delta_1, \delta_2, \eta_2 \ge 0$ and $\eta_1 > 0$ be given. Then, there exists a constant $C_5 \ge 0$ with the following property: for any $(x,s,y) \in \mathcal{N}(\delta_1, \eta_1, \delta_2, \eta_2)$ and $t \in [0, 1/2]$ satisfying (16) and (17) and the conditions $x^T s \le \min(1, x^{0T} s^0)$ and $x \in X$, the corresponding solution $(\Delta x, \Delta s, \Delta y)$ of (8)–(10) satisfies

\[ \max\{\|\Delta x\|, \|\Delta s\|\} \le C_5 (x^T s)^{1-\delta}, \tag{62} \]

where $\delta \equiv \delta_1 + 2\delta_2$.

Proof. We only give the proof for the case in which $r^0 \ne 0$, where $r^0$ is given by (37). Let $L_1$ and $L_2$ denote the supremum and the infimum in (60) and (61), respectively. Using relations (16) and (17), the facts that $x^T s \le 1$, $\delta_2 \le \delta$ and $(x,s,y) \in \mathcal{N}(\delta_1, \eta_1, \delta_2, \eta_2)$, the definition of $v(x)$ given in Lemma 3, the definition of $L_1$ and Lemma 5, we obtain

\[ t = \frac{\|r\|}{\|r^0\|} \le L_3 (x^T s)^{1-\delta_2} \le L_3 (x^T s)^{1-\delta}, \tag{63} \]
\[ \|v(x)\| = \|\nabla^2 f(x)\, u^0 + r^0\| \le L_1\|u^0\| + \|r^0\| \equiv L_4, \tag{64} \]
\[ \max\left(\|\Delta x_N\|, \|\Delta s_B\|\right) \le \left(C_2 + C_3 L_1\right)(x^T s)^{1-\delta} = L_5 (x^T s)^{1-\delta}, \tag{65} \]
where $L_3 \equiv \eta_2/\|r^0\|$ and $L_5 \equiv C_2 + C_3 L_1$. By (63), (64) and (65), we have

\[ \begin{aligned}
(\Delta x + t u^0)^T (\Delta s + t\, v(x))
&= \Delta x_B^T \Delta s_B + \Delta x_N^T \Delta s_N + t^2\, v(x)^T u^0 + t\left[v(x)^T \Delta x + u^{0T}_B \Delta s_B + u^{0T}_N \Delta s_N\right] \\
&\le \|\Delta x_B\|\|\Delta s_B\| + \|\Delta x_N\|\|\Delta s_N\| + t^2\|v(x)\|\|u^0\| + t\left[\|v(x)\|\|\Delta x\| + \|u^0_B\|\|\Delta s_B\| + \|u^0_N\|\|\Delta s_N\|\right] \\
&\le L_5\|\Delta x\|(x^T s)^{1-\delta} + L_5\|\Delta s_N\|(x^T s)^{1-\delta} + L_4 L_3^2\|u^0\|(x^T s)^{2-2\delta} \\
&\qquad + L_3 (x^T s)^{1-\delta}\left[L_4\|\Delta x\| + L_5\|u^0\|(x^T s)^{1-\delta} + \|u^0\|\|\Delta s_N\|\right] \\
&\le L_6 (x^T s)^{1-\delta} \max\left\{\|\Delta x\|,\; \|\Delta s_N\|,\; (x^T s)^{1-\delta}\right\},
\end{aligned} \tag{66} \]

where $L_6 \equiv 2 L_5 + L_3^2 L_4\|u^0\| + L_3 L_4 + L_3 L_5\|u^0\| + L_3\|u^0\|$. Using relations (26) and (66), the fact that $\Delta x + t u^0 \in \mathrm{Null}(A)$ and the fact that $L_2$ is the infimum in (61), we obtain

\[ L_2\left\|(\Delta x + t u^0)^{\perp}\right\|^2 \le (\Delta x + t u^0)^T \nabla^2 f(x)\,(\Delta x + t u^0) = (\Delta x + t u^0)^T (\Delta s + t\, v(x)) \le L_6 (x^T s)^{1-\delta} \max\left\{\|\Delta x\|,\; \|\Delta s_N\|,\; (x^T s)^{1-\delta}\right\}. \tag{67} \]

By Lemmas 6 and 7, relations (63) and (65) and the definition of $L_1$, there exists a constant $L_7 > 0$ such that

\[ \|\Delta s_N\| \le L_7\left(\|\nabla^2 f(x)\,\Delta x\| + \|\Delta s_B\| + \|r\|\right) \le L_7\left\{L_1\|\Delta x\| + L_5 (x^T s)^{1-\delta} + L_3\|r^0\|(x^T s)^{1-\delta}\right\} \le L_8 \max\left\{(x^T s)^{1-\delta},\; \|\Delta x\|\right\}, \tag{68} \]

where $L_8 \equiv L_7\left(L_1 + L_5 + L_3\|r^0\|\right)$. Combining (67) and (68), it is easy to see that there exists $L_9 > 0$ such that

\[ \left\|(\Delta x + t u^0)^{\perp}\right\|^2 \le L_9 (x^T s)^{1-\delta} \max\left\{(x^T s)^{1-\delta},\; \|\Delta x\|\right\}. \tag{69} \]

Thus

\[ \|\Delta x^{\perp}\| \le \left\|(\Delta x + t u^0)^{\perp}\right\| + t\left\|(u^0)^{\perp}\right\| \le \left\|(\Delta x + t u^0)^{\perp}\right\| + L_3\|u^0\|(x^T s)^{1-\delta} \le L_{10}\,(x^T s)^{(1-\delta)/2}\max\left\{(x^T s)^{(1-\delta)/2},\; \|\Delta x\|^{1/2}\right\}, \]

where $L_{10} \equiv L_9^{1/2} + L_3\|u^0\|$. This relation together with (55), (63) and (65) implies

\[ \begin{aligned}
\|\Delta x\| &\le C_4\left(\|\Delta x^{\perp}\| + \|r\| + \|\Delta x_N\|\right) \le C_4\left(\|\Delta x^{\perp}\| + L_3\|r^0\|(x^T s)^{1-\delta_2} + L_5 (x^T s)^{1-\delta}\right) \\
&\le C_4\left(\|\Delta x^{\perp}\| + L_3\|r^0\|(x^T s)^{1-\delta} + L_5 (x^T s)^{1-\delta}\right) \le L_{11}\,(x^T s)^{(1-\delta)/2}\max\left\{(x^T s)^{(1-\delta)/2},\; \|\Delta x\|^{1/2}\right\},
\end{aligned} \]
where $L_{11} \equiv C_4\left(L_5 + L_{10} + L_3\|r^0\|\right)$. It is easy to see that the last relation implies that $\|\Delta x\| \le L_{12}(x^T s)^{1-\delta}$, where $L_{12} = \max\{L_{11}^2, 1\}$. This relation together with (65) and (68) clearly implies that (62) holds for some constant $C_5 \ge 0$. □

Observe that Theorem 1 is an immediate consequence of Theorem 2. Indeed, when the set $X$ is bounded, conditions (60) and (61) follow from Assumptions 2 and 4.

Appendix

In this appendix, we give the proof of Proposition 1.

Proof of Proposition 1: We first prove (a). For any $k \ge 1$, setting $(x, s) = (x^k, s^k)$ in (22) and using the facts that $\bar t \in (0, 1)$, $t_k \in [0, \bar t]$, $(x^0, s^0) > 0$, $(x^k, s^k) > 0$, $(\bar x, \bar s) > 0$ and $[\nabla f(x^0) - \nabla f(\bar x)]^T (x^0 - \bar x) \ge 0$, we obtain

\[ \begin{aligned}
(1 - t_k)\left(\bar x^T s^k + x^{kT}\bar s\right)
&\le t_k^2\, x^{0T} s^0 + (1 - t_k)^2\, \bar x^T \bar s + x^{kT} s^k + t_k(1 - t_k)\left(x^{0T}\bar s + \bar x^T s^0\right) + t_k(1 - t_k)\left[\nabla f(x^0) - \nabla f(\bar x)\right]^T (x^0 - \bar x) \\
&\le x^{0T} s^0 + (1 - t_k)\,\bar x^T \bar s + x^{kT} s^k + (1 - t_k)\left(x^{0T}\bar s + \bar x^T s^0\right) + (1 - t_k)\left[\nabla f(x^0) - \nabla f(\bar x)\right]^T (x^0 - \bar x).
\end{aligned} \]

The above inequality, together with the fact that $1 - t_k \ge 1 - \bar t > 0$, implies

\[ \bar x^T s^k + x^{kT}\bar s \le \frac{1}{1 - \bar t}\left(x^{0T} s^0 + x^{kT} s^k\right) + \bar x^T \bar s + x^{0T}\bar s + \bar x^T s^0 + \left[\nabla f(x^0) - \nabla f(\bar x)\right]^T (x^0 - \bar x). \]

Since $(\bar x, \bar s) > 0$, the boundedness of the sequence $\{(x^k, s^k)\}$ follows immediately from the boundedness of the sequence $\{x^{kT} s^k\}$.

We next prove (b). Let $(\hat x, \hat s, \hat y)$ be a solution of (2)–(6). For any $k \ge 1$, setting $(x, s) = (x^k, s^k)$ in (22) and using the facts that $\bar t \in (0, 1)$, $t_k \in [0, \bar t]$, $(x^0, s^0) > 0$, $(x^k, s^k) > 0$, $(\hat x, \hat s) \ge 0$, $\hat x^T \hat s = 0$ and $[\nabla f(x^0) - \nabla f(\hat x)]^T (x^0 - \hat x) \ge 0$, we obtain

\[ \begin{aligned}
t_k\left(x^{0T} s^k + x^{kT} s^0\right)
&\le t_k^2\, x^{0T} s^0 + (1 - t_k)^2\, \hat x^T \hat s + x^{kT} s^k + t_k(1 - t_k)\left(x^{0T}\hat s + \hat x^T s^0\right) + t_k(1 - t_k)\left[\nabla f(x^0) - \nabla f(\hat x)\right]^T (x^0 - \hat x) \\
&\le t_k\, x^{0T} s^0 + x^{kT} s^k + t_k\left(x^{0T}\hat s + \hat x^T s^0\right) + t_k\left[\nabla f(x^0) - \nabla f(\hat x)\right]^T (x^0 - \hat x).
\end{aligned} \]
Using (19) and the fact that $r^k = t_k r^0$, we have $x^{kT} s^k / t_k \le x^{0T} s^0 / \rho$ for all $k \ge 1$. This observation together with the last inequality implies
$$
x^{0T} s^k + x^{kT} s^0
\le x^{0T} s^0 + \frac{x^{kT} s^k}{t_k} + x^{0T} \hat s + \hat x^T s^0
+ \bigl[\nabla f(x^0) - \nabla f(\hat x)\bigr]^T (x^0 - \hat x)
\le x^{0T} s^0 + \frac{x^{0T} s^0}{\rho} + x^{0T} \hat s + \hat x^T s^0
+ \bigl[\nabla f(x^0) - \nabla f(\hat x)\bigr]^T (x^0 - \hat x). \qquad (70)
$$
Since $(x^0, s^0) > 0$ and the right-hand side of (70) does not depend on $k$, it follows from (70) that $\{(x^k, s^k)\}$ is bounded. Using Assumption 1, it is now easy to see that $\{y^k\}$ is also bounded.

References

1. C. C. Gonzaga and R. Tapia, On the convergence of the Mizuno-Todd-Ye algorithm to the analytic center of the solution set, SIAM Journal on Optimization, 7, 1997.
2. M. Kojima, N. Megiddo and S. Mizuno, A primal-dual infeasible-interior-point algorithm for linear programming, Mathematical Programming, 61, 1993.
3. M. Kojima, N. Megiddo and T. Noma, Homotopy continuation methods for nonlinear complementarity problems, Mathematics of Operations Research, 16, 1991.
4. S. Mehrotra, Quadratic convergence in a primal-dual method, Mathematics of Operations Research, 18, 1993.
5. S. Mizuno, Polynomiality of infeasible-interior-point algorithms for linear programming, Mathematical Programming, 67, 1994.
6. R. D. C. Monteiro, T. Tsuchiya and Y. Wang, A simplified global convergence proof of the affine scaling algorithm, Annals of Operations Research, 47, 1993.
7. R. D. C. Monteiro and Y. Wang, Trust region affine scaling algorithms for linearly constrained convex and concave programs, Technical Report, School of ISyE, Georgia Institute of Technology, Atlanta, GA 30332, USA, June; to appear in Mathematical Programming.
8. R. D. C. Monteiro and S. Wright, A globally and superlinearly convergent potential reduction interior point method for convex programming, Technical Report 92-13, Dept. of Systems and Industrial Engineering, University of Arizona, Tucson, AZ 85721, USA, July.
9. R. D. C. Monteiro and S. Wright, Local convergence of interior-point algorithms for degenerate monotone LCPs, Computational Optimization and Applications, 3, 1994.
10. R. D. C. Monteiro and S. Wright, Superlinear primal-dual affine scaling algorithms for LCP, Mathematical Programming, 69, 1995.
11. R. D. C. Monteiro and S. Wright, A superlinear infeasible-interior-point affine scaling algorithm for LCP, SIAM Journal on Optimization, 6, 1996.
12. F. A. Potra, An O(nL) infeasible-interior-point algorithm for LCP with quadratic convergence, Annals of Operations Research, 62, 1996.
13. F. A. Potra, A quadratically convergent predictor-corrector method for solving linear programs from infeasible starting points, Mathematical Programming, 67, 1994.
14. F. A. Potra and Y. Ye, Interior point methods for nonlinear complementarity problems, Journal of Optimization Theory and Applications, 88, 1996.
15. F. A. Potra and Y. Ye, A quadratically convergent polynomial algorithm for solving entropy optimization problems, SIAM Journal on Optimization, 3, 1993.
16. E. M. Simantiraki and D. F. Shanno, An infeasible-interior-point method for linear complementarity problems, RUTCOR Research Report RRR 7-95, Rutgers Center for Operations Research, Rutgers University, New Brunswick, NJ 08903, USA.
17. P. Tseng and Z. Q. Luo, On the convergence of the affine scaling algorithm, Mathematical Programming, 56, 1992.
18. L. Tunçel, Constant potential primal-dual algorithms: A framework, Mathematical Programming, 66, 1994.
19. S. Wright, A path-following infeasible-interior-point algorithm for linear complementarity problems, Optimization Methods and Software, 2, 1993.
20. S. Wright, A path-following interior-point algorithm for linear and quadratic optimization problems, Annals of Operations Research, 62, 1996.
21. S. Wright, An infeasible interior point algorithm for linear complementarity problems, Mathematical Programming, 67, 1994.
22. S. Wright and D. Ralph, A superlinear infeasible-interior-point algorithm for monotone complementarity problems, Mathematics of Operations Research, 21, 1996.
23. Y. Ye, On the Q-order of convergence of interior-point algorithms for linear programming, in Proceedings of the 1992 Symposium on Applied Mathematics, F. Wu, ed., Institute of Applied Mathematics, Chinese Academy of Sciences, 1992.
24. Y. Ye and K. Anstreicher, On quadratic and O(√n L) convergence of a predictor-corrector algorithm for LCP, Mathematical Programming, 62, 1993.
25. Y. Ye, O. Güler, R. A. Tapia and Y. Zhang, A quadratically convergent O(√n L)-iteration algorithm for linear programming, Mathematical Programming, 59, 1993.
26. Y. Zhang, On the convergence of a class of infeasible interior point methods for the horizontal linear complementarity problem, SIAM Journal on Optimization, 4, 1994.
27. Y. Zhang and R. A. Tapia, Superlinear and quadratic convergence of primal-dual interior point methods for linear programming revisited, Journal of Optimization Theory and Applications, 73, 1992.
28. Y. Zhang and R. A. Tapia, A superlinearly convergent polynomial primal-dual interior point algorithm for linear programming, SIAM Journal on Optimization, 3, 1993.
29. Y. Zhang, R. A. Tapia and J. E. Dennis, On the superlinear and quadratic convergence of primal-dual interior point linear programming algorithms, SIAM Journal on Optimization, 2, 1992.
30. Y. Zhang and D. Zhang, On polynomiality of the Mehrotra-type predictor-corrector interior-point algorithms, Mathematical Programming, 68, 1995.
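The bounds discussed above say that, near a solution of (1) satisfying strict complementarity, the primal-dual affine scaling direction shrinks at the rate of the duality gap $x^T s$. The following numerical sketch is not from the paper: the two-variable instance of (1), the family of dual-feasible interior test points, and all numerical values are our own illustrative assumptions. It forms the Newton (affine scaling) system for the optimality conditions of a tiny linearly constrained convex program and checks that $\|(\Delta x, \Delta s)\|/(x^T s)$ stays bounded as the gap shrinks.

```python
import numpy as np

# Illustrative instance of problem (1):
#   minimize f(x) = 0.05*x1^2 + x2  subject to  x1 + x2 = 1, x >= 0.
# Its solution x* = (1, 0), y* = 0.1, s* = (0, 0.9) is strictly
# complementary, so the affine scaling direction should satisfy
# ||(dx, ds)|| = O(x^T s) along interior points approaching it.

A = np.array([[1.0, 1.0]])   # Ax = b with b = 1
Q = np.diag([0.1, 0.0])      # Hessian of f

def affine_scaling_direction(x, s):
    """Newton (affine scaling) direction for the KKT system
       Ax = b,  grad f(x) - A^T y - s = 0,  XSe = 0,
    at a primal- and dual-feasible point (so only the
    complementarity residual appears on the right-hand side)."""
    n, m = 2, 1
    J = np.zeros((2 * n + m, 2 * n + m))    # unknowns: (dx, dy, ds)
    J[:m, :n] = A                           # primal feasibility rows
    J[m:m + n, :n] = Q                      # dual feasibility rows
    J[m:m + n, n:n + m] = -A.T
    J[m:m + n, n + m:] = -np.eye(n)
    J[m + n:, :n] = np.diag(s)              # complementarity rows
    J[m + n:, n + m:] = np.diag(x)
    rhs = np.zeros(2 * n + m)
    rhs[m + n:] = -x * s                    # aim complementarity at zero
    d = np.linalg.solve(J, rhs)
    return d[:n], d[n:n + m], d[n + m:]

ratios = []
for mu in [1e-1, 1e-2, 1e-3, 1e-4, 1e-5, 1e-6]:
    x = np.array([1.0 - mu, mu])            # primal feasible, x > 0
    y = 0.1 * (1.0 - mu) - mu               # chosen so that x_1 s_1 = (1-mu)*mu
    s = np.array([0.1 * x[0] - y, 1.0 - y]) # dual feasible, s > 0
    dx, dy, ds = affine_scaling_direction(x, s)
    ratios.append(np.linalg.norm(np.r_[dx, ds]) / (x @ s))

print(["%.3f" % r for r in ratios])  # roughly constant: ||(dx, ds)|| = O(x^T s)
```

The point of the experiment is only qualitative: the printed ratios remain of order one as the gap drops by five orders of magnitude, which is the behavior the boundedness results above guarantee; without strict complementarity the ratio would typically blow up.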