
SUPERLINEAR CONVERGENCE OF A SYMMETRIC PRIMAL-DUAL PATH FOLLOWING ALGORITHM FOR SEMIDEFINITE PROGRAMMING

ZHI-QUAN LUO*, JOS F. STURM†, AND SHUZHONG ZHANG‡

Abstract. This paper establishes the superlinear convergence of a symmetric primal-dual path following algorithm for semidefinite programming under the assumptions that the semidefinite program has a strictly complementary primal-dual optimal solution and that the size of the central path neighborhood tends to zero. The interior point algorithm considered here closely resembles the Mizuno-Todd-Ye predictor-corrector method for linear programming, where it is known to be quadratically convergent. It is shown that when the iterates are well centered, the duality gap is reduced superlinearly after each predictor step. Indeed, if each predictor step is succeeded by $r$ consecutive corrector steps, then the predictor reduces the duality gap superlinearly with order $\frac{2}{1+2^{-r}}$. The proof relies on a careful analysis of the central path for semidefinite programming. It is shown that under the strict complementarity assumption, the primal-dual central path converges to the analytic center of the primal-dual optimal solution set, and the distance from any point on the central path to this analytic center is bounded by the duality gap.

Key words. Semidefinite programming, central path, path following, superlinear convergence.

AMS subject classifications. 90C25, 90C26, 90C60.

1. Introduction. Recently, many interior point algorithms have been developed for semidefinite programming (SDP); see for example [1, 2, 4, 8, 12, 14, 15, 17, 21]. These algorithms differ in their choices of scaling matrix, the size of the central path neighborhoods, and stepsize rules, among others. In particular, the algorithms of Kojima-Shindoh-Hara [8] and Nesterov-Todd [14, 15] are based on the primal-dual scaling, and both can be viewed as extensions of the predictor-corrector method for linear programming [11].
It has been shown [7, 9, 14, 15, 17, 21] that these algorithms for SDP retain many important properties of the interior point algorithms for linear programming, including polynomial complexity. For an overview of SDP and its applications, we refer to Vandenberghe and Boyd [19]. However, there is considerable difficulty in extending one key property of the predictor-corrector method for linear programming to the interior point algorithms for SDP: the property of quadratic convergence of the duality gap (see [20] for a proof of the LCP case). In some sense, the need for superlinear convergence in solving SDP is more pronounced than in the linear programming case, because for SDP there cannot exist any finite termination procedures as in the case of linear programming. Indeed, the recent papers of Kojima-Shida-Shindoh [7] and Potra-Sheng [16] both focus on the issue of superlinear convergence for solving SDP. In particular, the latter reference provided a sufficient condition for the superlinear convergence of an infeasible path following algorithm, while the former reference [7] established the superlinear convergence of their algorithm [8] under certain key assumptions. These assumptions are: (1) SDP is nondegenerate in the sense that the Jacobian matrix of its KKT system is nonsingular; (2) SDP has a strictly complementary optimal solution; (3) the iterates converge tangentially to the central path in the sense that the size of the central path neighborhood in which the iterates reside must tend to zero.

Among these three assumptions for superlinear convergence, (2) is inevitable since it is needed even in the case of LCP (see [20]). Assumption (3) is needed to ensure that the duality gap is reduced superlinearly after each predictor step for all points in the central path neighborhood. In reference [7], an example was given which showed that, without the tangential convergence assumption, the duality gap is reduced only linearly after one predictor step for certain points in the central path neighborhood.

Our goal in this paper is to establish the superlinear convergence of a symmetric path following algorithm for SDP under only assumptions (2) and (3) (i.e., without the nondegeneracy assumption). In particular, we consider the primal-dual path following algorithm of Nesterov-Todd [14, 15] (later discovered independently by Sturm and Zhang [17] using a V-space notion). In this paper we adopt the framework of [17], since it greatly facilitates the subsequent analysis. We show that this symmetric primal-dual path following algorithm has an order of convergence that is asymptotically quadratic (i.e., sub-quadratic). Indeed, for any given constant positive integer $r$, the algorithm can be set so that the duality gap decreases superlinearly with order $\frac{2}{1+2^{-r}}$ after one predictor (affine scaling) step followed by (at most) $r$ corrector steps. The cornerstone in our bid to establish this superlinear convergence result is a bound on the distance from any point on the central path to the optimal solution set (see Section 3).

* Room 225/CRL, Department of Electrical and Computer Engineering, McMaster University, Hamilton, Ontario, L8S 4L7, Canada (luozq@mcmail.cis.mcmaster.ca). Supported by the Natural Sciences and Engineering Research Council of Canada, Grant No. OPG009039, during a research leave to the Econometric Institute, Erasmus University Rotterdam.
† Econometric Institute, Erasmus University Rotterdam, The Netherlands (sturm@few.eur.nl).
‡ Econometric Institute, Erasmus University Rotterdam, The Netherlands (zhang@few.eur.nl).
Specifically, it is shown that, under the strict complementarity assumption, the primal-dual central path converges to the analytic center of the optimal solution set, and that the distance to this analytic center from any point on the central path can be bounded above by the duality gap. These properties of the central path are algorithm-independent and are likely to be useful in the analysis of other interior point algorithms for SDP.

The organization of this paper is as follows. At the end of this section, we describe some basic notation to be used in this paper. In Section 2, we discuss some fundamental background notions, and we make two assumptions concerning the solution set of the SDP. In Section 3, we analyze the limiting behavior of the primal-dual central path. In Section 4, the notion of V-space for SDP is reviewed and a path following algorithm in the spirit of [17] is introduced. The superlinear convergence of this algorithm is established in Section 5. Finally, some concluding remarks are given in Section 6.

Notation. The space of symmetric $n \times n$ matrices will be denoted $\mathcal{S}$. Given matrices $X$ and $Y$ in $\Re^{m_1 \times m_2}$, the standard inner product is defined by $X \bullet Y = \operatorname{tr} X^T Y$, where $\operatorname{tr}(\cdot)$ denotes the trace of a matrix. The notation $X \perp Y$ denotes orthogonality in the sense that $X \bullet Y = 0$. The Euclidean norm and its associated operator norm, viz. the spectral norm, are both denoted by $\|\cdot\|$. The Frobenius norm of $X$ is $\|X\|_F = \sqrt{X \bullet X}$. If $X \in \mathcal{S}$ is positive (semi-)definite, we write $X \succ 0$ ($X \succeq 0$). The cone of positive semidefinite matrices is denoted by $\mathcal{S}_+$ and the cone of positive definite matrices by $\mathcal{S}_{++}$. The identity matrix is denoted by $I$. We use the standard Landau notation ("big O" and "small o"). In particular, if $\{u(\mu) > 0\}$ and $\{w(\mu) > 0\}$ are real sequences, then $u(\mu) = O(w(\mu))$ means that $u(\mu)/w(\mu)$ is bounded independent of $\mu$; $u(\mu) = o(w(\mu))$ means that $\lim_{\mu \to 0} u(\mu)/w(\mu) = 0$; $u(\mu) \sim w(\mu)$ means that $\lim_{\mu \to 0} u(\mu)/w(\mu) = 1$; and $u(\mu) = \Theta(w(\mu))$ whenever $u(\mu)/w(\mu)$

and $w(\mu)/u(\mu)$ are both bounded. For a symmetric positive definite matrix, we use "$O$" and "$\Theta$" to denote the order of all its eigenvalues. Hence, for $U(\mu) \in \mathcal{S}_{++}$, the notation $U(\mu) = \Theta(w(\mu))$ signifies the existence of $\beta > \alpha > 0$ such that
$$\alpha w(\mu) I \preceq U(\mu) \preceq \beta w(\mu) I \quad \text{for all } \mu > 0.$$
Therefore, the condition $U(\mu) = \Theta(w(\mu))$ implies that the diagonal entries $U_{11}(\mu), U_{22}(\mu), \ldots, U_{nn}(\mu)$ are all $\Theta(w(\mu))$ too.

2. Problem formulation. A semidefinite programming (SDP) problem is given as
(P)  minimize $C \bullet X$ subject to $A^{(i)} \bullet X = b_i$ for $i = 1, 2, \ldots, m$, $X \succeq 0$,
where $C \in \mathcal{S}$, $A^{(1)}, A^{(2)}, \ldots, A^{(m)} \in \mathcal{S}$ and $b \in \Re^m$. The decision variable is $X \in \mathcal{S}$. The corresponding dual program can be formulated as
(D)  maximize $b^T y$ subject to $\sum_{i=1}^m y_i A^{(i)} + Z = C$, $Z \succeq 0$.
Denote the feasible sets of (P) and (D) by $\mathcal{F}_P$ and $\mathcal{F}_D$ respectively, i.e.,
$$\mathcal{F}_P = \{X \in \mathcal{S} : A^{(i)} \bullet X = b_i,\ i = 1, 2, \ldots, m,\ X \succeq 0\},$$
$$\mathcal{F}_D = \{Z \in \mathcal{S} : Z = C - \textstyle\sum_{i=1}^m y_i A^{(i)} \text{ for some } y \in \Re^m,\ Z \succeq 0\}.$$
We make the following assumptions throughout this paper.

Assumption 2.1. There exist positive definite solutions $X \in \mathcal{F}_P$ and $Z \in \mathcal{F}_D$ for (P) and (D) respectively.

Assumption 2.2. There exists a pair of strictly complementary primal-dual optimal solutions for (P) and (D). Specifically, there exists $(X^*, Z^*) \in \mathcal{F}_P \times \mathcal{F}_D$ such that
$$X^* Z^* = 0, \qquad X^* + Z^* \succ 0.$$
Since $X^* Z^* = Z^* X^* = 0$, we can diagonalize $X^*$ and $Z^*$ simultaneously. Therefore, by applying an orthonormal transformation to the problem data if necessary, we can assume without loss of generality that $X^*$, $Z^*$ are both diagonal and of the form
(2.1)  $X^* = \begin{bmatrix} \Lambda_B & 0 \\ 0 & 0 \end{bmatrix}$, $\quad Z^* = \begin{bmatrix} 0 & 0 \\ 0 & \Lambda_N \end{bmatrix}$,

where $\Lambda_B = \operatorname{diag}(\lambda_1, \ldots, \lambda_K)$, $\Lambda_N = \operatorname{diag}(\lambda_{K+1}, \ldots, \lambda_n)$ for some integer $0 \le K \le n$ and some positive scalars $\lambda_i > 0$, $i = 1, 2, \ldots, n$. Here the subscripts $B$ and $N$ signify the "basic" and "nonbasic" subspaces (following the terminology of linear programming). Throughout this paper, the decomposition of any $n \times n$ matrix $X$ is always made with respect to the above partition $B$ and $N$. In fact, we shall adhere to the following notation throughout:
$$X = \begin{bmatrix} X_B & X_U \\ X_U^T & X_N \end{bmatrix},$$
so $X_U$ will always denote the off-diagonal block of $X$, with size $K \times (n-K)$, etc.

Notice that $X \in \mathcal{F}_P$ is an optimal solution to (P) if and only if $X Z^* = 0$. Hence, by Assumption 2.2, the primal optimal solution set can be written as
$$\mathcal{F}_P^* = \{X \in \mathcal{F}_P : X_U = 0 \text{ and } X_N = 0\}.$$
Analogously, the dual optimal solution set is given by
$$\mathcal{F}_D^* = \{Z \in \mathcal{F}_D : Z_U = 0 \text{ and } Z_B = 0\}.$$
Given $\mu \in \Re_{++}$, the pair $(X, Z) \in \mathcal{F}_P \times \mathcal{F}_D$ is said to be the $\mu$-center $(X(\mu), Z(\mu))$ if and only if
(2.2)  $XZ = \mu I$.
We refer to [8, 18] for a proof of the existence and uniqueness of $\mu$-centers. The central path of the problem (P) is the curve $\{(X(\mu), Z(\mu)) : \mu > 0\}$.

The logarithmic barrier function $\log \det X_B$ is strictly concave on the relative interior of the primal optimal solution set
$$\{X \in \mathcal{F}_P^* : X_B \succ 0\}.$$
Moreover, Assumption 2.1 implies that $\mathcal{F}_P^*$ and $\mathcal{F}_D^*$ are bounded. The unique maximizer $X^a$ of $\log \det X_B$ on the relative interior of $\mathcal{F}_P^*$ is called the analytic center of $\mathcal{F}_P^*$. It is characterized by the Karush-Kuhn-Tucker system
(2.3)  $X_B^a Z_B = I$, $\quad \sum_{i=1}^m y_i A_B^{(i)} + Z_B = 0$, $\quad X^a \in \mathcal{F}_P^*$ and $Z_B \succ 0$.
In a similar fashion, we define the analytic center $Z^a$ of $\mathcal{F}_D^*$ as the maximizer of the logarithmic barrier $\log \det Z_N$ on the relative interior of $\mathcal{F}_D^*$; $Z^a$ is characterized by the Karush-Kuhn-Tucker system
$$X_N Z_N^a = I, \quad A^{(i)} \bullet X = 0,\ i = 1, 2, \ldots, m, \quad Z^a \in \mathcal{F}_D^* \text{ and } X_N \succ 0.$$
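The complementarity conditions above are easy to check numerically. The following sketch builds a diagonal strictly complementary pair of the form (2.1) and verifies $X^*Z^* = 0$ and $X^* + Z^* \succ 0$, together with a diagonal $\mu$-center satisfying (2.2); the specific matrices are hypothetical illustrative data, not taken from the paper.

```python
import numpy as np

# Hypothetical strictly complementary pair in the form (2.1):
# X* = diag(Lambda_B, 0), Z* = diag(0, Lambda_N) with K = 2, n = 4.
Lam_B = np.diag([2.0, 0.5])
Lam_N = np.diag([1.5, 3.0])
X_star = np.block([[Lam_B, np.zeros((2, 2))],
                   [np.zeros((2, 2)), np.zeros((2, 2))]])
Z_star = np.block([[np.zeros((2, 2)), np.zeros((2, 2))],
                   [np.zeros((2, 2)), Lam_N]])

# Complementarity: X* Z* = 0.
assert np.allclose(X_star @ Z_star, 0)
# Strict complementarity: X* + Z* is positive definite.
assert np.min(np.linalg.eigvalsh(X_star + Z_star)) > 0

# A mu-center must satisfy X Z = mu I exactly (2.2); for diagonal
# iterates this forces the eigenvalues of Z to be mu times the
# reciprocals of those of X.
mu = 0.1
d = np.array([2.0, 0.5, 0.2, 0.05])
X_mu = np.diag(d)
Z_mu = mu * np.diag(1.0 / d)
assert np.allclose(X_mu @ Z_mu, mu * np.eye(4))
print("strict complementarity and mu-center identities verified")
```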

3. Properties of the central path. The notion of central path plays a fundamental role in the development of interior point methods for linear programming. In this section, we shall study the analytic properties of the central path in the context of semidefinite programming. These properties will be used in Section 5, where we perform the convergence analysis of a predictor-corrector algorithm for SDP.

For linear programming (i.e., the $A^{(i)}$'s and $C$ are diagonal), it is known that the central path curve converges, $(X(\mu), Z(\mu)) \to (X^a, Z^a)$ as $\mu \to 0$, with $(X^a, Z^a)$ being the analytic centers of the primal and dual optimal solution sets $\mathcal{F}_P^*$ and $\mathcal{F}_D^*$ respectively ([10]). Another important property of the central path in the context of linear programming is that it never converges tangentially to the optimal face [3]. This means that for any point on the central path, the distance to the end of the central path is of the same order as the distance to the optimal face, viz. $O(\mu)$. The aim of this section is to establish a similar property of the central path for semidefinite programming. More specifically, we shall prove that
$$\|X(\mu) - X^a\| + \|Z(\mu) - Z^a\| = O(\mu).$$
We begin with the following lemma, which shows that the set $\{(X(\mu), Z(\mu)) : 0 < \mu < \bar\mu\}$ is bounded.

Lemma 3.1. For any $\bar\mu > 0$ there holds
$$\|X(\mu)\| + \|Z(\mu)\| = O(\mu + \bar\mu).$$
Proof. Since $A^{(i)} \bullet (X(\mu) - X(\bar\mu)) = 0$ for $i = 1, \ldots, m$, and $Z(\mu) - Z(\bar\mu) \in \operatorname{Span}\{A^{(i)},\ i = 1, \ldots, m\}$, it follows that $(X(\mu) - X(\bar\mu)) \perp (Z(\mu) - Z(\bar\mu))$. Using this property, we obtain
$$n\mu + n\bar\mu = X(\mu) \bullet Z(\mu) + X(\bar\mu) \bullet Z(\bar\mu) = X(\mu) \bullet Z(\bar\mu) + Z(\mu) \bullet X(\bar\mu).$$
Since $X(\mu) \succ 0$ and $Z(\mu) \succ 0$, we have
$$\|X(\mu)\| + \|Z(\mu)\| = O(X(\mu) \bullet Z(\bar\mu) + Z(\mu) \bullet X(\bar\mu)) = O(\mu + \bar\mu).$$
It follows from Lemma 3.1 that the central path has a limit point. We now show that any limit point of the central path $\{(X(\mu), Z(\mu))\}$ is a strictly complementary optimal primal-dual pair.

Lemma 3.2. For any $\mu \in (0, \bar\mu)$ there holds
$$X_B(\mu) = \Theta(1), \quad Z_B(\mu) = \Theta(\mu), \quad X_N(\mu) = \Theta(\mu), \quad Z_N(\mu) = \Theta(1).$$
Hence, any limit point of $\{(X(\mu), Z(\mu))\}$ as $\mu \to 0$ is a pair of strictly complementary primal-dual optimal solutions of (P) and (D).

Proof. Let $0 < \mu < \bar\mu$.
For notational convenience, we will use $X$ and $Z$ to denote the matrices $X(\mu)$ and $Z(\mu)$. Let $(X^*, Z^*)$ be the pair of strictly complementary

primal-dual optimal solutions postulated by Assumption 2.2. Since $(X - X^*) \perp (Z - Z^*)$, we have
$$0 = (X - X^*) \bullet (Z - Z^*) = X \bullet Z - X^* \bullet Z - X \bullet Z^* = \operatorname{tr}(\mu I - X^* Z - X Z^*) = n\mu - \sum_{i=1}^K \lambda_i Z_{ii} - \sum_{i=K+1}^n \lambda_i X_{ii},$$
where the last step follows from (2.1). Since $\lambda_i > 0$ for all $i$, and $X_{ii} \ge 0$, $Z_{ii} \ge 0$ (by the positive semidefiniteness of $X$ and $Z$), we obtain
$$Z_{ii} = O(\mu),\ i = 1, \ldots, K; \qquad X_{ii} = O(\mu),\ i = K+1, \ldots, n.$$
Since $X \succeq 0$, $Z \succeq 0$, it follows that
(3.1)  $X_N = O(\mu)$, $\quad Z_B = O(\mu)$.
From $X \succ 0$ and $Z \succ 0$ we obtain
$$X_N - X_U^T X_B^{-1} X_U \succ 0, \qquad Z_B - Z_U Z_N^{-1} Z_U^T \succ 0.$$
Now consider the identities
$$\log \det X = \log \det X_B + \log \det (X_N - X_U^T X_B^{-1} X_U),$$
$$\log \det Z = \log \det Z_N + \log \det (Z_B - Z_U Z_N^{-1} Z_U^T).$$
Since $\det X \det Z = \det(\mu I) = \mu^n$, it follows that
$$\log \det X + \log \det Z = n \log \mu,$$
and hence
$$0 = \log \det X_B + \log \det \frac{X_N - X_U^T X_B^{-1} X_U}{\mu} + \log \det Z_N + \log \det \frac{Z_B - Z_U Z_N^{-1} Z_U^T}{\mu}.$$
By the estimates (3.1) and using Lemma 3.1, we see that
$$X_B = O(1), \quad (X_N - X_U^T X_B^{-1} X_U)/\mu = O(1), \quad Z_N = O(1), \quad (Z_B - Z_U Z_N^{-1} Z_U^T)/\mu = O(1).$$
Therefore each of the four logarithm terms in the preceding equation is bounded from above as $\mu \to 0$. Since these four terms sum to zero, we must have
$$X_B = \Theta(1), \quad Z_N = \Theta(1), \quad (X_N - X_U^T X_B^{-1} X_U)/\mu = \Theta(1), \quad (Z_B - Z_U Z_N^{-1} Z_U^T)/\mu = \Theta(1).$$

Together with (3.1), this implies
$$X_N(\mu) = \Theta(\mu), \qquad Z_B(\mu) = \Theta(\mu).$$
This completes the proof of the lemma.

Lemma 3.2 provides a precise result on the order of the eigenvalues of $X_B(\mu)$, $X_N(\mu)$, $Z_B(\mu)$ and $Z_N(\mu)$. We will now prove a preliminary result on the order of the off-diagonal blocks $X_U(\mu)$ and $Z_U(\mu)$.

Lemma 3.3. For $\mu \in (0, \bar\mu)$, there holds $\|Z_U(\mu)\| = \Theta(1)\|X_U(\mu)\|$,
(3.2)  $-X_U(\mu) \bullet Z_U(\mu) = \Theta(1)\|X_U(\mu)\|^2$,
and
$$\|X_U(\mu)\| = o(\sqrt{\mu}), \quad \|Z_U(\mu)\| = o(\sqrt{\mu}) \quad \text{as } \mu \to 0.$$
Proof. By the central path definition, we have
$$\mu I = \begin{bmatrix} X_B(\mu) & X_U(\mu) \\ X_U(\mu)^T & X_N(\mu) \end{bmatrix} \begin{bmatrix} Z_B(\mu) & Z_U(\mu) \\ Z_U(\mu)^T & Z_N(\mu) \end{bmatrix}.$$
Expanding the right-hand side and comparing the upper-right corner of the above identity, we have
(3.3)  $0 = X_B(\mu) Z_U(\mu) + X_U(\mu) Z_N(\mu)$,
or equivalently,
(3.4)  $Z_U(\mu) = -X_B(\mu)^{-1} X_U(\mu) Z_N(\mu)$.
Using $X_B(\mu) = \Theta(1)$ and $Z_N(\mu) = \Theta(1)$ (see Lemma 3.2), this implies that $\|Z_U(\mu)\| = \Theta(1)\|X_U(\mu)\|$. This proves the first part of the lemma.

We now prove (3.2). Pre-multiplying both sides of (3.4) by the matrix $X_U(\mu)^T$ yields
$$X_U(\mu)^T Z_U(\mu) = -X_U(\mu)^T X_B(\mu)^{-1} X_U(\mu) Z_N(\mu).$$
Now taking the trace of the above matrices, we obtain
$$X_U(\mu) \bullet Z_U(\mu) = -\operatorname{tr} X_U(\mu)^T X_B(\mu)^{-1} X_U(\mu) Z_N(\mu) = -\operatorname{tr} Z_N(\mu)^{1/2} X_U(\mu)^T X_B(\mu)^{-1} X_U(\mu) Z_N(\mu)^{1/2} = -\Theta(1)\|X_U(\mu)\|^2,$$
where we used the fact that $X_B(\mu) = \Theta(1)$ and $Z_N(\mu) = \Theta(1)$, as shown in Lemma 3.2. This establishes (3.2).

It remains to prove the last part of the lemma. We consider an arbitrary convergent sequence $\{(X(\mu_k), Z(\mu_k)),\ k = 1, 2, \ldots\}$ on the central path with $\mu_k \to 0$; its limit is denoted by $\bar X$, $\bar Z$, so that
(3.5)  $X(\mu_k) - \bar X = o(1)$, $\quad Z(\mu_k) - \bar Z = o(1)$.

By Lemma 3.2, we have $\bar X_N = 0$, $\bar X_U = \bar Z_U = 0$ and $\bar Z_B = 0$. Since $(X(\mu_k) - \bar X) \perp (Z(\mu_k) - \bar Z)$, we have
(3.6)  $0 = (X_B(\mu_k) - \bar X_B) \bullet Z_B(\mu_k) + 2 X_U(\mu_k) \bullet Z_U(\mu_k) + X_N(\mu_k) \bullet (Z_N(\mu_k) - \bar Z_N)$.
Using (3.2) and (3.6), we obtain
(3.7)  $\|X_U(\mu_k)\|^2 = -\Theta(1)\,(X_U(\mu_k) \bullet Z_U(\mu_k)) = \Theta(1)\,((X_B(\mu_k) - \bar X_B) \bullet Z_B(\mu_k) + X_N(\mu_k) \bullet (Z_N(\mu_k) - \bar Z_N)) = o(\mu_k),$
where in the last step we used (3.5) and Lemma 3.2. This implies that $\|X_U(\mu)\|^2 = o(\mu)$ holds true on the entire central path curve, for otherwise there would exist a convergent subsequence $\{(X(\mu_k), Z(\mu_k)),\ k = 1, 2, \ldots\}$ for which
$$\liminf_{k \to \infty} \|X_U(\mu_k)\|^2 / \mu_k > 0,$$
contradicting (3.7). The proof is complete.

We now use Lemma 3.2 and Lemma 3.3 to prove the convergence of the central path $\{(X(\mu), Z(\mu)) : \mu > 0\}$ to the analytic center $(X^a, Z^a)$, and to estimate the rate at which it converges to this limit.

Lemma 3.4. The primal-dual central path $\{(X(\mu), Z(\mu)) : \mu > 0\}$ converges to the analytic centers $(X^a, Z^a)$ of $\mathcal{F}_P^*$ and $\mathcal{F}_D^*$ respectively. Moreover, if we let
$$\epsilon(\mu) = \frac{\|X_U(\mu)\|}{\sqrt{\mu}},$$
then
$$\|X_B(\mu) - X_B^a\| = O((\epsilon(\mu) + \sqrt{\mu})^2), \qquad \|Z_N(\mu) - Z_N^a\| = O((\epsilon(\mu) + \sqrt{\mu})^2).$$
Proof. Suppose $0 < \mu < \bar\mu$. By expanding $X(\mu) Z(\mu) = \mu I$ and comparing the upper-left block, we obtain
$$\mu I_B = X_B(\mu) Z_B(\mu) + X_U(\mu) Z_U(\mu)^T.$$
Pre-multiplying both sides with $(\mu X_B(\mu))^{-1}$ yields
(3.8)  $X_B(\mu)^{-1} = \dfrac{Z_B(\mu)}{\mu} + \dfrac{X_B(\mu)^{-1} X_U(\mu) Z_U(\mu)^T}{\mu}.$
Let $\mathcal{J}$ be an index set of minimal cardinality such that
$$\operatorname{Span}\{A_B^{(i)},\ i \in \mathcal{J}\} = \operatorname{Span}\{A_B^{(i)},\ i = 1, 2, \ldots, m\}.$$
Since $Z_B^* = 0$, it follows from dual feasibility and (3.8) that
(3.9)  $\dfrac{Z_B(\mu)}{\mu} = \sum_{i \in \mathcal{J}} \eta_i(\mu) A_B^{(i)} = X_B(\mu)^{-1} - \dfrac{X_B(\mu)^{-1} X_U(\mu) Z_U(\mu)^T}{\mu}$
for some scalars $\eta_i(\mu)$.

From Lemma 3.2, we know that $Z_B(\mu)/\mu = \Theta(1)$. As the matrices $A_B^{(i)}$, $i \in \mathcal{J}$, are linearly independent, this implies that the sequences $\{\eta_i(\mu) : \mu \in (0, \bar\mu)\}$, $i \in \mathcal{J}$, are bounded. Moreover, we have from Lemma 3.3 that
$$\lim_{\mu \to 0} \frac{X_B(\mu)^{-1} X_U(\mu) Z_U(\mu)^T}{\mu} = 0.$$
Hence, any limit $\bar X$, $\bar\eta_i$, $i \in \mathcal{J}$, for $\mu \to 0$ satisfies the following nonlinear system of equations:
(3.10)  $\bar X_B^{-1} - \sum_{i \in \mathcal{J}} \bar\eta_i A_B^{(i)} = 0$, $\quad A_B^{(i)} \bullet \bar X_B = b_i$, $i \in \mathcal{J}$.
Moreover, since $Z_B(\mu)/\mu = \Theta(1)$ and $X_B(\mu) = \Theta(1)$ for $\mu \in (0, \bar\mu)$, we have
$$\sum_{i \in \mathcal{J}} \bar\eta_i A_B^{(i)} \succ 0, \qquad \bar X_B \succ 0.$$
By (2.3), this means that $\bar X = X^a$, the analytic center of $\mathcal{F}_P^*$, and hence
$$X(\mu) - X^a = o(1) \quad \text{as } \mu \to 0.$$
Using the linear independence of the matrices $A_B^{(i)}$, $i \in \mathcal{J}$, and the fact that $X_B^a$ is positive definite, it can be checked that the Jacobian (with respect to the variables $X_B$ and $\eta_i$, $i \in \mathcal{J}$) of the nonlinear system (3.10) is nonsingular at the solution $X_B^a$, $\bar\eta_i$, $i \in \mathcal{J}$. Hence we can apply the classical inverse function theorem to the above nonlinear system at the point $X_B = X_B^a$, $\eta_i = \bar\eta_i$, $i \in \mathcal{J}$, to obtain
(3.11)  $\|X_B(\mu) - X_B^a\| = O\left( \left\| X_B(\mu)^{-1} - \sum_{i \in \mathcal{J}} \eta_i(\mu) A_B^{(i)} \right\| + \sum_{i \in \mathcal{J}} |A_B^{(i)} \bullet X_B(\mu) - b_i| \right).$
By (3.9) and the definition of $\epsilon(\mu)$, we obtain from Lemma 3.3 that
$$\left\| X_B(\mu)^{-1} - \sum_{i \in \mathcal{J}} \eta_i(\mu) A_B^{(i)} \right\| = \left\| \frac{X_B(\mu)^{-1} X_U(\mu) Z_U(\mu)^T}{\mu} \right\| = O(\epsilon(\mu)^2).$$
Also we have from $X(\mu) \in \mathcal{F}_P$ that
$$|A_B^{(i)} \bullet X_B(\mu) - b_i| = |2 A_U^{(i)} \bullet X_U(\mu) + A_N^{(i)} \bullet X_N(\mu)| = O(\epsilon(\mu)\sqrt{\mu} + \mu) \quad \text{for } i \in \mathcal{J}.$$
Substituting the above two bounds into (3.11) yields
$$\|X_B(\mu) - X_B^a\| = O((\epsilon(\mu) + \sqrt{\mu})^2).$$
It can be shown by an analogous argument that
$$\|Z_N(\mu) - Z_N^a\| = O((\epsilon(\mu) + \sqrt{\mu})^2).$$
The proof is complete.
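For a diagonal SDP (i.e., a linear program), the central path can be computed up to a single scalar equation, so the convergence to the analytic center established in this section can be observed numerically. The following sketch uses hypothetical illustrative data (not from the paper): minimize $c^T x$ over the simplex $x_1 + x_2 + x_3 = 1$, $x \ge 0$, with $c = (0, 0, 1)$; the optimal face is $\{x : x_1 + x_2 = 1,\ x_3 = 0\}$ and its analytic center is $(1/2, 1/2, 0)$.

```python
c = [0.0, 0.0, 1.0]

def x_of_mu(mu):
    # On the central path, x_i = mu / (c_i - y) where the dual variable
    # y < min(c_i) solves sum_i mu / (c_i - y) = 1.  The sum is
    # increasing in y, so bisection applies.
    lo, hi = -100.0, -1e-12
    for _ in range(200):
        y = 0.5 * (lo + hi)
        if sum(mu / (ci - y) for ci in c) > 1.0:
            hi = y
        else:
            lo = y
    y = 0.5 * (lo + hi)
    return [mu / (ci - y) for ci in c]

center = [0.5, 0.5, 0.0]  # analytic center of the optimal face
for mu in [1e-2, 1e-3, 1e-4]:
    x = x_of_mu(mu)
    dist = sum((xi - ci) ** 2 for xi, ci in zip(x, center)) ** 0.5
    # The ratio dist/mu stays bounded: the distance to the analytic
    # center is O(mu), in line with the estimates of this section.
    print(mu, dist / mu)
```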

Lemma 3.4 only provides a rough sketch of the convergence behavior of the central path as $\mu \to 0$. Our goal is to characterize this convergence behavior more precisely.

Theorem 3.5. Let $\mu \in (0, \bar\mu)$. There holds
(3.12)  $X_B(\mu) = \Theta(1)$, $Z_N(\mu) = \Theta(1)$, $X_N(\mu) = \Theta(\mu)$, $Z_B(\mu) = \Theta(\mu)$,
and
(3.13)  $\|X(\mu) - X^a\| = O(\mu)$, $\quad \|Z(\mu) - Z^a\| = O(\mu)$.
Proof. The estimate (3.12) is already known from Lemma 3.2, so we only need to prove (3.13). By Lemma 3.3 and Lemma 3.4, it is sufficient to show that $\|X_U(\mu)\| = O(\mu)$. Suppose to the contrary that there exists a sequence
(3.14)  $\{(X(\mu_k), Z(\mu_k)),\ k = 1, 2, \ldots\}$ with $\|X_U(\mu_k)\| > 0$ for all $k$,
and
(3.15)  $\lim_{k \to \infty} \dfrac{\mu_k}{\|X_U(\mu_k)\|} = 0.$
With the notation of Lemma 3.4, the condition (3.15) implies
(3.16)  $\epsilon(\mu_k) + \sqrt{\mu_k} = \Theta\left( \dfrac{\|X_U(\mu_k)\|}{\sqrt{\mu_k}} \right).$
By virtue of Lemma 3.2, we can choose the subsequence (3.14) such that
$$\lim_{k \to \infty} \frac{Z_B(\mu_k)}{\mu_k}$$
exists, and using Lemma 3.4 and relation (3.16), we can also assume the existence of
$$\bar x_B = \lim_{k \to \infty} \frac{\mu_k}{\|X_U(\mu_k)\|^2} (X_B(\mu_k) - X_B^a).$$
From the existence of the above limits, we obtain
(3.17)  $\lim_{k \to \infty} \dfrac{(X_B(\mu_k) - X_B^a) \bullet Z_B(\mu_k)}{\|X_U(\mu_k)\|^2} = \lim_{k \to \infty} \bar x_B \bullet \dfrac{Z_B(\mu_k)}{\mu_k}.$
Notice that the hypothesis (3.15) implies that
$$\lim_{k \to \infty} \frac{\mu_k}{\|X_U(\mu_k)\|^2} (X(\mu_k) - X^a) = \begin{bmatrix} \bar x_B & 0 \\ 0 & 0 \end{bmatrix}.$$
Using also $Z_B^a = 0$, we thus obtain for any $l = 1, 2, \ldots$ that
$$\bar x_B \bullet \frac{Z_B(\mu_l)}{\mu_l} = \bar x_B \bullet \frac{Z_B(\mu_l) - Z_B^a}{\mu_l} = \lim_{j \to \infty} \frac{\mu_j}{\mu_l \|X_U(\mu_j)\|^2} \big((X(\mu_j) - X^a) \bullet (Z(\mu_l) - Z^a)\big) = 0,$$

where the last step is due to the orthogonality condition $(X(\mu_j) - X^a) \perp (Z(\mu_l) - Z^a)$ for all $j$ and $l$. Therefore,
(3.18)  $0 = \lim_{l \to \infty} \bar x_B \bullet \dfrac{Z_B(\mu_l)}{\mu_l} = \lim_{k \to \infty} \dfrac{(X_B(\mu_k) - X_B^a) \bullet Z_B(\mu_k)}{\|X_U(\mu_k)\|^2},$
where we used (3.17). Analogously, it can be shown that
(3.19)  $\lim_{k \to \infty} \dfrac{X_N(\mu_k) \bullet (Z_N(\mu_k) - Z_N^a)}{\|X_U(\mu_k)\|^2} = 0.$
Since $(X(\mu_k) - X^a) \perp (Z(\mu_k) - Z^a)$, we have from (3.18) and (3.19) that
$$0 = \lim_{k \to \infty} \frac{(X(\mu_k) - X^a) \bullet (Z(\mu_k) - Z^a)}{\|X_U(\mu_k)\|^2} = \lim_{k \to \infty} \frac{2\, X_U(\mu_k) \bullet Z_U(\mu_k)}{\|X_U(\mu_k)\|^2},$$
which clearly contradicts (3.2). The proof is complete.

Theorem 3.5 characterizes completely the limiting behavior of the primal-dual central path as $\mu \to 0$. We point out that this limiting behavior was well understood in the context of linear programming and the monotone horizontal linear complementarity problem; see Güler [3] and Monteiro and Tsuchiya [13] respectively. Notice that under a nondegeneracy assumption (the reader is referred to Alizadeh, Haeberly and Overton [2] or Kojima, Shida and Shindoh [7] for a discussion of this concept), i.e., if the Jacobian of the nonlinear system
$$A^{(i)} \bullet X = b_i,\ i = 1, 2, \ldots, m, \qquad \sum_{i=1}^m y_i A^{(i)} + Z = C, \qquad XZ + ZX = 0, \qquad X \in \mathcal{S},\ Z \in \mathcal{S},$$
is nonsingular at $(X^a, y^a, Z^a)$, then the estimates (3.13) follow immediately from the application of the classical inverse function theorem. Thus, the real contribution of Theorem 3.5 lies in establishing these estimates in the absence of the nondegeneracy assumption.

It is known that in the case of linear programming the proof of quadratic convergence of predictor-corrector interior point algorithms requires an error bound result of Hoffman. This error bound states that the distance from any vector $x \in \Re^n$ to a polyhedral set $\mathcal{P} = \{x : Ax \le a\}$ can be bounded in terms of the "amount of constraint violation" at $x$, namely $\|[Ax - a]^+\|$, where $[\cdot]^+$ denotes the positive part of a vector.
More precisely, Hoffman's error bound ([5]) states that there exists some constant $\tau > 0$ such that
$$\operatorname{dist}(x, \mathcal{P}) \le \tau \|[Ax - a]^+\| \qquad \forall x \in \Re^n.$$
Unfortunately, this error bound no longer holds for linear systems over the cone of positive semidefinite matrices (see the example below). In fact, much of the difficulty in the local analysis of interior point algorithms for SDP can be attributed to this lack

of an analog of Hoffman's error bound result (see the analysis of [7, 16]). Specifically, without such an error bound result, it is difficult to estimate the distance from the current iterates to the optimal solution set. In essence, what we have established in Theorem 3.5 is an error bound result along the central path. In other words, although a Hoffman type error bound cannot hold over the entire feasible set of (P), it nevertheless still holds true in the restricted region "near the central path". One consequence of this restriction to the central path is that we will need to require the iterates to stay "sufficiently close" to the central path in order to establish the superlinear convergence of the algorithm. Such a requirement on the iterates was called "tangential convergence to the optimal solution set" by Kojima et al. [7]. Notice that the analysis in this reference required the additional nondegeneracy assumption to establish their superlinear convergence result. In contrast, this assumption is no longer needed in our analysis because Theorem 3.5 holds without the nondegeneracy assumption.

Example 1. Consider the following linear system over the cone of positive semidefinite matrices in $\Re^{2 \times 2}$:
$$X_{11} = 0, \quad X_{22} = 1, \quad X = \begin{bmatrix} X_{11} & X_{12} \\ X_{12} & X_{22} \end{bmatrix} \succeq 0.$$
Clearly, there is exactly one solution $X^*$ to the above linear system, namely
$$X^* = \begin{bmatrix} 0 & 0 \\ 0 & 1 \end{bmatrix}.$$
For each $\epsilon > 0$, consider the matrix
$$X(\epsilon) = \begin{bmatrix} \epsilon^2 & \epsilon \\ \epsilon & 1 \end{bmatrix}.$$
Clearly, $X(\epsilon) \succeq 0$. The amount of constraint violation is equal to $\epsilon^2$. However, the distance $\|X(\epsilon) - X^*\|_F = \Theta(\epsilon)$. Thus, there cannot exist some fixed $\tau > 0$ such that $\|X(\epsilon) - X^*\| \le \tau \epsilon^2$ for all $\epsilon > 0$.

4. A polynomial predictor-corrector algorithm. In [14], Nesterov and Todd developed a theoretical foundation for interior point methods for self-scaled cones. This framework was then used in [15] to generalize primal-dual interior point methods from linear programming to conic convex programming over self-scaled cones.
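The failure of a Hoffman constant in Example 1 is easy to confirm numerically: the ratio of distance to constraint violation blows up like $1/\epsilon$. A minimal sketch (the matrix $X(\epsilon)$ is taken from the example above; numpy is an assumed dependency):

```python
import numpy as np

# The 2x2 example: constraints X11 = 0, X22 = 1, X psd.
# X(eps) = [[eps^2, eps], [eps, 1]] is psd (rank one), violates only
# X11 = 0, and the violation eps^2 shrinks much faster than the
# distance Theta(eps) to the unique solution X*.
X_star = np.array([[0.0, 0.0], [0.0, 1.0]])

for eps in [1e-1, 1e-2, 1e-3]:
    X = np.array([[eps ** 2, eps], [eps, 1.0]])
    assert np.min(np.linalg.eigvalsh(X)) >= -1e-12       # psd
    violation = abs(X[0, 0]) + abs(X[1, 1] - 1.0)        # = eps^2
    dist = np.linalg.norm(X - X_star)                    # = Theta(eps)
    # ratio ~ sqrt(2)/eps: unbounded, so no fixed tau can work
    print(eps, dist / violation)
```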
The results of Nesterov and Todd can be specialized to semidefinite programming, since the cone $\mathcal{S}_+$ of positive semidefinite matrices is self-scaled. In fact, the main results in Sturm and Zhang [17] can be viewed as such a specialization. However, unlike Nesterov and Todd [14, 15], Sturm and Zhang [17] adopt the V-space framework, thus generalizing the approach of Kojima et al. [6]. We begin by summarizing some of the results on V-space path following for SDP that were obtained in [17].

Let $(X, Z) \in \mathcal{F}_P \times \mathcal{F}_D$ with $X \succ 0$, $Z \succ 0$. Then there exists a unique positive definite matrix $D \in \mathcal{S}_{++}$ such that ([17, Lemma 2.1])
(4.1)  $X = DZD$.
Let $L$ be such that
(4.2)  $LL^T = D$,

and let $V = L^T Z L$. It follows that
$$V = L^{-1} X L^{-T} = L^T Z L.$$
The quantity
$$\delta(X, Z) = \left\| I - \frac{L^{-1} X Z L}{\mu} \right\|_F$$
serves as a centrality measure, with $\mu = X \bullet Z / n$. Considering the identity
$$\delta(X, Z) = \left\| I - \frac{L^{-1} X Z L}{\mu} \right\|_F = \left\| I - \frac{X^{1/2} Z X^{1/2}}{\mu} \right\|_F,$$
it becomes apparent that $\delta(X, Z)$ is the same centrality measure as used in many other papers, including [8] and [12]. It is easy to see that the central path is the set of solutions $(X, Z)$ with $\delta(X, Z) = 0$ or, equivalently, those solutions for which $V = \sqrt{\mu} I$. Moreover, we have
(4.3)  $(1 + \delta(X, Z))\mu I \succeq V^2 \succeq (1 - \delta(X, Z))\mu I.$

In V-space path following, we want to drive the V-iterates towards the origin by Newton's method, in such a way that the iterates reside in a cone around the identity matrix. Before stating the Newton equation, we need to introduce the linear space $\mathcal{A}(L)$ and its orthogonal complement in $\mathcal{S}$:
$$\mathcal{A}(L) = \operatorname{Span}\{L^T A^{(i)} L,\ i = 1, 2, \ldots, m\},$$
$$\mathcal{A}^\perp(L) = \{X \in \mathcal{S} : (L^T A^{(i)} L) \bullet X = 0 \text{ for } i = 1, 2, \ldots, m\}.$$
A Newton direction for obtaining a $(\gamma\mu)$-center, for some $\gamma \in [0, 1]$, is the solution $(\Delta X, \Delta Z)$ of the following system of linear equations ([17], equation (7) therein):
(4.4)  $\Delta X + D \Delta Z D = \gamma\mu Z^{-1} - X$, $\quad \Delta X \in \mathcal{A}^\perp(I)$, $\quad \Delta Z \in \mathcal{A}(I)$.
For $\gamma = 0$, we denote the solution of (4.4) by $(\Delta X^p, \Delta Z^p)$, the predictor direction. For $\gamma = 1$, the solution is denoted by $(\Delta X^c, \Delta Z^c)$, the corrector direction. If we let
$$\overline{\Delta X} = L^{-1} \Delta X L^{-T}, \qquad \overline{\Delta Z} = L^T \Delta Z L,$$
then we can rewrite (4.4) as
$$\overline{\Delta X} + \overline{\Delta Z} = \gamma\mu V^{-1} - V, \qquad \overline{\Delta X} \in \mathcal{A}^\perp(L), \qquad \overline{\Delta Z} \in \mathcal{A}(L).$$
It follows from orthogonality that
(4.5)  $\|\overline{\Delta X}^p\|_F^2 + \|\overline{\Delta Z}^p\|_F^2 = \|V\|_F^2 = n\mu.$
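The scaling matrix $D$ of (4.1) admits a standard closed form, $D = Z^{-1/2}(Z^{1/2} X Z^{1/2})^{1/2} Z^{-1/2}$ (the matrix geometric mean of $X$ and $Z^{-1}$; this formula is not spelled out in the text above and is used here as a known fact). The following sketch, with randomly generated placeholder iterates and the symmetric choice $L = D^{1/2}$, verifies (4.1), checks that the two expressions for $V$ coincide, and evaluates the centrality measure $\delta(X, Z)$:

```python
import numpy as np

def sqrtm_psd(A):
    # Symmetric square root via eigendecomposition.
    w, Q = np.linalg.eigh(A)
    return (Q * np.sqrt(np.maximum(w, 0.0))) @ Q.T

def nt_scaling(X, Z):
    # The unique D > 0 with X = D Z D.
    Zh = sqrtm_psd(Z)
    Zih = np.linalg.inv(Zh)
    return Zih @ sqrtm_psd(Zh @ X @ Zh) @ Zih

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))
X = A @ A.T + np.eye(3)   # placeholder positive definite iterates
Z = B @ B.T + np.eye(3)

D = nt_scaling(X, Z)
assert np.allclose(D @ Z @ D, X)      # (4.1)
L = sqrtm_psd(D)                      # one valid choice with L L^T = D
Linv = np.linalg.inv(L)
V1 = Linv @ X @ Linv.T                # V = L^{-1} X L^{-T}
V2 = L.T @ Z @ L                      # V = L^T Z L
assert np.allclose(V1, V2)            # the two expressions agree
mu = np.trace(X @ Z) / 3
delta = np.linalg.norm(np.eye(3) - V1 @ V1 / mu)  # delta(X, Z)
print("delta(X,Z) =", float(delta))
```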

The corrector direction does not change the duality gap:
(4.6)  $(X + \Delta X^c) \bullet (Z + \Delta Z^c) = X \bullet Z,$
whereas
(4.7)  $(X + t \Delta X^p) \bullet (Z + t \Delta Z^p) = (1 - t) X \bullet Z$
for any $t \in \Re$; see equation (6) of [17].

Algorithm SDP($\epsilon$). Given $(X^0, Z^0) \in \mathcal{F}_P \times \mathcal{F}_D$ with $\delta(X^0, Z^0) \le 1/4$, a parameter $\epsilon$ with $0 < \epsilon < X^0 \bullet Z^0 / n$, and a positive integer $r$. Let $k = 0$.
REPEAT (main iteration)
  Let $X = X^k$, $Z = Z^k$ and $\mu = X \bullet Z / n$.
  Predictor: compute $(\Delta X^p, \Delta Z^p)$ from (4.4) with $\gamma = 0$. Compute the largest step $\bar t_k$ such that for all $0 \le t \le \bar t_k$ there holds
    $\delta(X + t \Delta X^p, Z + t \Delta Z^p) \le \min\left(\frac{1}{2}, \left(\frac{(1-t)\mu}{\epsilon}\right)^{2^{-r}}\right)$.
  Let $X' = X + \bar t_k \Delta X^p$, $Z' = Z + \bar t_k \Delta Z^p$ and $\beta_k = \min\left(\frac{1}{4}, \left(\frac{(1-\bar t_k)\mu}{\epsilon}\right)^{2^{-r}}\right)$.
  Corrector: FOR $i = 1$ TO $r$ DO
    Let $X = X'$, $Z = Z'$. IF $\delta(X, Z) \le \beta_k$ THEN exit loop.
    Compute $(\Delta X^c, \Delta Z^c)$ from (4.4) with $\gamma = 1$. Set $X' = X + \Delta X^c$, $Z' = Z + \Delta Z^c$.
  END FOR
  $X^{k+1} = X'$, $Z^{k+1} = Z'$. Set $k = k + 1$.
UNTIL convergence.

Interestingly, each corrector step reduces $\delta(\cdot, \cdot)$ at a quadratic rate, as stated in the following result.

Lemma 4.1. If $\delta(X, Z) \le 1/2$ then
$$\delta(X + \Delta X^c, Z + \Delta Z^c) \le \delta(X, Z)^2.$$
Proof. It follows from Lemma 4.5 in [17] that $X + \Delta X^c \succ 0$, $Z + \Delta Z^c \succ 0$. Hence, the desired result is an immediate consequence of Lemma 4.4 in [17].

Lemma 4.1 implies that for any $k$, we have
(4.8)  $\delta(X^k, Z^k) \le \beta_{k-1}$.
Also, it follows from (4.6) and (4.7) that for any $k$,
(4.9)  $\delta(X^k, Z^k) \le \beta_{k-1} \le \left(\frac{(1 - \bar t_{k-1})\mu_{k-1}}{\epsilon}\right)^{2^{-r}} = \left(\frac{\mu_k}{\epsilon}\right)^{2^{-r}} = O(\mu_k^{2^{-r}}).$
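The duality-gap identities (4.6)-(4.7) follow from the V-space form of the Newton equation alone: splitting the right-hand side $\gamma\mu V^{-1} - V$ into a component in a subspace and a component in its orthogonal complement forces $\overline{\Delta X} \perp \overline{\Delta Z}$, and the gap then scales exactly by $1 - t(1-\gamma)$. A minimal numerical sketch of this algebra, with a random placeholder $V$ and a random subspace standing in for $\mathcal{A}(L)$ (none of this data is from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 4, 3

def sym(M):
    return 0.5 * (M + M.T)

V = sym(rng.standard_normal((n, n))) + 6 * np.eye(n)   # V > 0
basis = [sym(rng.standard_normal((n, n))) for _ in range(m)]
mu = np.trace(V @ V) / n                               # since V^2 plays XZ

def directions(gamma):
    # Split rhs = gamma*mu*V^{-1} - V into dZ in span(basis) and
    # dX in its orthogonal complement (least squares projection).
    rhs = gamma * mu * np.linalg.inv(V) - V
    B = np.stack([M.ravel() for M in basis], axis=1)
    coef, *_ = np.linalg.lstsq(B, rhs.ravel(), rcond=None)
    dZ = (B @ coef).reshape(n, n)
    dX = rhs - dZ
    return dX, dZ

for gamma, t in [(0.0, 0.7), (1.0, 1.0)]:
    dX, dZ = directions(gamma)
    assert abs(np.sum(dX * dZ)) < 1e-8          # orthogonality
    gap = np.sum((V + t * dX) * (V + t * dZ))   # gap after a step of length t
    assert np.isclose(gap, (1 - t * (1 - gamma)) * n * mu)
print("predictor scales the gap by (1-t); corrector leaves it unchanged")
```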

Furthermore, if $\beta_k = 1/4$, then only one (instead of $r$) corrector step is needed to recenter the iterate (see [17]). In other words, the iterations of Algorithm SDP($\epsilon$) are identical to those of the primal-dual predictor-corrector algorithm of [17] for all $k$ with $\beta_k = 1/4$. We can therefore conclude from Theorem 5.2 in [17] that the algorithm yields $\beta_k = 1/4$ for all $k \le \Gamma \sqrt{n} \log(\mu_0/\epsilon)$, where $\Gamma$ is a universal constant, independent of the problem data. Thus, we have the following polynomial complexity result.

Theorem 4.2. For each $0 < \epsilon < X^0 \bullet Z^0 / n$, Algorithm SDP($\epsilon$) will generate an iterate $(X^k, Z^k) \in \mathcal{F}_P \times \mathcal{F}_D$ with $X^k \bullet Z^k / n \le \epsilon$ and $\delta(X^k, Z^k) \le 1/4$ in at most $\Gamma \sqrt{n} \log(\mu_0/\epsilon)$ predictor-corrector steps, where $\Gamma$ is a constant that is independent of the problem data.

In addition to having polynomial complexity, Algorithm SDP($\epsilon$) also possesses a superlinear rate of convergence. We prove this in the next section.

5. Convergence analysis. We begin by establishing the global convergence of Algorithm SDP($\epsilon$). Notice that Algorithm SDP($\epsilon$) chooses the predictor step length $\bar t_k$ to be the largest step such that for all $0 \le t \le \bar t_k$ there holds
(5.1)  $\delta(X + t \Delta X^p, Z + t \Delta Z^p) \le \min\left(\dfrac{1}{2}, \left(\dfrac{(1-t)\mu}{\epsilon}\right)^{2^{-r}}\right).$
It was shown in [17] (equation (21) therein) that
(5.2)  $(1-t)\mu\, \delta(X + t \Delta X^p, Z + t \Delta Z^p) \le (1-t)\mu\, \delta(X, Z) + t^2 \|\overline{\Delta X}^p\|_F \|\overline{\Delta Z}^p\|_F.$
Using (4.5), we thus obtain for $0 \le t < 1$ that
(5.3)  $\delta(X + t \Delta X^p, Z + t \Delta Z^p) \le \delta(X, Z) + \dfrac{n t^2}{2(1-t)}.$
Combining (5.1) and (5.3), we can easily establish the global convergence of Algorithm SDP($\epsilon$).

Theorem 5.1. There holds $\lim_{k \to \infty} \mu_k = 0$, i.e., Algorithm SDP($\epsilon$) is globally convergent.

Proof. Due to (4.7), the sequence $\{\mu_k : k \ge 0\}$ is monotonically decreasing. Hence $\lim_{k \to \infty} \mu_k$ exists. Suppose, contrary to the statement of the theorem, that
(5.4)  $\bar\mu := \lim_{k \to \infty} \mu_k > 0.$
Consider $k \ge \Gamma \sqrt{n} \log(\mu_0/\epsilon)$. From Theorem 4.2, we have
(5.5)  $\beta_k = 1/4$,
and hence, using (4.8),
(5.6)  $\delta(X^k, Z^k) \le \beta_{k-1} = 1/4.$

Now consider a step length $0 \le t \le 0.5\sqrt{\bar\mu/(n\mu_k)}$ and note from (5.5) that $1 - t \ge 3/4$. We obtain from (5.3) and (5.6) that
$$\delta(X^k + t(\Delta X^p)^k, Z^k + t(\Delta Z^p)^k) \le \delta(X^k, Z^k) + \frac{n t^2}{2(1-t)} < \min\left(\frac{1}{2}, \left(\frac{(1-t)\mu_k}{\epsilon}\right)^{2^{-r}}\right),$$
where we used (5.5) and the fact that $r \ge 1$. By definition of $\bar t_k$, this implies that
$$\bar t_k \ge 0.5\sqrt{\bar\mu/(n\mu_k)} = \Theta(1/\sqrt{n}).$$
This, together with (4.7), implies that $\mu_k - \mu_{k+1} = \bar t_k \mu_k = \Theta(\mu_k)$, which contradicts (5.4).

Next we proceed to establish the superlinear convergence of Algorithm SDP($\epsilon$). In light of (4.7), we only need to show that the predictor step length $\bar t_k$ approaches 1. Hence we are led to bound $\bar t_k$ from below. For this purpose, we note from (5.2) that, for $t \in (0, 1)$,
(5.7)  $\delta(X + t \Delta X^p, Z + t \Delta Z^p) \le \delta(X, Z) + \dfrac{t^2}{1-t} \cdot \dfrac{\|\overline{\Delta X}^p\|_F \|\overline{\Delta Z}^p\|_F}{\mu}.$
Thus, if we can properly bound $\|\overline{\Delta X}^p\|_F \|\overline{\Delta Z}^p\|_F$, then we will obtain a lower bound on the predictor step length $\bar t_k$.

To begin, let us consider $L_\mu$ with
$$L_\mu L_\mu^T = \frac{X(\mu)}{\sqrt{\mu}}.$$
Remark that
$$\sqrt{\mu} I = L_\mu^{-1} X(\mu) L_\mu^{-T} = L_\mu^T Z(\mu) L_\mu.$$
Now define the predictor direction starting from the solution $(X(\mu), Z(\mu))$ on the central path as follows:
$$\Delta \hat X^p(\mu) + \Delta \hat Z^p(\mu) = -\sqrt{\mu} I, \qquad \Delta \hat X^p(\mu) \in \mathcal{A}^\perp(L_\mu), \qquad \Delta \hat Z^p(\mu) \in \mathcal{A}(L_\mu).$$
Let $(\hat X^a, \hat Z^a)$ be the analytic centers of the optimal solution sets in the $L_\mu$-transformed space,
$$\hat X^a = L_\mu^{-1} X^a L_\mu^{-T}, \qquad \hat Z^a = L_\mu^T Z^a L_\mu.$$

We will show in Lemma 5.2 below that $\Delta \hat X^p(\mu)$ is close to the optimal step $\hat X^a - \sqrt{\mu} I$ for small $\mu$. We will bound the difference between $\Delta \hat X^p(\mu)$ and $\overline{\Delta X}^p$ afterwards.

Lemma 5.2. There holds
$$\|\sqrt{\mu} I + \Delta \hat X^p(\mu) - \hat X^a\| + \|\sqrt{\mu} I + \Delta \hat Z^p(\mu) - \hat Z^a\| = O(\mu^{3/2}).$$
Proof. Since
$$\hat X^a \hat Z^a = L_\mu^{-1} X^a Z^a L_\mu = 0,$$
it follows that
$$(\sqrt{\mu} I - \hat X^a)(\sqrt{\mu} I - \hat Z^a) = (\sqrt{\mu} I - \hat Z^a)(\sqrt{\mu} I - \hat X^a).$$
Therefore, the matrix $(\sqrt{\mu} I - \hat X^a)(\sqrt{\mu} I - \hat Z^a)$, or equivalently the matrix $L_\mu^{-1}(X(\mu) - X^a)(Z(\mu) - Z^a)L_\mu$, is symmetric. By the property of the Frobenius norm, we obtain
(5.8)  $\|(\sqrt{\mu} I - \hat X^a)(\sqrt{\mu} I - \hat Z^a)\|_F = \|L_\mu^{-1}(X(\mu) - X^a)(Z(\mu) - Z^a)L_\mu\|_F = (\operatorname{tr}\,(X(\mu) - X^a)(Z(\mu) - Z^a)(X(\mu) - X^a)(Z(\mu) - Z^a))^{1/2} = O(\mu^2),$
where the last step follows from Theorem 3.5. Now since $\hat X^a \hat Z^a = 0$ and $\Delta \hat X^p(\mu) + \Delta \hat Z^p(\mu) = -\sqrt{\mu} I$, we have
$$(\sqrt{\mu} I - \hat X^a)(\sqrt{\mu} I - \hat Z^a) = \mu I - \sqrt{\mu}(\hat X^a + \hat Z^a) = \sqrt{\mu}\,(\sqrt{\mu} I + \Delta \hat X^p(\mu) - \hat X^a) + \sqrt{\mu}\,(\sqrt{\mu} I + \Delta \hat Z^p(\mu) - \hat Z^a).$$
As
$$\sqrt{\mu} I + \Delta \hat X^p(\mu) - \hat X^a \in \mathcal{A}^\perp(L_\mu), \qquad \sqrt{\mu} I + \Delta \hat Z^p(\mu) - \hat Z^a \in \mathcal{A}(L_\mu),$$
it follows that
$$\|\sqrt{\mu} I + \Delta \hat X^p(\mu) - \hat X^a\|_F^2 + \|\sqrt{\mu} I + \Delta \hat Z^p(\mu) - \hat Z^a\|_F^2 = \frac{\|(\hat X^a - \sqrt{\mu} I)(\hat Z^a - \sqrt{\mu} I)\|_F^2}{\mu} = O(\mu^3),$$
where the last step is due to (5.8). This proves the lemma.

Lemma 5.2 applies only to $(\Delta \hat X^p(\mu), \Delta \hat Z^p(\mu))$, namely the predictor directions for points located exactly on the central path. What we need is a similar bound for $(\overline{\Delta X}^p, \overline{\Delta Z}^p)$ (obtained at points close to the central path). This leads us to bound the difference $\Delta \hat X^p(\mu) - \overline{\Delta X}^p$. Indeed, our next goal is to show (Lemma 5.6) that
$$\|\Delta \hat X^p(\mu) - \overline{\Delta X}^p\|_F = O(\sqrt{\mu}\, \delta(X, Z)).$$

We prove this bound by a sequence of lemmas. Let $D$ be given by (4.1) and define
$$\bar D = L_\mu^{-1} D L_\mu^{-T},$$
so that $\bar D = I$ if $X = X(\mu)$ and $Z = Z(\mu)$. Choose $L$ by $L = L_\mu \bar D^{1/2}$, and notice that indeed $LL^T = D$, as stipulated by (4.2).

Lemma 5.3. Suppose $\delta(X, Z) \le 1/2$. There holds
$$\|L_\mu^{-1}(X(\mu) - X)L_\mu^{-T}\| + \|L_\mu^T(Z(\mu) - Z)L_\mu\| = O(\sqrt{\mu}\, \delta(X, Z)).$$
Proof. Let
$$\Delta_x(\mu) = L_\mu^{-1}(X(\mu) - X)L_\mu^{-T}, \qquad \Delta_z(\mu) = L_\mu^T(Z(\mu) - Z)L_\mu.$$
Clearly, $\Delta_x(\mu)$ and $\Delta_z(\mu)$ are symmetric and $\Delta_x(\mu) \perp \Delta_z(\mu)$. Let $\lambda$ denote the smallest eigenvalue of $\Delta_x(\mu) + \Delta_z(\mu)$, i.e.,
$$\lambda = \arg\max\{\lambda' : \Delta_x(\mu) + \Delta_z(\mu) \succeq \lambda' I\}.$$
Since $X \bullet Z = X(\mu) \bullet Z(\mu) = n\mu$, we have
$$\operatorname{tr}(Z(X(\mu) - X) + X(Z(\mu) - Z)) = -\operatorname{tr}((X(\mu) - X)(Z(\mu) - Z)) - \operatorname{tr} XZ + \operatorname{tr} X(\mu)Z(\mu) = 0,$$
where the last step follows from $(X(\mu) - X) \perp (Z(\mu) - Z)$. Recall that $V = L^{-1}XL^{-T} = L^T Z L$. Consider
$$\operatorname{tr}(V(\Delta_x(\mu) + \Delta_z(\mu))) = \operatorname{tr}(L_\mu^T Z(X(\mu) - X)L_\mu^{-T} + L_\mu^{-1} X(Z(\mu) - Z)L_\mu) = \operatorname{tr}(Z(X(\mu) - X) + X(Z(\mu) - Z)) = 0.$$
By (4.3), the matrix $V$ is symmetric positive definite and $V = \Theta(\sqrt{\mu})$. Diagonalize the symmetric matrix
$$\Delta_x(\mu) + \Delta_z(\mu) = Q^T \Lambda Q$$
and consider
$$0 = \operatorname{tr}(V(\Delta_x(\mu) + \Delta_z(\mu))) = \operatorname{tr}(V Q^T \Lambda Q) = \operatorname{tr}(Q V Q^T \Lambda).$$
Since $Q V Q^T = \Theta(\sqrt{\mu})$, the diagonal entries of $Q V Q^T$ must be $\Theta(\sqrt{\mu})$. Therefore, the preceding equation implies that the diagonal matrix $\Lambda$ must have a nonpositive eigenvalue, and that its diagonal entries are of the same order of magnitude. In other words, $\lambda \le 0$ and $\|\Lambda\| = O(|\lambda|)$. This further implies
(5.9)  $\|\Delta_x(\mu) + \Delta_z(\mu)\| = O(|\lambda|).$
By the definition of the central path, we have
$$\mu I = (V + \Delta_x(\mu))(V + \Delta_z(\mu)) = \left(V + \frac{\Delta_x(\mu) + \Delta_z(\mu)}{2}\right)^2 + \left(V + \frac{\Delta_x(\mu) + \Delta_z(\mu)}{2}\right)\frac{\Delta_x(\mu) - \Delta_z(\mu)}{2} - \frac{\Delta_x(\mu) - \Delta_z(\mu)}{2}\left(V + \frac{\Delta_x(\mu) + \Delta_z(\mu)}{2}\right) - \left(\frac{\Delta_x(\mu) - \Delta_z(\mu)}{2}\right)^2.$$

Since the left-hand side matrix is symmetric, the skew-symmetric cross term must cancel when we expand the matrix product on the right-hand side. It follows that
$$\mu I = \left(V + \frac{\Delta_x(\mu) + \Delta_z(\mu)}{2}\right)^2 - \frac{1}{4}(\Delta_x(\mu) - \Delta_z(\mu))^2,$$
and therefore
$$V + \frac{\Delta_x(\mu) + \Delta_z(\mu)}{2} \succeq \sqrt{\mu}\,I.$$
Using (4.3), we obtain
$$|\lambda| = O(\sqrt{\mu}\,\delta(X,Z)).$$
Combining this with (5.9) and using the fact that $\Delta_x(\mu) \perp \Delta_z(\mu)$, we have
$$\|\Delta_x(\mu)\| + \|\Delta_z(\mu)\| = O(|\lambda|) = O(\sqrt{\mu}\,\delta(X,Z)).$$

Lemma 5.4. Suppose $\delta(X,Z) \le 1/2$. Then there holds
$$\|\bar D - I\| = O(\delta(X,Z)).$$
Proof. Notice that
(5.10) $L_\mu^{-1}XL_\mu^{-T} = \sqrt{\mu}\,I + L_\mu^{-1}(X - X(\mu))L_\mu^{-T}$
and
(5.11) $L_\mu^T Z L_\mu = \sqrt{\mu}\,I + L_\mu^T(Z - Z(\mu))L_\mu.$
Now using $L_\mu^{-1}XL_\mu^{-T} = \bar D(L_\mu^T Z L_\mu)\bar D$ we have, by pre- and post-multiplying (5.10) by $\bar D^{-1/2}$ and (5.11) by $\bar D^{1/2}$ and rearranging terms,
$$\sqrt{\mu}\,(\bar D^{-1} - \bar D) = \bar D^{-1/2}L_\mu^{-1}(X(\mu) - X)L_\mu^{-T}\bar D^{-1/2} + \bar D^{1/2}L_\mu^T(Z - Z(\mu))L_\mu\bar D^{1/2}.$$
Together with Lemma 5.3, this implies $\bar D = \Theta(1)$ and
$$\|\bar D - I\| = O(\delta(X,Z)).$$
The lemma is proved.

Now, let
$$\Delta\bar X_p = \bar D^{1/2}\,\Delta X_p\,\bar D^{1/2}, \qquad \Delta\bar Z_p = \bar D^{-1/2}\,\Delta Z_p\,\bar D^{-1/2}.$$
Notice that $(\Delta\bar X_p, \Delta\bar Z_p) \in \mathcal{A}^\perp(\bar L) \times \mathcal{A}(\bar L)$.

Lemma 5.5. Suppose $\delta(X,Z) \le 1/2$. We have
$$\|\Delta\bar X_p - \Delta X_p\| + \|\Delta\bar Z_p - \Delta Z_p\| = O(\sqrt{\mu}\,\delta(X,Z)).$$

Proof. We have
$$\Delta\bar X_p = \bar D^{1/2}\,\Delta X_p\,\bar D^{1/2} = \Delta X_p + (\bar D^{1/2} - I)\,\Delta X_p\,\bar D^{1/2} + \Delta X_p(\bar D^{1/2} - I).$$
Now using Lemma 5.4 and (4.5), we see that
$$\|\Delta\bar X_p - \Delta X_p\| = O(\sqrt{\mu}\,\delta(X,Z)).$$
It can be shown in an analogous way that
$$\|\Delta\bar Z_p - \Delta Z_p\| = O(\sqrt{\mu}\,\delta(X,Z)).$$

Now we are ready to bound the difference between $\Delta\hat X_p(\mu)$ and $\Delta X_p$.

Lemma 5.6. Suppose $\delta(X,Z) \le 1/2$. We have
$$\|\Delta\hat X_p(\mu) - \Delta X_p\| + \|\Delta\hat Z_p(\mu) - \Delta Z_p\| = O(\sqrt{\mu}\,\delta(X,Z)).$$
Proof. By definition of the predictor directions, we have
$$\Delta\hat X_p(\mu) + \Delta\hat Z_p(\mu) = -\sqrt{\mu}\,I \quad\text{and}\quad \Delta X_p + \Delta Z_p = -V.$$
Combining these two relations yields
$$\Delta\hat X_p(\mu) - \Delta\bar X_p + \Delta\hat Z_p(\mu) - \Delta\bar Z_p = V - \sqrt{\mu}\,I + \Delta X_p - \Delta\bar X_p + \Delta Z_p - \Delta\bar Z_p.$$
Now using Lemma 5.5 and using the fact that
$$\|V - \sqrt{\mu}\,I\| = \|(V + \sqrt{\mu}\,I)^{-1}(V^2 - \mu I)\| \le \sqrt{\mu}\,\delta(X,Z),$$
we obtain
$$\|\Delta\hat X_p(\mu) - \Delta\bar X_p\| + \|\Delta\hat Z_p(\mu) - \Delta\bar Z_p\| = O(\sqrt{\mu}\,\delta(X,Z)).$$
Since $(\Delta\hat X_p(\mu) - \Delta\bar X_p) \perp (\Delta\hat Z_p(\mu) - \Delta\bar Z_p)$, the lemma follows from the above relation, after applying Lemma 5.5 once more.

Combining (5.8), Lemma 5.2 and Lemma 5.6, we can now estimate the order of $\Delta X_p\,\Delta Z_p$, and hence, using (5.7), we can estimate the predictor step length $t_k$.

Lemma 5.7. We have
(5.12) $\|\Delta X_p\,\Delta Z_p\| = O(\mu(\mu + \delta(X,Z))).$
Proof. Combining Lemma 5.6 with Lemma 5.2, we have
$$\|\sqrt{\mu}\,I + \Delta X_p - \hat X_a\| + \|\sqrt{\mu}\,I + \Delta Z_p - \hat Z_a\| = O(\sqrt{\mu}\,(\mu + \delta(X,Z))),$$

so that, using (4.5),
(5.13) $\|\sqrt{\mu}\,I - \hat X_a\| + \|\sqrt{\mu}\,I - \hat Z_a\| = O(\sqrt{\mu}).$
Moreover,
$$\Delta X_p\,\Delta Z_p = (\hat X_a - \sqrt{\mu}\,I)(\hat Z_a - \sqrt{\mu}\,I) + (\hat X_a - \sqrt{\mu}\,I)(\sqrt{\mu}\,I + \Delta Z_p - \hat Z_a) + (\sqrt{\mu}\,I + \Delta X_p - \hat X_a)\,\Delta Z_p.$$
Applying (5.8), (5.13), (4.5) and the bound displayed at the beginning of the proof to the above relation yields
$$\|\Delta X_p\,\Delta Z_p\| = O(\mu(\mu + \delta(X,Z))).$$

Theorem 5.8. The iterates $(X_k, Z_k)$ generated by Algorithm SDP($\beta$) converge to $(X_a, Z_a)$ superlinearly with order $2/(1 + 2^{-r})$. The duality gap $\mu_k$ converges to zero at the same rate.

Proof. From (5.7) we see that for any $t \ge 0$ satisfying
$$\delta(X,Z) + \frac{\|\Delta X_p\,\Delta Z_p\|_F}{\mu} \le \beta(1-t)\left(\frac{(1-t)\mu}{\bar\mu}\right)^{2^{-r}},$$
there holds
$$\delta(X + t\,\Delta X_p,\; Z + t\,\Delta Z_p) \le \beta\left(\frac{(1-t)\mu}{\bar\mu}\right)^{2^{-r}}.$$
This implies, using (4.9) and Lemma 5.7, that
$$(1 - t_k)^{1+2^{-r}} \le \frac{1}{\beta}\left(\delta(X_k,Z_k) + \frac{\|\Delta X_p\,\Delta Z_p\|_F}{\mu_k}\right)\left(\frac{\mu_k}{\bar\mu}\right)^{-2^{-r}} = O(\mu_k^{1-2^{-r}}),$$
so that
$$\mu_{k+1} = (1 - t_k)\mu_k = O(\mu_k^{2/(1+2^{-r})}).$$
This shows that the duality gap converges to zero superlinearly with order $2/(1 + 2^{-r})$. It remains to prove that the iterates converge to the analytic center with the same order. Notice that
(5.14) $\|X_k - X(\mu_k)\|_F^2 = \mathrm{tr}\,(X_k - X(\mu_k))\bar L^{-T}(\bar L^T\bar L)\bar L^{-1}(X_k - X(\mu_k)) \le \|\bar L^T\bar L\|\;\mathrm{tr}\,(X_k - X(\mu_k))\bar L^{-T}\bar L^{-1}(X_k - X(\mu_k)) = \|\bar L^T\bar L\|\;\mathrm{tr}\,\bar L^{-1}(X_k - X(\mu_k))(X_k - X(\mu_k))\bar L^{-T} \le \|\bar L^T\bar L\|^2\,\|\bar L^{-1}(X_k - X(\mu_k))\bar L^{-T}\|_F^2.$
However, using the definition of the F-norm and applying Lemma 5.4,
$$\|\bar L^T\bar L\|_F = \|\bar L\bar L^T\|_F = \|L_\mu\bar D L_\mu^T\|_F = O(\|L_\mu L_\mu^T\|_F).$$
Recall that $L_\mu L_\mu^T = X(\mu)/\sqrt{\mu}$ by definition, so that, using Lemma 3.1,
(5.15) $\|\bar L^T\bar L\|_F = O(1/\sqrt{\mu}).$

Combining (5.14) and (5.15) with Lemma 5.3, we have
$$\|X_k - X(\mu_k)\|_F = O(\mu_k^{-1/2}\,\|\bar L^{-1}(X_k - X(\mu_k))\bar L^{-T}\|_F) = O(\delta(X_k,Z_k)) = O(\mu_k).$$
Hence, we obtain from Theorem 3.5 that
$$\|X_k - X_a\|_F = O(\mu_k).$$
Similarly, it can be shown that
$$\|Z_k - Z_a\|_F = O(\mu_k).$$
This shows that the iterates converge to the analytic center R-superlinearly, with the same order as $\mu_k$ converges to zero.

6. Conclusions. We have shown the global and superlinear convergence of the predictor-corrector algorithm SDP, assuming only the existence of a strictly complementary solution pair. The local convergence analysis is based on Theorem 3.5, which states that $\|X(\mu) - X_a\| + \|Z(\mu) - Z_a\| = O(\mu)$. By enforcing $\delta(X_k,Z_k) \to 0$, the iterates "inherit" this property of the central path. For the generalization of the Mizuno-Todd-Ye predictor-corrector algorithm in [17], we do not enforce $\delta(X_k,Z_k) \to 0$, and hence we cannot conclude superlinear convergence for it yet. In this respect, it will be interesting to study the asymptotic behavior of the corrector steps. Finally, it is likely that our line of argument can be applied to the infeasible primal-dual path following algorithms of Kojima-Shindoh-Hara [8] and Potra-Sheng [16].

Acknowledgment. We wish to thank the referees for providing careful comments which led to substantial improvement in the presentation of the paper.

REFERENCES

[1] F. Alizadeh, Interior point methods in semidefinite programming with applications to combinatorial optimization problems, SIAM J. Optim. 5 (1995), pp. 13-51.
[2] F. Alizadeh, J. A. Haeberly and M. Overton, Primal-dual interior-point methods for semidefinite programming: convergence rates, stability and numerical results, Technical report, Computer Science Department, New York University, New York, 1996.
[3] O. Güler, Limiting behavior of weighted central paths in linear programming, Math. Programming 65 (1994), pp. 347-363.
[4] C. Helmberg, F. Rendl, R. J. Vanderbei and H. Wolkowicz, An interior-point method for semidefinite programming, SIAM J. Optim. 6 (1996), pp. 342-361.
[5] A. J. Hoffman, On approximate solutions of systems of linear inequalities, Journal of Research of the National Bureau of Standards 49 (1952), pp. 263-265.
[6] M. Kojima, N. Megiddo, T. Noma and A. Yoshise, A Unified Approach to Interior Point Algorithms for Linear Complementarity Problems, Springer-Verlag, Berlin, 1991.
[7] M. Kojima, M. Shida and S. Shindoh, Local convergence of predictor-corrector infeasible-interior-point algorithms for SDPs and SDLCPs, Research Reports on Information Sciences B-306, Dept. of Information Sciences, Tokyo Institute of Technology, 2-12-1 Oh-Okayama, Meguro-ku, Tokyo 152, Japan, 1995.
[8] M. Kojima, S. Shindoh and S. Hara, Interior-point methods for the monotone semidefinite linear complementarity problem in symmetric matrices, SIAM J. Optim. 7 (1997), pp. 86-125.
[9] C.-J. Lin and R. Saigal, An infeasible start predictor corrector method for semi-definite linear programming, Technical report, Dept. of Industrial and Operations Engineering, The University of Michigan, Ann Arbor, USA, 1995.
[10] N. Megiddo, Pathways to the optimal solution set in linear programming, in Progress in Mathematical Programming: Interior-Point and Related Methods, N. Megiddo, ed., Springer-Verlag, New York, 1989, pp. 131-158.

[11] S. Mizuno, M. J. Todd and Y. Ye, On adaptive-step primal-dual interior-point algorithms for linear programming, Math. Oper. Res. 18 (1993), pp. 964-981.
[12] R. D. C. Monteiro, Primal-dual path following algorithms for semidefinite programming, Technical report, School of Industrial and Systems Engineering, Georgia Tech, Atlanta, Georgia, U.S.A., 1995. To appear in SIAM J. Optim.
[13] R. D. C. Monteiro and T. Tsuchiya, Limiting behavior of the derivatives of certain trajectories associated with a monotone horizontal linear complementarity problem, Math. Oper. Res. 21 (1996), pp. 793-814.
[14] Y. Nesterov and M. J. Todd, Self-scaled barriers and interior-point methods for convex programming, Math. Oper. Res. 22 (1997), pp. 1-42.
[15] Y. Nesterov and M. J. Todd, Primal-dual interior-point methods for self-scaled cones, Technical report 1125, School of Operations Research and Industrial Engineering, Cornell University, Ithaca, New York, 1995.
[16] F. A. Potra and R. Sheng, A superlinearly convergent primal-dual infeasible-interior-point algorithm for semidefinite programming, Reports on Computational Mathematics No. 78, Dept. of Mathematics, The University of Iowa, Iowa City, USA, 1995.
[17] J. F. Sturm and S. Zhang, Symmetric primal-dual path following algorithms for semidefinite programming, Technical report 9554/A, Econometric Institute, Erasmus University Rotterdam, The Netherlands, 1995.
[18] L. Vandenberghe and S. Boyd, A primal-dual potential reduction method for problems involving matrix inequalities, Math. Programming 69 (1995), pp. 205-236.
[19] L. Vandenberghe and S. Boyd, Semidefinite programming, SIAM Rev. 38 (1996), pp. 49-95.
[20] Y. Ye and K. Anstreicher, On quadratic and $O(\sqrt{n}L)$ convergence of a predictor-corrector algorithm for LCP, Math. Programming 62 (1993), pp. 537-551.
[21] Y. Zhang, On extending primal-dual interior-point algorithms from linear programming to semidefinite programming, Technical report, Dept. of Mathematics and Statistics, University of Maryland Baltimore County, Baltimore, Maryland, U.S.A., 1995.


An Infeasible Interior-Point Algorithm with full-newton Step for Linear Optimization An Infeasible Interior-Point Algorithm with full-newton Step for Linear Optimization H. Mansouri M. Zangiabadi Y. Bai C. Roos Department of Mathematical Science, Shahrekord University, P.O. Box 115, Shahrekord,

More information

Conic Linear Programming. Yinyu Ye

Conic Linear Programming. Yinyu Ye Conic Linear Programming Yinyu Ye December 2004, revised January 2015 i ii Preface This monograph is developed for MS&E 314, Conic Linear Programming, which I am teaching at Stanford. Information, lecture

More information

Semidefinite and Second Order Cone Programming Seminar Fall 2001 Lecture 4

Semidefinite and Second Order Cone Programming Seminar Fall 2001 Lecture 4 Semidefinite and Second Order Cone Programming Seminar Fall 2001 Lecture 4 Instructor: Farid Alizadeh Scribe: Haengju Lee 10/1/2001 1 Overview We examine the dual of the Fermat-Weber Problem. Next we will

More information

A path following interior-point algorithm for semidefinite optimization problem based on new kernel function. djeffal

A path following interior-point algorithm for semidefinite optimization problem based on new kernel function.   djeffal Journal of Mathematical Modeling Vol. 4, No., 206, pp. 35-58 JMM A path following interior-point algorithm for semidefinite optimization problem based on new kernel function El Amir Djeffal a and Lakhdar

More information

Following The Central Trajectory Using The Monomial Method Rather Than Newton's Method

Following The Central Trajectory Using The Monomial Method Rather Than Newton's Method Following The Central Trajectory Using The Monomial Method Rather Than Newton's Method Yi-Chih Hsieh and Dennis L. Bricer Department of Industrial Engineering The University of Iowa Iowa City, IA 52242

More information

APPROXIMATING THE COMPLEXITY MEASURE OF. Levent Tuncel. November 10, C&O Research Report: 98{51. Abstract

APPROXIMATING THE COMPLEXITY MEASURE OF. Levent Tuncel. November 10, C&O Research Report: 98{51. Abstract APPROXIMATING THE COMPLEXITY MEASURE OF VAVASIS-YE ALGORITHM IS NP-HARD Levent Tuncel November 0, 998 C&O Research Report: 98{5 Abstract Given an m n integer matrix A of full row rank, we consider the

More information

Some Properties of the Augmented Lagrangian in Cone Constrained Optimization

Some Properties of the Augmented Lagrangian in Cone Constrained Optimization MATHEMATICS OF OPERATIONS RESEARCH Vol. 29, No. 3, August 2004, pp. 479 491 issn 0364-765X eissn 1526-5471 04 2903 0479 informs doi 10.1287/moor.1040.0103 2004 INFORMS Some Properties of the Augmented

More information

New stopping criteria for detecting infeasibility in conic optimization

New stopping criteria for detecting infeasibility in conic optimization Optimization Letters manuscript No. (will be inserted by the editor) New stopping criteria for detecting infeasibility in conic optimization Imre Pólik Tamás Terlaky Received: March 21, 2008/ Accepted:

More information

Lecture Note 5: Semidefinite Programming for Stability Analysis

Lecture Note 5: Semidefinite Programming for Stability Analysis ECE7850: Hybrid Systems:Theory and Applications Lecture Note 5: Semidefinite Programming for Stability Analysis Wei Zhang Assistant Professor Department of Electrical and Computer Engineering Ohio State

More information

Lecture 5. The Dual Cone and Dual Problem

Lecture 5. The Dual Cone and Dual Problem IE 8534 1 Lecture 5. The Dual Cone and Dual Problem IE 8534 2 For a convex cone K, its dual cone is defined as K = {y x, y 0, x K}. The inner-product can be replaced by x T y if the coordinates of the

More information

Semidefinite and Second Order Cone Programming Seminar Fall 2001 Lecture 5

Semidefinite and Second Order Cone Programming Seminar Fall 2001 Lecture 5 Semidefinite and Second Order Cone Programming Seminar Fall 2001 Lecture 5 Instructor: Farid Alizadeh Scribe: Anton Riabov 10/08/2001 1 Overview We continue studying the maximum eigenvalue SDP, and generalize

More information

Primal-dual path-following algorithms for circular programming

Primal-dual path-following algorithms for circular programming Primal-dual path-following algorithms for circular programming Baha Alzalg Department of Mathematics, The University of Jordan, Amman 1194, Jordan July, 015 Abstract Circular programming problems are a

More information

Interior-Point Methods for Linear Optimization

Interior-Point Methods for Linear Optimization Interior-Point Methods for Linear Optimization Robert M. Freund and Jorge Vera March, 204 c 204 Robert M. Freund and Jorge Vera. All rights reserved. Linear Optimization with a Logarithmic Barrier Function

More information

On implementing a primal-dual interior-point method for conic quadratic optimization

On implementing a primal-dual interior-point method for conic quadratic optimization On implementing a primal-dual interior-point method for conic quadratic optimization E. D. Andersen, C. Roos, and T. Terlaky December 18, 2000 Abstract Conic quadratic optimization is the problem of minimizing

More information

A Primal-Dual Second-Order Cone Approximations Algorithm For Symmetric Cone Programming

A Primal-Dual Second-Order Cone Approximations Algorithm For Symmetric Cone Programming A Primal-Dual Second-Order Cone Approximations Algorithm For Symmetric Cone Programming Chek Beng Chua Abstract This paper presents the new concept of second-order cone approximations for convex conic

More information

A projection algorithm for strictly monotone linear complementarity problems.

A projection algorithm for strictly monotone linear complementarity problems. A projection algorithm for strictly monotone linear complementarity problems. Erik Zawadzki Department of Computer Science epz@cs.cmu.edu Geoffrey J. Gordon Machine Learning Department ggordon@cs.cmu.edu

More information

Semidefinite Programming

Semidefinite Programming Semidefinite Programming Notes by Bernd Sturmfels for the lecture on June 26, 208, in the IMPRS Ringvorlesung Introduction to Nonlinear Algebra The transition from linear algebra to nonlinear algebra has

More information

5. Duality. Lagrangian

5. Duality. Lagrangian 5. Duality Convex Optimization Boyd & Vandenberghe Lagrange dual problem weak and strong duality geometric interpretation optimality conditions perturbation and sensitivity analysis examples generalized

More information

A polynomial-time inexact primal-dual infeasible path-following algorithm for convex quadratic SDP

A polynomial-time inexact primal-dual infeasible path-following algorithm for convex quadratic SDP A polynomial-time inexact primal-dual infeasible path-following algorithm for convex quadratic SDP Lu Li, and Kim-Chuan Toh April 6, 2009 Abstract Convex quadratic semidefinite programming (QSDP) has been

More information

Nondegenerate Solutions and Related Concepts in. Ane Variational Inequalities. M.C. Ferris. Madison, Wisconsin

Nondegenerate Solutions and Related Concepts in. Ane Variational Inequalities. M.C. Ferris. Madison, Wisconsin Nondegenerate Solutions and Related Concepts in Ane Variational Inequalities M.C. Ferris Computer Sciences Department University of Wisconsin Madison, Wisconsin 53706 Email: ferris@cs.wisc.edu J.S. Pang

More information

Optimality, Duality, Complementarity for Constrained Optimization

Optimality, Duality, Complementarity for Constrained Optimization Optimality, Duality, Complementarity for Constrained Optimization Stephen Wright University of Wisconsin-Madison May 2014 Wright (UW-Madison) Optimality, Duality, Complementarity May 2014 1 / 41 Linear

More information

Lecture 7: Convex Optimizations

Lecture 7: Convex Optimizations Lecture 7: Convex Optimizations Radu Balan, David Levermore March 29, 2018 Convex Sets. Convex Functions A set S R n is called a convex set if for any points x, y S the line segment [x, y] := {tx + (1

More information

arxiv: v1 [math.oc] 26 Sep 2015

arxiv: v1 [math.oc] 26 Sep 2015 arxiv:1509.08021v1 [math.oc] 26 Sep 2015 Degeneracy in Maximal Clique Decomposition for Semidefinite Programs Arvind U. Raghunathan and Andrew V. Knyazev Mitsubishi Electric Research Laboratories 201 Broadway,

More information

A FULL-NEWTON STEP INFEASIBLE-INTERIOR-POINT ALGORITHM COMPLEMENTARITY PROBLEMS

A FULL-NEWTON STEP INFEASIBLE-INTERIOR-POINT ALGORITHM COMPLEMENTARITY PROBLEMS Yugoslav Journal of Operations Research 25 (205), Number, 57 72 DOI: 0.2298/YJOR3055034A A FULL-NEWTON STEP INFEASIBLE-INTERIOR-POINT ALGORITHM FOR P (κ)-horizontal LINEAR COMPLEMENTARITY PROBLEMS Soodabeh

More information

Global Quadratic Minimization over Bivalent Constraints: Necessary and Sufficient Global Optimality Condition

Global Quadratic Minimization over Bivalent Constraints: Necessary and Sufficient Global Optimality Condition Global Quadratic Minimization over Bivalent Constraints: Necessary and Sufficient Global Optimality Condition Guoyin Li Communicated by X.Q. Yang Abstract In this paper, we establish global optimality

More information

Copositive Plus Matrices

Copositive Plus Matrices Copositive Plus Matrices Willemieke van Vliet Master Thesis in Applied Mathematics October 2011 Copositive Plus Matrices Summary In this report we discuss the set of copositive plus matrices and their

More information

12. Interior-point methods

12. Interior-point methods 12. Interior-point methods Convex Optimization Boyd & Vandenberghe inequality constrained minimization logarithmic barrier function and central path barrier method feasibility and phase I methods complexity

More information

Using Schur Complement Theorem to prove convexity of some SOC-functions

Using Schur Complement Theorem to prove convexity of some SOC-functions Journal of Nonlinear and Convex Analysis, vol. 13, no. 3, pp. 41-431, 01 Using Schur Complement Theorem to prove convexity of some SOC-functions Jein-Shan Chen 1 Department of Mathematics National Taiwan

More information

CSC Linear Programming and Combinatorial Optimization Lecture 10: Semidefinite Programming

CSC Linear Programming and Combinatorial Optimization Lecture 10: Semidefinite Programming CSC2411 - Linear Programming and Combinatorial Optimization Lecture 10: Semidefinite Programming Notes taken by Mike Jamieson March 28, 2005 Summary: In this lecture, we introduce semidefinite programming

More information

Criss-cross Method for Solving the Fuzzy Linear Complementarity Problem

Criss-cross Method for Solving the Fuzzy Linear Complementarity Problem Volume 118 No. 6 2018, 287-294 ISSN: 1311-8080 (printed version); ISSN: 1314-3395 (on-line version) url: http://www.ijpam.eu ijpam.eu Criss-cross Method for Solving the Fuzzy Linear Complementarity Problem

More information

More First-Order Optimization Algorithms

More First-Order Optimization Algorithms More First-Order Optimization Algorithms Yinyu Ye Department of Management Science and Engineering Stanford University Stanford, CA 94305, U.S.A. http://www.stanford.edu/ yyye Chapters 3, 8, 3 The SDM

More information

Agenda. Interior Point Methods. 1 Barrier functions. 2 Analytic center. 3 Central path. 4 Barrier method. 5 Primal-dual path following algorithms

Agenda. Interior Point Methods. 1 Barrier functions. 2 Analytic center. 3 Central path. 4 Barrier method. 5 Primal-dual path following algorithms Agenda Interior Point Methods 1 Barrier functions 2 Analytic center 3 Central path 4 Barrier method 5 Primal-dual path following algorithms 6 Nesterov Todd scaling 7 Complexity analysis Interior point

More information

SEMIDEFINITE PROGRAMMING RELAXATIONS FOR THE QUADRATIC ASSIGNMENT PROBLEM. Abstract

SEMIDEFINITE PROGRAMMING RELAXATIONS FOR THE QUADRATIC ASSIGNMENT PROBLEM. Abstract SEMIDEFINITE PROGRAMMING RELAXATIONS FOR THE QUADRATIC ASSIGNMENT PROBLEM Qing Zhao x Stefan E. Karisch y, Franz Rendl z, Henry Wolkowicz x, February 5, 998 University of Waterloo CORR Report 95-27 University

More information

A STRENGTHENED SDP RELAXATION. via a. SECOND LIFTING for the MAX-CUT PROBLEM. July 15, University of Waterloo. Abstract

A STRENGTHENED SDP RELAXATION. via a. SECOND LIFTING for the MAX-CUT PROBLEM. July 15, University of Waterloo. Abstract A STRENGTHENED SDP RELAXATION via a SECOND LIFTING for the MAX-CUT PROBLEM Miguel Anjos Henry Wolkowicz y July 15, 1999 University of Waterloo Department of Combinatorics and Optimization Waterloo, Ontario

More information

Optimality Conditions for Constrained Optimization

Optimality Conditions for Constrained Optimization 72 CHAPTER 7 Optimality Conditions for Constrained Optimization 1. First Order Conditions In this section we consider first order optimality conditions for the constrained problem P : minimize f 0 (x)

More information

A semidefinite relaxation scheme for quadratically constrained quadratic problems with an additional linear constraint

A semidefinite relaxation scheme for quadratically constrained quadratic problems with an additional linear constraint Iranian Journal of Operations Research Vol. 2, No. 2, 20, pp. 29-34 A semidefinite relaxation scheme for quadratically constrained quadratic problems with an additional linear constraint M. Salahi Semidefinite

More information