A SECOND ORDER MEHROTRA-TYPE PREDICTOR-CORRECTOR ALGORITHM FOR SEMIDEFINITE OPTIMIZATION
J Syst Sci Complex (2012) 25: 1108–1121

A SECOND ORDER MEHROTRA-TYPE PREDICTOR-CORRECTOR ALGORITHM FOR SEMIDEFINITE OPTIMIZATION

Mingwang ZHANG

Received: 3 December 2010 / Revised: 18 August 2011
© The Editorial Office of JSSC & Springer-Verlag Berlin Heidelberg 2012

Abstract  The Mehrotra-type predictor-corrector algorithm is one of the most effective primal-dual interior-point methods. This paper presents an extension of the recent variant of the second order Mehrotra-type predictor-corrector algorithm proposed by Salahi, et al. (2006) for linear optimization. Based on the NT direction as the Newton search direction, it is shown that the iteration-complexity bound of the algorithm for semidefinite optimization is $O(n^{3/2}\log\frac{X^0\bullet S^0}{\varepsilon})$, which is similar to that of the corresponding algorithm for linear optimization.

Key words  Mehrotra-type algorithm, polynomial complexity, predictor-corrector algorithm, semidefinite optimization.

1 Introduction

After the landmark paper of Karmarkar [1], linear optimization (LO) was revitalized as an active area of research. Since then, interior-point methods (IPMs) have shown their power in solving LO problems and large classes of other optimization problems (see [2]). IPMs are also powerful tools for solving mathematical programming problems such as the complementarity problem (CP), second order conic optimization (SOCO), and semidefinite optimization (SDO). SDO is a generalization of LO, and it has various applications in diverse areas, such as system and control theory [3] and combinatorial optimization [4]. The generalization of IPMs from LO to the context of SDO started in the early 1990s. The first IPMs for SDO were independently developed by Alizadeh [4] and by Nesterov and Nemirovsky [5]. Alizadeh [4] applied Ye's potential reduction idea to SDO and showed how variants of dual IPMs could be extended to SDO.
Almost at the same time, in their milestone book [5], Nesterov and Nemirovsky proved that IPMs are able to solve general conic optimization problems, in particular SDO problems, in polynomial time. Other IPMs designed for LO have also been successfully extended to SDO; for an overview of these results we refer to the monographs [6–7] and the references therein. Most of the more recent work has concentrated on primal-dual methods. The Mehrotra-type predictor-corrector algorithm is one of the most remarkable primal-dual methods, and it is also the basis of IPM software packages such as [8–10] and many others. In spite of the extensive use of this method, little was known about its complexity before the recent

Mingwang ZHANG, College of Science, China Three Gorges University, Yichang 443002, China. Email: zmwang@ctgu.edu.cn. This research was supported by the Natural Science Foundation of Hubei Province under Grant No. 2008CDZ047. This paper was recommended for publication by Editor Shouyang WANG.
paper by Salahi, et al. [11], which presents a new variant of the Mehrotra-type predictor-corrector algorithm for LO. By introducing certain safeguards, this variant enjoys polynomial iteration complexity while the practical efficiency of the algorithm is preserved. Later on, Salahi and Amiri [12] analyzed a new variant of the second order Mehrotra-type predictor-corrector algorithm and proved that it also has polynomial iteration complexity. Recently, Koulaei and Terlaky [13] extended the Mehrotra-type predictor-corrector algorithm of [11] for LO to SDO.

This paper studies the extension of the second order Mehrotra-type algorithm of [12] to SDO. The analysis for SDO is more complicated than for LO; a large part of the theoretical difficulty is due to the issue of maintaining symmetry in the linearized complementarity condition [13]. The aim of this paper is to establish an iteration-complexity bound for a generalization of the Mehrotra-type algorithm of [12], based on the NT direction. Borrowing analytic tools from [14], we derive the iteration bound $O(n^{3/2}\log\frac{X^0\bullet S^0}{\varepsilon})$ for the algorithm, which is analogous to the linear case.

The rest of the paper is organized as follows. In Section 2, we introduce the SDO problem and review some basic concepts of IPMs for solving it, such as the central path and the NT search direction; we conclude the section by presenting a second order Mehrotra-type predictor-corrector algorithm for SDO. In Section 3, we state and prove some technical results, and based on them the iteration-complexity bound of the algorithm is established. Finally, conclusions and final remarks are given in Section 4.

The following notation is used throughout the paper. $\mathbb{R}^n$ denotes the $n$-dimensional Euclidean space and $\mathbb{R}^{n\times n}$ the set of $n\times n$ real matrices. $\|\cdot\|_F$ and $\|\cdot\|$ denote the Frobenius norm and the spectral norm for matrices, respectively.
$\mathcal{S}^n$, $\mathcal{S}^n_+$, and $\mathcal{S}^n_{++}$ denote the cones of symmetric, symmetric positive semidefinite, and symmetric positive definite $n\times n$ matrices, respectively. For $M\in\mathcal{S}^n$, $M\succeq 0$ ($M\succ 0$) means that $M$ is positive semidefinite (positive definite). $\mathrm{Tr}(M)$ denotes the trace of the matrix $M\in\mathbb{R}^{n\times n}$, $\mathrm{Tr}(M)=\sum_{i=1}^n M_{ii}$. The matrix inner product is defined by $A\bullet B=\mathrm{Tr}(A^TB)$. For $M\in\mathcal{S}^n$, we denote by $\lambda_i(M)$ the eigenvalues of $M$, and by $\lambda_{\max}(M)$ and $\lambda_{\min}(M)$ the largest and smallest eigenvalues of $M$, respectively. Moreover, the spectral condition number of $M$ is denoted by $\mathrm{cond}(M)=\lambda_{\max}(M)/\lambda_{\min}(M)$. The Kronecker product of two matrices $X$ and $S$ is denoted by $X\otimes S$ (see [15]). For $X\in\mathbb{R}^{n\times n}$, the operator $\mathrm{vec}(X)$ maps an $n\times n$ matrix into a vector of length $n^2$ by stacking the columns of the matrix argument. Finally, $I$ denotes the $n\times n$ identity matrix.

2 The SDO Problem and Preliminaries

In this section, we introduce the SDO problem and state the symmetrization scheme which is used to derive the Newton direction. We also recall some existing results and describe our variant of the second order Mehrotra-type predictor-corrector algorithm. We consider the following SDO problem

$$\min\; C\bullet X\quad \text{s.t.}\quad A_i\bullet X=b_i,\ i=1,2,\cdots,m,\quad X\succeq 0,\qquad(1)$$

where $C,X\in\mathcal{S}^n$, the matrices $A_i\in\mathcal{S}^n$, $i=1,2,\cdots,m$, are linearly independent, and $b=(b_1,b_2,\cdots,b_m)^T\in\mathbb{R}^m$. We call problem (1) in the given form the primal problem, and $X$ is the primal matrix variable.
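As a concrete illustration of the data in problem (1), the following sketch builds a small random instance with numpy and checks primal feasibility and the objective value $C\bullet X$. The code is our own illustration, not from the paper; all names (`sym`, `inner`, the list `A`) are hypothetical choices.

```python
import numpy as np

# Illustrative sketch of problem (1): data C, A_1..A_m, b, and a primal
# feasibility check for a candidate X. All helper names are hypothetical.
rng = np.random.default_rng(0)
n, m = 3, 2

def sym(M):
    """Symmetrize a square matrix: (M + M^T) / 2."""
    return (M + M.T) / 2

def inner(P, Q):
    """Matrix inner product P • Q = Tr(P^T Q)."""
    return float(np.trace(P.T @ Q))

C = sym(rng.standard_normal((n, n)))
A = [sym(rng.standard_normal((n, n))) for _ in range(m)]

X = np.eye(n)                                   # candidate primal point
b = np.array([inner(Ai, X) for Ai in A])        # chosen so that X is feasible

primal_residual = np.array([inner(Ai, X) - bi for Ai, bi in zip(A, b)])
X_psd = bool(np.all(np.linalg.eigvalsh(X) >= -1e-12))
objective = inner(C, X)
print(X_psd, np.linalg.norm(primal_residual))
```

Here $X=I$ is feasible by construction; in the algorithm studied below, the iterate $(X,y,S)$ must additionally satisfy the dual constraint and stay in a neighborhood of the central path.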
Corresponding to every primal problem (1), there exists a dual problem

$$\max\; b^Ty\quad \text{s.t.}\quad \sum_{i=1}^m y_iA_i+S=C,\quad S\succeq 0,\qquad(2)$$

where $y\in\mathbb{R}^m$, $S\in\mathcal{S}^n$, and $(y,S)$ is the dual variable. The primal-dual feasible set is defined as

$$\mathcal{F}=\Big\{(X,y,S)\in\mathcal{S}^n\times\mathbb{R}^m\times\mathcal{S}^n\;\Big|\;A_i\bullet X=b_i,\ i=1,2,\cdots,m,\ \sum_{i=1}^m y_iA_i+S=C,\ X\succeq 0,\ S\succeq 0\Big\},$$

and the relative interior of the primal-dual feasible set is

$$\mathcal{F}^0=\Big\{(X,y,S)\in\mathcal{S}^n_{++}\times\mathbb{R}^m\times\mathcal{S}^n_{++}\;\Big|\;A_i\bullet X=b_i,\ i=1,2,\cdots,m,\ \sum_{i=1}^m y_iA_i+S=C\Big\}.$$

Under the assumptions that $\mathcal{F}^0$ is nonempty and the matrices $A_i$, $i=1,2,\cdots,m$, are linearly independent, $X^*$ and $(y^*,S^*)$ are optimal if and only if they satisfy the optimality conditions [7]

$$A_i\bullet X=b_i,\ X\succeq 0,\ i=1,2,\cdots,m,\qquad \sum_{i=1}^m y_iA_i+S=C,\ S\succeq 0,\qquad XS=0,\qquad(3)$$

where the last equality is called the complementarity equation.

The central path consists of the points $(X(\mu),y(\mu),S(\mu))$ satisfying the perturbed system

$$A_i\bullet X=b_i,\ X\succ 0,\ i=1,2,\cdots,m,\qquad \sum_{i=1}^m y_iA_i+S=C,\ S\succ 0,\qquad XS=\mu I,\qquad(4)$$

where $\mu\in\mathbb{R}$, $\mu>0$. It is proved in [5] that there is a unique solution $(X(\mu),y(\mu),S(\mu))$ of the central path equations (4) for any barrier parameter $\mu>0$, assuming that $\mathcal{F}^0$ is nonempty and the matrices $A_i$, $i=1,2,\cdots,m$, are linearly independent. Moreover, the limit point $(X^*,y^*,S^*)$ as $\mu$ goes to 0 is a primal-dual optimal solution of the SDO problem.

Next, we derive the Newton direction for system (4). Observe that for $X,S\in\mathcal{S}^n$, the product $XS$ is generally not in $\mathcal{S}^n$. Hence, the left-hand side of (4) is a map from $\mathcal{S}^n\times\mathbb{R}^m\times\mathcal{S}^n$ to $\mathbb{R}^{n\times n}\times\mathbb{R}^m\times\mathcal{S}^n$. Thus, the system (4) is
not a square system when $X$ and $S$ are restricted to $\mathcal{S}^n$, which is needed for applying Newton-like methods. A remedy for this is to make the perturbed optimality system (4) square by modifying the left-hand side to a map from $\mathcal{S}^n\times\mathbb{R}^m\times\mathcal{S}^n$ to itself. To achieve this, Zhang [16] introduced a general symmetrization scheme based on the so-called similar symmetrization operator $H_P:\mathbb{R}^{n\times n}\to\mathcal{S}^n$ defined as

$$H_P(M)\equiv\frac{1}{2}\big[PMP^{-1}+(PMP^{-1})^T\big],\qquad \forall\,M\in\mathbb{R}^{n\times n},$$

where $P\in\mathbb{R}^{n\times n}$ is some nonsingular matrix. Zhang [16] also observed that

$$H_P(M)=\mu I\iff M=\mu I$$

for any nonsingular matrix $P$, any matrix $M$ with real spectrum, and any $\mu\in\mathbb{R}$. Therefore, for any given nonsingular matrix $P$, (4) is equivalent to

$$A_i\bullet X=b_i,\ X\succ 0,\ i=1,2,\cdots,m,\qquad \sum_{i=1}^m y_iA_i+S=C,\ S\succ 0,\qquad H_P(XS)=\mu I.\qquad(5)$$

A Newton-like method applied to system (5) leads to the following linear system:

$$A_i\bullet\Delta X=0,\ i=1,2,\cdots,m,\qquad \sum_{i=1}^m\Delta y_iA_i+\Delta S=0,\qquad H_P(X\Delta S+\Delta XS)=\sigma\mu_gI-H_P(XS),\qquad(6)$$

where $(\Delta X,\Delta y,\Delta S)\in\mathcal{S}^n\times\mathbb{R}^m\times\mathcal{S}^n$ is the unknown direction (see [14] for more details), $\sigma\in[0,1]$ is the centering parameter, and $\mu_g=X\bullet S/n$ is the normalized duality gap corresponding to $(X,y,S)$. We refer to the directions derived from (6) as the Monteiro-Zhang (MZ) family. The matrix $P$ used in (6) is called the scaling matrix for the search direction. When $P=I$, the direction obtained from (6) coincides with the AHO direction [17]. If $P=X^{-1/2}$ or $P=S^{1/2}$, then (6) gives the HKM directions [18–20], respectively. Further, we obtain the NT direction when $P=W_{NT}^{-1/2}$, where $W_{NT}$ is the solution of the system

$$W_{NT}^{-1}XW_{NT}^{-1}=S.$$

Nesterov and Todd [21] proved the existence and uniqueness of such a matrix, namely

$$W_{NT}=X^{1/2}\big(X^{1/2}SX^{1/2}\big)^{-1/2}X^{1/2}.$$

In this paper, we restrict the scaling matrix $P$ to the specific class

$$\mathcal{P}(X,S)\equiv\{P\in\mathcal{S}^n_{++}\mid P^2XS=SXP^2\},\qquad(7)$$

where $X,S\in\mathcal{S}^n_{++}$. We should mention that this restriction on $P$ is common for the large neighborhood primal-dual IPMs proposed in [13–14].
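The symmetrization operator $H_P$ and the scaling class (7) are easy to experiment with numerically. The sketch below is our own illustration (helper names are hypothetical): it implements $H_P$, takes $P=S^{1/2}$ as one member of $\mathcal{P}(X,S)$, and checks two facts used later, namely that $PXSP^{-1}$ is symmetric for this $P$ and that $H_P$ is trace-preserving, $\mathrm{Tr}(H_P(M))=\mathrm{Tr}(M)$.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4

def random_spd(k):
    """A well-conditioned random symmetric positive definite matrix."""
    M = rng.standard_normal((k, k))
    return M @ M.T + k * np.eye(k)

def sqrtm_spd(M):
    """Symmetric square root of an SPD matrix via eigendecomposition."""
    w, V = np.linalg.eigh(M)
    return V @ np.diag(np.sqrt(w)) @ V.T

def H(P, M):
    """Similar symmetrization H_P(M) = (P M P^{-1} + (P M P^{-1})^T) / 2."""
    PMPinv = P @ M @ np.linalg.inv(P)
    return (PMPinv + PMPinv.T) / 2

X, S = random_spd(n), random_spd(n)
P = sqrtm_spd(S)                      # P = S^{1/2}, a member of P(X, S)
W = P @ X @ S @ np.linalg.inv(P)      # = S^{1/2} X S^{1/2}, symmetric
HP_XS = H(P, X @ S)
print(np.allclose(W, W.T), np.isclose(np.trace(HP_XS), np.trace(X @ S)))
```

The trace-preserving identity is exactly the one invoked in the proof of Lemma 3.1 below.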
Furthermore, this restriction on $P$ does not lose any generality in terms of the solution set of system (6), as Monteiro and Zhang indicated in [14]. Apparently, $P=X^{-1/2}$, $S^{1/2}$, and $W_{NT}^{-1/2}$ belong to this specific class; however, $P=I$ does not.

In what follows we describe the variant of the second order Mehrotra-type predictor-corrector algorithm. Let us define

$$(X(\alpha),y(\alpha),S(\alpha))=(X,y,S)+\alpha(\Delta X^a,\Delta y^a,\Delta S^a)+\alpha^2(\Delta X,\Delta y,\Delta S),\qquad(8)$$

$$\mu_g(\alpha)=\frac{X(\alpha)\bullet S(\alpha)}{n}.\qquad(9)$$
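Using the same kind of random SPD test data (again our own illustration, not code from the paper), the NT scaling can be computed directly from its closed form, and both defining properties can be verified: $W_{NT}^{-1}XW_{NT}^{-1}=S$, and for $P=W_{NT}^{-1/2}$ the scaled matrices $\hat X=PXP$ and $\hat S=P^{-1}SP^{-1}$ coincide.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4

def random_spd(k):
    """A well-conditioned random symmetric positive definite matrix."""
    M = rng.standard_normal((k, k))
    return M @ M.T + k * np.eye(k)

def mpow(M, p):
    """Real power of an SPD matrix via eigendecomposition."""
    w, V = np.linalg.eigh(M)
    return V @ np.diag(w ** p) @ V.T

X, S = random_spd(n), random_spd(n)

# NT scaling: W = X^{1/2} (X^{1/2} S X^{1/2})^{-1/2} X^{1/2}
Xh = mpow(X, 0.5)
W = Xh @ mpow(Xh @ S @ Xh, -0.5) @ Xh

P = mpow(W, -0.5)                 # scaling matrix of the NT direction
X_hat = P @ X @ P
S_hat = np.linalg.inv(P) @ S @ np.linalg.inv(P)
print(np.allclose(np.linalg.inv(W) @ X @ np.linalg.inv(W), S),
      np.allclose(X_hat, S_hat))
```

Since $\hat X=\hat S$ for the NT scaling, the Kronecker operators $\hat E$ and $\hat F$ of Section 3 coincide and $\mathrm{cond}(G)=1$, which is precisely why the NT direction yields the clean constants used in the complexity analysis.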
To prove convergence, a certain neighborhood of the central path is considered in which the algorithm operates. In this paper, the algorithm uses the so-called negative infinity norm neighborhood, a large neighborhood defined as

$$\mathcal{N}_{-\infty}(\gamma)=\{(X,y,S)\in\mathcal{F}^0\mid \lambda_{\min}(XS)\ge\gamma\mu_g\},$$

where $\gamma\in(0,1)$ is a given constant. In the predictor step the algorithm computes the affine search direction, i.e., the solution of

$$A_i\bullet\Delta X^a=0,\ i=1,2,\cdots,m,\qquad \sum_{i=1}^m\Delta y_i^aA_i+\Delta S^a=0,\qquad H_P(X\Delta S^a+\Delta X^aS)=-H_P(XS).\qquad(10)$$

Then the maximum feasible step size is computed, i.e., the largest $\alpha_a$ for which

$$X(\alpha_a)=X+\alpha_a\Delta X^a\succeq 0,\qquad S(\alpha_a)=S+\alpha_a\Delta S^a\succeq 0.$$

However, the algorithm does not take such a step. Based on this step size, the algorithm chooses $\sigma=(1-\alpha_a)^3$ to compute the corrector direction, defined as the solution of the system

$$A_i\bullet\Delta X=0,\ i=1,2,\cdots,m,\qquad \sum_{i=1}^m\Delta y_iA_i+\Delta S=0,\qquad H_P(X\Delta S+\Delta XS)=\sigma\mu_gI-H_P(\Delta X^a\Delta S^a).\qquad(11)$$

Finally, the algorithm computes the maximum step size $\alpha$ that keeps the next iterate in $\mathcal{N}_{-\infty}(\gamma)$. Based on the aforementioned discussion, we can now outline the second order Mehrotra-type predictor-corrector algorithm as Algorithm 1.

Algorithm 1
Input: a proximity parameter $\gamma\in(0,\frac14)$; an accuracy parameter $\varepsilon>0$; a starting point $(X^0,y^0,S^0)\in\mathcal{N}_{-\infty}(\gamma)$.
while $X\bullet S\ge\varepsilon$ do
  Compute the scaling matrix $P=\big(X^{1/2}(X^{1/2}SX^{1/2})^{-1/2}X^{1/2}\big)^{-1/2}$.
  Predictor step: solve (10) and compute the maximum step size $\alpha_a$ such that $(X(\alpha_a),y(\alpha_a),S(\alpha_a))\in\mathcal{F}$.
  Corrector step:
    If $\alpha_a\ge 0.1$, then solve (11) with $\sigma=(1-\alpha_a)^3$ and compute the maximum step size $\alpha$ such that $(X(\alpha),y(\alpha),S(\alpha))\in\mathcal{N}_{-\infty}(\gamma)$;
    if this $\alpha<\frac{\gamma^{3/2}}{3n^{3/2}}$, then solve (11) with $\sigma=\frac{\gamma}{1-\gamma}$ and compute the maximum step size $\alpha$ such that $(X(\alpha),y(\alpha),S(\alpha))\in\mathcal{N}_{-\infty}(\gamma)$;
    else solve (11) with $\sigma=\frac{\gamma}{1-\gamma}$ and compute the maximum step size $\alpha$ such that $(X(\alpha),y(\alpha),S(\alpha))\in\mathcal{N}_{-\infty}(\gamma)$.
  Set $(X,y,S)=(X(\alpha),y(\alpha),S(\alpha))$.
end while

3 Complexity Analysis of the Algorithm

In this section, we present the complexity proof for Algorithm 1. To simplify the proofs of the main results, we write the third equation of system (11) in the form

$$H(\hat X\Delta\hat S+\Delta\hat X\hat S)=\sigma\mu_gI-H(\Delta\hat X^a\Delta\hat S^a),\qquad(12)$$

where $H\equiv H_I$ is the plain symmetrization operator and

$$\hat X\equiv PXP,\quad \Delta\hat X\equiv P\Delta XP,\quad \hat S\equiv P^{-1}SP^{-1},\quad \Delta\hat S\equiv P^{-1}\Delta SP^{-1}.\qquad(13)$$

Moreover, in terms of the Kronecker product, Equation (12) becomes

$$\hat E\,\mathrm{vec}\,\Delta\hat X+\hat F\,\mathrm{vec}\,\Delta\hat S=\mathrm{vec}\big(\sigma\mu_gI-H(\Delta\hat X^a\Delta\hat S^a)\big),\qquad(14)$$

where

$$\hat E\equiv\frac12(\hat S\otimes I+I\otimes\hat S),\qquad \hat F\equiv\frac12(\hat X\otimes I+I\otimes\hat X).$$

In [14], Monteiro and Zhang proved that $\hat E$ and $\hat F$ are $n^2\times n^2$ symmetric positive semidefinite matrices. Similarly, the third equation of (10) can be rewritten as

$$\hat E\,\mathrm{vec}\,\Delta\hat X^a+\hat F\,\mathrm{vec}\,\Delta\hat S^a=-\mathrm{vec}\big(H(\hat X\hat S)\big).\qquad(15)$$

Using (7) and (13), it is easy to see that for $X,S\in\mathcal{S}^n_{++}$ one has

$$\mathcal{P}(X,S)=\{P\in\mathcal{S}^n_{++}\mid \hat X\hat S=\hat S\hat X\},\qquad(16)$$

i.e., we require $P$ to make $\hat X$ and $\hat S$ commute after scaling, which implies that $\hat X\hat S$ is symmetric as long as $X$ and $S$ are both symmetric. This requirement on $P$ also guarantees that $\hat E$ and $\hat F$ commute. These properties play a crucial role in the proofs of the following technical lemmas.

We need to find a lower bound for the maximum step size $\alpha$ in the corrector step in order to establish the iteration bound of Algorithm 1. The following lemmas are needed to derive a lower bound on the size of the centering step.

Lemma 3.1  Suppose that $(X,y,S)\in\mathcal{S}^n_{++}\times\mathbb{R}^m\times\mathcal{S}^n_{++}$, $(\Delta X^a,\Delta y^a,\Delta S^a)$ is the solution of (10), and $(\Delta X,\Delta y,\Delta S)$ is the solution of (11). Then

$$H_P(X(\alpha)S(\alpha))=(1-\alpha)H_P(XS)+\alpha^2\sigma\mu_gI+\alpha^3H_P(\Delta X^a\Delta S+\Delta X\Delta S^a)+\alpha^4H_P(\Delta X\Delta S),\qquad(17)$$

$$\mu_g(\alpha)=(1-\alpha+\alpha^2\sigma)\mu_g.\qquad(18)$$
Proof  By Equation (8), we have

$$X(\alpha)S(\alpha)=(X+\alpha\Delta X^a+\alpha^2\Delta X)(S+\alpha\Delta S^a+\alpha^2\Delta S)$$
$$=XS+\alpha(X\Delta S^a+\Delta X^aS)+\alpha^2(X\Delta S+\Delta XS+\Delta X^a\Delta S^a)+\alpha^3(\Delta X^a\Delta S+\Delta X\Delta S^a)+\alpha^4\Delta X\Delta S.$$

Applying the linearity of $H_P(\cdot)$ to this equality, and noticing the third equations of (10) and (11), we obtain

$$H_P(X(\alpha)S(\alpha))=H_P(XS)+\alpha H_P(X\Delta S^a+\Delta X^aS)+\alpha^2H_P(X\Delta S+\Delta XS+\Delta X^a\Delta S^a)+\alpha^3H_P(\Delta X^a\Delta S+\Delta X\Delta S^a)+\alpha^4H_P(\Delta X\Delta S)$$
$$=H_P(XS)-\alpha H_P(XS)+\alpha^2\sigma\mu_gI-\alpha^2H_P(\Delta X^a\Delta S^a)+\alpha^2H_P(\Delta X^a\Delta S^a)+\alpha^3H_P(\Delta X^a\Delta S+\Delta X\Delta S^a)+\alpha^4H_P(\Delta X\Delta S)$$
$$=(1-\alpha)H_P(XS)+\alpha^2\sigma\mu_gI+\alpha^3H_P(\Delta X^a\Delta S+\Delta X\Delta S^a)+\alpha^4H_P(\Delta X\Delta S),$$

which proves (17). Using (9) and the identity $\mathrm{Tr}(H_P(M))=\mathrm{Tr}(M)$, we have

$$X(\alpha)\bullet S(\alpha)=\mathrm{Tr}\big[(1-\alpha)H_P(XS)+\alpha^2\sigma\mu_gI+\alpha^3H_P(\Delta X^a\Delta S+\Delta X\Delta S^a)+\alpha^4H_P(\Delta X\Delta S)\big]$$
$$=(1-\alpha)X\bullet S+\alpha^2\sigma\mu_gn+\alpha^3\Delta X^a\bullet\Delta S+\alpha^3\Delta X\bullet\Delta S^a+\alpha^4\Delta X\bullet\Delta S.\qquad(19)$$

Using the first two equations of (10) and (11) and the fact that $(X,y,S)$ is a primal-dual feasible solution, we can conclude that $\Delta X^a\bullet\Delta S=0$, $\Delta X\bullet\Delta S^a=0$, and $\Delta X\bullet\Delta S=0$. Thus, dividing (19) by $n$ gives (18). That completes the proof.

Lemma 3.2  Suppose that the current iterate $(X,y,S)\in\mathcal{N}_{-\infty}(\gamma)$, and let $(\Delta X^a,\Delta y^a,\Delta S^a)$ be the solution of (10) and $(\Delta X,\Delta y,\Delta S)$ be the solution of (11). Then

$$\|H_P(\Delta X^a\Delta S)\|_F\le \mathrm{cond}(G)\Big(\frac{\sigma^2}{16}+\frac{\sigma}{4}\Big)^{1/2}\frac{n^{3/2}}{\gamma^{1/2}}\mu_g,\qquad \|H_P(\Delta X\Delta S^a)\|_F\le \mathrm{cond}(G)\Big(\frac{\sigma^2}{16}+\frac{\sigma}{4}\Big)^{1/2}\frac{n^{3/2}}{\gamma^{1/2}}\mu_g,$$

where $G=\hat E^{-1}\hat F$ and $\mathrm{cond}(G)=\lambda_{\max}(G)/\lambda_{\min}(G)$.

Proof  By applying Lemma A.2 to (15), we obtain

$$\big\|(\hat F\hat E)^{-1/2}\hat E\,\mathrm{vec}\,\Delta\hat X^a\big\|^2+\big\|(\hat F\hat E)^{-1/2}\hat F\,\mathrm{vec}\,\Delta\hat S^a\big\|^2+2\,\Delta\hat X^a\bullet\Delta\hat S^a=\big\|(\hat F\hat E)^{-1/2}\mathrm{vec}(H(\hat X\hat S))\big\|^2.$$

Since $P\in\mathcal{P}(X,S)$, the matrices $\hat E$ and $\hat F$ commute, which implies that

$$(\hat F\hat E)^{-1/2}\hat E=\big(\hat E^{-1}\hat F\big)^{-1/2}=G^{-1/2},\qquad (\hat F\hat E)^{-1/2}\hat F=\big(\hat E^{-1}\hat F\big)^{1/2}=G^{1/2}.$$

It follows that

$$\big\|G^{-1/2}\mathrm{vec}\,\Delta\hat X^a\big\|^2+\big\|G^{1/2}\mathrm{vec}\,\Delta\hat S^a\big\|^2+2\,\Delta\hat X^a\bullet\Delta\hat S^a=\big\|(\hat F\hat E)^{-1/2}\mathrm{vec}(H(\hat X\hat S))\big\|^2.$$
Using $\Delta\hat X^a\bullet\Delta\hat S^a=0$ and Lemma A.5 with $\sigma=0$, we have

$$\big\|G^{-1/2}\mathrm{vec}\,\Delta\hat X^a\big\|\le\sqrt{n\mu_g},\qquad(20)$$
$$\big\|G^{1/2}\mathrm{vec}\,\Delta\hat S^a\big\|\le\sqrt{n\mu_g}.\qquad(21)$$

By doing the same procedure for the relation (14), one has

$$\big\|G^{-1/2}\mathrm{vec}\,\Delta\hat X\big\|^2+\big\|G^{1/2}\mathrm{vec}\,\Delta\hat S\big\|^2+2\,\Delta\hat X\bullet\Delta\hat S=\big\|(\hat F\hat E)^{-1/2}\mathrm{vec}\big(\sigma\mu_gI-H(\Delta\hat X^a\Delta\hat S^a)\big)\big\|^2$$
$$\le\big\|(\hat F\hat E)^{-1/2}\mathrm{vec}(\sigma\mu_gI)\big\|^2+\big\|(\hat F\hat E)^{-1/2}\mathrm{vec}\big(H(\Delta\hat X^a\Delta\hat S^a)\big)\big\|^2+2\big\|(\hat F\hat E)^{-1/2}\mathrm{vec}(\sigma\mu_gI)\big\|\,\big\|(\hat F\hat E)^{-1/2}\mathrm{vec}\big(H(\Delta\hat X^a\Delta\hat S^a)\big)\big\|.$$

The upper bound for the first expression on the right-hand side follows from Lemma A.1, where $\|A\|=(\rho(A^TA))^{1/2}$:

$$\big\|(\hat F\hat E)^{-1/2}\mathrm{vec}(\sigma\mu_gI)\big\|^2\le\rho\big((\hat F\hat E)^{-1}\big)\|\mathrm{vec}(\sigma\mu_gI)\|^2=\rho\big((\hat F\hat E)^{-1}\big)\|\sigma\mu_gI\|_F^2=\frac{\|\sigma\mu_gI\|_F^2}{4\lambda_1}\le\frac{n\sigma^2\mu_g^2}{4\gamma\mu_g}=\frac{n\sigma^2\mu_g}{4\gamma}.\qquad(22)$$

By Corollary A.7, the upper bound for the second expression can be obtained in the same way as in the proof of (22):

$$\big\|(\hat F\hat E)^{-1/2}\mathrm{vec}\big(H(\Delta\hat X^a\Delta\hat S^a)\big)\big\|^2\le\mathrm{cond}(G)\,\frac{n^2\mu_g}{16\gamma}.\qquad(23)$$

For the third expression, (22) and (23) imply

$$2\big\|(\hat F\hat E)^{-1/2}\mathrm{vec}(\sigma\mu_gI)\big\|\,\big\|(\hat F\hat E)^{-1/2}\mathrm{vec}\big(H(\Delta\hat X^a\Delta\hat S^a)\big)\big\|\le 2\Big(\frac{n\sigma^2\mu_g}{4\gamma}\Big)^{1/2}\Big(\mathrm{cond}(G)\frac{n^2\mu_g}{16\gamma}\Big)^{1/2}=\frac{\sigma\sqrt{\mathrm{cond}(G)}\,n^{3/2}\mu_g}{4\gamma}.\qquad(24)$$

From (22), (23), and (24), we obtain

$$\big\|(\hat F\hat E)^{-1/2}\mathrm{vec}\big(\sigma\mu_gI-H(\Delta\hat X^a\Delta\hat S^a)\big)\big\|^2\le\frac{n\sigma^2\mu_g}{4\gamma}+\mathrm{cond}(G)\frac{n^2\mu_g}{16\gamma}+\frac{\sigma\sqrt{\mathrm{cond}(G)}\,n^{3/2}\mu_g}{4\gamma}\le\mathrm{cond}(G)\Big(\frac{\sigma^2}{16}+\frac{\sigma}{4}\Big)\frac{n^2\mu_g}{\gamma}.\qquad(25)$$

Therefore, using $\Delta\hat X\bullet\Delta\hat S=0$, we have

$$\big\|G^{-1/2}\mathrm{vec}\,\Delta\hat X\big\|\le\Big[\mathrm{cond}(G)\Big(\frac{\sigma^2}{16}+\frac{\sigma}{4}\Big)\Big]^{1/2}\frac{n\sqrt{\mu_g}}{\sqrt{\gamma}},\qquad(26)$$
$$\big\|G^{1/2}\mathrm{vec}\,\Delta\hat S\big\|\le\Big[\mathrm{cond}(G)\Big(\frac{\sigma^2}{16}+\frac{\sigma}{4}\Big)\Big]^{1/2}\frac{n\sqrt{\mu_g}}{\sqrt{\gamma}}.\qquad(27)$$
Finally, from Lemma A.3, (20), and (27), we obtain

$$\|H_P(\Delta X^a\Delta S)\|_F=\|H_I(\Delta\hat X^a\Delta\hat S)\|_F\le\|\Delta\hat X^a\|_F\,\|\Delta\hat S\|_F=\|\mathrm{vec}(\Delta\hat X^a)\|\,\|\mathrm{vec}(\Delta\hat S)\|$$
$$\le\sqrt{\mathrm{cond}(G)}\,\big\|G^{-1/2}\mathrm{vec}(\Delta\hat X^a)\big\|\,\big\|G^{1/2}\mathrm{vec}(\Delta\hat S)\big\|\le\mathrm{cond}(G)\Big(\frac{\sigma^2}{16}+\frac{\sigma}{4}\Big)^{1/2}\frac{n^{3/2}}{\gamma^{1/2}}\mu_g,$$

and analogously one obtains the second statement of the lemma, which completes the proof.

Lemma 3.3  Let a point $(X,y,S)\in\mathcal{N}_{-\infty}(\gamma)$ and $P\in\mathcal{P}(X,S)$ be given, and define $G\equiv\hat E^{-1}\hat F$. Then the Newton step corresponding to system (11) satisfies

$$\|H_P(\Delta X\Delta S)\|_F\le(\mathrm{cond}(G))^{3/2}\Big(\frac{\sigma^2}{16}+\frac{\sigma}{4}\Big)\frac{n^2}{\gamma}\mu_g.$$

Proof  The proof is analogous to the proof of Lemma 3.2.

Lemma 3.4 (see [13], Lemma 3.6)  Let $P$ be the NT scaling and let $t$ be defined as

$$t=\max_{\|u\|=1}\Big\{\frac{u^TH_P(\Delta X^a\Delta S^a)u}{u^TH_P(XS)u}\Big\}.\qquad(28)$$

Then $t$ satisfies $t\le\frac14$.

Theorem 3.5  Suppose that the current iterate $(X,y,S)\in\mathcal{N}_{-\infty}(\gamma)$, $(\Delta X^a,\Delta y^a,\Delta S^a)$ is the solution of (10), and $(\Delta X,\Delta y,\Delta S)$ is the solution of (11) with $\sigma=(1-\alpha_a)^3$. Then, for $\alpha_a$ satisfying

$$\alpha_a<1-\Big(\frac{\gamma t}{1-\gamma}\Big)^{1/3}\qquad(29)$$

with $t$ defined by (28), the algorithm always takes a step with positive step size in the corrector step.

Proof  The goal is to determine a maximum step size $\alpha\in(0,1]$ such that

$$\lambda_{\min}[X(\alpha)S(\alpha)]\ge\gamma\mu_g(\alpha).$$

By Lemma A.4, this is equivalent to

$$\lambda_{\min}[H_P(X(\alpha)S(\alpha))]\ge\gamma\mu_g(\alpha),\qquad(30)$$

where $P\in\mathcal{P}(X,S)$. By (17) and the superadditivity of $\lambda_{\min}(\cdot)$ on the space of symmetric matrices [15], it follows that

$$\lambda_{\min}\big(H_P(X(\alpha)S(\alpha))\big)=\lambda_{\min}\Big((1-\alpha)H_P(XS)+\alpha^3H_P(\Delta X^a\Delta S^a)-\alpha^3H_P(\Delta X^a\Delta S^a)+\alpha^2\sigma\mu_gI+\alpha^3H_P(\Delta X^a\Delta S+\Delta X\Delta S^a)+\alpha^4H_P(\Delta X\Delta S)\Big)$$
$$\ge\alpha^2\sigma\mu_g+\lambda_{\min}\big((1-\alpha)H_P(XS)-\alpha^3H_P(\Delta X^a\Delta S^a)\big)+\alpha^3\big[\lambda_{\min}(H_P(\Delta X^a\Delta S^a))+\lambda_{\min}(H_P(\Delta X^a\Delta S))+\lambda_{\min}(H_P(\Delta X\Delta S^a))\big]+\alpha^4\lambda_{\min}(H_P(\Delta X\Delta S)).$$
Let $Q(\alpha)=(1-\alpha)H_P(XS)-\alpha^3H_P(\Delta X^a\Delta S^a)$. Since $Q(\alpha)$ is symmetric, we have

$$\lambda_{\min}(Q(\alpha))=\min_{\|u\|=1}u^TQ(\alpha)u.$$

Therefore, there is a vector $\bar u$ with $\|\bar u\|=1$ such that $\lambda_{\min}(Q(\alpha))=\bar u^TQ(\alpha)\bar u$, which implies

$$\lambda_{\min}\big(H_P(X(\alpha)S(\alpha))\big)\ge\alpha^2\sigma\mu_g+\bar u^T\big[(1-\alpha)H_P(XS)-\alpha^3H_P(\Delta X^a\Delta S^a)\big]\bar u+\alpha^3\big[\lambda_{\min}(H_P(\Delta X^a\Delta S^a))+\lambda_{\min}(H_P(\Delta X^a\Delta S))+\lambda_{\min}(H_P(\Delta X\Delta S^a))\big]+\alpha^4\lambda_{\min}(H_P(\Delta X\Delta S)).$$

The facts that $H_P(XS)$ is positive definite and $\mathrm{Tr}(H_P(\Delta X^a\Delta S^a))=0$ imply $t\ge 0$ in (28), and thus it follows that

$$u^TH_P(\Delta X^a\Delta S^a)u\le t\,u^TH_P(XS)u,\qquad \forall u,\ \|u\|=1,$$

which enables us to derive

$$\lambda_{\min}\big(H_P(X(\alpha)S(\alpha))\big)\ge\alpha^2\sigma\mu_g+(1-\alpha)\bar u^TH_P(XS)\bar u-\alpha^3t\,\bar u^TH_P(XS)\bar u+\alpha^3\big[\lambda_{\min}(H_P(\Delta X^a\Delta S^a))+\lambda_{\min}(H_P(\Delta X^a\Delta S))+\lambda_{\min}(H_P(\Delta X\Delta S^a))\big]+\alpha^4\lambda_{\min}(H_P(\Delta X\Delta S))$$
$$=\alpha^2\sigma\mu_g+(1-\alpha-\alpha^3t)\bar u^TH_P(XS)\bar u+\alpha^3\big[\lambda_{\min}(H_P(\Delta X^a\Delta S^a))+\lambda_{\min}(H_P(\Delta X^a\Delta S))+\lambda_{\min}(H_P(\Delta X\Delta S^a))\big]+\alpha^4\lambda_{\min}(H_P(\Delta X\Delta S))$$
$$\ge\alpha^2\sigma\mu_g+(1-\alpha-\alpha^3t)\lambda_{\min}(H_P(XS))+\alpha^3\big[\lambda_{\min}(H_P(\Delta X^a\Delta S^a))+\lambda_{\min}(H_P(\Delta X^a\Delta S))+\lambda_{\min}(H_P(\Delta X\Delta S^a))\big]+\alpha^4\lambda_{\min}(H_P(\Delta X\Delta S)),$$

where the last inequality follows from $(1-\alpha-\alpha^3t)\ge 0$. Thus, using the fact that $\mu_g(\alpha)=(1-\alpha+\alpha^2\sigma)\mu_g$, (30) holds whenever

$$\alpha^2\sigma\mu_g+(1-\alpha-\alpha^3t)\lambda_{\min}(H_P(XS))+\alpha^3\big[\lambda_{\min}(H_P(\Delta X^a\Delta S^a))+\lambda_{\min}(H_P(\Delta X^a\Delta S))+\lambda_{\min}(H_P(\Delta X\Delta S^a))\big]+\alpha^4\lambda_{\min}(H_P(\Delta X\Delta S))\ge\gamma(1-\alpha+\alpha^2\sigma)\mu_g.\qquad(31)$$

The worst case for inequality (31) happens when $\lambda_{\min}(H_P(XS))=\lambda_{\min}(XS)=\gamma\mu_g$, $\lambda_{\min}(H_P(\Delta X^a\Delta S^a))+\lambda_{\min}(H_P(\Delta X^a\Delta S))+\lambda_{\min}(H_P(\Delta X\Delta S^a))<0$, and $\lambda_{\min}(H_P(\Delta X\Delta S))<0$, so one has to have

$$\alpha^2\sigma\mu_g+(1-\alpha-\alpha^3t)\gamma\mu_g>\gamma(1-\alpha+\alpha^2\sigma)\mu_g,$$

or

$$(1-\gamma)(1-\alpha_a)^3-\alpha t\gamma>0.$$

It is sufficient to have

$$(1-\gamma)(1-\alpha_a)^3-\gamma t>0.$$
This definitely holds whenever

$$\alpha_a<1-\Big(\frac{\gamma t}{1-\gamma}\Big)^{1/3},$$

which completes the proof.

Similarly as in [12] for LO, we let

$$\alpha_a=1-\Big(\frac{\gamma}{1-\gamma}\Big)^{1/3}$$

whenever the maximum step size in the corrector step is below a certain threshold. In the following theorem we give the lower bound for the maximum step size in the corrector step for this specific choice. Note also that for $\alpha_a=1-\big(\frac{\gamma}{1-\gamma}\big)^{1/3}$, by using $\sigma=(1-\alpha_a)^3$ one has $\sigma=\frac{\gamma}{1-\gamma}$. The following two corollaries, which follow from Lemmas 3.2 and 3.3, give explicit upper bounds for this specific $\sigma$.

Corollary 3.6  Let $\sigma=\frac{\gamma}{1-\gamma}$, where $0<\gamma<1$, and let $P$ be the NT scaling. Then

$$\|H_P(\Delta X^a\Delta S)\|_F\le\frac{1}{2}\frac{n^{3/2}}{1-\gamma}\mu_g\qquad\text{and}\qquad \|H_P(\Delta X\Delta S^a)\|_F\le\frac{1}{2}\frac{n^{3/2}}{1-\gamma}\mu_g.$$

Proof  Using $\sigma=\frac{\gamma}{1-\gamma}$ and Lemma 3.2 (note that here $\frac{\sigma^2}{16}+\frac{\sigma}{4}=\frac{\gamma(4-3\gamma)}{16(1-\gamma)^2}\le\frac{\gamma}{4(1-\gamma)^2}$), we can derive

$$\|H_P(\Delta X^a\Delta S)\|_F\le\frac{\mathrm{cond}(G)}{2}\frac{n^{3/2}}{1-\gamma}\mu_g\qquad\text{and}\qquad \|H_P(\Delta X\Delta S^a)\|_F\le\frac{\mathrm{cond}(G)}{2}\frac{n^{3/2}}{1-\gamma}\mu_g.$$

Since $P$ is the NT scaling, we have $\hat X=\hat S$ and consequently $\hat E=\hat F$, which implies $\mathrm{cond}(G)=1$. This completes the proof.

Corollary 3.7  Let $\sigma=\frac{\gamma}{1-\gamma}$, where $0<\gamma<1$, and let $P$ be the NT scaling. Then

$$\|H_P(\Delta X\Delta S)\|_F\le\frac{1}{4}\frac{n^2}{(1-\gamma)^2}\mu_g.$$

Proof  The proof is analogous to that of Corollary 3.6.

Theorem 3.8  Suppose that the current iterate $(X,y,S)\in\mathcal{N}_{-\infty}(\gamma)$, $(\Delta X^a,\Delta y^a,\Delta S^a)$ is the solution of (10), and $(\Delta X,\Delta y,\Delta S)$ is the solution of (11) with $\sigma=\frac{\gamma}{1-\gamma}$. Then

$$\alpha\ge\frac{\gamma^{3/2}}{3n^{3/2}}.$$

Proof  The goal is to determine the maximum step size $\alpha\in(0,1]$ in the corrector step such that (30) holds. Following an analysis similar to that of the previous theorem, it is sufficient to have

$$(1-\alpha)\lambda_{\min}(H_P(XS))+\alpha^2\sigma\mu_g+\alpha^3\big[\lambda_{\min}(H_P(\Delta X^a\Delta S))+\lambda_{\min}(H_P(\Delta X\Delta S^a))\big]+\alpha^4\lambda_{\min}(H_P(\Delta X\Delta S))\ge\gamma(1-\alpha+\alpha^2\sigma)\mu_g.$$
Using Corollaries 3.6 and 3.7, it is sufficient to have

$$(1-\alpha)\gamma\mu_g+\alpha^2\sigma\mu_g-\alpha^3\frac{n^{3/2}}{1-\gamma}\mu_g-\alpha^4\frac{n^2}{4(1-\gamma)^2}\mu_g\ge\gamma(1-\alpha+\alpha^2\sigma)\mu_g,$$

or

$$\gamma-\frac{n^{3/2}}{1-\gamma}\alpha-\frac{n^2}{4(1-\gamma)^2}\alpha^2\ge 0.$$

This inequality definitely holds for $\alpha=\frac{\gamma^{3/2}}{3n^{3/2}}$, which completes the proof.

Now, we are ready to give the iteration-complexity of Algorithm 1.

Theorem 3.9  Algorithm 1 stops after at most

$$O\Big(n^{3/2}\log\frac{X^0\bullet S^0}{\varepsilon}\Big)$$

iterations with a solution for which $X\bullet S\le\varepsilon$.

Proof  If $\alpha_a\ge 0.1$ and $\alpha\ge\frac{\gamma^{3/2}}{3n^{3/2}}$, then using (18) we obtain

$$\mu_g(\alpha)=(1-\alpha+\alpha^2\sigma)\mu_g\le\Big(1-\frac{\gamma^{3/2}}{5n^{3/2}}\Big)\mu_g.$$

If $\alpha_a\ge 0.1$ and $\alpha<\frac{\gamma^{3/2}}{3n^{3/2}}$, then

$$\mu_g(\alpha)=(1-\alpha+\alpha^2\sigma)\mu_g\le\Big(1-\frac{\gamma^{3/2}(2-3\gamma)}{6(1-\gamma)n^{3/2}}\Big)\mu_g.$$

Finally, if $\alpha_a<0.1$, then one has

$$\mu_g(\alpha)=(1-\alpha+\alpha^2\sigma)\mu_g\le\Big(1-\frac{\gamma^{3/2}(2-\gamma)}{6(1-\gamma)n^{3/2}}\Big)\mu_g.$$

This completes the proof.

4 Conclusions

In this paper, we have extended the recently proposed second order Mehrotra-type predictor-corrector algorithm of Salahi and Amiri [12] to SDO and derived the iteration bound of the algorithm, namely $O(n^{3/2}\log\frac{X^0\bullet S^0}{\varepsilon})$, which is the same iteration bound as in the LO case. By slightly modifying the algorithm, we can easily obtain the generalization of the modified version of [12], whose iteration-complexity improves to $O(n\log\frac{X^0\bullet S^0}{\varepsilon})$; the details are omitted here.

Some interesting topics remain for further research. Firstly, the search directions used in this paper are based on the NT symmetrization scheme; it may be possible to design similar algorithms using other symmetrization schemes and still obtain polynomial-time iteration bounds. Secondly, the extensions to SOCO and to general convex optimization deserve to be investigated. Furthermore, numerical testing is an interesting topic, so that the behavior of the algorithm can be compared with other approaches.
References
[1] N. K. Karmarkar, A new polynomial-time algorithm for linear programming, Combinatorica, 1984, 4.
[2] Y. Ye, Interior Point Algorithms: Theory and Analysis, Wiley, UK.
[3] S. Boyd, L. El Ghaoui, E. Feron, et al., Linear Matrix Inequalities in System and Control Theory, SIAM, Philadelphia, PA.
[4] F. Alizadeh, Interior point methods in semidefinite programming with applications to combinatorial optimization, SIAM Journal on Optimization, 1995, 5.
[5] Y. E. Nesterov and A. S. Nemirovsky, Interior Point Methods in Convex Programming: Theory and Applications, SIAM, Philadelphia, PA.
[6] H. Wolkowicz, R. Saigal, and L. Vandenberghe, Handbook of Semidefinite Programming: Theory, Algorithms, and Applications, Kluwer Academic Publishers, Dordrecht, The Netherlands, 2000.
[7] E. de Klerk, Aspects of Semidefinite Programming: Interior Point Algorithms and Selected Applications, Kluwer Academic Publishers, Dordrecht, The Netherlands, 2002.
[8] J. Czyzyk, S. Mehrotra, M. Wagner, et al., PCx: An interior-point code for linear programming, Optimization Methods and Software, 1999, 11/12.
[9] Y. Zhang, Solving large-scale linear programs by interior point methods under the MATLAB environment, Optimization Methods and Software, 1999, 10.
[10] CPLEX: ILOG Optimization.
[11] M. Salahi, J. Peng, and T. Terlaky, On Mehrotra-type predictor-corrector algorithms, Technical Report 2005/4, Advanced Optimization Lab., Department of Computing and Software, McMaster University, Hamilton, Ontario, Canada.
[12] M. Salahi and N. M. Amiri, Polynomial time second order Mehrotra-type predictor-corrector algorithms, Applied Mathematics and Computation, 2006, 183.
[13] M. H. Koulaei and T. Terlaky, On the extension of a Mehrotra-type algorithm for semidefinite optimization, Technical Report 2007/4, Advanced Optimization Lab., Department of Computing and Software, McMaster University, Hamilton, Ontario, Canada.
[14] R. D. C. Monteiro and Y.
Zhang, A unified analysis for a class of long-step primal-dual path-following interior-point algorithms for semidefinite programming, Mathematical Programming, 1998, 81.
[15] R. A. Horn and C. R. Johnson, Matrix Analysis, Cambridge University Press, UK.
[16] Y. Zhang, On extending some primal-dual interior-point algorithms from linear programming to semidefinite programming, SIAM Journal on Optimization, 1998, 8.
[17] F. Alizadeh, J. A. Haeberly, and M. L. Overton, Primal-dual interior-point methods for semidefinite programming: Convergence rates, stability and numerical results, SIAM Journal on Optimization, 1998, 8.
[18] C. Helmberg, F. Rendl, R. J. Vanderbei, et al., An interior-point method for semidefinite programming, SIAM Journal on Optimization, 1996, 6.
[19] M. Kojima, S. Shindoh, and S. Hara, Interior point methods for the monotone semidefinite linear complementarity problem in symmetric matrices, SIAM Journal on Optimization, 1997, 7.
[20] R. D. C. Monteiro, Primal-dual path-following algorithms for semidefinite programming, SIAM Journal on Optimization, 1997, 7.
[21] Y. E. Nesterov and M. J. Todd, Self-scaled barriers and interior-point methods for convex programming, Mathematics of Operations Research, 1997, 22: 1–42.

Appendix

The following results, introduced in [14], are used during the analysis.

Lemma A.1  Let $\lambda_1$ be the smallest eigenvalue of the matrix $\hat X\hat S$. Then for any $P\in$
$\mathcal{P}(X,S)$ one has

$$\rho\big((\hat F\hat E)^{-1}\big)=\frac{1}{4\lambda_1}.$$

Lemma A.2  Let $u,v,r\in\mathbb{R}^{n^2}$ and $E,F\in\mathbb{R}^{n^2\times n^2}$ satisfy $Eu+Fv=r$. If $FE^T\in\mathcal{S}^{n^2}_{++}$, then

$$\big\|(FE^T)^{-1/2}Eu\big\|^2+\big\|(FE^T)^{-1/2}Fv\big\|^2+2u^Tv=\big\|(FE^T)^{-1/2}r\big\|^2.$$

Lemma A.3  For any $u,v\in\mathbb{R}^{n^2}$ and $G\in\mathcal{S}^{n^2}_{++}$, we have

$$\|u\|\,\|v\|\le\sqrt{\mathrm{cond}(G)}\,\big\|G^{-1/2}u\big\|\,\big\|G^{1/2}v\big\|\le\frac{\sqrt{\mathrm{cond}(G)}}{2}\Big(\big\|G^{-1/2}u\big\|^2+\big\|G^{1/2}v\big\|^2\Big).$$

Let the spectrum of $\hat X\hat S$ be $\{\lambda_i: i=1,2,\cdots,n\}$. Then the following lemma holds.

Lemma A.4  Suppose that $(X,y,S)\in\mathcal{S}^n_{++}\times\mathbb{R}^m\times\mathcal{S}^n_{++}$, $P\in\mathcal{S}^n_{++}$, and $Q\in\mathcal{P}(X,S)$. Then

$$\lambda_{\min}[H_P(XS)]\le\lambda_{\min}[XS]=\lambda_{\min}[H_Q(XS)].$$

Lemma A.5  Let $P\in\mathcal{P}(X,S)$ be given. Then

$$\big\|(\hat F\hat E)^{-1/2}\mathrm{vec}\big(\sigma\mu I-H(\hat X\hat S)\big)\big\|^2\le\Big(1-2\sigma+\frac{\sigma^2}{\gamma}\Big)n\mu_g.$$

Lemma A.6  Let $(X,y,S)\in\mathcal{N}_{-\infty}(\gamma)$ and $P\in\mathcal{P}(X,S)$ be given, and define $G=\hat E^{-1}\hat F$. Then the Newton step corresponding to system (6) satisfies

$$\|H_P(\Delta X\Delta S)\|_F\le\frac{\sqrt{\mathrm{cond}(G)}}{2}\Big(1-2\sigma+\frac{\sigma^2}{\gamma}\Big)n\mu_g.$$

Corollary A.7  If we set $\sigma=0$ in Lemma A.6, then the search direction in the predictor step satisfies

$$\|H_P(\Delta X^a\Delta S^a)\|_F\le\frac{\sqrt{\mathrm{cond}(G)}}{2}\,n\mu_g.$$
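The Kronecker-product reformulation (14)–(15) and Lemma A.4 can both be checked numerically. The sketch below is our own illustration (helper names are hypothetical), with $\hat E$, $\hat F$ taken as $\frac12(\hat S\otimes I+I\otimes\hat S)$ and $\frac12(\hat X\otimes I+I\otimes\hat X)$ as in Section 3. It verifies the identity $\hat E\,\mathrm{vec}(\Delta\hat X)+\hat F\,\mathrm{vec}(\Delta\hat S)=\mathrm{vec}(H(\hat X\Delta\hat S+\Delta\hat X\hat S))$ for random symmetric directions, and the eigenvalue comparison $\lambda_{\min}[H_P(XS)]\le\lambda_{\min}[XS]=\lambda_{\min}[H_Q(XS)]$ with $P=I$ and $Q=S^{1/2}\in\mathcal{P}(X,S)$.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 3

def random_spd(k):
    """A well-conditioned random symmetric positive definite matrix."""
    M = rng.standard_normal((k, k))
    return M @ M.T + k * np.eye(k)

def sym(M):
    """Plain symmetrization H_I(M) = (M + M^T) / 2."""
    return (M + M.T) / 2

def sqrtm_spd(M):
    w, V = np.linalg.eigh(M)
    return V @ np.diag(np.sqrt(w)) @ V.T

def vec(M):
    """Column-stacking vec operator."""
    return M.flatten(order="F")

X, S = random_spd(n), random_spd(n)
Q = sqrtm_spd(S)                        # Q = S^{1/2} is in P(X, S)
Xhat = Q @ X @ Q
Shat = np.linalg.inv(Q) @ S @ np.linalg.inv(Q)

I = np.eye(n)
E = 0.5 * (np.kron(Shat, I) + np.kron(I, Shat))
F = 0.5 * (np.kron(Xhat, I) + np.kron(I, Xhat))

dX = sym(rng.standard_normal((n, n)))   # arbitrary symmetric directions
dS = sym(rng.standard_normal((n, n)))
lhs = E @ vec(dX) + F @ vec(dS)
rhs = vec(sym(Xhat @ dS + dX @ Shat))   # vec of H(X^ dS^ + dX^ S^)

lam_XS = np.linalg.eigvalsh(Q @ X @ Q.T).min()          # spectrum of XS
lam_HI = np.linalg.eigvalsh(sym(X @ S)).min()           # P = I
lam_HQ = np.linalg.eigvalsh(sym(Q @ X @ S @ np.linalg.inv(Q))).min()
print(np.allclose(lhs, rhs), lam_HI <= lam_XS + 1e-9, np.isclose(lam_HQ, lam_XS))
```

The eigenvalue check illustrates why the neighborhood condition can be monitored equally well through any $Q\in\mathcal{P}(X,S)$, while the plain symmetrization $P=I$ only gives a lower estimate.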
More informationA full-newton step infeasible interior-point algorithm for linear complementarity problems based on a kernel function
Algorithmic Operations Research Vol7 03) 03 0 A full-newton step infeasible interior-point algorithm for linear complementarity problems based on a kernel function B Kheirfam a a Department of Mathematics,
More informationA full-newton step infeasible interior-point algorithm for linear programming based on a kernel function
A full-newton step infeasible interior-point algorithm for linear programming based on a kernel function Zhongyi Liu, Wenyu Sun Abstract This paper proposes an infeasible interior-point algorithm with
More information1 Outline Part I: Linear Programming (LP) Interior-Point Approach 1. Simplex Approach Comparison Part II: Semidenite Programming (SDP) Concludin
Sensitivity Analysis in LP and SDP Using Interior-Point Methods E. Alper Yldrm School of Operations Research and Industrial Engineering Cornell University Ithaca, NY joint with Michael J. Todd INFORMS
More informationNonsymmetric potential-reduction methods for general cones
CORE DISCUSSION PAPER 2006/34 Nonsymmetric potential-reduction methods for general cones Yu. Nesterov March 28, 2006 Abstract In this paper we propose two new nonsymmetric primal-dual potential-reduction
More informationPrimal-dual path-following algorithms for circular programming
Primal-dual path-following algorithms for circular programming Baha Alzalg Department of Mathematics, The University of Jordan, Amman 1194, Jordan July, 015 Abstract Circular programming problems are a
More informationA Second Full-Newton Step O(n) Infeasible Interior-Point Algorithm for Linear Optimization
A Second Full-Newton Step On Infeasible Interior-Point Algorithm for Linear Optimization H. Mansouri C. Roos August 1, 005 July 1, 005 Department of Electrical Engineering, Mathematics and Computer Science,
More informationSecond-order cone programming
Outline Second-order cone programming, PhD Lehigh University Department of Industrial and Systems Engineering February 10, 2009 Outline 1 Basic properties Spectral decomposition The cone of squares The
More informationDEPARTMENT OF MATHEMATICS
A ISRN KTH/OPT SYST/FR 02/12 SE Coden: TRITA/MAT-02-OS12 ISSN 1401-2294 Characterization of the limit point of the central path in semidefinite programming by Göran Sporre and Anders Forsgren Optimization
More informationA Full Newton Step Infeasible Interior Point Algorithm for Linear Optimization
A Full Newton Step Infeasible Interior Point Algorithm for Linear Optimization Kees Roos e-mail: C.Roos@tudelft.nl URL: http://www.isa.ewi.tudelft.nl/ roos 37th Annual Iranian Mathematics Conference Tabriz,
More informationAN EQUIVALENCY CONDITION OF NONSINGULARITY IN NONLINEAR SEMIDEFINITE PROGRAMMING
J Syst Sci Complex (2010) 23: 822 829 AN EQUVALENCY CONDTON OF NONSNGULARTY N NONLNEAR SEMDEFNTE PROGRAMMNG Chengjin L Wenyu SUN Raimundo J. B. de SAMPAO DO: 10.1007/s11424-010-8057-1 Received: 2 February
More informationA priori bounds on the condition numbers in interior-point methods
A priori bounds on the condition numbers in interior-point methods Florian Jarre, Mathematisches Institut, Heinrich-Heine Universität Düsseldorf, Germany. Abstract Interior-point methods are known to be
More informationInterior Point Methods in Mathematical Programming
Interior Point Methods in Mathematical Programming Clóvis C. Gonzaga Federal University of Santa Catarina, Brazil Journées en l honneur de Pierre Huard Paris, novembre 2008 01 00 11 00 000 000 000 000
More information2.1. Jordan algebras. In this subsection, we introduce Jordan algebras as well as some of their basic properties.
FULL NESTEROV-TODD STEP INTERIOR-POINT METHODS FOR SYMMETRIC OPTIMIZATION G. GU, M. ZANGIABADI, AND C. ROOS Abstract. Some Jordan algebras were proved more than a decade ago to be an indispensable tool
More informationInterior Point Methods for Mathematical Programming
Interior Point Methods for Mathematical Programming Clóvis C. Gonzaga Federal University of Santa Catarina, Florianópolis, Brazil EURO - 2013 Roma Our heroes Cauchy Newton Lagrange Early results Unconstrained
More informationLecture: Algorithms for LP, SOCP and SDP
1/53 Lecture: Algorithms for LP, SOCP and SDP Zaiwen Wen Beijing International Center For Mathematical Research Peking University http://bicmr.pku.edu.cn/~wenzw/bigdata2018.html wenzw@pku.edu.cn Acknowledgement:
More informationLargest dual ellipsoids inscribed in dual cones
Largest dual ellipsoids inscribed in dual cones M. J. Todd June 23, 2005 Abstract Suppose x and s lie in the interiors of a cone K and its dual K respectively. We seek dual ellipsoidal norms such that
More informationApproximate Farkas Lemmas in Convex Optimization
Approximate Farkas Lemmas in Convex Optimization Imre McMaster University Advanced Optimization Lab AdvOL Graduate Student Seminar October 25, 2004 1 Exact Farkas Lemma Motivation 2 3 Future plans The
More information1 Introduction Semidenite programming (SDP) has been an active research area following the seminal work of Nesterov and Nemirovski [9] see also Alizad
Quadratic Maximization and Semidenite Relaxation Shuzhong Zhang Econometric Institute Erasmus University P.O. Box 1738 3000 DR Rotterdam The Netherlands email: zhang@few.eur.nl fax: +31-10-408916 August,
More informationCCO Commun. Comb. Optim.
Communications in Combinatorics and Optimization Vol. 3 No., 08 pp.5-70 DOI: 0.049/CCO.08.580.038 CCO Commun. Comb. Optim. An infeasible interior-point method for the P -matrix linear complementarity problem
More informationA new primal-dual path-following method for convex quadratic programming
Volume 5, N., pp. 97 0, 006 Copyright 006 SBMAC ISSN 00-805 www.scielo.br/cam A new primal-dual path-following method for convex quadratic programming MOHAMED ACHACHE Département de Mathématiques, Faculté
More informationIdentifying Redundant Linear Constraints in Systems of Linear Matrix. Inequality Constraints. Shafiu Jibrin
Identifying Redundant Linear Constraints in Systems of Linear Matrix Inequality Constraints Shafiu Jibrin (shafiu.jibrin@nau.edu) Department of Mathematics and Statistics Northern Arizona University, Flagstaff
More informationA tight iteration-complexity upper bound for the MTY predictor-corrector algorithm via redundant Klee-Minty cubes
A tight iteration-complexity upper bound for the MTY predictor-corrector algorithm via redundant Klee-Minty cubes Murat Mut Tamás Terlaky Department of Industrial and Systems Engineering Lehigh University
More informationSemidefinite Programming, Combinatorial Optimization and Real Algebraic Geometry
Semidefinite Programming, Combinatorial Optimization and Real Algebraic Geometry assoc. prof., Ph.D. 1 1 UNM - Faculty of information studies Edinburgh, 16. September 2014 Outline Introduction Definition
More informationOn the Sandwich Theorem and a approximation algorithm for MAX CUT
On the Sandwich Theorem and a 0.878-approximation algorithm for MAX CUT Kees Roos Technische Universiteit Delft Faculteit Electrotechniek. Wiskunde en Informatica e-mail: C.Roos@its.tudelft.nl URL: http://ssor.twi.tudelft.nl/
More information15. Conic optimization
L. Vandenberghe EE236C (Spring 216) 15. Conic optimization conic linear program examples modeling duality 15-1 Generalized (conic) inequalities Conic inequality: a constraint x K where K is a convex cone
More informationInterior Point Methods: Second-Order Cone Programming and Semidefinite Programming
School of Mathematics T H E U N I V E R S I T Y O H F E D I N B U R G Interior Point Methods: Second-Order Cone Programming and Semidefinite Programming Jacek Gondzio Email: J.Gondzio@ed.ac.uk URL: http://www.maths.ed.ac.uk/~gondzio
More informationInterior-Point Methods
Interior-Point Methods Stephen Wright University of Wisconsin-Madison Simons, Berkeley, August, 2017 Wright (UW-Madison) Interior-Point Methods August 2017 1 / 48 Outline Introduction: Problems and Fundamentals
More informationLimiting behavior of the central path in semidefinite optimization
Limiting behavior of the central path in semidefinite optimization M. Halická E. de Klerk C. Roos June 11, 2002 Abstract It was recently shown in [4] that, unlike in linear optimization, the central path
More informationA PRIMAL-DUAL INTERIOR POINT ALGORITHM FOR CONVEX QUADRATIC PROGRAMS. 1. Introduction Consider the quadratic program (PQ) in standard format:
STUDIA UNIV. BABEŞ BOLYAI, INFORMATICA, Volume LVII, Number 1, 01 A PRIMAL-DUAL INTERIOR POINT ALGORITHM FOR CONVEX QUADRATIC PROGRAMS MOHAMED ACHACHE AND MOUFIDA GOUTALI Abstract. In this paper, we propose
More informationPOLYNOMIAL OPTIMIZATION WITH SUMS-OF-SQUARES INTERPOLANTS
POLYNOMIAL OPTIMIZATION WITH SUMS-OF-SQUARES INTERPOLANTS Sercan Yıldız syildiz@samsi.info in collaboration with Dávid Papp (NCSU) OPT Transition Workshop May 02, 2017 OUTLINE Polynomial optimization and
More informationFull Newton step polynomial time methods for LO based on locally self concordant barrier functions
Full Newton step polynomial time methods for LO based on locally self concordant barrier functions (work in progress) Kees Roos and Hossein Mansouri e-mail: [C.Roos,H.Mansouri]@ewi.tudelft.nl URL: http://www.isa.ewi.tudelft.nl/
More informationAn Infeasible Interior Point Method for the Monotone Linear Complementarity Problem
Int. Journal of Math. Analysis, Vol. 1, 2007, no. 17, 841-849 An Infeasible Interior Point Method for the Monotone Linear Complementarity Problem Z. Kebbiche 1 and A. Keraghel Department of Mathematics,
More informationA NEW PROXIMITY FUNCTION GENERATING THE BEST KNOWN ITERATION BOUNDS FOR BOTH LARGE-UPDATE AND SMALL-UPDATE INTERIOR-POINT METHODS
ANZIAM J. 49(007), 59 70 A NEW PROXIMITY FUNCTION GENERATING THE BEST KNOWN ITERATION BOUNDS FOR BOTH LARGE-UPDATE AND SMALL-UPDATE INTERIOR-POINT METHODS KEYVAN AMINI and ARASH HASELI (Received 6 December,
More informationResearch Article Solving the Matrix Nearness Problem in the Maximum Norm by Applying a Projection and Contraction Method
Advances in Operations Research Volume 01, Article ID 357954, 15 pages doi:10.1155/01/357954 Research Article Solving the Matrix Nearness Problem in the Maximum Norm by Applying a Projection and Contraction
More informationIntroduction to Semidefinite Programs
Introduction to Semidefinite Programs Masakazu Kojima, Tokyo Institute of Technology Semidefinite Programming and Its Application January, 2006 Institute for Mathematical Sciences National University of
More informationThe Simplest Semidefinite Programs are Trivial
The Simplest Semidefinite Programs are Trivial Robert J. Vanderbei Bing Yang Program in Statistics & Operations Research Princeton University Princeton, NJ 08544 January 10, 1994 Technical Report SOR-93-12
More informationUsing Schur Complement Theorem to prove convexity of some SOC-functions
Journal of Nonlinear and Convex Analysis, vol. 13, no. 3, pp. 41-431, 01 Using Schur Complement Theorem to prove convexity of some SOC-functions Jein-Shan Chen 1 Department of Mathematics National Taiwan
More informationA CONIC DANTZIG-WOLFE DECOMPOSITION APPROACH FOR LARGE SCALE SEMIDEFINITE PROGRAMMING
A CONIC DANTZIG-WOLFE DECOMPOSITION APPROACH FOR LARGE SCALE SEMIDEFINITE PROGRAMMING Kartik Krishnan Advanced Optimization Laboratory McMaster University Joint work with Gema Plaza Martinez and Tamás
More informationConvergence Analysis of Inexact Infeasible Interior Point Method. for Linear Optimization
Convergence Analysis of Inexact Infeasible Interior Point Method for Linear Optimization Ghussoun Al-Jeiroudi Jacek Gondzio School of Mathematics The University of Edinburgh Mayfield Road, Edinburgh EH9
More informationA Simpler and Tighter Redundant Klee-Minty Construction
A Simpler and Tighter Redundant Klee-Minty Construction Eissa Nematollahi Tamás Terlaky October 19, 2006 Abstract By introducing redundant Klee-Minty examples, we have previously shown that the central
More informationSemidefinite Programming
Chapter 2 Semidefinite Programming 2.0.1 Semi-definite programming (SDP) Given C M n, A i M n, i = 1, 2,..., m, and b R m, the semi-definite programming problem is to find a matrix X M n for the optimization
More informationLecture 1. 1 Conic programming. MA 796S: Convex Optimization and Interior Point Methods October 8, Consider the conic program. min.
MA 796S: Convex Optimization and Interior Point Methods October 8, 2007 Lecture 1 Lecturer: Kartik Sivaramakrishnan Scribe: Kartik Sivaramakrishnan 1 Conic programming Consider the conic program min s.t.
More informationIMPLEMENTATION OF INTERIOR POINT METHODS
IMPLEMENTATION OF INTERIOR POINT METHODS IMPLEMENTATION OF INTERIOR POINT METHODS FOR SECOND ORDER CONIC OPTIMIZATION By Bixiang Wang, Ph.D. A Thesis Submitted to the School of Graduate Studies in Partial
More informationInterior Point Methods for Linear Programming: Motivation & Theory
School of Mathematics T H E U N I V E R S I T Y O H F E D I N B U R G Interior Point Methods for Linear Programming: Motivation & Theory Jacek Gondzio Email: J.Gondzio@ed.ac.uk URL: http://www.maths.ed.ac.uk/~gondzio
More informationPRIMAL-DUAL AFFINE-SCALING ALGORITHMS FAIL FOR SEMIDEFINITE PROGRAMMING
PRIMAL-DUAL AFFINE-SCALING ALGORITHMS FAIL FOR SEMIDEFINITE PROGRAMMING MASAKAZU MURAMATSU AND ROBERT J. VANDERBEI ABSTRACT. In this paper, we give an example of a semidefinite programming problem in which
More informationLecture 17: Primal-dual interior-point methods part II
10-725/36-725: Convex Optimization Spring 2015 Lecture 17: Primal-dual interior-point methods part II Lecturer: Javier Pena Scribes: Pinchao Zhang, Wei Ma Note: LaTeX template courtesy of UC Berkeley EECS
More informationA Generalized Homogeneous and Self-Dual Algorithm. for Linear Programming. February 1994 (revised December 1994)
A Generalized Homogeneous and Self-Dual Algorithm for Linear Programming Xiaojie Xu Yinyu Ye y February 994 (revised December 994) Abstract: A generalized homogeneous and self-dual (HSD) infeasible-interior-point
More informationA new Primal-Dual Interior-Point Algorithm for Second-Order Cone Optimization
A new Primal-Dual Interior-Point Algorithm for Second-Order Cone Optimization Y Q Bai G Q Wang C Roos November 4, 004 Department of Mathematics, College Science, Shanghai University, Shanghai, 00436 Faculty
More informationLecture: Introduction to LP, SDP and SOCP
Lecture: Introduction to LP, SDP and SOCP Zaiwen Wen Beijing International Center For Mathematical Research Peking University http://bicmr.pku.edu.cn/~wenzw/bigdata2015.html wenzw@pku.edu.cn Acknowledgement:
More informationImproved Full-Newton Step O(nL) Infeasible Interior-Point Method for Linear Optimization
J Optim Theory Appl 2010) 145: 271 288 DOI 10.1007/s10957-009-9634-0 Improved Full-Newton Step OnL) Infeasible Interior-Point Method for Linear Optimization G. Gu H. Mansouri M. Zangiabadi Y.Q. Bai C.
More informationLecture Note 5: Semidefinite Programming for Stability Analysis
ECE7850: Hybrid Systems:Theory and Applications Lecture Note 5: Semidefinite Programming for Stability Analysis Wei Zhang Assistant Professor Department of Electrical and Computer Engineering Ohio State
More informationOptimization: Then and Now
Optimization: Then and Now Optimization: Then and Now Optimization: Then and Now Why would a dynamicist be interested in linear programming? Linear Programming (LP) max c T x s.t. Ax b αi T x b i for i
More informationPrimal-Dual Interior-Point Methods. Javier Peña Convex Optimization /36-725
Primal-Dual Interior-Point Methods Javier Peña Convex Optimization 10-725/36-725 Last time: duality revisited Consider the problem min x subject to f(x) Ax = b h(x) 0 Lagrangian L(x, u, v) = f(x) + u T
More informationarxiv: v1 [math.oc] 26 Sep 2015
arxiv:1509.08021v1 [math.oc] 26 Sep 2015 Degeneracy in Maximal Clique Decomposition for Semidefinite Programs Arvind U. Raghunathan and Andrew V. Knyazev Mitsubishi Electric Research Laboratories 201 Broadway,
More informationOn Generalized Primal-Dual Interior-Point Methods with Non-uniform Complementarity Perturbations for Quadratic Programming
On Generalized Primal-Dual Interior-Point Methods with Non-uniform Complementarity Perturbations for Quadratic Programming Altuğ Bitlislioğlu and Colin N. Jones Abstract This technical note discusses convergence
More informationA Redundant Klee-Minty Construction with All the Redundant Constraints Touching the Feasible Region
A Redundant Klee-Minty Construction with All the Redundant Constraints Touching the Feasible Region Eissa Nematollahi Tamás Terlaky January 5, 2008 Abstract By introducing some redundant Klee-Minty constructions,
More informationNew Interior Point Algorithms in Linear Programming
AMO - Advanced Modeling and Optimization, Volume 5, Number 1, 2003 New Interior Point Algorithms in Linear Programming Zsolt Darvay Abstract In this paper the abstract of the thesis New Interior Point
More informationLecture 6: Conic Optimization September 8
IE 598: Big Data Optimization Fall 2016 Lecture 6: Conic Optimization September 8 Lecturer: Niao He Scriber: Juan Xu Overview In this lecture, we finish up our previous discussion on optimality conditions
More informationA Simple Derivation of a Facial Reduction Algorithm and Extended Dual Systems
A Simple Derivation of a Facial Reduction Algorithm and Extended Dual Systems Gábor Pataki gabor@unc.edu Dept. of Statistics and OR University of North Carolina at Chapel Hill Abstract The Facial Reduction
More informationOn Mehrotra-Type Predictor-Corrector Algorithms
On Mehrotra-Type Predictor-Corrector Algorithms M. Salahi, J. Peng, T. Terlaky October 10, 006 (Revised) Abstract In this paper we discuss the polynomiality of a feasible version of Mehrotra s predictor-corrector
More informationInterval solutions for interval algebraic equations
Mathematics and Computers in Simulation 66 (2004) 207 217 Interval solutions for interval algebraic equations B.T. Polyak, S.A. Nazin Institute of Control Sciences, Russian Academy of Sciences, 65 Profsoyuznaya
More informationAgenda. Interior Point Methods. 1 Barrier functions. 2 Analytic center. 3 Central path. 4 Barrier method. 5 Primal-dual path following algorithms
Agenda Interior Point Methods 1 Barrier functions 2 Analytic center 3 Central path 4 Barrier method 5 Primal-dual path following algorithms 6 Nesterov Todd scaling 7 Complexity analysis Interior point
More informationCONVEX OPTIMIZATION OVER POSITIVE POLYNOMIALS AND FILTER DESIGN. Y. Genin, Y. Hachez, Yu. Nesterov, P. Van Dooren
CONVEX OPTIMIZATION OVER POSITIVE POLYNOMIALS AND FILTER DESIGN Y. Genin, Y. Hachez, Yu. Nesterov, P. Van Dooren CESAME, Université catholique de Louvain Bâtiment Euler, Avenue G. Lemaître 4-6 B-1348 Louvain-la-Neuve,
More informationExploiting Sparsity in Primal-Dual Interior-Point Methods for Semidefinite Programming.
Research Reports on Mathematical and Computing Sciences Series B : Operations Research Department of Mathematical and Computing Sciences Tokyo Institute of Technology 2-12-1 Oh-Okayama, Meguro-ku, Tokyo
More informationLecture 5. The Dual Cone and Dual Problem
IE 8534 1 Lecture 5. The Dual Cone and Dual Problem IE 8534 2 For a convex cone K, its dual cone is defined as K = {y x, y 0, x K}. The inner-product can be replaced by x T y if the coordinates of the
More informationPrimal-dual relationship between Levenberg-Marquardt and central trajectories for linearly constrained convex optimization
Primal-dual relationship between Levenberg-Marquardt and central trajectories for linearly constrained convex optimization Roger Behling a, Clovis Gonzaga b and Gabriel Haeser c March 21, 2013 a Department
More informationIMPLEMENTING THE NEW SELF-REGULAR PROXIMITY BASED IPMS
IMPLEMENTING THE NEW SELF-REGULAR PROXIMITY BASED IPMS IMPLEMENTING THE NEW SELF-REGULAR PROXIMITY BASED IPMS By Xiaohang Zhu A thesis submitted to the School of Graduate Studies in Partial Fulfillment
More informationAn Interior-Point Method for Approximate Positive Semidefinite Completions*
Computational Optimization and Applications 9, 175 190 (1998) c 1998 Kluwer Academic Publishers. Manufactured in The Netherlands. An Interior-Point Method for Approximate Positive Semidefinite Completions*
More informationDeterministic Methods for Detecting Redundant Linear. Constraints in Semidefinite Programming
Deterministic Methods for Detecting Redundant Linear Constraints in Semidefinite Programming Daniel Stover Department of Mathematics and Statistics Northen Arizona University,Flagstaff, AZ 86001. July
More informationPrimal-dual IPM with Asymmetric Barrier
Primal-dual IPM with Asymmetric Barrier Yurii Nesterov, CORE/INMA (UCL) September 29, 2008 (IFOR, ETHZ) Yu. Nesterov Primal-dual IPM with Asymmetric Barrier 1/28 Outline 1 Symmetric and asymmetric barriers
More informationOn Superlinear Convergence of Infeasible Interior-Point Algorithms for Linearly Constrained Convex Programs *
Computational Optimization and Applications, 8, 245 262 (1997) c 1997 Kluwer Academic Publishers. Manufactured in The Netherlands. On Superlinear Convergence of Infeasible Interior-Point Algorithms for
More informationRobust Farkas Lemma for Uncertain Linear Systems with Applications
Robust Farkas Lemma for Uncertain Linear Systems with Applications V. Jeyakumar and G. Li Revised Version: July 8, 2010 Abstract We present a robust Farkas lemma, which provides a new generalization of
More informationInterior Point Methods. We ll discuss linear programming first, followed by three nonlinear problems. Algorithms for Linear Programming Problems
AMSC 607 / CMSC 764 Advanced Numerical Optimization Fall 2008 UNIT 3: Constrained Optimization PART 4: Introduction to Interior Point Methods Dianne P. O Leary c 2008 Interior Point Methods We ll discuss
More informationStrong duality in Lasserre s hierarchy for polynomial optimization
Strong duality in Lasserre s hierarchy for polynomial optimization arxiv:1405.7334v1 [math.oc] 28 May 2014 Cédric Josz 1,2, Didier Henrion 3,4,5 Draft of January 24, 2018 Abstract A polynomial optimization
More informationReal Symmetric Matrices and Semidefinite Programming
Real Symmetric Matrices and Semidefinite Programming Tatsiana Maskalevich Abstract Symmetric real matrices attain an important property stating that all their eigenvalues are real. This gives rise to many
More information18. Primal-dual interior-point methods
L. Vandenberghe EE236C (Spring 213-14) 18. Primal-dual interior-point methods primal-dual central path equations infeasible primal-dual method primal-dual method for self-dual embedding 18-1 Symmetric
More informationOn implementing a primal-dual interior-point method for conic quadratic optimization
On implementing a primal-dual interior-point method for conic quadratic optimization E. D. Andersen, C. Roos, and T. Terlaky December 18, 2000 Abstract Conic quadratic optimization is the problem of minimizing
More informationAdvances in Convex Optimization: Theory, Algorithms, and Applications
Advances in Convex Optimization: Theory, Algorithms, and Applications Stephen Boyd Electrical Engineering Department Stanford University (joint work with Lieven Vandenberghe, UCLA) ISIT 02 ISIT 02 Lausanne
More information