A Polynomial Time Constraint-Reduced Algorithm for Semidefinite Optimization Problems, with Convergence Proofs.

Sungwoo Park and Dianne P. O'Leary

February 18, 2015

Abstract

We present an infeasible primal-dual interior point method for semidefinite optimization problems, making use of constraint reduction. We show that the algorithm is globally convergent and has polynomial complexity, the first such complexity result for primal-dual constraint-reduction algorithms for any class of problems. Our algorithm is a modification of one with no constraint reduction due to Potra and Sheng (1998) and can be applied whenever the data matrices are block diagonal. It thus solves as special cases any optimization problem that is a linear, convex quadratic, convex quadratically constrained, or second-order cone problem.

Keywords: semidefinite programming, semidefinite optimization, interior point methods, constraint reduction, primal-dual interior point method, primal-dual infeasible, polynomial complexity, linear programming, linear optimization, quadratic programming, quadratic optimization, second-order cone optimization, second-order cone programming.

AMS Classification: 90C22, 65K05, 90C51

This work was supported by the US Department of Energy under grants DESC and DESC. We are very grateful to André Tits for careful reading of the manuscript, many suggestions, and insightful comments that helped shape the choice of active and inactive blocks, and to Florian Potra for helpful discussions.

Author addresses: KCG Holdings, 545 Washington Blvd, Jersey City, NJ 07310, USA, swpark81@gmail.com; Computer Science Department and Institute for Advanced Computer Studies, University of Maryland, College Park, MD 20742, USA, oleary@cs.umd.edu.

1 Introduction

In this work, we propose a new infeasible primal-dual predictor-corrector interior point method (IPM) with adaptive criteria for constraint reduction. The algorithm is a modification of one with no constraint reduction, due to Potra and Sheng [34]. Our algorithm can be applied when the data matrices are block

diagonal. We verify its validity by proving global convergence, and we also prove its polynomial complexity for a given convergence tolerance and initial residual.

IPMs tend to require fewer iterations than active set methods like the simplex algorithm, but they require many more computations per iteration. Thus constraint reduction, which can substantially reduce the work per iteration, is an important tool in IPMs.

Prior work on constraint reduction begins with linear programming (LP). For example, Dantzig and Ye [5], Tone [39], Kaliski and Ye [20], and den Hertog et al. [13] applied different constraint reduction schemes to LP. More recently, Tits, Absil, and Woessner [37] introduced constraint-reduced LP versions of a primal-dual affine-scaling method (rPDAS) and of Mehrotra's predictor-corrector method (rMPC). In contrast to previous constraint reduction schemes, their method adaptively updates the working set without any backtracking. They proved global convergence and quadratic local convergence of rPDAS under a nondegeneracy assumption, but polynomial complexity was not proved. Later, Winternitz et al. [41] proved global convergence of an rMPC variant, relaxing the assumptions of [37].

Adaptive constraint reduction has also been applied to other optimization problems. Jung, O'Leary, and Tits [18] proposed constraint reduction for training support vector machines (SVM), and Williams [40] improved the efficiency of SVM training by applying a preconditioner. Jung, O'Leary, and Tits [19] developed a constraint-reduced affine-scaling method for convex quadratic programming (QP) and verified its global convergence and local quadratic convergence.

For semidefinite programming (SDP), many studies have focused on primal-dual IPMs. Applying Newton's method to the central path equation results in nonsymmetric directions, so different symmetrized search directions have been proposed. Helmberg et al. [12], Kojima et al. [23], and Monteiro [24] suggested the HKM direction. Alizadeh, Haeberly, and Overton [1] introduced the AHO direction, and Nesterov and Todd [27, 28] proposed the NT direction. Later, Monteiro and Zhang [24, 26, 42] noticed that these methods all have the form

$$\mathrm{symm}\big(P X Z P^{-1}\big) = \mu I,$$

where $P = Z^{1/2}$ for the HKM direction, $P = I$ for the AHO direction, and $P^T P = Z^{1/2}\big(Z^{1/2} X Z^{1/2}\big)^{-1/2} Z^{1/2}$ for the NT direction. This has become known as the Monteiro-Zhang (MZ) family of directions. Alizadeh et al. [2] also investigated various directions in a unified framework.

Convergence of primal-dual IPMs for SDP has also been intensely studied. Monteiro [25] developed a short-step path-following algorithm using an MZ search direction in a predictor-corrector algorithm, establishing polynomial convergence. Potra and Sheng [34] proposed a primal-dual infeasible predictor-corrector algorithm using $P = X^{-1/2}$ among the MZ family of search directions, and proved polynomial global convergence and local superlinear convergence. Later, Kojima et al. [21] relaxed one of the assumptions of [34] and suggested a modified algorithm with superlinear convergence, repeating corrector steps until

the iterate converges tangentially to the central path. Potra et al. [33] proved superlinear convergence of the modified algorithm without the nondegeneracy assumption needed by [21]. Kojima et al. [22] proposed a predictor-corrector algorithm using the AHO direction, with quadratic convergence under a nondegeneracy assumption but without repeating corrector steps. Inspired by the good centering effect of the AHO direction, Potra and Sheng [32] and Ji et al. [17] proposed superlinearly convergent algorithms using the HKM direction in the predictor step and an MZ direction with a bounded scaling matrix.

In this study, we extend constraint reduction to the primal-dual infeasible predictor-corrector method of Potra and Sheng [34] for block diagonal SDP problems. The most computationally intensive step in an IPM for SDP is the calculation of the Newton direction. By ignoring unnecessary constraints, we show how to reduce the computational load for constructing the Schur complement matrix that determines this direction, thus reducing the cost of each iteration. Many important classes of problems have block diagonal form: LP, QP, quadratically constrained quadratic programming (QCQP), second-order cone programming (SOCP), etc.; see, for example, [4]. From this point of view, our study generalizes [37, 41, 19]. Block diagonal SDPs also arise from relaxations of many important problems with discrete variables, including the maximum binary code problem [35], the traveling salesman problem [7], the kissing number problem [3], and the quadratic assignment problem [43].

Our work is motivated by numerical experiments [31, Chap. 4] in which we modified SDPT3 4.0, written by Toh, Todd, and Tütüncü [38] (the Matlab package is available online), to incorporate constraint reduction. Constraint reduction can be quite effective, saving up to 25% of the computational work on a binary code problem, but convergence was quite sensitive to the choice of the parameter controlling the reduction. Therefore, here, modifying and refining an algorithm in [31, Chap. 5], we develop rigorous criteria for reduction, guaranteeing convergence.

Our paper is structured as follows. In Section 2 we present our algorithm. We prove global convergence and polynomial complexity in Section 3. Section 4 summarizes results and open questions.

2 Interior Point Methods for SDP

We discuss how standard IPMs find an optimal solution of an SDP. For more details, see, for example, [6, 16]. We make use of the definitions in Table 1. The primal and dual SDP problems are as follows:

Primal SDP:
$$\min_X\ C \bullet X \quad \text{s.t.} \quad A_i \bullet X = b_i \ \text{ for } i = 1,\dots,m, \quad X \succeq 0, \qquad (1)$$

Dual SDP:
$$\max_{y,Z}\ b^T y \quad \text{s.t.} \quad \sum_{i=1}^m y_i A_i + Z = C, \quad Z \succeq 0, \qquad (2)$$

Table 1: Notation for the SDP.

  $S^n$ : the set of $n \times n$ symmetric matrices
  $S^n_+$ : the set of $n \times n$ symmetric positive semidefinite matrices
  $S^n_{++}$ : the set of $n \times n$ symmetric positive definite matrices
  $X \succ 0$ : a positive definite matrix
  $X \succeq 0$ : a positive semidefinite matrix
  $A \bullet B = \mathrm{tr}(AB^T)$ : the dot-product of matrices
  $\mu = X \bullet Z / n$ : the duality gap
  $x = \mathrm{vec}(X)$ : the vectorization of a given matrix $X$, a stack of the columns of $X^T$
  $\mathrm{mat}(x)$ : the inverse of $\mathrm{vec}(X)$
  $\mathrm{symm}(X) = \frac{1}{2}(X + X^T)$ : the symmetric part of $X$
  $G \otimes H$ : the Kronecker product of matrices $G$ and $H$
  $\|A\|$ : the 2-norm of a matrix $A$
  $\|A\|_F = \big(\sum_{ij} a_{ij}^2\big)^{1/2}$ : the Frobenius norm of a matrix $A$

where $C \in S^n$, $A_i \in S^n$, $X \in S^n$, and $Z \in S^n$. We focus on problems in which the matrices $A_i$ and $C$ are block diagonal:

$$A_i = \begin{pmatrix} A_{i1} & & 0 \\ & \ddots & \\ 0 & & A_{ip} \end{pmatrix}, \qquad C = \begin{pmatrix} C_1 & & 0 \\ & \ddots & \\ 0 & & C_p \end{pmatrix},$$

where $A_{ij}, C_j \in S^{n_j}$ for $i = 1,\dots,m$ and $j = 1,\dots,p$. By the block structure of $A_i$ and $C$, we can also partition the variables $X$ and $Z$ into blocks $X_j$ and $Z_j$. This is because nonzero elements outside of the diagonal blocks of $Z$ would violate the dual constraint of (2), and nonzero elements outside of the diagonal blocks of $X$ make no contribution to minimizing the primal objective value $C \bullet X$.

Throughout our work, we assume the Slater condition.

Assumption 2.1 (Slater condition). There exists a primal and dual feasible point $(X, y, Z)$ such that $X \succ 0$ and $Z \succ 0$.

Under Assumption 2.1, the primal and dual SDP problems have optimal solutions with equal optimal values; see, for example, de Klerk [6, Theorem 2.6, p. 33].
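To make the notation in Table 1 concrete, the following minimal NumPy sketch (our illustration, not part of the paper) implements vec, mat, symm, and the dot-product under the row-ordered vectorization convention, and checks the Kronecker identity used in Section 2.2.

```python
import numpy as np

def vec(X):
    # Row-ordered vectorization: stack the columns of X^T (the rows of X).
    return X.flatten(order='C')

def mat(x):
    # Inverse of vec for a square matrix.
    n = int(round(np.sqrt(x.size)))
    return x.reshape(n, n, order='C')

def symm(X):
    # Symmetric part: (X + X^T) / 2.
    return 0.5 * (X + X.T)

def dot(A, B):
    # Matrix dot-product A . B = tr(A B^T).
    return np.trace(A @ B.T)

# Under this convention, vec(K N L) = (K kron L^T) vec(N), the identity
# used to derive the Schur complement equation in Section 2.2.
rng = np.random.default_rng(1)
K, N, L = (rng.standard_normal((3, 3)) for _ in range(3))
assert np.allclose(vec(K @ N @ L), np.kron(K, L.T) @ vec(N))
```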

2.1 The HKM Newton Direction

Under Assumption 2.1, $(X, y, Z)$ is an optimal solution if and only if

$$A_i \bullet X = b_i \ \text{ for } i = 1,\dots,m, \qquad (3)$$
$$\sum_{i=1}^m y_i A_i + Z = C, \qquad (4)$$
$$XZ = 0, \qquad (5)$$
$$X \succeq 0, \quad Z \succeq 0. \qquad (6)$$

So, given a current iterate $(X, y, Z)$, we solve the following equations:

$$A_i \bullet \dot X = r_{p_i} \ \text{ for } i = 1,\dots,m, \qquad (7)$$
$$\sum_{i=1}^m \Delta y_i A_i + \dot Z = R_d, \qquad (8)$$
$$X \dot Z + \dot X Z = R_c, \qquad (9)$$
$$\Delta X = \mathrm{symm}(\dot X), \qquad (10)$$

where the primal, dual, and complementarity residuals are defined by

$$r_{p_i} := b_i - A_i \bullet X \ \text{ for } i = 1,\dots,m, \qquad (11)$$
$$R_d := C - Z - \sum_{i=1}^m y_i A_i, \qquad (12)$$
$$R_c := \mu I - XZ, \qquad (13)$$

and $\mu$ defines the current target point on the central path. The solution of the equations is called the HKM direction, named after Helmberg et al., Kojima et al., and Monteiro [12, 23, 24]. Kojima et al. [23, Theorem 4.2] proved that the equations above have a unique solution for $(X, Z) \in S^n_{++} \times S^n_{++}$.

Monteiro [24] showed that we can obtain the same direction without the symmetrization step (10) by solving

$$A_i \bullet \Delta X = r_{p_i} \ \text{ for } i = 1,\dots,m, \qquad (14)$$
$$\sum_{i=1}^m \Delta y_i A_i + \Delta Z = R_d, \qquad (15)$$
$$\mathrm{symm}\big(Z^{1/2}(X \Delta Z + \Delta X Z)Z^{-1/2}\big) = \mu I - Z^{1/2} X Z^{1/2}. \qquad (16)$$

Monteiro [24, Lemma 2.1ff] proved that the solution of (7)-(10) is the unique solution of (14)-(16), so we frequently refer to (16) in the convergence analysis later.

2.2 Constraint Reduction and Inactive Blocks

In the following equations, we make frequent use of identities involving vectorization and the Kronecker product, in particular,

$$\mathrm{vec}(KNL) = (K \otimes L^T)\,\mathrm{vec}(N), \qquad (K \otimes L)(N \otimes Q) = KN \otimes LQ.$$

Let $A \in \mathbb{R}^{m \times n^2}$ be the matrix with $i$-th row equal to $\mathrm{vec}(A_i)^T$, $i = 1,\dots,m$, and recall from Table 1 that we use a lowercase letter for the row-ordered vectorization of a given matrix. Using Gaussian elimination, equations (7)-(10) can be reduced to

$$M \Delta y = g,$$

where the Schur complement matrix $M = A(X \otimes Z^{-1})A^T$ and

$$g = r_p + A(X \otimes Z^{-1}) r_d - A(I \otimes Z^{-1}) r_c.$$

After solving the Schur complement equation, we compute

$$\dot z = r_d - A^T \Delta y, \qquad (17)$$
$$\dot x = (X \otimes Z^{-1})(A^T \Delta y - r_d) + (I \otimes Z^{-1}) r_c. \qquad (18)$$

Using the block structure of $A_i$ and $C$, the Schur complement matrix $M$ can be computed as

$$M = \sum_{j=1}^p M_j, \quad \text{where} \quad M_j := A_j (X_j \otimes Z_j^{-1}) A_j^T \quad \text{and} \quad A_j := \begin{pmatrix} \mathrm{vec}(A_{1j})^T \\ \vdots \\ \mathrm{vec}(A_{mj})^T \end{pmatrix} \in \mathbb{R}^{m \times n_j^2}.$$

Hence, each element $(M_j)_{lh}$ of $M_j$, where $1 \le l \le m$, $l \le h \le m$, $1 \le j \le p$, can be computed as

$$(M_j)_{lh} = (X_j A_{lj} Z_j^{-1}) \bullet A_{hj}. \qquad (19)$$

If the $A_{ij}$ are dense (refer to Fujisawa, Kojima, and Nakata [9] to see how to exploit sparsity in the $A_{ij}$), then the cost of computing the entire Schur complement matrix $M$, including the Cholesky factorization of each $Z_j$, is

$$\sum_{j=1}^p \Big( \big(4m + \tfrac{1}{3}\big) n_j^3 + 2m^2 n_j^2 \Big) \ \text{operations}. \qquad (20)$$

This is the most expensive computation in the algorithm. Our goal is to drop matrices $M_j$ that do not play important roles in $M$, reducing the computational cost.
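As an illustration of (19), the sketch below (our code; the block data and names are hypothetical) forms one block contribution $M_j$ from its data blocks $A_{1j},\dots,A_{mj}$ using a Cholesky factorization of $Z_j$, so that $Z_j^{-1}$ is never formed explicitly. Summing the returned matrices over $j$ gives $M$.

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def schur_block(A_j, Xj, Zj):
    """M_j with entries (M_j)_{lh} = (Xj A_lj Zj^{-1}) . A_hj, as in (19).

    A_j : list of the m symmetric data blocks A_{1j}, ..., A_{mj}.
    """
    m = len(A_j)
    cZ = cho_factor(Zj)                      # Cholesky factor of Z_j
    # W_l = Xj A_lj Zj^{-1}; since A_lj and Zj are symmetric,
    # A_lj Zj^{-1} = (Zj^{-1} A_lj)^T, applied without forming Zj^{-1}.
    W = [Xj @ cho_solve(cZ, A_lj).T for A_lj in A_j]
    Mj = np.empty((m, m))
    for l in range(m):
        for h in range(l, m):                # M_j is symmetric
            Mj[l, h] = Mj[h, l] = np.trace(W[l] @ A_j[h].T)
    return Mj
```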

We classify the blocks into active and inactive blocks and discuss why the latter can be dropped.

From the optimality condition (5), we see that $r_x + r_z \le n$, where $r_x$ and $r_z$ are the ranks of an optimal solution $X^*$ and $Z^*$. This implies that there may exist blocks $X^*_j$ and $Z^*_j$ such that $X^*_j = 0$ and $Z^*_j$ has full rank, so $Z^*_j \succ 0$ and $Z^*_j$ is in the interior of the semidefinite cone. For such a block, $X^*_j (Z^*_j)^{-1} = 0$. Given that the algorithm converges to an optimal solution, $X_j Z_j^{-1}$ will approach 0 as the iteration proceeds. In Section 2.3, we define thresholds on $\|X_j Z_j^{-1}\|_F$ allowing us to drop blocks in $M$ while guaranteeing convergence of the algorithm. At a given iteration, we say that blocks larger than the threshold are active and that the other blocks are inactive.

Without loss of generality, we assume that the first $\hat p$ blocks are active and the remaining $\tilde p$ blocks are inactive. We let $\hat A_i$ and $\tilde A_i$ denote the active and inactive blocks of $A_i$, so

$$\hat A_i = \begin{pmatrix} A_{i1} & & 0 \\ & \ddots & \\ 0 & & A_{i\hat p} \end{pmatrix}, \qquad \tilde A_i = \begin{pmatrix} A_{i,\hat p+1} & & 0 \\ & \ddots & \\ 0 & & A_{ip} \end{pmatrix},$$

where $\hat A_i \in \mathbb{R}^{\hat n \times \hat n}$, $\tilde A_i \in \mathbb{R}^{\tilde n \times \tilde n}$, and $n = \hat n + \tilde n$. Here $\hat n$ and $\tilde n$ denote the total sizes of the active and inactive blocks, so that

$$\hat n := \sum_{j=1}^{\hat p} n_j, \qquad \tilde n := \sum_{j=\hat p+1}^{p} n_j.$$

Block matrices $\hat X$, $\tilde X$, $\hat Z$, $\tilde Z$, $\hat R_d$, $\tilde R_d$, and $\hat R_c$, $\tilde R_c$ are defined similarly. We also define $\hat A \in \mathbb{R}^{m \times \hat n^2}$, with rows equal to $\mathrm{vec}(\hat A_i)^T$, and $\tilde A \in \mathbb{R}^{m \times \tilde n^2}$, with rows equal to $\mathrm{vec}(\tilde A_i)^T$, for $i = 1,\dots,m$. Then we can expand $M$ into active and inactive parts as

$$M = \hat M + \tilde M, \quad \text{where} \quad \hat M = \hat A(\hat X \otimes \hat Z^{-1})\hat A^T, \quad \tilde M = \tilde A(\tilde X \otimes \tilde Z^{-1})\tilde A^T.$$

If $\tilde X \otimes \tilde Z^{-1}$ is small, we expect that $\tilde M$ is also negligible and we can omit it when we solve the linear system.

Constraint reduction generates an extra term $\dot X_\epsilon$ in the primal direction, perturbing the complementarity equation. Due to this, a series of lemmas for global convergence by Potra and Sheng [34] need to be modified. The proposed adaptive criteria restrain the magnitude of $\dot X_\epsilon$ so that we can guarantee that we can take a step long enough to ensure convergence.

The constraint-reduced equation is

$$\hat M \Delta y = g, \qquad (21)$$

or equivalently,

$$\hat A(\hat X \otimes \hat Z^{-1})\hat A^T \Delta y = r_p + A(X \otimes Z^{-1}) r_d - A(I \otimes Z^{-1}) r_c. \qquad (22)$$
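The activity scoring used later in steps 3b and 3f of Algorithm SDP:Reduced can be sketched as follows (our code; the dynamic threshold machinery of Section 2.3 is abstracted away, so only the ordering is shown):

```python
import numpy as np

def activity_order(X_blocks, Z_blocks):
    # Score each block by ||X_j Z_j^{-1}||_F. Blocks approaching
    # complementarity (X_j -> 0 with Z_j well conditioned) score small
    # and are candidates for the inactive set.
    scores = [np.linalg.norm(np.linalg.solve(Zj, Xj).T, 'fro')
              for Xj, Zj in zip(X_blocks, Z_blocks)]
    order = sorted(range(len(scores)), key=lambda j: -scores[j])
    return order, scores
```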

We select $\hat X$ and $\hat Z$ so that the resulting $\hat M$ has full rank, which is always possible because the equation above has a unique solution if no reduction is done [23, Theorem 4.2]. We discuss checking the rank of $\hat M$ at the end of this section. After solving the equation above, we then compute $\dot x$ and $\dot z$ from

$$\dot x = \mathrm{vec}\begin{pmatrix} \dot{\hat X} & 0 \\ 0 & \dot{\tilde X} \end{pmatrix}, \qquad (23)$$
$$\dot z = r_d - A^T \Delta y, \qquad (24)$$

where

$$\dot{\hat X} := \mathrm{mat}\big( (\hat X \otimes \hat Z^{-1})\hat A^T \Delta y - (\hat X \otimes \hat Z^{-1})\hat r_d + (I \otimes \hat Z^{-1})\hat r_c \big), \qquad (25)$$
$$\dot{\tilde X} := \mathrm{mat}\big( -(\tilde X \otimes \tilde Z^{-1})\tilde r_d + (I \otimes \tilde Z^{-1})\tilde r_c \big). \qquad (26)$$

Note that while (18) contains $(X \otimes Z^{-1})A^T \Delta y$ as its first term, $\dot{\tilde X}$ in (26) does not have the corresponding term $(\tilde X \otimes \tilde Z^{-1})\tilde A^T \Delta y$, which causes a perturbation $\dot X_\epsilon$ in the primal direction. In the following lemma, we show that $\dot x$, $\dot z$, and $\Delta y$ from equations (21), (23), and (24) are a solution of the perturbed equations

$$A \dot x = r_p, \qquad (27)$$
$$A^T \Delta y + \dot z = r_d, \qquad (28)$$
$$(X \otimes I)\dot z + (I \otimes Z)(\dot x + \dot x_\epsilon) = r_c, \qquad (29)$$

where

$$\dot X_\epsilon := \mathrm{mat}(\dot x_\epsilon) = \begin{pmatrix} 0 & 0 \\ 0 & \mathrm{mat}\big((\tilde X \otimes \tilde Z^{-1})\tilde A^T \Delta y\big) \end{pmatrix}. \qquad (30)$$

Note the new vector $\dot x_\epsilon$ in the second term of (29).

Lemma 2.1 (Perturbed Newton equations). The solution $(\dot x, \Delta y, \dot z)$ of (21), (23), and (24) satisfies equations (27)-(29).

Proof. First, we note the primal equation (27) is satisfied since, by (23),

$$A \dot x = \hat A \dot{\hat x} + \tilde A \dot{\tilde x}$$
$$= \hat A(\hat X \otimes \hat Z^{-1})\hat A^T \Delta y - \hat A(\hat X \otimes \hat Z^{-1})\hat r_d + \hat A(I \otimes \hat Z^{-1})\hat r_c - \tilde A(\tilde X \otimes \tilde Z^{-1})\tilde r_d + \tilde A(I \otimes \tilde Z^{-1})\tilde r_c \quad \text{by (25) and (26)}$$
$$= \hat A(\hat X \otimes \hat Z^{-1})\hat A^T \Delta y - A(X \otimes Z^{-1}) r_d + A(I \otimes Z^{-1}) r_c$$
$$= r_p + A(X \otimes Z^{-1}) r_d - A(I \otimes Z^{-1}) r_c - A(X \otimes Z^{-1}) r_d + A(I \otimes Z^{-1}) r_c \quad \text{by (22)}$$
$$= r_p.$$

In addition, (28) is immediately satisfied by (24). To see that (29) is satisfied, we first calculate $\dot x + \dot x_\epsilon$. By (23), (25), (26), and (30),

$$\dot X + \dot X_\epsilon = \mathrm{mat}\big( (X \otimes Z^{-1})A^T \Delta y - (X \otimes Z^{-1}) r_d + (I \otimes Z^{-1}) r_c \big),$$

so

$$\dot x + \dot x_\epsilon = (X \otimes Z^{-1})A^T \Delta y - (X \otimes Z^{-1}) r_d + (I \otimes Z^{-1}) r_c.$$

Thus,

$$(I \otimes Z)(\dot x + \dot x_\epsilon) = (I \otimes Z)(X \otimes Z^{-1})A^T \Delta y - (I \otimes Z)(X \otimes Z^{-1}) r_d + (I \otimes Z)(I \otimes Z^{-1}) r_c$$
$$= (X \otimes I)A^T \Delta y - (X \otimes I) r_d + (I \otimes I) r_c = (X \otimes I)(A^T \Delta y - r_d) + r_c = -(X \otimes I)\dot z + r_c \quad \text{by (24)}.$$

Therefore, $(X \otimes I)\dot z + (I \otimes Z)(\dot x + \dot x_\epsilon) = r_c$. $\square$

From equations (27)-(29) and Lemma 2.1, we can see that constraint reduction does not affect the primal and dual equations (7) and (8) but only the complementarity equation (9). Furthermore, considering the relations between (7)-(10) and (14)-(16), the solution $(\dot x, \Delta y, \dot z)$ of (21), (23), and (24), after the symmetrizations $\Delta X = \mathrm{symm}(\dot X)$ and $\Delta X_\epsilon = \mathrm{symm}(\dot X_\epsilon)$, also satisfies the following equations:

$$A \Delta x = r_p, \qquad (31)$$
$$A^T \Delta y + \Delta z = r_d, \qquad (32)$$
$$Z^{1/2} \Delta X_\epsilon Z^{1/2} + \mathrm{symm}\big(Z^{1/2}(X \Delta Z + \Delta X Z)Z^{-1/2}\big) = \mu I - Z^{1/2} X Z^{1/2}. \qquad (33)$$

2.3 Algorithm SDP:Reduced

In this section, we introduce an interior point method, similar to that of Potra and Sheng [34], but including constraint reduction. We define a set $F$ of feasible solutions and a set $F^*$ of optimal solutions as

$$F := \{(X, y, Z) \in S^n_+ \times \mathbb{R}^m \times S^n_+ : (X, y, Z) \text{ satisfies (3) and (4)}\},$$
$$F^* := \{(X, y, Z) \in F : X \bullet Z = 0\}.$$

We also define the neighborhood $N(\gamma, \tau)$ of the central path as

$$N(\gamma, \tau) := \{(X, Z) \in S^n_{++} \times S^n_{++} : \|Z^{1/2} X Z^{1/2} - \tau I\|_F \le \gamma\tau\}.$$
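A direct membership check for $N(\gamma, \tau)$ can be sketched as follows (our code, using an eigendecomposition to form the symmetric square root $Z^{1/2}$):

```python
import numpy as np

def in_neighborhood(X, Z, gamma, tau):
    # (X, Z) in N(gamma, tau) iff X and Z are positive definite and
    # ||Z^{1/2} X Z^{1/2} - tau I||_F <= gamma * tau.
    lamZ, U = np.linalg.eigh(Z)
    if np.linalg.eigvalsh(X).min() <= 0 or lamZ.min() <= 0:
        return False
    Zh = (U * np.sqrt(lamZ)) @ U.T              # symmetric square root of Z
    dev = Zh @ X @ Zh - tau * np.eye(X.shape[0])
    return np.linalg.norm(dev, 'fro') <= gamma * tau
```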

In the predictor step, given the current (possibly infeasible) iterate $(X, y, Z)$ and inactive blocks $(\tilde X, \tilde Z)$ of $(X, Z)$, we find a solution $(\Delta X, \Delta y, \Delta Z)$ of (31)-(33), setting $\mu = 0$. We refer to the resulting equations as (31P)-(33P) and set

$$\Delta X_\epsilon = \mathrm{symm}\begin{pmatrix} 0 & 0 \\ 0 & \mathrm{mat}\big((\tilde X \otimes \tilde Z^{-1})\tilde A^T \Delta y\big) \end{pmatrix}. \qquad (34)$$

We then compute an updated point $(\bar X, \bar y, \bar Z)$ by taking a step of length $\theta < 1$ in this direction.

In the corrector step, we set the target duality gap $\mu = (1-\theta)\tau$, where the parameter $\tau$ decreases at each iteration. Then, with inactive blocks $(\tilde{\bar X}, \tilde{\bar Z})$ of $(\bar X, \bar Z)$, we find a solution $(\Delta\bar X, \Delta\bar y, \Delta\bar Z)$ of (31)-(33) with $r_p = 0$ and $r_d = 0$. We refer to the resulting equations as (31C)-(33C) and set

$$\Delta\bar X_\epsilon = \mathrm{symm}\begin{pmatrix} 0 & 0 \\ 0 & \mathrm{mat}\big((\tilde{\bar X} \otimes \tilde{\bar Z}^{-1})\tilde A^T \Delta\bar y\big) \end{pmatrix}. \qquad (35)$$

We denote the norms of the directions as

$$\delta := \tfrac{1}{\tau}\|Z^{1/2} \Delta X \Delta Z Z^{-1/2}\|_F, \qquad (36)$$
$$\delta_x := \|Z^{1/2} \Delta X Z^{1/2}\|_F, \qquad (37)$$
$$\delta_z := \tau\|Z^{-1/2} \Delta Z Z^{-1/2}\|_F, \qquad (38)$$
$$\delta_\epsilon := \tfrac{1}{\tau}\|Z^{1/2} \Delta X_\epsilon Z^{1/2}\|_F, \qquad (39)$$
$$\bar\delta_\epsilon := \tfrac{1}{\tau}\|\bar Z^{1/2} \Delta\bar X_\epsilon \bar Z^{1/2}\|_F. \qquad (40)$$

We use two fixed positive parameters $\alpha$ and $\beta$ with the property

$$\frac{\beta^2}{2(1-\beta)^2} < \alpha < \beta - \frac{\beta^2}{1-\beta}. \qquad (41)$$

This inequality restrains the admissible ranges of $\alpha$ and $\beta$ (42); for example, we can choose $(\alpha, \beta) = (0.17, 0.3)$. Based on these parameters, we define $\hat\theta$ and $\bar\theta$, which change at each iteration, as

$$\hat\theta := \frac{(\alpha - \beta - \delta_\epsilon) + \sqrt{(\alpha - \beta - \delta_\epsilon)^2 + 4\delta(\beta-\alpha)}}{2\delta} = \frac{2(\beta-\alpha)}{\sqrt{(\beta - \alpha + \delta_\epsilon)^2 + 4\delta(\beta-\alpha)} + \beta - \alpha + \delta_\epsilon}, \qquad (43)$$

$$\bar\theta := \max\{\tilde\theta \in [0,1] : (X + \theta\Delta X,\ y + \theta\Delta y,\ Z + \theta\Delta Z) \in N(\beta, (1-\theta)\tau) \ \text{ for all } \theta \in [0, \tilde\theta]\}. \qquad (44)$$
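The rationalized form of (43) is convenient numerically because it remains well defined as $\delta \to 0$. A sketch (our code):

```python
import math

def theta_hat(delta, delta_eps, alpha, beta):
    # Predictor step-length bound (43), rationalized form; the value is
    # always in (0, 1], and it decreases as the perturbation delta_eps grows.
    d = beta - alpha
    disc = math.sqrt((d + delta_eps)**2 + 4.0 * delta * d)
    return 2.0 * d / (disc + d + delta_eps)

# With (alpha, beta) = (0.17, 0.3): a constraint-reduction perturbation
# (delta_eps > 0) shortens the guaranteed predictor step.
print(theta_hat(0.5, 0.00, 0.17, 0.3))
print(theta_hat(0.5, 0.05, 0.17, 0.3))
```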

In our predictor-corrector algorithm, specified below, at each step we incrementally build a set of active blocks that satisfies one of the following two conditions, the first for a predictor step and the second for a corrector step.

Condition 2.1.
$$\delta_\epsilon \le \frac{q}{\tau}\,\delta_x, \qquad (45)$$
or equivalently $\|Z^{1/2} \Delta X_\epsilon Z^{1/2}\|_F \le q\,\|Z^{1/2} \Delta X Z^{1/2}\|_F$, where the input parameter $q$ of the algorithm has the range
$$0 \le q < \tfrac{1}{2}. \qquad (46)$$

Condition 2.2.
$$\bar\delta_\epsilon < (1-\theta)\big(\sqrt{s^2 + t} - s\big), \qquad (47)$$
where
$$s := \beta^2 - \beta + 1, \qquad t := 2\alpha(1-\beta)^2 - \beta^2. \qquad (48)$$

Based on these parameters and conditions, we define our primal-dual predictor-corrector algorithm, Algorithm SDP:Reduced. Before analyzing the algorithm, the following overview is useful.

1. For step 2, an appropriate choice of $\rho$ is discussed by Toh, Todd, and Tütüncü [38, Section 3.4]. Convergence holds for any $\rho > 0$, but we follow [34, Thm. 3.8] in our constraint on $\rho$. A smaller $\rho$ increases the size of $w$ in Lemma 3.9 below, slowing convergence. In practice, if a bound on $\max(\|X^*\|, \|Z^*\|)$ is not known and convergence is slow, the iteration can be restarted with a larger $\rho$.

2. In step 3d, the choice of step length in the predictor step is valid only when $\hat\theta \le \bar\theta$, which will be proved in Lemma 3.2.

3. It may not be practical to compute $\bar\theta$. Instead, $\theta$ can be defined by (43) or computed by line search.

4. In step 3e, the algorithm terminates since $(\bar X, \bar y, \bar Z)$ is an optimal solution, which will be shown in Lemma 3.2.

5. Since $r_p = 0$ and $r_d = 0$ in the corrector step, the corrector step's only purpose is to move the point toward the central path.

6. By (43), $\hat\theta$ is a decreasing function of $\delta_\epsilon$. Thus, there is a trade-off between the allowance for constraint reduction and the step length in the predictor step.

7. The predictor step moves the point from $N(\alpha, \tau)$ into $N(\beta, (1-\theta)\tau)$ (Lemma 3.2), and the corrector moves it into $N(\alpha, (1-\theta)\tau)$ (Lemma 3.3).

Algorithm SDP:Reduced: Primal-Dual Infeasible Constraint-Reduced Predictor-Corrector Algorithm for Block Diagonal SDP

1. Input: $A$, $b$, $C$; $\alpha$ and $\beta$ satisfying (41); convergence tolerance $\tau^*$; $q$ satisfying (46), for the perturbation bound of the primal direction in the predictor step.

2. Set $X^0 = Z^0 = \rho I$ for $\rho > \max(\|X^*\|, \|Z^*\|)$, and set $\tau = \tau^0 = \mu^0 = X^0 \bullet Z^0 / n$, so that $(X^0, Z^0) \in N(\alpha, \tau^0)$.

3. Repeat until $\tau < \tau^*$: for $k = 0, 1, \dots$,

 (a) Set $(X, y, Z) = (X^k, y^k, Z^k)$ and $\tau = \tau^k$.

 (b) Sort the constraint blocks in decreasing order of $\|X_j Z_j^{-1}\|_F$.

 (c) Initially, $M_P = 0$. For $j = 1,\dots,p$, until $M_P$ is full rank and Condition 2.1 above is satisfied, replace $M_P$ by $M_P + A_j(X_j \otimes Z_j^{-1})A_j^T$. Set $\hat p = j$.

 (d) Predictor step: Solve (22) with $\hat M = M_P$ and $r_c = \mathrm{vec}(-XZ)$ for $(\Delta X, \Delta y, \Delta Z)$ satisfying (31P)-(33P). Choose a step length $\theta \in [\hat\theta, \bar\theta]$ defined by (43) and (44), and set
$$\bar X = X + \theta\Delta X, \quad \bar y = y + \theta\Delta y, \quad \bar Z = Z + \theta\Delta Z.$$

 (e) If $\theta = 1$, terminate the iteration with optimal solution $(\bar X, \bar y, \bar Z)$.

 (f) Sort the constraint blocks in decreasing order of $\|\bar X_j \bar Z_j^{-1}\|_F$.

 (g) Initially, $M_C = 0$. For $j = 1,\dots,p$, until $M_C$ is full rank and Condition 2.2 above is satisfied, replace $M_C$ by $M_C + A_j(\bar X_j \otimes \bar Z_j^{-1})A_j^T$. Set $\hat p = j$.

 (h) Corrector step: Solve (22) with $\hat M = M_C$, $r_p = 0$, $r_d = 0$, and $r_c = \mathrm{vec}((1-\theta)\tau I - \bar X\bar Z)$ for $(\Delta\bar X, \Delta\bar y, \Delta\bar Z)$ satisfying (31C)-(33C). Take a full step:
$$X^{k+1} = X^+ = \bar X + \Delta\bar X, \quad y^{k+1} = y^+ = \bar y + \Delta\bar y, \quad Z^{k+1} = Z^+ = \bar Z + \Delta\bar Z.$$

 (i) Set $\tau^{k+1} = (1-\theta)\tau$.

 (j) Update $r_p = b - Ax$ and $r_d = c - z - A^T y$.

8. Condition 2.1 and Condition 2.2 restrict the magnitudes of $\Delta X_\epsilon$ and $\Delta\bar X_\epsilon$, which are the perturbations caused by constraint reduction.

9. $\tilde X \otimes \tilde Z^{-1} = 0$ when we use the full Schur complement matrix, so by (34) and (35), Condition 2.1 and Condition 2.2 can always be satisfied by making all blocks active. The thresholds are updated dynamically every iteration.
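Steps 3b-3c (and, analogously, 3f-3g) can be sketched as follows. This is our illustration only: the per-block terms $M_j$ are assumed precomputed in activity order, positive definiteness is used as the full-rank test via a Cholesky attempt, and the hypothetical callback `condition_ok` stands in for the Condition 2.1 test.

```python
import numpy as np

def build_M_P(M_blocks, order, m, condition_ok):
    # Accumulate per-block Schur complement terms, in decreasing order of
    # ||X_j Z_j^{-1}||_F, until M_P is full rank and Condition 2.1 holds.
    M_P = np.zeros((m, m))
    for count, j in enumerate(order, start=1):
        M_P = M_P + M_blocks[j]
        try:
            np.linalg.cholesky(M_P)          # full-rank (here: PD) check
        except np.linalg.LinAlgError:
            continue                         # still rank deficient
        if condition_ok(M_P, count):         # Condition 2.1 surrogate
            return M_P, count                # p_hat = count active blocks
    return M_P, len(order)                   # fall back: all blocks active
```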

In the process of incremental construction of $M_P$, we check that it is full rank and satisfies Condition 2.1. To do this, we can solve for $\Delta y$ and calculate $\Delta X_\epsilon$, which may require the Cholesky factor of $M_P$ to compute $\Delta y = M_P^{-1} g$. In some cases it is obvious that $M_P$ is rank deficient (e.g., if $\sum_{l=1}^j n_l^2 < m$ for the blocks included) or full rank (e.g., when $M_P$ is already full rank before a block is added). In other cases we verify the full-rank condition by checking that the Cholesky factorization exists and then applying an inexpensive condition number estimator [11, 14, 29, 36]. Once $M_P$ has full rank, we can use rank-1 updating of its Cholesky factor (implemented by schud.f and dchud.f in LINPACK; see Gill et al. [10] and the LINPACK documentation [8]), depending on the sizes of $m$ and $n_j$. Similar comments hold for $M_C$. We now discuss this updating.

Let $R_{X_j}$ and $R_{Z_j}$ be Cholesky factors of $X_j$ and $Z_j$. Note that the factor $R_{Z_j}$ is required to compute $M_j$ by (19), regardless of constraint reduction, unless $Z_j^{-1}$ is computed explicitly. Then the partial Schur complement $M_j$ can be written as

$$M_j = A_j(X_j \otimes Z_j^{-1})A_j^T = A_j\big(R_{X_j}^T R_{X_j} \otimes R_{Z_j}^{-1} R_{Z_j}^{-T}\big)A_j^T = A_j\big(R_{X_j}^T \otimes R_{Z_j}^{-1}\big)\big(R_{X_j} \otimes R_{Z_j}^{-T}\big)A_j^T = H_j H_j^T, \qquad (49)$$

where

$$H_j = A_j\big(R_{X_j}^T \otimes R_{Z_j}^{-1}\big) \in \mathbb{R}^{m \times n_j^2}.$$

Thus $h_l^T$, the $l$-th row of $H_j$, can be computed as

$$h_l = \mathrm{vec}\big(R_{X_j} A_{lj} R_{Z_j}^{-1}\big).$$

Furthermore, we can rewrite $(M_j)_{lh}$ in (19) as

$$(M_j)_{lh} = (X_j A_{lj} Z_j^{-1}) \bullet A_{hj} = \big(R_{X_j}^T R_{X_j} A_{lj} R_{Z_j}^{-1} R_{Z_j}^{-T}\big) \bullet A_{hj} = \big(R_{X_j}^T\, \mathrm{mat}(h_l)\, R_{Z_j}^{-T}\big) \bullet A_{hj}.$$

Therefore, $H_j$ can be obtained in the process of computing $M_j$, with additional computation only for the factor $R_{X_j}$ of $X_j$. From (49), we can write the $j$-th update of $\hat M$ in steps 3c and 3f of the algorithm as

$$M^{(j)} = M^{(j-1)} + M_j = M^{(j-1)} + H_j H_j^T.$$

If we already have the Cholesky factor $R_M^{(j-1)}$ of $M^{(j-1)}$, the Cholesky factor $R_M^{(j)}$ of $M^{(j)}$ can be computed by $n_j^2$ rank-1 Cholesky updates [10], one for each column of $H_j$.
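For reference, a standard rank-1 Cholesky update in the spirit of Gill et al. [10] and the LINPACK routines cited above can be sketched as follows (our code; $R$ is upper triangular with $R^T R = M$, and one call costs about $2m^2 + O(m)$ flops, as noted below). Applying it once per column of $H_j$ realizes the $n_j^2$ updates just described.

```python
import numpy as np

def chol_update(R, h):
    # Given upper-triangular R with R^T R = M, return the upper-triangular
    # factor of M + h h^T, using the classical Givens-style sweep.
    R, h = R.copy(), h.astype(float).copy()
    m = h.size
    for k in range(m):
        r = np.hypot(R[k, k], h[k])          # new diagonal entry
        c, s = r / R[k, k], h[k] / R[k, k]
        R[k, k] = r
        if k + 1 < m:
            R[k, k+1:] = (R[k, k+1:] + s * h[k+1:]) / c
            h[k+1:] = c * h[k+1:] - s * R[k, k+1:]
    return R
```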

The term $R_{Z_j}^{-1}$ in $H_j$ can be computed column-by-column if space is at a premium, but since the matrices $A_j$ require $m\sum_{j=1}^p n_j^2$ space, the additional $m\,n_j^2$ memory for $H_j$ is not burdensome. A rank-1 update of a Cholesky factor requires $2m^2 + O(m)$ flops. Using the updated factor of $\hat M$, we can compute $\Delta y = \hat M^{-1} g = R_M^{-1} R_M^{-T} g$ in $2m^2$ flops. Since we do not need a very accurate $\Delta y$ for determining constraint reduction, iterative refinement may not be necessary. Once we finish updating $\hat M$, the factor $R_M$ can be reused as a preconditioner for an iterative method like SYMMLQ [30] to compute $\Delta y$ to high accuracy.

In summary, for each update of $\hat M$, the extra cost is the Cholesky factorization of $X_j$ ($n_j^3/3$ flops), the update of the Cholesky factor of $\hat M$ ($n_j^2(2m^2 + O(m))$ flops), and the computation of $\Delta y = \hat M^{-1} g$ ($2m^2$ flops). The total,

$$\tfrac{1}{3} n_j^3 + 2m^2(n_j^2 + 1) + O(m\,n_j^2),$$

is a reasonable cost for the constraint reduction decision, considering that it takes $(4m + \tfrac13) n_j^3 + 2m^2 n_j^2$ operations to compute $M_j$ by (19) and (20). An analysis based on memory access rather than floating point operations yields a similar conclusion.

If $m^3/3 < n_j^3/3 + 2m^2 n_j^2$, then we can compute the Cholesky factor $R_M$ of $\hat M$ explicitly, with no Cholesky factorization of $X_j$ and no updating of the factor $R_M$. In that case, the cost is $m^3/3 + 2m^2$.

3 Global Convergence of Algorithm SDP:Reduced

3.1 Primal and Dual Residuals and Closeness to Central Path

Convergence analysis for our constraint-reduced algorithm SDP:Reduced is based on a series of lemmas similar to those presented by Monteiro [24] and Potra and Sheng [34]. Note that in their lemmas the roles of $X$ and $Z$ in (16) are switched:

$$\mathrm{symm}\big(X^{1/2}(Z\Delta X + \Delta Z X)X^{-1/2}\big) = \mu I - X^{1/2} Z X^{1/2}. \qquad (50)$$

We use (16) rather than (50) because, as Zhang [42] noted, (16) is computationally easy to solve. It also explains how the active and inactive blocks are involved in the Schur complement matrix computation, as we described in Section 2.2. However, all of our results can be extended to the algorithm with (50) replacing (16).

For Algorithm SDP:Reduced, each component of the primal and dual residuals moves toward zero at each iteration, bringing us closer to feasibility.

Lemma 3.1. In Algorithm SDP:Reduced, $r_d^+ = (1-\theta) r_d$ and $r_p^+ = (1-\theta) r_p$.

Proof. First, let us see how the dual residual changes. By (32P) and (32C), $\Delta z = r_d - A^T \Delta y$ and $\Delta\bar z = -A^T \Delta\bar y$, so

$$r_d^+ = c - z^+ - A^T y^+ = c - (z + \theta\Delta z + \Delta\bar z) - A^T(y + \theta\Delta y + \Delta\bar y)$$
$$= (c - z - A^T y) - \theta(\Delta z + A^T \Delta y) - (\Delta\bar z + A^T \Delta\bar y) = r_d - \theta r_d = (1-\theta) r_d.$$

Next, we consider the primal residual. By (31P) and (31C), $A\Delta x = r_p$ and $A\Delta\bar x = 0$, so

$$r_p^+ = b - Ax^+ = b - A(x + \theta\Delta x + \Delta\bar x) = r_p - \theta A\Delta x - A\Delta\bar x = r_p - \theta r_p = (1-\theta) r_p. \qquad \square$$

Next we analyze how the iterate moves relative to the central path during the predictor and corrector steps. Assume that the current point $(X, Z) \in N(\alpha, \tau)$. The initial point $(X^0, y^0, Z^0)$ in the algorithm is perfectly placed on the central path, so this assumption is satisfied at the start. With this assumption, we show in Lemma 3.2 that

$$(\bar X, \bar Z) \in N(\beta, (1-\theta)\tau) \qquad (51)$$

after the predictor step, and in Lemma 3.3 that

$$(X^+, Z^+) \in N(\alpha, (1-\theta)\tau) \qquad (52)$$

after the corrector step. Some proofs are omitted in this section but supplied in the Appendix, to keep the outline of the argument clear.

To show (51), recall that we know from (44) that

$$(X + \theta\Delta X,\ y + \theta\Delta y,\ Z + \theta\Delta Z) \in N(\beta, (1-\theta)\tau)$$

for any $\theta \in [0, \bar\theta]$. We show in Lemma 3.2 that this relation holds for any $\theta \in [0, \hat\theta]$, so we conclude that $\hat\theta \le \bar\theta$ and therefore the predictor step exists.

Lemma 3.2 (similar to [34, Lemma 2.5]). If $(X, Z) \in N(\alpha, \tau)$ then $\hat\theta \le \bar\theta$. In particular,

1. if $\theta < 1$, then $(\bar X, \bar Z) \in N(\beta, (1-\theta)\tau)$, so $\bar X \succ 0$ and $\bar Z \succ 0$;

2. if $\theta = 1$, then $\bar X \bullet \bar Z = 0$.

Proof. See Appendix.

Lemma 3.3 (similar to [34, Theorem 2.6]). Suppose that $(\bar X, \bar Z) \in N(\beta, (1-\theta)\tau)$ in Algorithm SDP:Reduced. Let $\tau^+ = (1-\theta)\tau$. Then, after the corrector step,

$$(X^+, Z^+) \in N(\alpha, \tau^+), \quad \text{and} \quad (Z^+)^{1/2} X^+ (Z^+)^{1/2} \succeq (1-\alpha)\tau^+ I.$$

Proof. See Appendix.

Next, we quantify the bound on the duality gap $\mu = X \bullet Z / n$. For the analysis, the following properties of the Frobenius norm and the trace of a matrix are useful. For a matrix $E \in S^n$,

$$\mathrm{tr}(E) = \sum_{i=1}^n \lambda_i(E) \le \sum_{i=1}^n \sigma_i(E),$$

where $\lambda_i(E)$ is the $i$-th eigenvalue and $\sigma_i(E)$ is the $i$-th singular value of $E$. By the Cauchy-Schwarz inequality, for $E \in S^n$,

$$n\|E\|_F^2 = n\sum_{i=1}^n \sigma_i^2(E) \ge \Big(\sum_{i=1}^n \sigma_i(E)\Big)^2 \ge \mathrm{tr}(E)^2,$$

so

$$n\|E\|_F^2 \ge \mathrm{tr}(E)^2. \qquad (53)$$

Lemma 3.4. If $(X, Z) \in N(\alpha, \tau)$, then

$$\Big(1 - \frac{\alpha}{\sqrt n}\Big)\tau \le \mu = \frac{1}{n} X \bullet Z \le \Big(1 + \frac{\alpha}{\sqrt n}\Big)\tau.$$

Proof. Since $Z^{1/2} X Z^{1/2} - \tau I$ is symmetric, by (53),

$$n\,\|Z^{1/2} X Z^{1/2} - \tau I\|_F^2 \ge \big(\mathrm{tr}(Z^{1/2} X Z^{1/2}) - n\tau\big)^2 = \big(\mathrm{tr}(XZ) - n\tau\big)^2 = (X \bullet Z - n\tau)^2.$$

Thus, since $(X, Z) \in N(\alpha, \tau)$,

$$(X \bullet Z - n\tau)^2 \le n\,\|Z^{1/2} X Z^{1/2} - \tau I\|_F^2 \le n\,\alpha^2\tau^2, \quad \text{so} \quad \Big|\frac{1}{n} X \bullet Z - \tau\Big| \le \frac{\alpha}{\sqrt n}\,\tau,$$

and the rest of the proof is straightforward. $\square$
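The inequality (53) that drives Lemma 3.4 is easy to confirm numerically; the following quick check (our code) exercises it on random symmetric matrices.

```python
import numpy as np

# Check (53): n ||E||_F^2 >= tr(E)^2 for symmetric E. Applied to
# E = Z^{1/2} X Z^{1/2} - tau I with ||E||_F <= alpha tau, it yields the
# Lemma 3.4 bounds |X.Z/n - tau| <= (alpha/sqrt(n)) tau.
rng = np.random.default_rng(0)
for _ in range(1000):
    n = int(rng.integers(2, 9))
    E = rng.standard_normal((n, n))
    E = 0.5 * (E + E.T)
    assert n * np.linalg.norm(E, 'fro')**2 >= np.trace(E)**2 - 1e-9
print("(53) holds on all sampled matrices")
```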

17 where ψ k := k 1 θ i. Proof. From Lemmas 3.1 Lemma 3.4, we can obtain the results. In order to prove the convergence of r k p, rk d, and µ k to zero, all that remains is to show that the step lengths θ i are bounded away from zero. 3.2 Lower Bound on Step Length In this section, we omit the k in ψ k, r k p, and rk d whenever it is evident in the context, and let X,y,Z denote the k-th iterate of our algorithm. Toshowthatthe steplengthsareboundedawayfromzerotheorem3.2, we need to show that quantities related to X, Z, and δ are bounded Lemmas 3.7 and 3.8. Two preliminary lemmas lead us to these bounds. Lemma 3.5. Similar to [34, Lemma 3.2]. For an initial point X 0,y 0,Z 0 and an optimal solution X,y,Z F, define Then ζ = X0 Z +X Z 0 X 0 Z 0. X Z 0 +X 0 Z nτ 0 2+ζ + α, 59 n 1+α+ψ X Z +X Z +ζ nτ ψ Proof. Let us define X = X ψx 0 1 ψx, y = y ψy 0 1 ψy, Z = Z ψz 0 1 ψz. By 11, 55 and the primal feasibility of X, A i X = b i r pi, ψa i X 0 = ψb i rpi 0 = ψb i r pi, 1 ψa i X = 1 ψb i, 17

for $i = 1,\dots,m$, and by (12), (56), and the dual feasibility of $(y^*, Z^*)$,

$$\sum_{i=1}^m y_i A_i + Z = C - R_d, \qquad \psi\Big(\sum_{i=1}^m y^0_i A_i + Z^0\Big) = \psi C - \psi R^0_d = \psi C - R_d, \qquad (1-\psi)\Big(\sum_{i=1}^m y^*_i A_i + Z^*\Big) = (1-\psi) C.$$

Thus, $(X', y', Z')$ satisfies

$$A_i \bullet X' = 0 \ \text{ for } i = 1,\dots,m, \qquad \sum_{i=1}^m y'_i A_i + Z' = 0.$$

Therefore, $X' \bullet Z' = -\sum_{i=1}^m y'_i\, A_i \bullet X' = 0$, so

$$\big[X - \psi X^0 - (1-\psi)X^*\big] \bullet \big[Z - \psi Z^0 - (1-\psi)Z^*\big] = 0.$$

By expanding this equation using $X^* \bullet Z^* = 0$, we obtain

$$\psi\big(X \bullet Z^0 + X^0 \bullet Z\big) + (1-\psi)\big(X \bullet Z^* + X^* \bullet Z\big) = X \bullet Z + \psi^2 X^0 \bullet Z^0 + \psi(1-\psi)\big(X^0 \bullet Z^* + X^* \bullet Z^0\big).$$

Since $X \in S^n_+$, $Z \in S^n_+$, $X^* \in S^n_+$, $Z^* \in S^n_+$, and $\psi \in [0,1]$, this equation gives us two inequalities:

$$\psi\big(X \bullet Z^0 + X^0 \bullet Z\big) \le X \bullet Z + \psi^2 X^0 \bullet Z^0 + \psi(1-\psi)\big(X^0 \bullet Z^* + X^* \bullet Z^0\big),$$
$$(1-\psi)\big(X \bullet Z^* + X^* \bullet Z\big) \le X \bullet Z + \psi^2 X^0 \bullet Z^0 + \psi(1-\psi)\big(X^0 \bullet Z^* + X^* \bullet Z^0\big).$$

Because $X^0 \bullet Z^0 = n\tau^0$, $X \bullet Z \le (1 + \alpha/\sqrt n)\psi n\tau^0$ by (58), and $\psi(1-\psi) \le \psi$, we can bound the right-hand sides of the inequalities above:

$$X \bullet Z + \psi^2 X^0 \bullet Z^0 + \psi(1-\psi)\big(X^0 \bullet Z^* + X^* \bullet Z^0\big) \le \psi\big(1 + \alpha/\sqrt n + \psi + (1-\psi)\zeta\big) n\tau^0.$$

With this upper bound, we can rewrite the inequalities as

$$\psi\big(X \bullet Z^0 + X^0 \bullet Z\big) \le \psi\big(1 + \alpha/\sqrt n + \psi + (1-\psi)\zeta\big) n\tau^0,$$
$$(1-\psi)\big(X \bullet Z^* + X^* \bullet Z\big) \le \psi\big(1 + \alpha/\sqrt n + \psi + (1-\psi)\zeta\big) n\tau^0.$$

Therefore, by using $\psi \in [0,1]$, $n \ge 1$, and $\tau = \psi\tau^0$, we can derive (59) and (60). $\square$

Lemma 3.6 (similar to [34, Corollary 3.3]).

$$\|X^{1/2} (Z^0)^{1/2}\|_F \le (n\tau^0)^{1/2}\big(2 + \zeta + \alpha/\sqrt n\big)^{1/2}, \qquad (61)$$
$$\|Z^{1/2} (X^0)^{1/2}\|_F \le (n\tau^0)^{1/2}\big(2 + \zeta + \alpha/\sqrt n\big)^{1/2}, \qquad (62)$$
$$\|X^{1/2}\|_F \le \|(Z^0)^{-1/2}\|\,(n\tau^0)^{1/2}\big(2 + \zeta + \alpha/\sqrt n\big)^{1/2}, \qquad (63)$$
$$\|Z^{1/2}\|_F \le \|(X^0)^{-1/2}\|\,(n\tau^0)^{1/2}\big(2 + \zeta + \alpha/\sqrt n\big)^{1/2}, \qquad (64)$$
$$\|X^{1/2} Z^{1/2}\|^2 = \|Z^{1/2} X Z^{1/2}\| \le (1+\alpha)\tau, \qquad (65)$$
$$\|X^{-1/2} Z^{-1/2}\|^2 = \|Z^{-1/2} X^{-1} Z^{-1/2}\| \le \frac{1}{(1-\alpha)\tau}. \qquad (66)$$

Proof. See Appendix.

Recalling the definitions of $\delta$, $\delta_x$, and $\delta_z$ in (36)-(38), $\delta$ is bounded by

$$\delta = \frac{1}{\tau}\|Z^{1/2}\Delta X \Delta Z Z^{-1/2}\|_F \le \frac{1}{\tau}\|Z^{1/2}\Delta X Z^{1/2}\|_F\,\|Z^{-1/2}\Delta Z Z^{-1/2}\|_F = \frac{1}{\tau^2}\,\delta_x\delta_z. \qquad (67)$$

Lemma 3.7 (similar to [34, Lemma 3.4]). For $(X^*, y^*, Z^*) \in F^*$, denote

$$T := \psi\Big[Z^{1/2}(X^0 - X^*)Z^{1/2} + \mathrm{symm}\big(Z^{1/2} X (Z^0 - Z^*) Z^{-1/2}\big)\Big] - Z^{1/2}(X + \Delta X_\epsilon)Z^{1/2},$$
$$T_x := \psi\,Z^{1/2}(X^0 - X^*)Z^{1/2}, \qquad T_z := \psi\,Z^{-1/2}(Z^0 - Z^*)Z^{-1/2}.$$

Then

$$\delta_x = \|Z^{1/2}\Delta X Z^{1/2}\|_F \le \|T_x\|_F + \frac{\|T\|_F}{1-\alpha}, \qquad \delta_z = \tau\|Z^{-1/2}\Delta Z Z^{-1/2}\|_F \le \tau\|T_z\|_F + \frac{\|T\|_F}{1-\alpha}.$$

Proof. See Appendix.

Lemma 3.8. For any given $(X^*, y^*, Z^*) \in F^*$, we have

$$\delta_x \le \frac{1-\alpha}{1-\alpha-q}\Big(C_x + \frac{C_0}{1-\alpha}\Big)\tau, \qquad \delta_z \le \frac{1-\alpha}{1-\alpha-q}\Big(C_z + \frac{C_0}{1-\alpha}\Big)\tau, \qquad \delta \le \Big(\frac{1-\alpha}{1-\alpha-q}\Big)^2\Big(C_z + \frac{C_0}{1-\alpha}\Big)^2,$$

where

$$C_x := n d^0\big(2 + \zeta + \alpha/\sqrt n\big), \qquad (68)$$
$$C_z := \frac{n d^0\big(2 + \zeta + \alpha/\sqrt n\big)}{1-\alpha}, \qquad (69)$$
$$C_0 := 2n d^0\big(2 + \zeta + \alpha/\sqrt n\big)\sqrt{\frac{1+\alpha}{1-\alpha}} + \sqrt n\,(1+\alpha), \qquad (70)$$

and

$$d^0 := \max\Big(\big\|(X^0)^{-1/2}(X^0 - X^*)(X^0)^{-1/2}\big\|_F,\ \big\|(Z^0)^{-1/2}(Z^0 - Z^*)(Z^0)^{-1/2}\big\|_F\Big).$$

Proof. See Appendix.

We can now establish global convergence of our algorithm.

Theorem 3.2. Algorithm SDP:Reduced is globally convergent, with the SDP optimality conditions (3)-(6) satisfied in the limit as $k \to \infty$.

Proof. Since $\delta_x$ is bounded by Lemma 3.8, so is $\delta_\epsilon$ by (45). Lemma 3.8 also bounds $\delta$. Therefore, $\hat\theta$ defined by (43) is bounded away from 0. Thus the step length $\theta \in [\hat\theta, \bar\theta]$ is bounded away from 0. The result follows from Theorem 3.1. $\square$

Next we prove that Algorithm SDP:Reduced converges in $O(n\ln(\epsilon^0/\epsilon))$ iterations, the same as the unreduced algorithm of [34], where $\epsilon^0 = \max(X^0 \bullet Z^0, \|r_p^0\|, \|r_d^0\|)$ and $\epsilon$ is the required tolerance on the resulting residuals.

Lemma 3.9. Suppose that $X^0 = Z^0 = \rho I$, $\|X^*\| \le \rho$, and $\|Z^*\| \le \rho$ for some $(X^*, y^*, Z^*) \in F^*$ and $\rho > 0$. Then the predictor step length $\theta_k \in [\hat\theta_k, \bar\theta_k]$ satisfies

$$\theta_k \ge \frac{1}{wn}, \quad \text{where} \quad w = 1 + \frac{hq}{\beta-\alpha} + \sqrt{\frac{h(h+3.5)}{\beta-\alpha}}, \quad \text{and} \quad h = \frac{13}{0.5 - q}.$$

Proof. See Appendix.

Note that if $q = 0$, then constraint reduction is not performed. In that case, $h = 26$, $w = 1 + \sqrt{h(h+3.5)/(\beta-\alpha)}$, and $\hat\theta$ has the same lower bound as for the unreduced algorithm [34, Theorem 3.8].

Theorem 3.3. For a given tolerance $\epsilon$ on $\max(X^k \bullet Z^k, \|r_p^k\|, \|r_d^k\|)$, Algorithm SDP:Reduced converges in $O(n\ln(\epsilon^0/\epsilon))$ iterations, where $\epsilon^0 = \max(n\tau^0, \|r_p^0\|, \|r_d^0\|)$.

Proof. By Theorem 3.1, we know

$$\epsilon_k \le \max\big((1+\alpha/\sqrt n)\,n\tau_k,\ \|r_p^k\|,\ \|r_d^k\|\big) \le \psi_k \max\big((1+\alpha/\sqrt n)\,n\tau^0,\ \|r_p^0\|,\ \|r_d^0\|\big) \le \psi_k (1+\alpha/\sqrt n)\,\epsilon^0.$$

On the other hand, by the definition of $\psi_k$ and Lemma 3.9,

$$\psi_k = \prod_{i=0}^{k-1}(1-\theta_i) \le \Big(1 - \frac{1}{wn}\Big)^k.$$

Thus, if

$$\Big(1 - \frac{1}{wn}\Big)^K (1+\alpha/\sqrt n)\,\epsilon^0 \le \epsilon$$

after $K$ iterations, then $\epsilon_K \le \epsilon$. By taking logarithms on both sides, this holds if and only if

$$K \ln\Big(1 - \frac{1}{wn}\Big) \le \ln\epsilon - \ln\big[(1+\alpha/\sqrt n)\,\epsilon^0\big] = \ln(\epsilon/\epsilon^0) - \ln\big(1+\alpha/\sqrt n\big).$$

Hence, $\epsilon_K \le \epsilon$ if

$$K \ge \frac{\ln(\epsilon^0/\epsilon)}{-\ln\big(1 - \frac{1}{wn}\big)}.$$

By the fact that

$$\frac{1}{-\ln\big(1 - \frac{1}{wn}\big)} \approx wn \quad \text{as } n \to \infty,$$

we conclude $K = O(n\ln(\epsilon^0/\epsilon))$. $\square$
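The iteration bound in Theorem 3.3 is explicit enough to evaluate. The sketch below (our code) computes the smallest $K$ with $(1 - 1/(wn))^K(1+\alpha/\sqrt n)\,\epsilon^0 \le \epsilon$, with $w$ and $h$ taken from Lemma 3.9; for fixed tolerances it grows essentially linearly in $n$.

```python
import math

def iteration_bound(n, alpha, beta, q, eps0, eps):
    # K from the proof of Theorem 3.3, using w and h from Lemma 3.9.
    h = 13.0 / (0.5 - q)
    w = 1.0 + h * q / (beta - alpha) + math.sqrt(h * (h + 3.5) / (beta - alpha))
    lhs = math.log(eps / (eps0 * (1.0 + alpha / math.sqrt(n))))
    return math.ceil(lhs / math.log(1.0 - 1.0 / (w * n)))

# The bound scales like O(n ln(eps0/eps)):
for n in (10, 100, 1000):
    print(n, iteration_bound(n, 0.17, 0.3, 0.2, 1e2, 1e-8))
```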

4 Conclusions

We proposed an infeasible primal-dual predictor-corrector interior point method with adaptive constraint reduction for block diagonal SDP problems. By adaptively selecting appropriate inactive constraint blocks, we retain global convergence and polynomial complexity, $O(n\ln(\epsilon^0/\epsilon))$. The polynomial complexity result is the first such result for primal-dual constraint-reduced interior-point methods for any problem class, and it includes LP, QP, QCQP, and SOCP as special cases.

Our algorithm computes the Schur complement matrix twice for each iteration. Since most practical implementations reuse the Schur complement matrix in the corrector step, this is a disadvantage, but future research may remove this restriction.

Kojima, Shida, and Shindoh [21] showed that an algorithm similar to that of Potra and Sheng [34] has superlinear local convergence if the generated sequence converges tangentially to the central path. They noted that tangential convergence can be achieved by repeating the corrector step of Potra and Sheng until $(X^+, Z^+)$ moves into $N(g(\tau_k), \tau)$ for a given $g(\tau_k)$ such that $g(\tau_k) \to 0$ as $k \to \infty$. Similar algorithms using the AHO direction [22] or a carefully bounded scaled direction [32, 17] have been proven to have local superlinear convergence under simpler assumptions and without repeating the corrector. As future work, this approach might lead to a constraint-reduced algorithm with superlinear convergence.

A Appendix: Proofs

In the proofs, we frequently use properties of the Frobenius norm. For a matrix $E \in \mathbb{R}^{n\times n}$,

$$\|E\|_F^2 = \sum_{i=1}^n \sigma_i^2(E), \qquad \|E\| = \sigma_{\max}(E),$$

where $\sigma_i(E)$ is the $i$-th singular value of $E$, so $\|E\|_F \le \sqrt n\,\|E\|$. If $E \in S^n$, then

$$|\lambda_i(E)| \le \sigma_{\max}(E) = \sqrt{\sigma_{\max}^2(E)} \le \sqrt{\sum_{i=1}^n \sigma_i^2(E)} = \|E\|_F, \quad \text{so} \quad -\|E\|_F \le \lambda_i(E) \le \|E\|_F. \qquad (71)$$

Lemma A.1. If $M \in \mathbb{R}^{p\times p}$ is nonsingular and $E \in \mathbb{R}^{p\times p}$ has only real eigenvalues, then

$$\lambda_{\max}(E) \le \lambda_{\max}\big(\mathrm{symm}(MEM^{-1})\big), \qquad (72)$$
$$\lambda_{\min}(E) \ge \lambda_{\min}\big(\mathrm{symm}(MEM^{-1})\big). \qquad (73)$$

If $E \in S^p$, then

$$\|E\|_F \le \|\mathrm{symm}(MEM^{-1})\|_F. \qquad (74)$$

Proof. See [24, Lemma 3.3] and [34, Lemma 2.2]. $\square$

Next, we prove that the predictor step stays in a neighborhood of the central path.

Proof (of Lemma 3.2). Let $X(\theta) = X + \theta\Delta X$ and $Z(\theta) = Z + \theta\Delta Z$. Then

$$X(\theta)Z(\theta) - (1-\theta)\tau I = (1-\theta)(XZ - \tau I) + \theta\big(XZ + X\Delta Z + \Delta X Z\big) + \theta^2 \Delta X\Delta Z.$$

Define

$$P(\theta) = Z^{1/2}\big(X(\theta)Z(\theta) - (1-\theta)\tau I\big)Z^{-1/2} = (1-\theta)\big(Z^{1/2} X Z^{1/2} - \tau I\big) + \theta^2 Z^{1/2}\Delta X\Delta Z Z^{-1/2} + \theta\big[Z^{1/2} X Z^{1/2} + Z^{1/2}(X\Delta Z + \Delta X Z)Z^{-1/2}\big].$$

Then, by (33P),

$$\mathrm{symm}(P(\theta)) = (1-\theta)\big(Z^{1/2} X Z^{1/2} - \tau I\big) + \theta^2\,\mathrm{symm}\big(Z^{1/2}\Delta X\Delta Z Z^{-1/2}\big) - \theta\,Z^{1/2}\Delta X_\epsilon Z^{1/2}.$$

Thus, since $(X, Z) \in N(\alpha, \tau)$, and using (36), (39), and (74), we have

$$\|\mathrm{symm}(P(\theta))\|_F \le (1-\theta)\|Z^{1/2} X Z^{1/2} - \tau I\|_F + \theta^2\|Z^{1/2}\Delta X\Delta Z Z^{-1/2}\|_F + \theta\|Z^{1/2}\Delta X_\epsilon Z^{1/2}\|_F \qquad (75)$$
$$\le \alpha\tau(1-\theta) + \theta^2\delta\tau + \theta\delta_\epsilon\tau = \tau\big(\delta\theta^2 + (\delta_\epsilon - \alpha + \beta)\theta + \alpha - \beta\big) + \beta(1-\theta)\tau = \delta\tau(\theta - \theta_1)(\theta - \theta_2) + \beta(1-\theta)\tau, \qquad (76)$$

where

$$\theta_1 = \frac{(\alpha - \beta - \delta_\epsilon) + \sqrt{(\alpha - \beta - \delta_\epsilon)^2 + 4\delta(\beta-\alpha)}}{2\delta}, \qquad \theta_2 = \frac{(\alpha - \beta - \delta_\epsilon) - \sqrt{(\alpha - \beta - \delta_\epsilon)^2 + 4\delta(\beta-\alpha)}}{2\delta}.$$

Since $\hat\theta = \theta_1$ by definition (43) of $\hat\theta$, and $\theta_2 < 0$, the first term in (76) is nonpositive when $0 \le \theta \le \hat\theta$, so

$$\|\mathrm{symm}(P(\theta))\|_F \le \beta(1-\theta)\tau, \qquad \theta \in [0, \hat\theta].$$

By (74), with $M = Z^{1/2}$ and $E = X(\theta)Z(\theta) - (1-\theta)\tau I$,

$$\|X(\theta)Z(\theta) - (1-\theta)\tau I\|_F \le \beta(1-\theta)\tau, \qquad \theta \in [0, \hat\theta]. \qquad (77)$$

Note that this implies that $X(1)Z(1) = 0$ when $\hat\theta = 1$. From this result, if $Z(\theta)^{1/2}$ exists for $\theta \in [0, \hat\theta]$, then, since $\|\cdot\|_F$ is invariant under this similarity transformation, (77) implies

$$\|Z(\theta)^{1/2} X(\theta) Z(\theta)^{1/2} - (1-\theta)\tau I\|_F \le \beta(1-\theta)\tau, \qquad \theta \in [0, \hat\theta]. \qquad (78)$$

To conclude, we show $X(\theta) \succ 0$ and $Z(\theta) \succ 0$ for $\theta \in [0, \hat\theta]$ when $\hat\theta < 1$. By continuity, (78) then holds for $\theta = 1$, too. Otherwise, there must exist $\theta' \in [0, \hat\theta]$ such that $X(\theta')Z(\theta')$ is singular, which implies that

$$\lambda_{\min}\big(X(\theta')Z(\theta') - (1-\theta')\tau I\big) \le -(1-\theta')\tau. \qquad (79)$$

However, by (73) with $M = Z^{1/2}$ and $E = X(\theta')Z(\theta') - (1-\theta')\tau I$, and by (71),

$$\lambda_{\min}\big(X(\theta')Z(\theta') - (1-\theta')\tau I\big) \ge \lambda_{\min}\big(\mathrm{symm}(P(\theta'))\big) \ge -\|\mathrm{symm}(P(\theta'))\|_F \ge -\beta(1-\theta')\tau,$$

which contradicts (79) since $\beta \in (0,1)$. Hence, $X(\theta) \succ 0$ and $Z(\theta) \succ 0$ for $\theta \in [0, \hat\theta]$. $\square$

To prepare for the proof that the corrector step stays in a neighborhood of the central path, we need two technical lemmas.

Lemma A.2 (similar to [24, Lemma 4.4, p. 671]). For $(X, Z) \in N(\gamma, \tau)$ and $(\Delta X', \Delta y', \Delta Z')$ such that

$$A_i \bullet \Delta X' = 0 \ \text{ for } i = 1,\dots,m, \qquad (80)$$
$$\sum_{i=1}^m \Delta y'_i A_i + \Delta Z' = 0, \qquad (81)$$

define

$$H = \mathrm{symm}\big(Z^{1/2}(X\Delta Z' + \Delta X' Z)Z^{-1/2}\big), \qquad \delta'_x = \|Z^{1/2}\Delta X' Z^{1/2}\|_F, \qquad \delta'_z = \tau\|Z^{-1/2}\Delta Z' Z^{-1/2}\|_F.$$

Then

$$\delta'_x\,\delta'_z \le \frac{1}{2}\big(\delta'^2_x + \delta'^2_z\big) \le \frac{\|H\|_F^2}{2(1-\gamma)^2}, \qquad (82)$$
$$\delta'_x \le \frac{\|H\|_F}{1-\gamma}, \qquad (83)$$
$$\delta'_z \le \frac{\|H\|_F}{1-\gamma}. \qquad (84)$$

Proof. See Monteiro [24, Lemma 4.4], in which the roles of $X$ and $Z$ in $H$ are switched. $\square$

Lemma A.3. Under Condition 2.2, $\bar\delta_\epsilon < (1-\theta)(1-2\beta)$.

Proof. Recall that $s = \beta^2 - \beta + 1$ and $t = 2\alpha(1-\beta)^2 - \beta^2$ by their definitions (48) in Condition 2.2. By Condition 2.2, it suffices to show

$$\sqrt{s^2 + t} - s < 1 - 2\beta,$$

or equivalently, since $0 < \beta < 1/2$ and $s > 0$,

$$\big(s + 1 - 2\beta\big)^2 - \big(s^2 + t\big) > 0.$$

By (41), we have

$$\big(s + 1 - 2\beta\big)^2 - \big(s^2 + t\big) = (1-2\beta)^2 + 2s(1-2\beta) - t$$
$$= (1-2\beta)^2 + 2(\beta^2 - \beta + 1)(1-2\beta) - 2\alpha(1-\beta)^2 + \beta^2$$
$$> (1-2\beta)^2 + 2(\beta^2 - \beta + 1)(1-2\beta) - 2\beta(1-\beta)^2 + \beta^2$$
$$= -6\beta^3 + 15\beta^2 - 12\beta + 3 = 3(1-2\beta)(\beta-1)^2 > 0, \qquad \beta \in (0, 1/2).$$

So $\bar\delta_\epsilon < (1-\theta)(1-2\beta)$. $\square$

We are now ready to establish Lemma 3.3.

Proof (of Lemma 3.3).

$$X^+ Z^+ - (1-\theta)\tau I = (\bar X + \Delta\bar X)(\bar Z + \Delta\bar Z) - (1-\theta)\tau I = \bar X\bar Z - (1-\theta)\tau I + \bar X\Delta\bar Z + \Delta\bar X\bar Z + \Delta\bar X\Delta\bar Z.$$

Since $\theta < 1$ due to step 3e in Algorithm SDP:Reduced, we know that $\bar X \succ 0$ and $\bar Z \succ 0$ by Lemma 3.2. Thus, we can define

$$P = \bar Z^{1/2}\big(X^+ Z^+ - (1-\theta)\tau I\big)\bar Z^{-1/2} = \big[\bar Z^{1/2}\bar X\bar Z^{1/2} - (1-\theta)\tau I\big] + \bar Z^{1/2}(\bar X\Delta\bar Z + \Delta\bar X\bar Z)\bar Z^{-1/2} + \bar Z^{1/2}\Delta\bar X\Delta\bar Z\bar Z^{-1/2}.$$

By (33C), we have

$$\mathrm{symm}(P) = \mathrm{symm}\big(\bar Z^{1/2}\Delta\bar X\Delta\bar Z\bar Z^{-1/2}\big) - \bar Z^{1/2}\Delta\bar X_\epsilon\bar Z^{1/2}. \qquad (85)$$

Since the corrector step satisfies (31C)-(32C) and $(\bar X, \bar Z) \in N(\beta, (1-\theta)\tau)$, we can apply Lemma A.2 to $\bar X$, $\bar Z$, $\Delta\bar X$, and $\Delta\bar Z$. So, with $\gamma = \beta$, replacing $\tau$ with $(1-\theta)\tau$ and $(X, Z, \Delta X', \Delta Z')$ with $(\bar X, \bar Z, \Delta\bar X, \Delta\bar Z)$, the inequality (82) divided by $\tau$ becomes

$$\|\bar Z^{1/2}\Delta\bar X\bar Z^{1/2}\|_F\,\|\bar Z^{-1/2}\Delta\bar Z\bar Z^{-1/2}\|_F \le \frac{\|H\|_F^2}{2(1-\beta)^2(1-\theta)\tau}, \qquad (86)$$

where $H = \mathrm{symm}\big(\bar Z^{1/2}(\bar X\Delta\bar Z + \Delta\bar X\bar Z)\bar Z^{-1/2}\big)$. In addition, by (33C),

$$\|H\|_F = \big\|(1-\theta)\tau I - \bar Z^{1/2}\bar X\bar Z^{1/2} - \bar Z^{1/2}\Delta\bar X_\epsilon\bar Z^{1/2}\big\|_F \le \big\|\bar Z^{1/2}\bar X\bar Z^{1/2} - (1-\theta)\tau I\big\|_F + \big\|\bar Z^{1/2}\Delta\bar X_\epsilon\bar Z^{1/2}\big\|_F \le \beta(1-\theta)\tau + \bar\delta_\epsilon\tau, \qquad (87)$$

since $(\bar X, \bar Z) \in N(\beta, (1-\theta)\tau)$. By (86) and (87),

$$\|\bar Z^{1/2}\Delta\bar X\bar Z^{1/2}\|_F\,\|\bar Z^{-1/2}\Delta\bar Z\bar Z^{-1/2}\|_F \le \frac{\big(\beta(1-\theta)\tau + \bar\delta_\epsilon\tau\big)^2}{2(1-\beta)^2(1-\theta)\tau} = \frac{\beta^2}{2(1-\beta)^2}(1-\theta)\tau + \frac{\beta}{(1-\beta)^2}\bar\delta_\epsilon\tau + \frac{\bar\delta_\epsilon^2\,\tau}{2(1-\beta)^2(1-\theta)}. \qquad (88)$$

By Lemma A.2 again, using (84) divided by $\tau$,

$$\|\bar Z^{-1/2}\Delta\bar Z\bar Z^{-1/2}\|_F \le \frac{\|H\|_F}{(1-\beta)(1-\theta)\tau} \le \frac{\beta(1-\theta)\tau + \bar\delta_\epsilon\tau}{(1-\beta)(1-\theta)\tau} \quad \text{by (87)}$$
$$< \frac{\beta}{1-\beta} + \frac{(1-\theta)(1-2\beta)}{(1-\beta)(1-\theta)} = 1 \quad \text{by Lemma A.3}.$$

So, by (71),

$$\lambda_{\min}\big(\bar Z^{-1/2}\Delta\bar Z\bar Z^{-1/2}\big) > -1.$$

This implies that $I + \bar Z^{-1/2}\Delta\bar Z\bar Z^{-1/2} \succ 0$, so

$$Z^+ = \bar Z + \Delta\bar Z = \bar Z^{1/2}\big(I + \bar Z^{-1/2}\Delta\bar Z\bar Z^{-1/2}\big)\bar Z^{1/2} \succ 0.$$

Therefore, $(Z^+)^{1/2}$ exists. By defining

$$E = (Z^+)^{1/2} X^+ (Z^+)^{1/2} - (1-\theta)\tau I, \qquad M = \bar Z^{1/2}(Z^+)^{-1/2},$$

we can see that $P = MEM^{-1}$. Applying (74) with these $E \in S^n$ and $M$, and using Condition 2.2 and (48), we have

$$\big\|(Z^+)^{1/2} X^+ (Z^+)^{1/2} - (1-\theta)\tau I\big\|_F \le \|\mathrm{symm}(P)\|_F = \big\|\mathrm{symm}\big(\bar Z^{1/2}\Delta\bar X\Delta\bar Z\bar Z^{-1/2}\big) - \bar Z^{1/2}\Delta\bar X_\epsilon\bar Z^{1/2}\big\|_F \quad \text{by (85)}$$
$$\le \|\bar Z^{1/2}\Delta\bar X\bar Z^{1/2}\|_F\,\|\bar Z^{-1/2}\Delta\bar Z\bar Z^{-1/2}\|_F + \|\bar Z^{1/2}\Delta\bar X_\epsilon\bar Z^{1/2}\|_F$$
$$\le \frac{\beta^2}{2(1-\beta)^2}(1-\theta)\tau + \Big(\frac{\beta}{(1-\beta)^2} + 1\Big)\bar\delta_\epsilon\tau + \frac{\bar\delta_\epsilon^2\,\tau}{2(1-\beta)^2(1-\theta)} \quad \text{by (88) and (40)}$$
$$= \frac{\tau}{2(1-\beta)^2(1-\theta)}\Big[\beta^2(1-\theta)^2 + 2s(1-\theta)\bar\delta_\epsilon + \bar\delta_\epsilon^2\Big]$$
$$< \frac{\tau}{2(1-\beta)^2(1-\theta)}\Big[\beta^2(1-\theta)^2 + 2s(1-\theta)^2\big(\sqrt{s^2+t} - s\big) + (1-\theta)^2\big(\sqrt{s^2+t} - s\big)^2\Big]$$
$$= \frac{(1-\theta)\tau}{2(1-\beta)^2}\Big[\beta^2 + 2s\sqrt{s^2+t} - 2s^2 + s^2 + t + s^2 - 2s\sqrt{s^2+t}\Big] = \frac{(1-\theta)\tau}{2(1-\beta)^2}\big(\beta^2 + t\big) = \frac{(1-\theta)\tau}{2(1-\beta)^2}\cdot 2(1-\beta)^2\alpha = \alpha(1-\theta)\tau.$$

In addition, this implies, by (71),

$$\lambda_{\min}\big((Z^+)^{1/2} X^+ (Z^+)^{1/2} - (1-\theta)\tau I\big) \ge -\alpha(1-\theta)\tau,$$

so

$$\lambda_{\min}\big((Z^+)^{1/2} X^+ (Z^+)^{1/2}\big) \ge -\alpha(1-\theta)\tau + (1-\theta)\tau = (1-\alpha)(1-\theta)\tau > 0.$$

Therefore, $(Z^+)^{1/2} X^+ (Z^+)^{1/2} \succ 0$, and $X^+ \succ 0$ as well. $\square$

In the following proofs, we frequently use the inequality

$$\|M_1 M_2\|_F \le \min\big(\|M_1\|\,\|M_2\|_F,\ \|M_1\|_F\,\|M_2\|\big), \qquad M_1, M_2 \in \mathbb{R}^{n\times n}; \qquad (89)$$

see Horn and Johnson [15, Exercise 20 in Section 5.6]. In addition, note that the Frobenius norm $\|E\|_F$ for $E \in \mathbb{R}^{n\times n}$ can be alternatively defined as

$$\|E\|_F = \sqrt{\mathrm{tr}(E^T E)}. \qquad (90)$$

Proof (of Lemma 3.6). First, we prove (61). By (90),

$$\|X^{1/2}(Z^0)^{1/2}\|_F^2 = \mathrm{tr}\big((Z^0)^{1/2} X (Z^0)^{1/2}\big) = \mathrm{tr}(XZ^0) \le \mathrm{tr}(XZ^0) + \mathrm{tr}(X^0 Z) \quad \text{since } X^0 \in S^n_+,\ Z \in S^n_+$$
$$= X \bullet Z^0 + X^0 \bullet Z \le n\tau^0\big(2 + \zeta + \alpha/\sqrt n\big) \quad \text{by Lemma 3.5}.$$

In a similar way, (62) can be proved. Next, we prove (63):

$$\|X^{1/2}\|_F = \|X^{1/2}(Z^0)^{1/2}(Z^0)^{-1/2}\|_F \le \|X^{1/2}(Z^0)^{1/2}\|_F\,\|(Z^0)^{-1/2}\| \quad \text{by (89)}$$
$$\le \|(Z^0)^{-1/2}\|\,(n\tau^0)^{1/2}\big(2 + \zeta + \alpha/\sqrt n\big)^{1/2} \quad \text{by (61), proven above}.$$

In a similar way, we can also prove (64). Next, we prove (65). The equality is satisfied since $\sigma_{\max}^2(E) = \sigma_{\max}(E^T E)$ for any matrix $E$. Because $(X, Z) \in N(\alpha, \tau)$,

$$\|Z^{1/2} X Z^{1/2} - \tau I\| \le \|Z^{1/2} X Z^{1/2} - \tau I\|_F \le \alpha\tau, \quad \text{so} \quad \big|\,\|Z^{1/2} X Z^{1/2}\| - \tau\,\big| \le \alpha\tau, \quad \text{hence} \quad \|Z^{1/2} X Z^{1/2}\| \le \tau + \alpha\tau = (1+\alpha)\tau.$$

In a similar way, (66) can be proved. $\square$

Proof (of Lemma 3.7). We will use Lemma A.2 with $(X, y, Z)$ unchanged,

$$(\Delta X', \Delta y', \Delta Z') = \big(\Delta X + \psi(X^0 - X^*),\ \Delta y + \psi(y^0 - y^*),\ \Delta Z + \psi(Z^0 - Z^*)\big),$$

$\gamma = \alpha$, and the current $\tau$. For a predictor direction $(\Delta X, \Delta y, \Delta Z)$, by (7) and (11), and since $X^*$ is feasible,

$$A_i \bullet \Delta X = r_{p_i}, \qquad \psi A_i \bullet X^0 = \psi b_i - \psi r^0_{p_i} = \psi b_i - r_{p_i}, \qquad \psi A_i \bullet X^* = \psi b_i,$$

for $i = 1,\dots,m$. Hence, $A_i \bullet \big(\Delta X + \psi(X^0 - X^*)\big) = 0$, thus satisfying (80).

Also, by (8) and (12),

$$\sum_{i=1}^m \Delta y_i A_i + \Delta Z = R_d, \qquad \psi\Big[\sum_{i=1}^m y^0_i A_i + Z^0\Big] = \psi C - \psi R^0_d = \psi C - R_d, \qquad \psi\Big[\sum_{i=1}^m y^*_i A_i + Z^*\Big] = \psi C.$$

Thus, $\big(\Delta y + \psi(y^0 - y^*),\ \Delta Z + \psi(Z^0 - Z^*)\big)$ satisfies (81). In addition, since $(X, Z) \in N(\alpha, \tau)$, the prerequisites of Lemma A.2 are now verified. Then, using (33P), the matrix $H$ in Lemma A.2 becomes $T$. Therefore, from Lemma A.2, using (83) and (84), we have the following inequalities:

$$\big\|Z^{1/2}\big(\Delta X + \psi(X^0 - X^*)\big)Z^{1/2}\big\|_F \le \frac{\|T\|_F}{1-\alpha}, \qquad \tau\big\|Z^{-1/2}\big(\Delta Z + \psi(Z^0 - Z^*)\big)Z^{-1/2}\big\|_F \le \frac{\|T\|_F}{1-\alpha}.$$

Hence,

$$\delta_x = \|Z^{1/2}\Delta X Z^{1/2}\|_F \le \frac{\|T\|_F}{1-\alpha} + \psi\|Z^{1/2}(X^0 - X^*)Z^{1/2}\|_F = \frac{\|T\|_F}{1-\alpha} + \|T_x\|_F,$$
$$\delta_z = \tau\|Z^{-1/2}\Delta Z Z^{-1/2}\|_F \le \frac{\|T\|_F}{1-\alpha} + \tau\psi\|Z^{-1/2}(Z^0 - Z^*)Z^{-1/2}\|_F = \frac{\|T\|_F}{1-\alpha} + \tau\|T_z\|_F. \qquad \square$$

30 Similarly, T F ψ Z 1/2 X 0 XZ 1/2 F +ψ Z 1/2 XZ 0 ZZ 1/2 F + Z 1/2 X+ X ǫ Z 1/2 F = ψ Z 1/2 X 0 1/2 X 0 1/2 X 0 XX 0 1/2 X 0 1/2 Z 1/2 F +ψ Z 1/2 X 1/2 X 1/2 Z 0 1/2 Z 0 1/2 Z 0 ZZ 0 1/2 Z 0 1/2 X 1/2 X 1/2 Z 1/2 F + Z 1/2 X+ X ǫ Z 1/2 F ψ Z 1/2 X 0 1/2 2 F X0 1/2 X 0 XX 0 1/2 F +ψ Z 1/2 X 1/2 X 1/2 Z 0 1/2 2 F Z0 1/2 Z 0 ZZ 0 1/2 F X 1/2 Z 1/2 + n Z 1/2 XZ 1/2 + Z 1/2 X ǫ Z 1/2 F ψnτ 0 2+ζ +α/ nd 0 +ψnτ 0 2+ζ +α/ 1+α nd 0 1 α + n1+ατ +δ ǫ τ by definition of δ ǫ in 39 nτd 0 2+ζ +α/ n+nτd 0 2+ζ +α/ n [ 2nd0 2+ζ +α/ n τ + ] n1+α +δ ǫ τ 1 α [ 2nd0 2+ζ +α/ n τ 1 α 1+α 1 α + n1+ατ +δ ǫ τ + ] n1+α +δ x q by Condition 2.1. Using 68-70, we can rewrite the bounds on T x F, T z, and T as T x F C x τ, T z F C z, T F C 0 τ +δ x q. By Lemma 3.7 with the bounds on T x F and T F above, we have δ x T x F + T F 1 α C xτ + C 0τ +δ x q 1 α, 1 α δ x C x + C 0 τ since 1 α q > 0 by 42 and α q 1 α In a similar way, by Lemma 3.7 with the bounds on T z F and T F above, we have δ z τ T z F + T F 1 α C zτ + C 0τ +δ x q C z + C 0 τ + q 1 α 1 α 1 α δ x C z + C 0 1 α τ + q 1 α q C x + C 0 1 α τ by the bound of δ x above. By definitions of C x, C z, and C 0, since 0 < α < 1/2, C x + C 0 < C z + C 0, 91 1 α 1 α 30

31 so we have δ z C z + C 0 q τ + 1 α 1 α q 1 α C z + C 0 1 α q 1 α τ. Finally, by 67 and 91, 1 α δ C x + C 0 C z + C 0 1 α q 1 α 1 α + 1 α = C x + C 0 C z + C 0 1 α q 1 α 1 α + q1 α 1 α q 2 C x + C α 1 α 1 α q + q1 α 1 α q 2 C z + C 0 1 α 2, = 1 α 1 α q 2 C z + C 0 1 α and we obtain the final inequality. C x + C 0 τ 1 α 2 q C x + C 0 1 α q 1 α Proof. of Lemma 3.9 By Lemma 3.5, we have ρtrx + trz 2 + ζ + α/ nnτ 0 = 2 + ζ + α/ nnρ 2, so n λ i X+λ i Z 2+ζ +α/ nnρ. From 41, we have α/ n α 1/2, and since X Z = 0, ζ = Z X 0 +X Z 0 /X 0 Z 0 = trx +trz /nρ 1. This implies n X 1/2 2 F + Z1/2 2 F = λ i X+λ i Z 3+α/ nρn 3.5ρn. 92 In addition, we can see that X 0 X ρ and Z 0 Z ρ. By 92 and Lemma 3.6, Z 1/2 X 0 X Z 1/2 Z 1/2 2 F X 0 X 3.5ρ 2 n, 93 Z 1/2 XZ 0 Z Z 1/2 Z 1/2 X 1/2 X 1/2 Z 0 Z X 1/2 X 1/2 Z 1/2 Z 1/2 X 1/2 X 1/2 Z 1/2 X 1/2 2 F Z0 Z 1+α 3.5ρ 2 n 6.1ρ 2 n α 31

By (93), Lemma 3.6, and Lemma 3.7 with $(X^*, y^*, Z^*)$,

$$\|T_x\|_F = \psi\big\|Z^{1/2}(X^0 - X^*)Z^{1/2}\big\|_F \le 3.5\psi\rho^2 n = 3.5n\tau, \qquad (95)$$

$$\tau\|T_z\|_F \le \tau\psi\big\|Z^{-1/2}X^{-1/2}\big\|^2\,\|X^{1/2}\|_F^2\,\|Z^0 - Z^*\| \le \frac{3.5\tau\psi\rho^2 n}{0.5\tau} = 7n\tau. \qquad (96)$$

Similarly, by (93), (94), (65), and (39),

$$\|T\|_F \le \psi\big\|Z^{1/2}(X^0 - X^*)Z^{1/2}\big\|_F + \psi\big\|Z^{1/2}X(Z^0 - Z^*)Z^{-1/2}\big\|_F + \big\|Z^{1/2}XZ^{1/2}\big\|_F + \big\|Z^{1/2}\Delta X_\epsilon Z^{1/2}\big\|_F$$
$$\le 3.5\psi\rho^2 n + 6.1\psi\rho^2 n + 1.5n\tau + \delta_\epsilon\tau \le 11.1n\tau + q\,\delta_x \quad \text{by Condition 2.1}.$$

By the bound on $\delta_x$ in Lemma 3.7,

$$\|T\|_F \le 11.1n\tau + q\Big(\|T_x\|_F + \frac{\|T\|_F}{1-\alpha}\Big), \quad \text{so} \quad \Big(1 - \frac{q}{1-\alpha}\Big)\|T\|_F \le 11.1n\tau + q\|T_x\|_F.$$

Furthermore, by the bound on $\|T_x\|_F$ above, we have

$$\|T\|_F \le \frac{1-\alpha}{1-\alpha-q}\big(11.1 + 3.5q\big)n\tau. \qquad (97)$$

By Lemma 3.7 with (95) and (97), and using $1-\alpha-q \ge 0.5-q$ since $\alpha \le \tfrac12$,

$$\delta_x \le \|T_x\|_F + \frac{\|T\|_F}{1-\alpha} \le 3.5n\tau + \frac{(11.1 + 3.5q)n\tau}{0.5-q} \le \frac{13}{0.5-q}\,n\tau,$$

so, by the definition of $h$,

$$\delta_x \le h\,n\tau. \qquad (98)$$

Similarly, by Lemma 3.7 with (96) and (97),

$$\delta_z \le \tau\|T_z\|_F + \frac{\|T\|_F}{1-\alpha} \le 7n\tau + \frac{(11.1 + 3.5q)n\tau}{0.5-q} \le \Big(\frac{13}{0.5-q} + 3.5\Big)n\tau,$$

so, by the definition of $h$, $\delta_z \le (h + 3.5)\,n\tau$. Therefore, by (67) and (98),

$$\delta \le \frac{\delta_x\delta_z}{\tau^2} \le \frac{(hn\tau)\big((h+3.5)n\tau\big)}{\tau^2} = h(h+3.5)n^2. \qquad (99)$$

By Condition 2.1 and (98),

$$\delta_\epsilon \le \frac{q}{\tau}\,\delta_x \le \frac{q}{\tau}\,hn\tau = qnh. \qquad (100)$$

By the definition of $\hat\theta$ in (43) and the fact that $\sqrt{x+y} \le \sqrt x + \sqrt y$,

$$\hat\theta = \frac{2(\beta-\alpha)}{\sqrt{(\beta-\alpha+\delta_\epsilon)^2 + 4\delta(\beta-\alpha)} + \beta-\alpha+\delta_\epsilon} = \frac{2}{\sqrt{\Big(1 + \frac{\delta_\epsilon}{\beta-\alpha}\Big)^2 + \frac{4\delta}{\beta-\alpha}} + 1 + \frac{\delta_\epsilon}{\beta-\alpha}} \ge \frac{1}{1 + \frac{\delta_\epsilon}{\beta-\alpha} + \sqrt{\frac{\delta}{\beta-\alpha}}}.$$

Finally, by the bounds on $\delta$ and $\delta_\epsilon$ in (99) and (100), we have

$$\hat\theta \ge \frac{1}{1 + \frac{qnh}{\beta-\alpha} + n\sqrt{\frac{h(h+3.5)}{\beta-\alpha}}} \ge \frac{1}{n\Big(1 + \frac{hq}{\beta-\alpha} + \sqrt{\frac{h(h+3.5)}{\beta-\alpha}}\Big)} = \frac{1}{wn}. \qquad \square$$

References

[1] Farid Alizadeh, Jean-Pierre A. Haeberly, and Michael L. Overton. Primal-dual interior-point methods for semidefinite programming. Technical report, manuscript presented at the Math. Programming Symposium, Ann Arbor, MI.

[2] Farid Alizadeh, Jean-Pierre A. Haeberly, and Michael L. Overton. Primal-dual interior-point methods for semidefinite programming: Convergence rates, stability and numerical results. SIAM J. Opt., 8(3).

[3] Christine Bachoc and Frank Vallentin. New upper bounds for kissing numbers from semidefinite programming. J. Amer. Math. Soc., 21(3).

[4] Stephen Boyd and Lieven Vandenberghe. Convex Optimization. Cambridge Univ. Press, New York, NY.

[5] George B. Dantzig and Yinyu Ye. A build-up interior-point method for linear programming: Affine scaling form. Technical report, Stanford Univ.

[6] Etienne de Klerk. Aspects of Semidefinite Programming: Interior Point Algorithms and Selected Applications. Kluwer Academic Publishers, Norwell, MA.

[7] Etienne de Klerk, Dmitrii V. Pasechnik, and Renata Sotirov. On semidefinite programming relaxations of the traveling salesman problem. SIAM J. Opt., 19(4).

[8] Jack J. Dongarra, Cleve B. Moler, James R. Bunch, and G. W. Stewart. LINPACK Users' Guide. SIAM, Philadelphia, PA.

[9] Katsuki Fujisawa, Masakazu Kojima, and Kazuhide Nakata. Exploiting sparsity in primal-dual interior-point methods for semidefinite programming. Math. Prog., 79(1).

[10] Philip E. Gill, Gene H. Golub, Walter Murray, and Michael A. Saunders. Methods for modifying matrix factorizations. Math. Comp., 28(126).

[11] William Hager. Condition estimates. SIAM J. Sci. Stat. Comput., 5(2).

[12] Christoph Helmberg, Franz Rendl, Robert J. Vanderbei, and Henry Wolkowicz. An interior-point method for semidefinite programming. SIAM J. Opt., 6(2).

[13] Dick den Hertog, Cornelis Roos, and Tamás Terlaky. Adding and deleting constraints in the path-following method for LP. In Advances in Optimization and Approximation, volume 1. Springer, New York.

[14] Nicholas Higham. A survey of condition number estimation for triangular matrices. SIAM Review, 29(4).

[15] Roger A. Horn and Charles R. Johnson. Matrix Analysis. Cambridge Univ. Press, New York, NY.

[16] Benjamin Jansen. Interior Point Techniques in Optimization. Kluwer Academic Publishers, Norwell, MA.

[17] Jun Ji, Florian A. Potra, and Rongqin Sheng. On the local convergence of a predictor-corrector method for semidefinite programming. SIAM J. Opt., 10(1).

[18] Jin Hyuk Jung, Dianne P. O'Leary, and André L. Tits. Adaptive constraint reduction for training support vector machines. Elec. Trans. Numer. Anal., 31.

[19] Jin Hyuk Jung, Dianne P. O'Leary, and André L. Tits. Adaptive constraint reduction for convex quadratic programming. Comput. Optim. Appl., 51(1).

[20] John A. Kaliski and Yinyu Ye. A decomposition variant of the potential reduction algorithm for linear programming. Manage. Sci., 39.

[21] Masakazu Kojima, Masayuki Shida, and Susumu Shindoh. Local convergence of predictor-corrector infeasible-interior-point algorithms for SDPs and SDLCPs. Math. Prog., 80(2).

[22] Masakazu Kojima, Masayuki Shida, and Susumu Shindoh. A predictor-corrector interior-point algorithm for the semidefinite linear complementarity problem using the Alizadeh-Haeberly-Overton search direction. SIAM J. Opt., 9(2).

[23] Masakazu Kojima, Susumu Shindoh, and Shinji Hara. Interior-point methods for the monotone semidefinite linear complementarity problem in symmetric matrices. SIAM J. Opt., 7(1):86-125.

[24] Renato D. C. Monteiro. Primal-dual path-following algorithms for semidefinite programming. SIAM J. Opt., 7(3).

[25] Renato D. C. Monteiro. Polynomial convergence of primal-dual algorithms for semidefinite programming based on Monteiro and Zhang family of directions. SIAM J. Opt., 8(3).

[26] Renato D. C. Monteiro and Yin Zhang. A unified analysis for a class of long-step primal-dual path-following interior point algorithms for semidefinite programming. Math. Prog., 81(3).

[27] Yurii E. Nesterov and Michael J. Todd. Self-scaled barriers and interior-point methods for convex programming. Math. Op. Res., 22(1):1-42.

[28] Yurii E. Nesterov and Michael J. Todd. Primal-dual interior-point methods for self-scaled cones. SIAM J. Opt., 8(2).

[29] Dianne P. O'Leary. Estimating matrix condition numbers. SIAM J. Sci. Stat. Comput., 1(2).

[30] Christopher C. Paige and Michael A. Saunders. Solution of sparse indefinite systems of linear equations. SIAM J. Numer. Anal., 12(4).

[31] Sungwoo Park. Matrix Reduction in Numerical Optimization. PhD thesis, Computer Science Department, Univ. of Maryland, College Park, MD.

[32] Florian A. Potra and Rongqin Sheng. Superlinear convergence of a predictor-corrector method for semidefinite programming without shrinking central path neighborhood. Technical Report 91, Reports on Computational Mathematics, Department of Mathematics, Univ. of Iowa.

[33] Florian A. Potra and Rongqin Sheng. Superlinear convergence of interior-point algorithms for semidefinite programming. J. Opt. Theory Appl., 99(1).

[34] Florian A. Potra and Rongqin Sheng. A superlinearly convergent primal-dual infeasible-interior-point algorithm for semidefinite programming. SIAM J. Opt., 8(4).

[35] Alexander Schrijver. A comparison of the Delsarte and Lovász bounds. IEEE Trans. Inform. Theory, IT-25(4).

[36] G. W. Stewart. Efficient generation of random orthogonal matrices with an application to condition estimators. SIAM J. Numer. Anal., 17(3).

[37] André L. Tits, Pierre-Antoine Absil, and William P. Woessner. Constraint reduction for linear programs with many inequality constraints. SIAM J. Opt., 17(1).

[38] Kim-Chuan Toh, Michael J. Todd, and Reha H. Tütüncü. On the implementation and usage of SDPT3 - a MATLAB software package for semidefinite-quadratic-linear programming, version 4.0. In Miguel F. Anjos and Jean B. Lasserre, editors, Handbook on Semidefinite, Conic and Polynomial Optimization, volume 166. Springer, New York.

[39] Kaoru Tone. An active-set strategy in an interior point method for linear programming. Math. Prog., 59(3).

[40] Jhacova Ashira Williams. The use of preconditioning for training support vector machines. Master's thesis, Applied Mathematics and Scientific Computing Program, Univ. of Maryland, College Park, MD.

[41] Luke B. Winternitz, Stacey O. Nicholls, André L. Tits, and Dianne P. O'Leary. A constraint-reduced variant of Mehrotra's predictor-corrector algorithm. Comput. Optim. Appl., 51(3).

[42] Yin Zhang. On extending some primal-dual interior-point algorithms from linear programming to semidefinite programming. SIAM J. Opt., 8(2).


w Kluwer Academic Publishers Boston/Dordrecht/London HANDBOOK OF SEMIDEFINITE PROGRAMMING Theory, Algorithms, and Applications HANDBOOK OF SEMIDEFINITE PROGRAMMING Theory, Algorithms, and Applications Edited by Henry Wolkowicz Department of Combinatorics and Optimization Faculty of Mathematics University of Waterloo Waterloo,

More information

DEPARTMENT OF MATHEMATICS

DEPARTMENT OF MATHEMATICS A ISRN KTH/OPT SYST/FR 02/12 SE Coden: TRITA/MAT-02-OS12 ISSN 1401-2294 Characterization of the limit point of the central path in semidefinite programming by Göran Sporre and Anders Forsgren Optimization

More information

2 The SDP problem and preliminary discussion

2 The SDP problem and preliminary discussion Int. J. Contemp. Math. Sciences, Vol. 6, 2011, no. 25, 1231-1236 Polynomial Convergence of Predictor-Corrector Algorithms for SDP Based on the KSH Family of Directions Feixiang Chen College of Mathematics

More information

A priori bounds on the condition numbers in interior-point methods

A priori bounds on the condition numbers in interior-point methods A priori bounds on the condition numbers in interior-point methods Florian Jarre, Mathematisches Institut, Heinrich-Heine Universität Düsseldorf, Germany. Abstract Interior-point methods are known to be

More information

Constraint Reduction for Linear Programs with Many Constraints

Constraint Reduction for Linear Programs with Many Constraints Constraint Reduction for Linear Programs with Many Constraints André L. Tits Institute for Systems Research and Department of Electrical and Computer Engineering University of Maryland, College Park Pierre-Antoine

More information

Limiting behavior of the central path in semidefinite optimization

Limiting behavior of the central path in semidefinite optimization Limiting behavior of the central path in semidefinite optimization M. Halická E. de Klerk C. Roos June 11, 2002 Abstract It was recently shown in [4] that, unlike in linear optimization, the central path

More information

A polynomial-time inexact primal-dual infeasible path-following algorithm for convex quadratic SDP

A polynomial-time inexact primal-dual infeasible path-following algorithm for convex quadratic SDP A polynomial-time inexact primal-dual infeasible path-following algorithm for convex quadratic SDP Lu Li, and Kim-Chuan Toh April 6, 2009 Abstract Convex quadratic semidefinite programming (QSDP) has been

More information

Lecture: Algorithms for LP, SOCP and SDP

Lecture: Algorithms for LP, SOCP and SDP 1/53 Lecture: Algorithms for LP, SOCP and SDP Zaiwen Wen Beijing International Center For Mathematical Research Peking University http://bicmr.pku.edu.cn/~wenzw/bigdata2018.html wenzw@pku.edu.cn Acknowledgement:

More information

Advances in Convex Optimization: Theory, Algorithms, and Applications

Advances in Convex Optimization: Theory, Algorithms, and Applications Advances in Convex Optimization: Theory, Algorithms, and Applications Stephen Boyd Electrical Engineering Department Stanford University (joint work with Lieven Vandenberghe, UCLA) ISIT 02 ISIT 02 Lausanne

More information

A semidefinite relaxation scheme for quadratically constrained quadratic problems with an additional linear constraint

A semidefinite relaxation scheme for quadratically constrained quadratic problems with an additional linear constraint Iranian Journal of Operations Research Vol. 2, No. 2, 20, pp. 29-34 A semidefinite relaxation scheme for quadratically constrained quadratic problems with an additional linear constraint M. Salahi Semidefinite

More information

A Constraint-Reduced Variant of Mehrotra s Predictor-Corrector Algorithm

A Constraint-Reduced Variant of Mehrotra s Predictor-Corrector Algorithm A Constraint-Reduced Variant of Mehrotra s Predictor-Corrector Algorithm Luke B. Winternitz, Stacey O. Nicholls, André L. Tits, Dianne P. O Leary September 24, 2007 Abstract Consider linear programs in

More information

A Second Full-Newton Step O(n) Infeasible Interior-Point Algorithm for Linear Optimization

A Second Full-Newton Step O(n) Infeasible Interior-Point Algorithm for Linear Optimization A Second Full-Newton Step On Infeasible Interior-Point Algorithm for Linear Optimization H. Mansouri C. Roos August 1, 005 July 1, 005 Department of Electrical Engineering, Mathematics and Computer Science,

More information

Degeneracy in Maximal Clique Decomposition for Semidefinite Programs

Degeneracy in Maximal Clique Decomposition for Semidefinite Programs MITSUBISHI ELECTRIC RESEARCH LABORATORIES http://www.merl.com Degeneracy in Maximal Clique Decomposition for Semidefinite Programs Raghunathan, A.U.; Knyazev, A. TR2016-040 July 2016 Abstract Exploiting

More information

18. Primal-dual interior-point methods

18. Primal-dual interior-point methods L. Vandenberghe EE236C (Spring 213-14) 18. Primal-dual interior-point methods primal-dual central path equations infeasible primal-dual method primal-dual method for self-dual embedding 18-1 Symmetric

More information

Primal-Dual Geometry of Level Sets and their Explanatory Value of the Practical Performance of Interior-Point Methods for Conic Optimization

Primal-Dual Geometry of Level Sets and their Explanatory Value of the Practical Performance of Interior-Point Methods for Conic Optimization Primal-Dual Geometry of Level Sets and their Explanatory Value of the Practical Performance of Interior-Point Methods for Conic Optimization Robert M. Freund M.I.T. June, 2010 from papers in SIOPT, Mathematics

More information

Lagrange Duality. Daniel P. Palomar. Hong Kong University of Science and Technology (HKUST)

Lagrange Duality. Daniel P. Palomar. Hong Kong University of Science and Technology (HKUST) Lagrange Duality Daniel P. Palomar Hong Kong University of Science and Technology (HKUST) ELEC5470 - Convex Optimization Fall 2017-18, HKUST, Hong Kong Outline of Lecture Lagrangian Dual function Dual

More information

Room 225/CRL, Department of Electrical and Computer Engineering, McMaster University,

Room 225/CRL, Department of Electrical and Computer Engineering, McMaster University, SUPERLINEAR CONVERGENCE OF A SYMMETRIC PRIMAL-DUAL PATH FOLLOWING ALGORITHM FOR SEMIDEFINITE PROGRAMMING ZHI-QUAN LUO, JOS F. STURM y, AND SHUZHONG ZHANG z Abstract. This paper establishes the superlinear

More information

A Generalized Homogeneous and Self-Dual Algorithm. for Linear Programming. February 1994 (revised December 1994)

A Generalized Homogeneous and Self-Dual Algorithm. for Linear Programming. February 1994 (revised December 1994) A Generalized Homogeneous and Self-Dual Algorithm for Linear Programming Xiaojie Xu Yinyu Ye y February 994 (revised December 994) Abstract: A generalized homogeneous and self-dual (HSD) infeasible-interior-point

More information

Enlarging neighborhoods of interior-point algorithms for linear programming via least values of proximity measure functions

Enlarging neighborhoods of interior-point algorithms for linear programming via least values of proximity measure functions Enlarging neighborhoods of interior-point algorithms for linear programming via least values of proximity measure functions Y B Zhao Abstract It is well known that a wide-neighborhood interior-point algorithm

More information

Agenda. Interior Point Methods. 1 Barrier functions. 2 Analytic center. 3 Central path. 4 Barrier method. 5 Primal-dual path following algorithms

Agenda. Interior Point Methods. 1 Barrier functions. 2 Analytic center. 3 Central path. 4 Barrier method. 5 Primal-dual path following algorithms Agenda Interior Point Methods 1 Barrier functions 2 Analytic center 3 Central path 4 Barrier method 5 Primal-dual path following algorithms 6 Nesterov Todd scaling 7 Complexity analysis Interior point

More information

Interior Point Methods: Second-Order Cone Programming and Semidefinite Programming

Interior Point Methods: Second-Order Cone Programming and Semidefinite Programming School of Mathematics T H E U N I V E R S I T Y O H F E D I N B U R G Interior Point Methods: Second-Order Cone Programming and Semidefinite Programming Jacek Gondzio Email: J.Gondzio@ed.ac.uk URL: http://www.maths.ed.ac.uk/~gondzio

More information

A Constraint-Reduced MPC Algorithm for Convex Quadratic Programming, with a Modified Active-Set Identification Scheme

A Constraint-Reduced MPC Algorithm for Convex Quadratic Programming, with a Modified Active-Set Identification Scheme A Constraint-Reduced MPC Algorithm for Convex Quadratic Programming, with a Modified Active-Set Identification Scheme M. Paul Laiu 1 and (presenter) André L. Tits 2 1 Oak Ridge National Laboratory laiump@ornl.gov

More information

Solving large Semidefinite Programs - Part 1 and 2

Solving large Semidefinite Programs - Part 1 and 2 Solving large Semidefinite Programs - Part 1 and 2 Franz Rendl http://www.math.uni-klu.ac.at Alpen-Adria-Universität Klagenfurt Austria F. Rendl, Singapore workshop 2006 p.1/34 Overview Limits of Interior

More information

A SECOND ORDER MEHROTRA-TYPE PREDICTOR-CORRECTOR ALGORITHM FOR SEMIDEFINITE OPTIMIZATION

A SECOND ORDER MEHROTRA-TYPE PREDICTOR-CORRECTOR ALGORITHM FOR SEMIDEFINITE OPTIMIZATION J Syst Sci Complex (01) 5: 1108 111 A SECOND ORDER MEHROTRA-TYPE PREDICTOR-CORRECTOR ALGORITHM FOR SEMIDEFINITE OPTIMIZATION Mingwang ZHANG DOI: 10.1007/s1144-01-0317-9 Received: 3 December 010 / Revised:

More information

Projection methods to solve SDP

Projection methods to solve SDP Projection methods to solve SDP Franz Rendl http://www.math.uni-klu.ac.at Alpen-Adria-Universität Klagenfurt Austria F. Rendl, Oberwolfach Seminar, May 2010 p.1/32 Overview Augmented Primal-Dual Method

More information

A FULL-NEWTON STEP INFEASIBLE-INTERIOR-POINT ALGORITHM COMPLEMENTARITY PROBLEMS

A FULL-NEWTON STEP INFEASIBLE-INTERIOR-POINT ALGORITHM COMPLEMENTARITY PROBLEMS Yugoslav Journal of Operations Research 25 (205), Number, 57 72 DOI: 0.2298/YJOR3055034A A FULL-NEWTON STEP INFEASIBLE-INTERIOR-POINT ALGORITHM FOR P (κ)-horizontal LINEAR COMPLEMENTARITY PROBLEMS Soodabeh

More information

A direct formulation for sparse PCA using semidefinite programming

A direct formulation for sparse PCA using semidefinite programming A direct formulation for sparse PCA using semidefinite programming A. d Aspremont, L. El Ghaoui, M. Jordan, G. Lanckriet ORFE, Princeton University & EECS, U.C. Berkeley Available online at www.princeton.edu/~aspremon

More information

Homework 4. Convex Optimization /36-725

Homework 4. Convex Optimization /36-725 Homework 4 Convex Optimization 10-725/36-725 Due Friday November 4 at 5:30pm submitted to Christoph Dann in Gates 8013 (Remember to a submit separate writeup for each problem, with your name at the top)

More information

CONVERGENCE OF A SHORT-STEP PRIMAL-DUAL ALGORITHM BASED ON THE GAUSS-NEWTON DIRECTION

CONVERGENCE OF A SHORT-STEP PRIMAL-DUAL ALGORITHM BASED ON THE GAUSS-NEWTON DIRECTION CONVERGENCE OF A SHORT-STEP PRIMAL-DUAL ALGORITHM BASED ON THE GAUSS-NEWTON DIRECTION SERGE KRUK AND HENRY WOLKOWICZ Received 3 January 3 and in revised form 9 April 3 We prove the theoretical convergence

More information

A NEW SECOND-ORDER CONE PROGRAMMING RELAXATION FOR MAX-CUT PROBLEMS

A NEW SECOND-ORDER CONE PROGRAMMING RELAXATION FOR MAX-CUT PROBLEMS Journal of the Operations Research Society of Japan 2003, Vol. 46, No. 2, 164-177 2003 The Operations Research Society of Japan A NEW SECOND-ORDER CONE PROGRAMMING RELAXATION FOR MAX-CUT PROBLEMS Masakazu

More information

POLYNOMIAL OPTIMIZATION WITH SUMS-OF-SQUARES INTERPOLANTS

POLYNOMIAL OPTIMIZATION WITH SUMS-OF-SQUARES INTERPOLANTS POLYNOMIAL OPTIMIZATION WITH SUMS-OF-SQUARES INTERPOLANTS Sercan Yıldız syildiz@samsi.info in collaboration with Dávid Papp (NCSU) OPT Transition Workshop May 02, 2017 OUTLINE Polynomial optimization and

More information

A tight iteration-complexity upper bound for the MTY predictor-corrector algorithm via redundant Klee-Minty cubes

A tight iteration-complexity upper bound for the MTY predictor-corrector algorithm via redundant Klee-Minty cubes A tight iteration-complexity upper bound for the MTY predictor-corrector algorithm via redundant Klee-Minty cubes Murat Mut Tamás Terlaky Department of Industrial and Systems Engineering Lehigh University

More information

A direct formulation for sparse PCA using semidefinite programming

A direct formulation for sparse PCA using semidefinite programming A direct formulation for sparse PCA using semidefinite programming A. d Aspremont, L. El Ghaoui, M. Jordan, G. Lanckriet ORFE, Princeton University & EECS, U.C. Berkeley A. d Aspremont, INFORMS, Denver,

More information

Linear-quadratic control problem with a linear term on semiinfinite interval: theory and applications

Linear-quadratic control problem with a linear term on semiinfinite interval: theory and applications Linear-quadratic control problem with a linear term on semiinfinite interval: theory and applications L. Faybusovich T. Mouktonglang Department of Mathematics, University of Notre Dame, Notre Dame, IN

More information

A full-newton step infeasible interior-point algorithm for linear programming based on a kernel function

A full-newton step infeasible interior-point algorithm for linear programming based on a kernel function A full-newton step infeasible interior-point algorithm for linear programming based on a kernel function Zhongyi Liu, Wenyu Sun Abstract This paper proposes an infeasible interior-point algorithm with

More information

15. Conic optimization

15. Conic optimization L. Vandenberghe EE236C (Spring 216) 15. Conic optimization conic linear program examples modeling duality 15-1 Generalized (conic) inequalities Conic inequality: a constraint x K where K is a convex cone

More information

Parallel implementation of primal-dual interior-point methods for semidefinite programs

Parallel implementation of primal-dual interior-point methods for semidefinite programs Parallel implementation of primal-dual interior-point methods for semidefinite programs Masakazu Kojima, Kazuhide Nakata Katsuki Fujisawa and Makoto Yamashita 3rd Annual McMaster Optimization Conference:

More information

Analysis of Block LDL T Factorizations for Symmetric Indefinite Matrices

Analysis of Block LDL T Factorizations for Symmetric Indefinite Matrices Analysis of Block LDL T Factorizations for Symmetric Indefinite Matrices Haw-ren Fang August 24, 2007 Abstract We consider the block LDL T factorizations for symmetric indefinite matrices in the form LBL

More information

Lecture 5. Theorems of Alternatives and Self-Dual Embedding

Lecture 5. Theorems of Alternatives and Self-Dual Embedding IE 8534 1 Lecture 5. Theorems of Alternatives and Self-Dual Embedding IE 8534 2 A system of linear equations may not have a solution. It is well known that either Ax = c has a solution, or A T y = 0, c

More information

Lecture Note 5: Semidefinite Programming for Stability Analysis

Lecture Note 5: Semidefinite Programming for Stability Analysis ECE7850: Hybrid Systems:Theory and Applications Lecture Note 5: Semidefinite Programming for Stability Analysis Wei Zhang Assistant Professor Department of Electrical and Computer Engineering Ohio State

More information

SDPARA : SemiDefinite Programming Algorithm PARAllel version

SDPARA : SemiDefinite Programming Algorithm PARAllel version Research Reports on Mathematical and Computing Sciences Series B : Operations Research Department of Mathematical and Computing Sciences Tokyo Institute of Technology 2-12-1 Oh-Okayama, Meguro-ku, Tokyo

More information

12. Interior-point methods

12. Interior-point methods 12. Interior-point methods Convex Optimization Boyd & Vandenberghe inequality constrained minimization logarithmic barrier function and central path barrier method feasibility and phase I methods complexity

More information

Second Order Cone Programming Relaxation of Positive Semidefinite Constraint

Second Order Cone Programming Relaxation of Positive Semidefinite Constraint Research Reports on Mathematical and Computing Sciences Series B : Operations Research Department of Mathematical and Computing Sciences Tokyo Institute of Technology 2-12-1 Oh-Okayama, Meguro-ku, Tokyo

More information

The maximal stable set problem : Copositive programming and Semidefinite Relaxations

The maximal stable set problem : Copositive programming and Semidefinite Relaxations The maximal stable set problem : Copositive programming and Semidefinite Relaxations Kartik Krishnan Department of Mathematical Sciences Rensselaer Polytechnic Institute Troy, NY 12180 USA kartis@rpi.edu

More information

The Simplest Semidefinite Programs are Trivial

The Simplest Semidefinite Programs are Trivial The Simplest Semidefinite Programs are Trivial Robert J. Vanderbei Bing Yang Program in Statistics & Operations Research Princeton University Princeton, NJ 08544 January 10, 1994 Technical Report SOR-93-12

More information

Key words. conjugate gradients, normwise backward error, incremental norm estimation.

Key words. conjugate gradients, normwise backward error, incremental norm estimation. Proceedings of ALGORITMY 2016 pp. 323 332 ON ERROR ESTIMATION IN THE CONJUGATE GRADIENT METHOD: NORMWISE BACKWARD ERROR PETR TICHÝ Abstract. Using an idea of Duff and Vömel [BIT, 42 (2002), pp. 300 322

More information

An Interior-Point Method for Approximate Positive Semidefinite Completions*

An Interior-Point Method for Approximate Positive Semidefinite Completions* Computational Optimization and Applications 9, 175 190 (1998) c 1998 Kluwer Academic Publishers. Manufactured in The Netherlands. An Interior-Point Method for Approximate Positive Semidefinite Completions*

More information

LAPACK-Style Codes for Pivoted Cholesky and QR Updating

LAPACK-Style Codes for Pivoted Cholesky and QR Updating LAPACK-Style Codes for Pivoted Cholesky and QR Updating Sven Hammarling 1, Nicholas J. Higham 2, and Craig Lucas 3 1 NAG Ltd.,Wilkinson House, Jordan Hill Road, Oxford, OX2 8DR, England, sven@nag.co.uk,

More information

New stopping criteria for detecting infeasibility in conic optimization

New stopping criteria for detecting infeasibility in conic optimization Optimization Letters manuscript No. (will be inserted by the editor) New stopping criteria for detecting infeasibility in conic optimization Imre Pólik Tamás Terlaky Received: March 21, 2008/ Accepted:

More information

LAPACK-Style Codes for Pivoted Cholesky and QR Updating. Hammarling, Sven and Higham, Nicholas J. and Lucas, Craig. MIMS EPrint: 2006.

LAPACK-Style Codes for Pivoted Cholesky and QR Updating. Hammarling, Sven and Higham, Nicholas J. and Lucas, Craig. MIMS EPrint: 2006. LAPACK-Style Codes for Pivoted Cholesky and QR Updating Hammarling, Sven and Higham, Nicholas J. and Lucas, Craig 2007 MIMS EPrint: 2006.385 Manchester Institute for Mathematical Sciences School of Mathematics

More information

On Generalized Primal-Dual Interior-Point Methods with Non-uniform Complementarity Perturbations for Quadratic Programming

On Generalized Primal-Dual Interior-Point Methods with Non-uniform Complementarity Perturbations for Quadratic Programming On Generalized Primal-Dual Interior-Point Methods with Non-uniform Complementarity Perturbations for Quadratic Programming Altuğ Bitlislioğlu and Colin N. Jones Abstract This technical note discusses convergence

More information

A SUFFICIENTLY EXACT INEXACT NEWTON STEP BASED ON REUSING MATRIX INFORMATION

A SUFFICIENTLY EXACT INEXACT NEWTON STEP BASED ON REUSING MATRIX INFORMATION A SUFFICIENTLY EXACT INEXACT NEWTON STEP BASED ON REUSING MATRIX INFORMATION Anders FORSGREN Technical Report TRITA-MAT-2009-OS7 Department of Mathematics Royal Institute of Technology November 2009 Abstract

More information

A new primal-dual path-following method for convex quadratic programming

A new primal-dual path-following method for convex quadratic programming Volume 5, N., pp. 97 0, 006 Copyright 006 SBMAC ISSN 00-805 www.scielo.br/cam A new primal-dual path-following method for convex quadratic programming MOHAMED ACHACHE Département de Mathématiques, Faculté

More information

POSITIVE SEMIDEFINITE INTERVALS FOR MATRIX PENCILS

POSITIVE SEMIDEFINITE INTERVALS FOR MATRIX PENCILS POSITIVE SEMIDEFINITE INTERVALS FOR MATRIX PENCILS RICHARD J. CARON, HUIMING SONG, AND TIM TRAYNOR Abstract. Let A and E be real symmetric matrices. In this paper we are concerned with the determination

More information

c 2000 Society for Industrial and Applied Mathematics

c 2000 Society for Industrial and Applied Mathematics SIAM J. OPIM. Vol. 10, No. 3, pp. 750 778 c 2000 Society for Industrial and Applied Mathematics CONES OF MARICES AND SUCCESSIVE CONVEX RELAXAIONS OF NONCONVEX SES MASAKAZU KOJIMA AND LEVEN UNÇEL Abstract.

More information

Dimension reduction for semidefinite programming

Dimension reduction for semidefinite programming 1 / 22 Dimension reduction for semidefinite programming Pablo A. Parrilo Laboratory for Information and Decision Systems Electrical Engineering and Computer Science Massachusetts Institute of Technology

More information

Using Schur Complement Theorem to prove convexity of some SOC-functions

Using Schur Complement Theorem to prove convexity of some SOC-functions Journal of Nonlinear and Convex Analysis, vol. 13, no. 3, pp. 41-431, 01 Using Schur Complement Theorem to prove convexity of some SOC-functions Jein-Shan Chen 1 Department of Mathematics National Taiwan

More information

Research Reports on Mathematical and Computing Sciences

Research Reports on Mathematical and Computing Sciences ISSN 1342-2804 Research Reports on Mathematical and Computing Sciences Correlative sparsity in primal-dual interior-point methods for LP, SDP and SOCP Kazuhiro Kobayashi, Sunyoung Kim and Masakazu Kojima

More information

Linear Algebra and its Applications

Linear Algebra and its Applications Linear Algebra and its Applications 433 (2010) 1101 1109 Contents lists available at ScienceDirect Linear Algebra and its Applications journal homepage: www.elsevier.com/locate/laa Minimal condition number

More information

Convex Quadratic Approximation

Convex Quadratic Approximation Convex Quadratic Approximation J. Ben Rosen 1 and Roummel F. Marcia 2 Abstract. For some applications it is desired to approximate a set of m data points in IR n with a convex quadratic function. Furthermore,

More information

A study of search directions in primal-dual interior-point methods for semidefinite programming

A study of search directions in primal-dual interior-point methods for semidefinite programming A study of search directions in primal-dual interior-point methods for semidefinite programming M. J. Todd February 23, 1999 School of Operations Research and Industrial Engineering, Cornell University,

More information

Lecture 17: Primal-dual interior-point methods part II

Lecture 17: Primal-dual interior-point methods part II 10-725/36-725: Convex Optimization Spring 2015 Lecture 17: Primal-dual interior-point methods part II Lecturer: Javier Pena Scribes: Pinchao Zhang, Wei Ma Note: LaTeX template courtesy of UC Berkeley EECS

More information

The Trust Region Subproblem with Non-Intersecting Linear Constraints

The Trust Region Subproblem with Non-Intersecting Linear Constraints The Trust Region Subproblem with Non-Intersecting Linear Constraints Samuel Burer Boshi Yang February 21, 2013 Abstract This paper studies an extended trust region subproblem (etrs in which the trust region

More information

Identifying Redundant Linear Constraints in Systems of Linear Matrix. Inequality Constraints. Shafiu Jibrin

Identifying Redundant Linear Constraints in Systems of Linear Matrix. Inequality Constraints. Shafiu Jibrin Identifying Redundant Linear Constraints in Systems of Linear Matrix Inequality Constraints Shafiu Jibrin (shafiu.jibrin@nau.edu) Department of Mathematics and Statistics Northern Arizona University, Flagstaff

More information

Implementation and Evaluation of SDPA 6.0 (SemiDefinite Programming Algorithm 6.0)

Implementation and Evaluation of SDPA 6.0 (SemiDefinite Programming Algorithm 6.0) Research Reports on Mathematical and Computing Sciences Series B : Operations Research Department of Mathematical and Computing Sciences Tokyo Institute of Technology 2-12-1 Oh-Okayama, Meguro-ku, Tokyo

More information

Lecture: Introduction to LP, SDP and SOCP

Lecture: Introduction to LP, SDP and SOCP Lecture: Introduction to LP, SDP and SOCP Zaiwen Wen Beijing International Center For Mathematical Research Peking University http://bicmr.pku.edu.cn/~wenzw/bigdata2015.html wenzw@pku.edu.cn Acknowledgement:

More information

Positive Semidefinite Matrix Completions on Chordal Graphs and Constraint Nondegeneracy in Semidefinite Programming

Positive Semidefinite Matrix Completions on Chordal Graphs and Constraint Nondegeneracy in Semidefinite Programming Positive Semidefinite Matrix Completions on Chordal Graphs and Constraint Nondegeneracy in Semidefinite Programming Houduo Qi February 1, 008 and Revised October 8, 008 Abstract Let G = (V, E) be a graph

More information

A Full Newton Step Infeasible Interior Point Algorithm for Linear Optimization

A Full Newton Step Infeasible Interior Point Algorithm for Linear Optimization A Full Newton Step Infeasible Interior Point Algorithm for Linear Optimization Kees Roos e-mail: C.Roos@tudelft.nl URL: http://www.isa.ewi.tudelft.nl/ roos 37th Annual Iranian Mathematics Conference Tabriz,

More information

Sparse Optimization Lecture: Basic Sparse Optimization Models

Sparse Optimization Lecture: Basic Sparse Optimization Models Sparse Optimization Lecture: Basic Sparse Optimization Models Instructor: Wotao Yin July 2013 online discussions on piazza.com Those who complete this lecture will know basic l 1, l 2,1, and nuclear-norm

More information

Inexact primal-dual path-following algorithms for a special class of convex quadratic SDP and related problems

Inexact primal-dual path-following algorithms for a special class of convex quadratic SDP and related problems Inexact primal-dual path-following algorithms for a special class of convex quadratic SDP and related problems K. C. Toh, R. H. Tütüncü, and M. J. Todd May 26, 2006 Dedicated to Masakazu Kojima on the

More information

Croatian Operational Research Review (CRORR), Vol. 3, 2012

Croatian Operational Research Review (CRORR), Vol. 3, 2012 126 127 128 129 130 131 132 133 REFERENCES [BM03] S. Burer and R.D.C. Monteiro (2003), A nonlinear programming algorithm for solving semidefinite programs via low-rank factorization, Mathematical Programming

More information

Lecture: Examples of LP, SOCP and SDP

Lecture: Examples of LP, SOCP and SDP 1/34 Lecture: Examples of LP, SOCP and SDP Zaiwen Wen Beijing International Center For Mathematical Research Peking University http://bicmr.pku.edu.cn/~wenzw/bigdata2018.html wenzw@pku.edu.cn Acknowledgement:

More information

Sparsity-Preserving Difference of Positive Semidefinite Matrix Representation of Indefinite Matrices

Sparsity-Preserving Difference of Positive Semidefinite Matrix Representation of Indefinite Matrices Sparsity-Preserving Difference of Positive Semidefinite Matrix Representation of Indefinite Matrices Jaehyun Park June 1 2016 Abstract We consider the problem of writing an arbitrary symmetric matrix as

More information

Research Reports on Mathematical and Computing Sciences

Research Reports on Mathematical and Computing Sciences ISSN 1342-284 Research Reports on Mathematical and Computing Sciences Exploiting Sparsity in Linear and Nonlinear Matrix Inequalities via Positive Semidefinite Matrix Completion Sunyoung Kim, Masakazu

More information

A PRIMAL-DUAL INTERIOR POINT ALGORITHM FOR CONVEX QUADRATIC PROGRAMS. 1. Introduction Consider the quadratic program (PQ) in standard format:

A PRIMAL-DUAL INTERIOR POINT ALGORITHM FOR CONVEX QUADRATIC PROGRAMS. 1. Introduction Consider the quadratic program (PQ) in standard format: STUDIA UNIV. BABEŞ BOLYAI, INFORMATICA, Volume LVII, Number 1, 01 A PRIMAL-DUAL INTERIOR POINT ALGORITHM FOR CONVEX QUADRATIC PROGRAMS MOHAMED ACHACHE AND MOUFIDA GOUTALI Abstract. In this paper, we propose

More information

On implementing a primal-dual interior-point method for conic quadratic optimization

On implementing a primal-dual interior-point method for conic quadratic optimization On implementing a primal-dual interior-point method for conic quadratic optimization E. D. Andersen, C. Roos, and T. Terlaky December 18, 2000 Abstract Conic quadratic optimization is the problem of minimizing

More information

Lecture 6: Conic Optimization September 8

Lecture 6: Conic Optimization September 8 IE 598: Big Data Optimization Fall 2016 Lecture 6: Conic Optimization September 8 Lecturer: Niao He Scriber: Juan Xu Overview In this lecture, we finish up our previous discussion on optimality conditions

More information