A sensitivity result for quadratic semidefinite programs with an application to a sequential quadratic semidefinite programming algorithm


Volume 31, N. 1, pp. 205-218, 2012. Copyright 2012 SBMAC. ISSN 0101-8205 / ISSN 1807-0302 (Online). www.scielo.br/cam

RODRIGO GARCÉS (1), WALTER GÓMEZ (2) and FLORIAN JARRE (3)

(1) EuroAmerica S.A., Av. Apoquindo 3885, Las Condes, Santiago, Chile
(2) Department of Mathematical Engineering, Universidad de La Frontera, Av. Francisco Salazar 01145, Temuco, Chile
(3) Institut für Mathematik, Universität Düsseldorf, Universitätsstraße 1, D-40225 Düsseldorf, Germany

E-mails: rgarces@euroamerica.cl / wgomez@ufro.cl / jarre@opt.uni-duesseldorf.de

#CAM-362/11. Received: 27/IV/11. Accepted: 24/V/11.

Abstract. In this short note a sensitivity result for quadratic semidefinite programming is presented under a weak form of the second order sufficient condition. Based on this result, the local convergence of a sequential quadratic semidefinite programming algorithm also extends to this weak second order sufficient condition.

Mathematical subject classification: 90C22, 90C30, 90C31, 90C55.

Key words: semidefinite programming, second order sufficient condition, sequential quadratic programming, quadratic semidefinite programming, sensitivity, convergence.

1 Introduction

A sequential semidefinite programming (SSDP) algorithm for solving nonlinear semidefinite programs was proposed in [5, 7]. It is a generalization of the well-known sequential quadratic programming (SQP) method and considers linear semidefinite subproblems that can be solved using standard interior point packages. Note that linear SDP relaxations with a convex quadratic objective

function can be transformed into equivalent linear SDP subproblems. However, as shown in [4], under standard assumptions local superlinear convergence is possible only when the iterates are defined by SDP relaxations with a nonconvex quadratic objective function. Since this class of problems is no longer equivalent to the linear semidefinite programming case, we refer to the algorithm in this note as the Sequential Quadratic Semidefinite Programming (SQSDP) method.

In the papers [5, 7] a proof is given showing local quadratic convergence of the SSDP algorithm to a local minimizer under a strong second order sufficient condition. This condition ensures, in particular, that the quadratic SDP subproblems close to the local minimizer are convex, and therefore reducible to the linear SDP case. However, as pointed out in [4], there are examples of perfectly well-conditioned nonlinear SDP problems that do not satisfy the strong second order sufficient condition used in [5, 7]. These examples do satisfy a weaker second order condition [10] that explicitly takes into account the curvature of the semidefinite cone.

In this short note we study the sensitivity of quadratic semidefinite problems (the subproblems of SQSDP) under the weaker second order condition. Based on this sensitivity result, the fast local convergence of the SQSDP method can also be established under the weaker assumption in [10]; in this case the quadratic SDP subproblems may be nonconvex. The sensitivity results presented in this paper were used in [8] for the study of a local self-concordance property of certain nonconvex quadratic semidefinite programming problems.

2 Notation and preliminaries

By $S^m$ we denote the linear space of $m \times m$ real symmetric matrices. The space $\mathbb{R}^{m \times n}$ is equipped with the inner product
$$A \bullet B := \operatorname{trace}(A^T B) = \sum_{i=1}^{m} \sum_{j=1}^{n} A_{ij} B_{ij}.$$
The corresponding norm is the Frobenius norm, defined by $\|A\|_F = \sqrt{A \bullet A}$.

The negative semidefinite order for $A, B \in S^m$ is defined in the standard form, that is, $A \preceq B$ iff $A - B$ is a negative semidefinite matrix. The order relations $\prec$, $\succeq$ and $\succ$ are defined similarly. By $S^m_+$ we denote the set of positive semidefinite matrices. The following simple lemma is used in the sequel.

Lemma 1 (see [7]). Let $Y, S \in S^m$.

(a) If $Y, S \succeq 0$, then
$$Y S + S Y = 0 \iff Y S = 0. \qquad (1)$$

(b) If $Y + S \succ 0$ and $Y S + S Y = 0$, then $Y, S \succeq 0$.

(c) If $Y + S \succ 0$ and $Y S + S Y = 0$, then for any $\dot Y, \dot S \in S^m$,
$$Y \dot S + \dot Y S = 0 \iff Y \dot S + \dot Y S + \dot S Y + S \dot Y = 0. \qquad (2)$$
Moreover, $Y$ and $S$ have representations of the form
$$Y = U \begin{bmatrix} Y_1 & 0 \\ 0 & 0 \end{bmatrix} U^T, \qquad S = U \begin{bmatrix} 0 & 0 \\ 0 & S_2 \end{bmatrix} U^T,$$
where $U$ is an $m \times m$ orthogonal matrix, $Y_1 \succ 0$ is an $(m-r) \times (m-r)$ diagonal matrix and $S_2 \succ 0$ is an $r \times r$ diagonal matrix, and any matrices $\dot Y, \dot S \in S^m$ satisfying (2) are of the form
$$\dot Y = U \begin{bmatrix} \dot Y_1 & \dot Y_3 \\ \dot Y_3^T & 0 \end{bmatrix} U^T, \qquad \dot S = U \begin{bmatrix} 0 & \dot S_3 \\ \dot S_3^T & \dot S_2 \end{bmatrix} U^T,$$
where
$$Y_1 \dot S_3 + \dot Y_3 S_2 = 0. \qquad (3)$$

Proof. For (a), (c) see [7]. (b) By contradiction, assume that $\lambda$ is a negative eigenvalue of $S$ and $u$ a corresponding eigenvector. The equality $Y S + S Y = 0$ implies
$$u^T Y (S u) + (S u)^T Y u = 0, \quad \text{i.e.,} \quad 2 \lambda\, u^T Y u = 0,$$
so $u^T Y u = 0$, since $\lambda < 0$. Now, using the fact that $Y + S \succ 0$, we have
$$0 < u^T (Y + S) u = u^T Y u + u^T S u = \lambda\, u^T u = \lambda \|u\|^2 < 0,$$
which is a contradiction. Hence $S \succeq 0$. The same argument gives $Y \succeq 0$. □
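The structure asserted in Lemma 1 is easy to illustrate numerically. The following small numpy sketch (our addition, not part of the paper) builds a strictly complementary pair $(Y, S)$ whose eigenvalues are supported on complementary blocks of a common eigenbasis, and checks parts (a) and (b):

```python
import numpy as np

# Illustration of Lemma 1 (our sketch): build Y, S sharing an orthogonal
# eigenbasis Q, with eigenvalue supports that do not overlap, so that
# Y S + S Y = 0 and Y + S is positive definite (strict complementarity).
rng = np.random.default_rng(0)
m, r = 5, 2                                   # r = rank(S), m - r = rank(Y)
Q, _ = np.linalg.qr(rng.standard_normal((m, m)))   # random orthogonal basis
y = np.concatenate([rng.uniform(1, 2, m - r), np.zeros(r)])
s = np.concatenate([np.zeros(m - r), rng.uniform(1, 2, r)])
Y = Q @ np.diag(y) @ Q.T
S = Q @ np.diag(s) @ Q.T

assert np.allclose(Y @ S + S @ Y, 0)          # symmetrized complementarity
assert np.allclose(Y @ S, 0)                  # Lemma 1(a): Y S = 0 as well
assert np.all(np.linalg.eigvalsh(Y + S) > 0)  # Y + S is positive definite
# Lemma 1(b): both matrices are positive semidefinite
assert np.all(np.linalg.eigvalsh(Y) > -1e-12)
assert np.all(np.linalg.eigvalsh(S) > -1e-12)
```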

Remark 1. Due to (3) and the positive definiteness of the diagonal matrices $Y_1$ and $S_2$, it follows that $(\dot Y_3)_{ij} (\dot S_3)_{ij} < 0$ whenever $(\dot Y_3)_{ij} \neq 0$. Hence, if, in addition to (3), also $\dot Y_3 \bullet \dot S_3 = 0$ holds true, then $\dot Y_3 = \dot S_3 = 0$.

In the sequel we refer to the set of symmetric and strictly complementary matrices
$$\mathcal{C} = \{ (Y, S) \in S^m \times S^m \mid Y S + S Y = 0,\ Y + S \succ 0 \}. \qquad (4)$$
As a consequence of Lemma 1(b), the set $\mathcal{C}$ is (not connected, in general, but) contained in $S^m_+ \times S^m_+$. Moreover, Lemma 1(c) implies that the ranks of the matrices $Y$ and $S$ are locally constant on $\mathcal{C}$.

2.1 Nonlinear semidefinite programs

Given a vector $b \in \mathbb{R}^n$ and a matrix-valued function $G : \mathbb{R}^n \to S^m$, we consider problems of the following form:
$$\min_{x \in \mathbb{R}^n}\ b^T x \quad \text{subject to} \quad G(x) \preceq 0. \qquad (5)$$
Here, the function $G$ is at least $C^3$-differentiable. For simplicity of presentation we have chosen a simple form of problem (5); all statements about (5) in this paper can be modified so that they apply to additional nonlinear equality and inequality constraints and to nonlinear objective functions. The notation and assumptions in this subsection are similar to the ones used in [8].

The Lagrangian $L : \mathbb{R}^n \times S^m \to \mathbb{R}$ of (5) is defined as
$$L(x, Y) := b^T x + G(x) \bullet Y. \qquad (6)$$
Its gradient with respect to $x$ is given by
$$g(x, Y) := \nabla_x L(x, Y) = b + \nabla_x (G(x) \bullet Y) \qquad (7)$$
and its Hessian by
$$H(x, Y) := \nabla_x^2 L(x, Y) = \nabla_x^2 (G(x) \bullet Y). \qquad (8)$$
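As a quick sanity check of formula (7) (our addition, using the affine constraint map of example (15) below and an arbitrary symmetric multiplier $Y$), the gradient $g(x, Y)$, whose $i$-th component is $b_i + G_i(x) \bullet Y$ with $G_i = D_{x_i} G$, can be compared against finite differences of the Lagrangian:

```python
import numpy as np

# Finite-difference check (our illustration) of formula (7) for an affine G.
def G(x):
    return np.array([[-1.0, 0.0, x[0]],
                     [0.0, -1.0, x[1]],
                     [x[0], x[1], -1.0]])

G1 = np.array([[0.0, 0, 1], [0, 0, 0], [1, 0, 0]])   # D_{x_1} G
G2 = np.array([[0.0, 0, 0], [0, 0, 1], [0, 1, 0]])   # D_{x_2} G

b = np.array([0.3, -0.7])                            # arbitrary objective gradient
Y = np.array([[1.0, 0.5, 1.0],                       # arbitrary symmetric multiplier
              [0.5, 2.0, -1.0],
              [1.0, -1.0, 3.0]])
L = lambda x: b @ x + np.sum(G(x) * Y)               # L(x, Y) = b^T x + G(x) . Y

x = np.array([0.2, 0.5])
g = b + np.array([np.sum(G1 * Y), np.sum(G2 * Y)])   # formula (7)
eps = 1e-6
g_fd = np.array([(L(x + eps * e) - L(x - eps * e)) / (2 * eps) for e in np.eye(2)])
assert np.allclose(g, g_fd, atol=1e-6)
```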

Assumptions.

(A1) We assume that $\bar x$ is a local minimizer of (5) that satisfies the Mangasarian-Fromovitz constraint qualification, i.e., there exists a vector $\Delta x \neq 0$ such that
$$G(\bar x) + DG(\bar x)[\Delta x] \prec 0,$$
where by definition $DG(x)[s] = \sum_{i=1}^{n} s_i D_{x_i} G(x)$.

Assumption (A1) implies that the first-order optimality conditions are satisfied, i.e., there exist matrices $\bar Y, \bar S \in S^m$ such that
$$G(\bar x) + \bar S = 0, \qquad g(\bar x, \bar Y) = 0, \qquad \bar Y \bar S = 0, \qquad \bar Y, \bar S \succeq 0. \qquad (9)$$
A triple $(\bar x, \bar Y, \bar S)$ satisfying (9) will be called a stationary point of (5). Due to Lemma 1(a), the third equation in (9) can be replaced by $\bar Y \bar S + \bar S \bar Y = 0$. This reformulation does not change the set of stationary points, but it reduces the underlying system of equations (via a symmetrization of $Y S$) in the variables $(x, Y, S)$ to one with the same number of equations and unknowns. This is a useful step in order to apply the implicit function theorem.

(A2) We also assume that $\bar Y$ is unique and that $\bar S, \bar Y$ are strictly complementary, i.e., $(\bar Y, \bar S) \in \mathcal{C}$.

According to Lemma 1(c), there exists an orthogonal matrix $U = [U_1, U_2]$ that simultaneously diagonalizes $\bar Y$ and $\bar S$. Here, $U_2$ has $r := \operatorname{rank}(\bar S)$ columns and $U_1$ has $m - r$ columns. Moreover, the first $m - r$ diagonal entries of $U^T \bar S U$ are zero, and the last $r$ diagonal entries of $U^T \bar Y U$ are zero. In particular, we obtain
$$U_1^T G(\bar x) U_1 = 0 \quad \text{and} \quad U_2^T \bar Y U_2 = 0. \qquad (10)$$
A vector $h \in \mathbb{R}^n$ is called a critical direction at $\bar x$ if $b^T h = 0$ and $h$ is a limit of feasible directions of (5), i.e., if there exist $h_k \in \mathbb{R}^n$ and $\epsilon_k > 0$ with $\lim_k h_k = h$, $\lim_k \epsilon_k = 0$, and $G(\bar x + \epsilon_k h_k) \preceq 0$ for all $k$.

As shown in [1], the cone of critical directions at a strictly complementary local solution $\bar x$ is given by
$$C(\bar x) := \{ h \mid U_1^T DG(\bar x)[h] U_1 = 0 \}. \qquad (11)$$
In the following we state second order sufficient conditions due to [10] that are weaker than the ones used in [5, 7].

(A3) We further assume that $(\bar x, \bar Y)$ satisfies the second order sufficient condition
$$h^T \big( \nabla_x^2 L(\bar x, \bar Y) + \mathcal{H}(\bar x, \bar Y) \big) h > 0 \quad \forall\, h \in C(\bar x) \setminus \{0\}. \qquad (12)$$
Here, $\mathcal{H}(\bar x, \bar Y)$ is a nonnegative matrix related to the curvature of the semidefinite cone at $G(\bar x)$ along the direction $\bar Y$ (see [10]); it is given entrywise by
$$\mathcal{H}_{i,j} := -2\, \bar Y \bullet G_i(\bar x)\, G(\bar x)^\dagger\, G_j(\bar x),$$
where $G_i(\bar x) := DG(\bar x)[e_i]$ with $e_i$ denoting the $i$-th unit vector. Furthermore, $G(\bar x)^\dagger$ denotes the Moore-Penrose pseudoinverse of $G(\bar x)$, i.e., $G(\bar x)^\dagger = \sum_i \lambda_i^{-1} u_i u_i^T$, where the $\lambda_i$ are the nonzero eigenvalues of $G(\bar x)$ and the $u_i$ are corresponding orthonormal eigenvectors.

Remark 2. The Moore-Penrose inverse $M^\dagger$ is a continuous function of $M$ as long as the perturbations of $M$ do not change its rank; see [3].

The curvature term can be rewritten as follows:
$$h^T \mathcal{H}(\bar x, \bar Y) h = \sum_{i,j} h_i h_j \big( -2\, \bar Y \bullet G_i(\bar x) G(\bar x)^\dagger G_j(\bar x) \big)
= -2\, \bar Y \bullet \Big( \sum_{i=1}^{n} h_i G_i(\bar x) \Big) G(\bar x)^\dagger \Big( \sum_{j=1}^{n} h_j G_j(\bar x) \Big)
= -2\, \bar Y \bullet DG(\bar x)[h]\, G(\bar x)^\dagger\, DG(\bar x)[h]. \qquad (13)$$

Note that in the particular case where $G$ is affine, i.e., $G(x) = A(x) + C$ with a linear map $A$ and $C \in S^m$, the curvature term is given by
$$h^T \mathcal{H}(\bar x, \bar Y) h = -2\, \bar Y \bullet \big( A(h) (A(\bar x) + C)^\dagger A(h) \big). \qquad (14)$$
The following very simple example of [4] shows that the classical second order sufficient condition is generally too strong in the case of semidefinite constraints, since it does not exploit the curvature of the non-polyhedral semidefinite cone:
$$\min_{x \in \mathbb{R}^2}\ -x_1^2 - (x_2 + 1)^2 \quad \text{s.t.} \quad \begin{bmatrix} -1 & 0 & x_1 \\ 0 & -1 & x_2 \\ x_1 & x_2 & -1 \end{bmatrix} \preceq 0. \qquad (15)$$
It is a trivial task to check that the constraint $G(x) \preceq 0$ is equivalent to the inequality $x_1^2 + x_2^2 \leq 1$, so that $\bar x = (0, 1)^T$ is the global minimizer of the problem. The first order optimality conditions (9) are satisfied at $\bar x$ with associated multiplier
$$\bar Y = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 2 & 2 \\ 0 & 2 & 2 \end{bmatrix}.$$
The strict complementarity condition also holds true, since
$$\bar Y - G(\bar x) = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 3 & 1 \\ 0 & 1 & 3 \end{bmatrix} \succ 0.$$
The Hessian of the Lagrangian at $(\bar x, \bar Y)$ can be calculated as
$$\nabla_{xx}^2 L(\bar x, \bar Y) = \begin{bmatrix} -2 & 0 \\ 0 & -2 \end{bmatrix}.$$
It is negative definite, and the stronger second order condition is therefore not satisfied.
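These statements are easy to verify numerically. Below is a small numpy check (our addition, not from the paper) of the first order conditions (9) and of strict complementarity for example (15):

```python
import numpy as np

# Numerical check of example (15): f(x) = -x_1^2 - (x_2 + 1)^2, x_bar = (0, 1).
x_bar = np.array([0.0, 1.0])
G_bar = np.array([[-1.0, 0, 0], [0, -1, 1], [0, 1, -1]])   # G(x_bar)
G1 = np.array([[0.0, 0, 1], [0, 0, 0], [1, 0, 0]])         # D_{x_1} G
G2 = np.array([[0.0, 0, 0], [0, 0, 1], [0, 1, 0]])         # D_{x_2} G
Y_bar = np.array([[0.0, 0, 0], [0, 2, 2], [0, 2, 2]])      # multiplier

grad_f = np.array([-2 * x_bar[0], -2 * (x_bar[1] + 1)])    # gradient of f at x_bar
grad_L = grad_f + np.array([np.sum(G1 * Y_bar), np.sum(G2 * Y_bar)])
assert np.allclose(grad_L, 0)                              # g(x_bar, Y_bar) = 0
assert np.allclose(Y_bar @ G_bar, 0)                       # complementarity (S = -G)
assert np.all(np.linalg.eigvalsh(Y_bar) > -1e-12)          # Y_bar >= 0
assert np.all(np.linalg.eigvalsh(G_bar) < 1e-12)           # G(x_bar) <= 0 (feasible)
assert np.all(np.linalg.eigvalsh(Y_bar - G_bar) > 0)       # strict complementarity
```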

In order to calculate the curvature term in (12), let us consider the orthogonal matrix $U$ which simultaneously diagonalizes $\bar Y$ and $G(\bar x)$:
$$U = \begin{bmatrix} 1 & 0 & 0 \\ 0 & \frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} \\ 0 & \frac{1}{\sqrt{2}} & -\frac{1}{\sqrt{2}} \end{bmatrix}.$$
The Moore-Penrose pseudoinverse at $\bar x$ is then given by
$$G(\bar x)^\dagger = -\frac{1}{4} \begin{bmatrix} 4 & 0 & 0 \\ 0 & 1 & -1 \\ 0 & -1 & 1 \end{bmatrix}, \qquad (16)$$
and the matrix associated with the curvature becomes
$$\mathcal{H}(\bar x, \bar Y) = \begin{bmatrix} 4 & 0 \\ 0 & 0 \end{bmatrix}.$$
Finally, every $h \in \mathbb{R}^2$ with $\nabla f(\bar x)^T h = 0$ has the form $h = (h_1, 0)^T$ with $h_1 \in \mathbb{R}$. Therefore the weaker second order sufficient condition holds, i.e.,
$$h^T \big( \nabla_{xx}^2 L(\bar x, \bar Y) + \mathcal{H}(\bar x, \bar Y) \big) h = 2 h_1^2 > 0 \quad \forall\, h \in C(\bar x) \setminus \{0\}.$$
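The pseudoinverse (16) and the curvature matrix can likewise be reproduced with a few lines of numpy (our sketch; it evaluates $\mathcal{H}_{i,j} = -2\, \bar Y \bullet G_i(\bar x) G(\bar x)^\dagger G_j(\bar x)$ directly):

```python
import numpy as np

# Curvature matrix of example (15) via the entrywise formula below (13).
G_bar = np.array([[-1.0, 0, 0], [0, -1, 1], [0, 1, -1]])
G1 = np.array([[0.0, 0, 1], [0, 0, 0], [1, 0, 0]])
G2 = np.array([[0.0, 0, 0], [0, 0, 1], [0, 1, 0]])
Y_bar = np.array([[0.0, 0, 0], [0, 2, 2], [0, 2, 2]])

G_pinv = np.linalg.pinv(G_bar)                      # Moore-Penrose pseudoinverse
assert np.allclose(G_pinv, -0.25 * np.array([[4.0, 0, 0], [0, 1, -1], [0, -1, 1]]))

Gi = [G1, G2]
curv = np.array([[-2 * np.sum(Y_bar * (Gi[i] @ G_pinv @ Gi[j])) for j in range(2)]
                 for i in range(2)])
assert np.allclose(curv, np.diag([4.0, 0.0]))       # the matrix H(x_bar, Y_bar)

hess_L = np.diag([-2.0, -2.0])                      # Hessian of the Lagrangian
h = np.array([1.0, 0.0])                            # a critical direction (h_1, 0)
print(h @ (hess_L + curv) @ h)                      # = 2 h_1^2 > 0: weak SOSC holds
```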

3 Sensitivity result

Let us now consider the following quadratic semidefinite programming problem:
$$\min_{x \in \mathbb{R}^n}\ b^T x + \tfrac{1}{2} x^T H x \quad \text{s.t.} \quad A(x) + C \preceq 0. \qquad (17)$$
Here, $A : \mathbb{R}^n \to S^m$ is a linear function, $b \in \mathbb{R}^n$, $C \in S^m$, and $H \in S^n$. The data of this problem is
$$D := [A, b, C, H]. \qquad (18)$$
In the next theorem we present a sensitivity result for the solutions of (17) when the data $D$ is changed to $D + \Delta D$, where
$$\Delta D := [\Delta A, \Delta b, \Delta C, \Delta H] \qquad (19)$$
is a sufficiently small perturbation. The triple $(\bar x, \bar Y, \bar S) \in \mathbb{R}^n \times S^m \times S^m$ is a stationary point of (17) if
$$A(\bar x) + C + \bar S = 0, \qquad b + H \bar x + A^*(\bar Y) = 0, \qquad \bar Y \bar S + \bar S \bar Y = 0, \qquad \bar Y, \bar S \succeq 0, \qquad (20)$$
where $A^* : S^m \to \mathbb{R}^n$ denotes the adjoint of $A$, i.e., $x^T A^*(Y) = A(x) \bullet Y$ for all $x \in \mathbb{R}^n$ and $Y \in S^m$.

Remark 3. Below we consider tiny perturbations $\Delta D$ such that there is an associated strictly complementary solution $(x, Y, S)(\Delta D)$ of (20). For such $x$ there exists a basis $U_1 = U_1(x)$ and an associated cone of critical directions $C(x)$. The basis $U_1(x)$ is generally not continuous with respect to $x$. However, the above characterization (11) of $C(\bar x)$ under strict complementarity can be stated using any basis of the orthogonal space of $G(\bar x)$. Since such a basis can be locally parameterized in a smooth way over the set $\mathcal{C}$ in (4), it follows that, locally, the map $x \mapsto C(x)$ is a closed point-to-set mapping.

The following is a slight generalization of Theorem 1 in [7].

Theorem 1. Let $(\bar x, \bar Y, \bar S)$ be a stationary point of problem (17) with data $D$ satisfying the assumptions (A1)-(A3). Then, for all sufficiently small perturbations $\Delta D$ as in (19), there exists a locally unique stationary point $(\bar x(D + \Delta D), \bar Y(D + \Delta D), \bar S(D + \Delta D))$ of the perturbed program (17) with data $D + \Delta D$. Moreover, the point $(\bar x(\cdot), \bar Y(\cdot), \bar S(\cdot))$ is a differentiable function of the perturbation (19), and for $\Delta D = 0$ we have $(\bar x(D), \bar Y(D), \bar S(D)) = (\bar x, \bar Y, \bar S)$. The derivative $D_D(\bar x(D), \bar Y(D), \bar S(D))$ evaluated at $(\bar x, \bar Y, \bar S)$ is characterized by the directional derivatives
$$(\dot x, \dot Y, \dot S) := D_D(\bar x(D), \bar Y(D), \bar S(D))[\Delta D]$$
for any $\Delta D$. Here, $(\dot x, \dot Y, \dot S)$ is the unique solution of the system of linear equations
$$A(\dot x) + \dot S = -\Delta C - \Delta A(\bar x), \qquad H \dot x + A^*(\dot Y) = -\Delta b - \Delta H \bar x - \Delta A^*(\bar Y), \qquad \bar Y \dot S + \dot Y \bar S + \dot S \bar Y + \bar S \dot Y = 0 \qquad (21)$$
for the unknowns $\dot x \in \mathbb{R}^n$, $\dot Y, \dot S \in S^m$. Finally, the second-order sufficient condition holds at $(\bar x(D + \Delta D), \bar Y(D + \Delta D))$ whenever $\Delta D$ is sufficiently small.
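To make the content of Theorem 1 concrete, the following sketch (our addition; all variable names are ours) recasts example (15) as a quadratic SDP of the form (17), assembles the linear system (21) in coordinates, verifies that its matrix is nonsingular, and solves it for a small perturbation $\Delta b$:

```python
import numpy as np
from itertools import combinations_with_replacement

# Example (15) in the form (17): minimize b^T x + 0.5 x^T H x s.t. A(x) + C <= 0
# (the constant term of the objective is dropped).
n, m = 2, 3
A_ops = [np.array([[0.0, 0, 1], [0, 0, 0], [1, 0, 0]]),
         np.array([[0.0, 0, 0], [0, 0, 1], [0, 1, 0]])]
A = lambda x: sum(xi * Ai for xi, Ai in zip(x, A_ops))
A_adj = lambda Y: np.array([np.sum(Ai * Y) for Ai in A_ops])   # adjoint A*
H = -2.0 * np.eye(n)
b = np.array([0.0, -2.0])
C = -np.eye(m)

x_bar = np.array([0.0, 1.0])
Y_bar = np.array([[0.0, 0, 0], [0, 2, 2], [0, 2, 2]])
S_bar = np.array([[1.0, 0, 0], [0, 1, -1], [0, -1, 1]])
assert np.allclose(A(x_bar) + C + S_bar, 0)             # first equation of (20)
assert np.allclose(b + H @ x_bar + A_adj(Y_bar), 0)     # second equation of (20)

# coordinates for symmetric matrices: the upper-triangular entries
pairs = list(combinations_with_replacement(range(m), 2))
p = len(pairs)                                           # p = m (m + 1) / 2
def mat(v):
    M = np.zeros((m, m))
    for k, (i, j) in enumerate(pairs):
        M[i, j] = M[j, i] = v[k]
    return M
vec = lambda M: np.array([M[i, j] for (i, j) in pairs])

def lhs(u):   # left-hand side of (21), i.e. the homogeneous system (22)
    xd, Yd, Sd = u[:n], mat(u[n:n + p]), mat(u[n + p:])
    e1 = A(xd) + Sd
    e2 = H @ xd + A_adj(Yd)
    e3 = Y_bar @ Sd + Yd @ S_bar + Sd @ Y_bar + S_bar @ Yd
    return np.concatenate([vec(e1), e2, vec(e3)])

N = n + 2 * p
M_lin = np.column_stack([lhs(e) for e in np.eye(N)])
assert np.linalg.matrix_rank(M_lin) == N     # the linearization is nonsingular

# directional derivative for the perturbation Delta b = (1e-3, 0), all else zero
delta_b = np.array([1e-3, 0.0])
rhs = np.concatenate([np.zeros(p), -delta_b, np.zeros(p)])
u_dot = np.linalg.solve(M_lin, rhs)
print("x_dot =", u_dot[:n])                  # first-order change of the minimizer
```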

This theorem is related to other sensitivity results for semidefinite programming problems (see, for instance, [2, 6, 11]). Local Lipschitz properties under strict complementarity can be found in [9]. In [10] the directional derivative $\dot x$ is given as the solution of a quadratic problem.

Proof. Following the outline in [7], this proof is based on the application of the implicit function theorem to the system of equations (20). In order to apply this result, we show that the matrix of partial derivatives of system (20) with respect to the variables $(x, Y, S)$ is regular. To this end it suffices to prove that the system
$$A(\dot x) + \dot S = 0, \qquad H \dot x + A^*(\dot Y) = 0, \qquad \bar Y \dot S + \dot Y \bar S + \dot S \bar Y + \bar S \dot Y = 0 \qquad (22)$$
only has the trivial solution $\dot x = 0$, $\dot Y = \dot S = 0$.

Let $(\dot x, \dot Y, \dot S) \in \mathbb{R}^n \times S^m \times S^m$ be a solution of (22). Since $\bar Y$ and $\bar S$ are strictly complementary, part (c) of Lemma 1 yields the existence of an orthogonal matrix $U$ such that
$$\bar Y = U \tilde Y U^T, \qquad \bar S = U \tilde S U^T, \qquad (23)$$
where
$$\tilde Y = \begin{bmatrix} \bar Y_1 & 0 \\ 0 & 0 \end{bmatrix}, \qquad \tilde S = \begin{bmatrix} 0 & 0 \\ 0 & \bar S_2 \end{bmatrix}, \qquad (24)$$
with $\bar Y_1, \bar S_2$ diagonal and positive definite. Furthermore, the matrices $\dot Y, \dot S \in S^m$ satisfying (22) fulfill the relations
$$\dot Y = U \check Y U^T, \qquad \dot S = U \check S U^T, \qquad (25)$$
where
$$\check Y = \begin{bmatrix} \dot Y_1 & \dot Y_3 \\ \dot Y_3^T & 0 \end{bmatrix}, \qquad \check S = \begin{bmatrix} 0 & \dot S_3 \\ \dot S_3^T & \dot S_2 \end{bmatrix}, \qquad \text{and} \qquad \dot Y_3 \bar S_2 + \bar Y_1 \dot S_3 = 0. \qquad (26)$$

Using the decomposition given in (10), the first equation of (22), and (25), we have
$$U_1^T A(\dot x) U_1 = -U_1^T U \check S U^T U_1 = 0.$$
It follows that $\dot x \in C(\bar x)$. Now, using (14), (23)-(26), the first equation in (20), and the first equation in (22), we obtain
$$\dot x^T \mathcal{H}(\bar x, \bar Y) \dot x = -2\, \bar Y \bullet A(\dot x) (A(\bar x) + C)^\dagger A(\dot x) = -2\, \bar Y \bullet \dot S (-\bar S)^\dagger \dot S = 2\, \tilde Y \bullet \check S \tilde S^\dagger \check S = -2\, \dot Y_3 \bullet \dot S_3,$$
since $\bar S_2 \succ 0$ and
$$\tilde S^\dagger = \begin{bmatrix} 0 & 0 \\ 0 & \bar S_2^{-1} \end{bmatrix}.$$
In the same way, using the first two relations in (25) and the first two equations of (22), one readily verifies that
$$\dot x^T H \dot x = \langle H \dot x, \dot x \rangle = -\langle A^*(\dot Y), \dot x \rangle = -\dot Y \bullet A(\dot x) = \dot Y \bullet \dot S = 2\, \dot Y_3 \bullet \dot S_3.$$
Consequently,
$$\dot x^T \big( H + \mathcal{H}(\bar x, \bar Y) \big) \dot x = 0. \qquad (27)$$
Since $\dot x \in C(\bar x)$, the second order sufficient condition (12) implies that $\dot x = 0$. Using Remark 1, it follows also that $\dot Y_3 = \dot S_3 = 0$. By the first equation of (22) we obtain
$$\dot S = -A(\dot x) = -A(0) = 0. \qquad (28)$$
Thus it only remains to show that $\dot Y = 0$. In view of (26) we have
$$\check Y = \begin{bmatrix} \dot Y_1 & 0 \\ 0 & 0 \end{bmatrix}. \qquad (29)$$
Now suppose that $\dot Y_1 \neq 0$. Since $\bar Y_1 \succ 0$, it is clear that there exists some $\bar\tau > 0$ such that
$$\bar Y_1 + \tau \dot Y_1 \succ 0 \quad \forall\, \tau \in (0, \bar\tau].$$
If we define $\bar Y^\tau := \bar Y + \tau \dot Y$, it follows that
$$\bar Y^\tau = U (\tilde Y + \tau \check Y) U^T \succeq 0, \qquad \bar Y^\tau \neq \bar Y, \qquad \forall\, \tau \in (0, \bar\tau].$$

Moreover, using (20), (22), and (28), one readily verifies that $(\bar x, \bar Y^\tau, \bar S)$ also satisfies (20) for all $\tau \in (0, \bar\tau]$. This contradicts the assumption that $(\bar x, \bar Y, \bar S)$ is a locally unique stationary point. Hence $\dot Y_1 = 0$ and, by (29), $\dot Y = 0$.

We can now apply the implicit function theorem to the system
$$A(x) + C + S = 0, \qquad H x + b + A^*(Y) = 0, \qquad Y S + S Y = 0. \qquad (30)$$
As we have just seen, the linearization of (30) at the point $(\bar x, \bar Y, \bar S)$ is nonsingular. Therefore the system (30) has a differentiable and locally unique solution $(\bar x(\Delta D), \bar Y(\Delta D), \bar S(\Delta D))$. By the continuity of $\bar Y(\Delta D), \bar S(\Delta D)$ with respect to $\Delta D$, it follows that $\bar Y(\Delta D) + \bar S(\Delta D) \succ 0$ for $\Delta D$ sufficiently small, i.e., $(\bar Y(\Delta D), \bar S(\Delta D)) \in \mathcal{C}$. Consequently, by part (b) of Lemma 1 we have $\bar Y(\Delta D), \bar S(\Delta D) \succeq 0$. This implies that the local solutions of the system (30) are actually stationary points. Note that the dimension of the image space of $\bar S(\Delta D)$ is constant for all $\Delta D$ sufficiently small. According to Remark 2, it holds that $\bar S(\Delta D)^\dagger \to \bar S^\dagger$ as $\Delta D \to 0$.

Finally, we prove that the second-order sufficient condition is invariant under small perturbations $\Delta D$ of the problem data $D$. We just need to show that there exists $\bar\varepsilon > 0$ such that for all $\Delta D$ with $\|\Delta D\| \leq \bar\varepsilon$ it holds that
$$h^T \big( (H + \Delta H) + \mathcal{H}(\bar x(\Delta D), \bar Y(\Delta D)) \big) h > 0 \quad \forall\, h \in C(\bar x(\Delta D)) \setminus \{0\}. \qquad (31)$$
Since $C(\bar x(\Delta D)) \setminus \{0\}$ is a cone, it suffices to consider unit vectors, i.e., $\|h\| = 1$. Assume by contradiction that there exist $\varepsilon_k \to 0$, perturbations $\{\Delta D_k\}$ with $\|\Delta D_k\| \leq \varepsilon_k$, and unit vectors $\{h_k\}$ with $h_k \in C(\bar x(\Delta D_k)) \setminus \{0\}$ such that
$$h_k^T \big( (H + \Delta H_k) + \mathcal{H}(\bar x(\Delta D_k), \bar Y(\Delta D_k)) \big) h_k \leq 0. \qquad (32)$$
We may assume that $h_k$ converges to some $h$ with $\|h\| = 1$ as $k \to \infty$. Since $\Delta D_k \to 0$, we obtain from the already mentioned convergence $\bar S(\Delta D)^\dagger \to \bar S^\dagger$ and simple continuity arguments that
$$0 < h^T H h + h^T \mathcal{H}(\bar x, \bar Y) h \leq 0. \qquad (33)$$
The left inequality of this contradiction follows from the second order sufficient condition, since $h \in C(\bar x(0)) \setminus \{0\}$ due to Remark 3. □

4 Conclusions

The sensitivity result of Theorem 1 was used in [7] to establish local quadratic convergence of the SSDP method. By extending this result to the weaker form of the second order sufficient condition, the analysis in [7] applies in a straightforward way to this more general class of nonlinear semidefinite programs. In fact, the analysis in [7] only used local Lipschitz continuity of the solution with respect to small changes of the data, which is obviously implied by the differentiability established in Theorem 1.

Acknowledgements. The authors thank an anonymous referee for the helpful and constructive suggestions. This work was supported by Conicyt Chile under the grants Fondecyt Nr. 1050696 and Nr. 7070326.

REFERENCES

[1] F. Bonnans and H.C. Ramírez, Strong regularity of semidefinite programming problems. Technical report, DIM-CMM, Universidad de Chile, Santiago, Departamento de Ingeniería Matemática, No. CMM-B-05/06-137 (2005).

[2] F. Bonnans and A. Shapiro, Perturbation Analysis of Optimization Problems. Springer Series in Operations Research (2000).

[3] Å. Björck, Numerical Methods for Least Squares Problems. SIAM, Society for Industrial and Applied Mathematics, Philadelphia (1996).

[4] M. Diehl, F. Jarre and C.H. Vogelbusch, Loss of superlinear convergence for an SQP-type method with conic constraints. SIAM Journal on Optimization (2006), 1201-1210.

[5] B. Fares, D. Noll and P. Apkarian, Robust control via sequential semidefinite programming. SIAM J. Control Optim., 40 (2002), 1791-1820.

[6] R.W. Freund and F. Jarre, A sensitivity result for semidefinite programs. Oper. Res. Lett., 32 (2004), 126-132.

[7] R.W. Freund, F. Jarre and C. Vogelbusch, Nonlinear semidefinite programming: sensitivity, convergence, and an application in passive reduced-order modeling. Math. Program., 109(2-3) (2007), 581-611.

[8] R. Garcés, W. Gómez and F. Jarre, A self-concordance property for nonconvex semidefinite programming. Math. Meth. Oper. Res., 74 (2011), 77-92.

[9] M.V. Nayakkankuppam and M.L. Overton, Conditioning of semidefinite programs. Math. Program., 85 (1999), 525-540.

[10] A. Shapiro, First and second order analysis of nonlinear semidefinite programs. Math. Program., 77 (1997), 301-320.

[11] J.F. Sturm and S. Zhang, On sensitivity of central solutions in semidefinite programming. Math. Program., 90 (2001), 205-227.