A polynomial-time inexact primal-dual infeasible path-following algorithm for convex quadratic SDP


Lu Li* and Kim-Chuan Toh†

April 6, 2009

*Department of Mathematics, National University of Singapore, 2 Science Drive 2, Singapore 117543 (lilu@nus.edu.sg).
†Department of Mathematics, National University of Singapore, 2 Science Drive 2, Singapore 117543 (mattohkc@nus.edu.sg); and Singapore-MIT Alliance, 4 Engineering Drive 3, Singapore 117576.

Abstract. Convex quadratic semidefinite programming (QSDP) has been widely applied in solving engineering and scientific problems such as nearest correlation matrix problems and nearest Euclidean distance matrix problems. In this paper, we study an inexact primal-dual infeasible path-following algorithm for QSDP problems of the form $\min_X \{\frac{1}{2}\langle X, \mathcal{Q}(X)\rangle + \langle C, X\rangle : \mathcal{A}(X) = b,\ X \succeq 0\}$, where $\mathcal{Q}$ is a self-adjoint positive semidefinite linear operator on $S^n$, $b \in \mathbb{R}^m$, and $\mathcal{A}$ is a linear map from $S^n$ to $\mathbb{R}^m$. This algorithm is designed for the purpose of using an iterative solver to compute an approximate search direction at each iteration. It does not require feasibility to be maintained even if some iterates happen to be feasible. By imposing mild conditions on the inexactness of the computed directions, we show that the algorithm can find an $\epsilon$-solution in $O(n^2\ln(1/\epsilon))$ iterations.

Keywords: semidefinite programming, semidefinite least squares, infeasible interior point method, inexact search direction, polynomial complexity

1 Introduction

We consider the following linearly constrained convex quadratic semidefinite programming (QSDP) problem, defined in the vector space $S^n$ of $n \times n$ real symmetric matrices endowed with the inner product $\langle A, B\rangle = A \bullet B = \mathrm{Tr}(AB)$:

\[
(P)\qquad \min\ f(X) := \tfrac{1}{2}\langle X, \mathcal{Q}(X)\rangle + \langle C, X\rangle \quad \text{s.t.}\quad A_i \bullet X = b_i,\ i = 1,\dots,m,\qquad X \succeq 0, \tag{1.1}
\]

where $\mathcal{Q}: S^n \to S^n$ is a given self-adjoint positive semidefinite linear operator. Here, $A_i, C \in S^n$ and $b \in \mathbb{R}^m$ are given data, and $X \succeq 0$ ($X \succ 0$) indicates that $X$ is in $S^n_+$ ($S^n_{++}$), where $S^n_+$ ($S^n_{++}$) denotes the set of positive semidefinite (definite) matrices in $S^n$. In addition, we assume that $\{A_i : i = 1,\dots,m\}$ is linearly independent. The dual problem of (P) is given as follows:

\[
(D)\qquad \max\ -\tfrac{1}{2}\langle X, \mathcal{Q}(X)\rangle + b^T y \quad \text{s.t.}\quad \sum_{i=1}^m y_i A_i + Z = \nabla f(X) = \mathcal{Q}(X) + C,\qquad Z \succeq 0. \tag{1.2}
\]

The problem (P) includes linear SDP as a special case when $\mathcal{Q} = 0$. It also includes the following linearly constrained convex quadratic programming (LCCQP) [8]: $\min\{\frac{1}{2}x^TQx + c^Tx : Ax = b,\ x \in \mathbb{R}^n_+\}$, where $Q$ is a given positive semidefinite matrix. A recent application of QSDP is the nearest correlation matrix problem [3]. QSDP also arises in nearest Euclidean distance matrix problems [1] and other matrix least squares problems [9]. Many problems in metric embeddings, covariance estimation, and molecular conformation can also be formulated as QSDP; see for example [5] and [13].

We use the following notation and terminology. We let $\bar{n} = n^2$. The notation $\mathrm{vec}(X)$ denotes the vector obtained by stacking the columns of $X$ one by one from the first to the last; the inverse of $\mathrm{vec}$ is denoted by $\mathrm{mat}$. The matrix representation of $\mathcal{Q}$ in the standard basis of $S^n$ is the unique matrix $\mathbf{Q} \in S^{\bar{n}}_+$ that satisfies $\mathbf{Q}\,\mathrm{vec}(X) = \mathrm{vec}(\mathcal{Q}(X))$ for all $X \in S^n$. Also, letting $\mathcal{A}^T = [\mathrm{vec}A_1\ \ \mathrm{vec}A_2\ \cdots\ \mathrm{vec}A_m]$, we write the matrix representations of $A_i \bullet X$ ($i = 1,\dots,m$) and $\sum_{i=1}^m y_iA_i$ as $\mathcal{A}(\mathrm{vec}X)$ and $\mathcal{A}^Ty$, respectively. Note that $\mathcal{A}$ has full row rank and hence $\mathcal{A}\mathcal{A}^T$ is nonsingular. The pseudo-inverse of $\mathcal{A}$ is defined as $\mathcal{A}^+ = \mathcal{A}^T(\mathcal{A}\mathcal{A}^T)^{-1}$. We use $\|\cdot\|$ to denote the Frobenius norm of a matrix or the Euclidean norm of a vector, and $\|\cdot\|_2$ to denote the spectral norm of a matrix. For an $n \times n$ matrix $M$, we order its eigenvalues $\lambda_i(M)$ as follows: $\mathrm{Re}\,\lambda_1(M) \le \cdots \le \mathrm{Re}\,\lambda_n(M)$.
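Before turning to the optimality conditions, a concrete instance may help. The following sketch (my own illustration, not code from the paper) casts the nearest correlation matrix problem [3], $\min \frac12\|X - G\|^2$ subject to $\mathrm{diag}(X) = e$, $X \succeq 0$, in the form (P): here $\mathcal{Q}$ is the identity operator, $C = -G$, and $A_i = e_ie_i^T$. All variable names are hypothetical.

```python
# Hypothetical sketch: the nearest correlation matrix problem as a QSDP of
# the form (P), with Q = identity operator, C = -G, A_i = e_i e_i^T, b = e.
import numpy as np

n = 4
rng = np.random.default_rng(0)
G = rng.standard_normal((n, n)); G = 0.5 * (G + G.T)  # given data matrix in S^n

# Matrix representation of the map A: row i equals vec(e_i e_i^T), so that
# A vec(X) = diag(X).  vec stacks columns (column-major / Fortran order).
A = np.zeros((n, n * n))
for i in range(n):
    Ei = np.zeros((n, n)); Ei[i, i] = 1.0
    A[i, :] = Ei.flatten(order="F")

b = np.ones(n)        # constraint diag(X) = e
C = -G                # linear part of f(X) = 0.5*<X, X> - <G, X> + const
Qmat = np.eye(n * n)  # matrix representation of Q (identity operator here)

# A has full row rank, so A A^T is nonsingular and A^+ = A^T (A A^T)^{-1}.
Apinv = A.T @ np.linalg.inv(A @ A.T)
assert np.allclose(A @ Apinv, np.eye(n))
```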

The perturbed Karush-Kuhn-Tucker (KKT) optimality conditions for the problems (P) and (D) are as follows:

\[
\begin{aligned}
-\mathrm{vec}\,\nabla f(X) + \mathcal{A}^T y + \mathrm{vec}\,Z &= 0,\\
\mathcal{A}(\mathrm{vec}\,X) - b &= 0,\\
XZ &= \nu I,
\end{aligned}
\qquad X, Z \succeq 0, \tag{1.3}
\]

where $\nu \ge 0$ is a given parameter that is to be driven to zero explicitly. Note that when $\nu = 0$, (1.3) gives the optimality conditions for (P) and (D). As the system (1.3) has more independent equations than unknowns, due to the fact that $XZ$ is usually nonsymmetric, the last equation $XZ = \nu I$ is usually symmetrized to $H_P(XZ) = \nu I$, where, for a given positive definite matrix $P$, $H_P : \mathbb{R}^{n\times n} \to S^n$ is the symmetrization operator [20] defined by

\[ H_P(M) := \tfrac{1}{2}\big[PMP^{-1} + (PMP^{-1})^T\big]. \]

In [20], $P$ is chosen to be in the class $\mathcal{C}(X,Z) := \{P \in S^n_{++} : PXP \text{ and } P^{-1}ZP^{-1} \text{ commute}\}$. This class includes the common choices $P = Z^{1/2}$, $P = X^{-1/2}$, and $P = W^{-1/2}$, where $W$ is the Nesterov-Todd (NT) scaling matrix satisfying $WZW = X$ [14]. It has been shown in [20] that for $X, Z \in S^n_{++}$ and $P \in \mathcal{C}(X,Z)$, $H_P(XZ) = \nu I$ if and only if $XZ = \nu I$. In this paper, we choose $P$ to be the NT scaling matrix rather than an arbitrary $P \in \mathcal{C}(X,Z)$ as considered in [20]. The main reasons for considering only the NT scaling matrix are that it simplifies the complexity analysis and also gives the best iteration complexity. In addition, it is employed in practical computations since it has certain desirable properties that allow one to design efficient preconditioners for the augmented system (3.4a) for computing search directions; see [16] for details.

Primal-dual path-following interior-point methods (IPM) are known to be the most efficient class of methods for solving linear SDP problems, both in computation [15] and in theoretical complexity [11, 20]. The earliest extension of standard primal-dual path-following algorithms to solve QSDP was done in [1], where at each iteration a linear system of dimension $m+p$ must be solved directly, say by Cholesky decomposition. Here, $p$ is the rank of $\mathbf{Q}$, and $p = \bar{n}$ if $\mathbf{Q}$ is nonsingular. For an ordinary desktop PC, this direct approach can only solve small problems with $n$ less than a hundred, due to the prohibitive computational cost and huge memory requirement when $n$ is large. In recent applications such as the nearest Euclidean distance matrix completion problems arising from molecular conformation or sensor network localization, there is an increasing demand for methods that can handle QSDP where $n$ or $m$ is large. This motivated us to pursue the idea of solving the large linear system inexactly by an iterative solver to overcome the bottleneck just mentioned.
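Returning to the symmetrization above, the following small numerical check (my own, not from the paper) illustrates $H_P$ and the NT scaling $W$: with $P = W^{-1/2}$, $H_P(XZ)$ is symmetric and has the same spectrum as $XZ$. The explicit formula used for $W$ is the standard one, $W = Z^{-1/2}(Z^{1/2}XZ^{1/2})^{1/2}Z^{-1/2}$, which reappears as (4.1) in section 4.

```python
# Illustrative check of H_P and the NT scaling W satisfying W Z W = X.
import numpy as np

def psd_sqrt(M):
    """Symmetric square root of a symmetric positive definite matrix."""
    w, V = np.linalg.eigh(M)
    return (V * np.sqrt(w)) @ V.T

def H(P, M):
    """H_P(M) = 0.5 * (P M P^{-1} + (P M P^{-1})^T)."""
    B = P @ M @ np.linalg.inv(P)
    return 0.5 * (B + B.T)

rng = np.random.default_rng(1)
n = 5
def rand_pd():
    B = rng.standard_normal((n, n)); return B @ B.T + n * np.eye(n)

X, Z = rand_pd(), rand_pd()
Zh = psd_sqrt(Z); Zhinv = np.linalg.inv(Zh)
W = Zhinv @ psd_sqrt(Zh @ X @ Zh) @ Zhinv
assert np.allclose(W @ Z @ W, X)          # NT scaling property W Z W = X

P = np.linalg.inv(psd_sqrt(W))            # P = W^{-1/2}
ev_XZ = np.sort(np.linalg.eigvals(X @ Z).real)
ev_H = np.sort(np.linalg.eigvalsh(H(P, X @ Z)))
assert np.allclose(ev_XZ, ev_H)           # H_P(XZ) shares the spectrum of XZ
```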

Infeasible primal-dual path-following algorithms using inexact search directions have been investigated extensively in LP, linear SDP, and, more generally, monotone linear complementarity problems; see [2], [7], [12] and [8]. For linear SDP, an inexact infeasible interior-point algorithm was introduced by Kojima et al. in [6], wherein the algorithm only allowed inexactness in the component corresponding to the complementarity equation (the third equation in (1.3)). Subsequently, Zhou and Toh [19] developed an infeasible inexact path-following algorithm which allowed inexactness in the primal and dual feasibilities as well as in the complementarity equation. Furthermore, primal and dual feasibilities need not be maintained even if some iterates happen to lie in the feasible region. In [19], it is proved that the algorithm needs at most $O(n^2\ln(1/\epsilon))$ iterations to compute an $\epsilon$-optimal solution.

Our interest in this paper is to extend the inexact primal-dual infeasible path-following algorithm in [19] to the case of QSDP. We will focus on establishing the polynomial iteration complexity of the algorithm. In particular, we show that the algorithm needs at most $O(n^2\ln(1/\epsilon))$ iterations to compute an $\epsilon$-optimal solution for (P) and (D). This complexity result is the same as that established for linear SDP in [19]. The complexity analysis of our proposed algorithm is similar to that for linear SDP in [19], but there is a major difference in that we always have to account for the effect of the quadratic term in the objective function of QSDP. In particular, Lemma 3.4 shows that the complexity bound we obtain depends on $\|\mathbf{Q}\|_2$. We hope that the theoretical framework developed here for QSDP can lead to further development of inexact primal-dual infeasible path-following methods for broader classes of SDP problems, such as those whose objective function $f(X)$ in (P) is convex with a Lipschitz continuous gradient but not necessarily quadratic. We should point out that the numerical implementation and evaluation of our proposed inexact algorithm for QSDP have been thoroughly studied in [16] and [17].

The rest of this paper is organized as follows. In the next section, we define the infeasible central path and its corresponding neighborhood. In addition, we also establish some key lemmas that are needed for the subsequent complexity analysis. In section 3, we discuss the computation of inexact search directions. We also present our inexact primal-dual infeasible path-following algorithm and state a polynomial complexity result for this algorithm. In section 4, we give detailed proofs of the polynomial complexity result. Throughout the paper, we make the following assumption.

Assumption 1. Problems (P) and (D) are strictly feasible. We say that (P) and (D) are (strictly) feasible if there exists $(X, y, Z)$ satisfying the linear constraints in (1.3) and $X, Z \succeq 0$ ($X, Z \succ 0$).

2 An infeasible central path and its neighborhood

Let $(X_0, y_0, Z_0)$ be an initial point such that

\[ X_0 = Z_0 = \rho I, \tag{2.1} \]

where $\rho > 0$ is a given constant. Let $L = \|\mathbf{Q}\|_2$. Note that $L$ is a Lipschitz constant for the gradient of the objective function in (P), i.e.,

\[ \|\nabla f(X) - \nabla f(Y)\| = \|\mathcal{Q}(X) - \mathcal{Q}(Y)\| \le L\|X - Y\|. \tag{2.2} \]

For given positive constants $\gamma_p, \gamma_d$ such that $\gamma_d + L\gamma_p \in (0,1)$, the constant $\rho$ is chosen to be sufficiently large so that for some solution $(X^*, y^*, Z^*)$ of (P) and (D), the following conditions hold:

\[ (1-\gamma_p)X_0 \succeq X^* \succeq 0, \qquad \big(1-(\gamma_d + L\gamma_p)\big)Z_0 \succeq Z^* \succeq 0, \tag{2.3} \]
\[ \mathrm{Tr}(X^*) + \mathrm{Tr}(Z^*) \le n\rho. \tag{2.4} \]

We define

\[ \mu_0 = \langle X_0, Z_0\rangle/n = \rho^2, \tag{2.5} \]
\[ R^p_0 = \mathcal{A}(\mathrm{vec}X_0) - b, \tag{2.6} \]
\[ \mathrm{vec}R^d_0 = -\mathrm{vec}\nabla f(X_0) + \mathcal{A}^Ty_0 + \mathrm{vec}Z_0. \tag{2.7} \]

For $\theta, \nu \in (0,1]$, the following infeasible KKT system has a unique solution under Assumption 1:

\[
-\mathrm{vec}\nabla f(X) + \mathcal{A}^Ty + \mathrm{vec}Z = \theta\,\mathrm{vec}R^d_0,\qquad
\mathcal{A}(\mathrm{vec}X) - b = \theta R^p_0,\qquad
H_P(XZ) = \nu\mu_0 I,\qquad X, Z \succ 0. \tag{2.8}
\]

Define the infeasible central path as

\[ \mathcal{P} = \big\{(\theta, \nu, X, y, Z) \in \mathbb{R}_{++} \times \mathbb{R}_{++} \times S^n_{++} \times \mathbb{R}^m \times S^n_{++}\ \text{such that (2.8) holds}\big\}. \]

The primary idea of a primal-dual infeasible path-following algorithm is to generate a sequence of points $(X_k, y_k, Z_k)$ such that $(\theta_k, \nu_k, X_k, y_k, Z_k) \in \mathcal{P}$ and $(X_k, y_k, Z_k)$ converges to a solution of (P) and (D) as $\theta_k$ and $\nu_k$ are driven to 0. In practice, of course, the points are never exactly on the central path $\mathcal{P}$ but lie in some neighborhood of $\mathcal{P}$.

In our inexact primal-dual infeasible path-following algorithm, we consider the following neighborhood of $\mathcal{P}$. Choosing a constant $\gamma \in (0,1)$ in addition to $\gamma_p$ and $\gamma_d$, we define the neighborhood to be

\[
\mathcal{N} = \left\{ (\theta, \nu, X, y, Z) \in (0,1] \times (0,1] \times S^n_{++} \times \mathbb{R}^m \times S^n_{++} :\ 
\begin{aligned}
&\theta \le \nu,\\
&-\mathrm{vec}\nabla f(X) + \mathcal{A}^Ty + \mathrm{vec}Z = \theta(\mathrm{vec}R^d_0 + \xi^d),\ \ \|\xi^d\| \le \gamma_d\rho,\\
&\mathcal{A}(\mathrm{vec}X) - b = \theta(R^p_0 + \xi^p),\ \ \|\mathcal{A}^+\xi^p\| \le \gamma_p\rho,\\
&(1-\gamma)\nu\mu_0 \le \lambda_{\min}(XZ) \le \lambda_{\max}(XZ) \le (1+\gamma)\nu\mu_0
\end{aligned}\right\}.
\]

Let $\theta_0 = \nu_0 = 1$. It follows from (2.1) that $(\theta_0, \nu_0, X_0, y_0, Z_0) \in \mathcal{N}$. It is easy to show that if $(\theta, \nu, X, y, Z) \in \mathcal{N}$ and $P \in \mathcal{C}(X,Z)$, then $H_P(XZ) = PXZP^{-1}$ is symmetric and has the same set of eigenvalues as $XZ$. From the definition of $\mathcal{N}$, it is then easy to see that

\[ (1-\gamma)\nu\mu_0 I \preceq H_P(XZ) \preceq (1+\gamma)\nu\mu_0 I, \tag{2.9} \]
\[ (1-\gamma)\nu\mu_0 \le \langle X, Z\rangle/n \le (1+\gamma)\nu\mu_0. \tag{2.10} \]

Remark. To maintain $\gamma_d + L\gamma_p < 1$, $\gamma_p$ must be less than $(1-\gamma_d)/L$, which could be close to 0 if $L$ is large. Thus $L = \|\mathbf{Q}\|_2$ restricts the flexibility of the inexact neighborhood $\mathcal{N}$.

Next, we present two lemmas that are needed for the iteration complexity analysis in section 3.

Lemma 2.1. For any $r^p$ and $r^d$ satisfying $\|r^d\| \le \gamma_d\rho$ and $\|\mathcal{A}^+r^p\| \le \gamma_p\rho$, there exists $(\tilde X, \tilde y, \tilde Z)$ that satisfies the following conditions:

\[ -\mathrm{vec}\nabla f(\tilde X) + \mathcal{A}^T\tilde y + \mathrm{vec}\tilde Z = \mathrm{vec}R^d_0 + r^d, \tag{2.11} \]
\[ \mathcal{A}(\mathrm{vec}\tilde X) - b = R^p_0 + r^p, \tag{2.12} \]
\[ (1-\gamma_p)\rho I \preceq \tilde X \preceq (1+\gamma_p)\rho I, \tag{2.13} \]
\[ \big[1-(\gamma_d+L\gamma_p)\big]\rho I \preceq \tilde Z \preceq \big[1+(\gamma_d+L\gamma_p)\big]\rho I. \tag{2.14} \]

Proof. Let

\[ \mathrm{vec}\tilde X = \mathrm{vec}X_0 + \mathcal{A}^+r^p,\qquad \tilde y = y_0,\qquad \mathrm{vec}\tilde Z = \mathrm{vec}Z_0 + r^d + \mathbf{Q}(\mathrm{vec}\tilde X) - \mathbf{Q}(\mathrm{vec}X_0). \]

Then (2.11)-(2.13) are readily shown. To show (2.14), we only need to establish the following inequality, which holds because $\|\mathrm{vec}\tilde X - \mathrm{vec}X_0\| = \|\mathcal{A}^+r^p\| \le \gamma_p\rho$:

\[ \|r^d + \mathbf{Q}(\mathrm{vec}\tilde X) - \mathbf{Q}(\mathrm{vec}X_0)\| \le \|r^d\| + \|\mathbf{Q}(\mathrm{vec}\tilde X - \mathrm{vec}X_0)\| \le \gamma_d\rho + L\gamma_p\rho = (\gamma_d + L\gamma_p)\rho. \]
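The construction in the proof of Lemma 2.1 is easy to verify numerically. The sketch below is my own: it uses a random small instance (without enforcing symmetry, which the two residual identities do not need) purely to confirm (2.11) and (2.12) for the point $(\tilde X, \tilde y, \tilde Z)$ built from $X_0 = Z_0 = \rho I$; all names are hypothetical.

```python
# Numerical check of the auxiliary point in Lemma 2.1: the shifted residual
# identities (2.11)-(2.12) hold exactly by construction.
import numpy as np

rng = np.random.default_rng(2)
n, m, rho = 4, 3, 5.0
nsq = n * n

B = rng.standard_normal((nsq, nsq)); Qmat = B @ B.T       # Q psd (matrix rep.)
A = rng.standard_normal((m, nsq))                         # full row rank w.p. 1
c = rng.standard_normal(nsq)                              # vec(C)

x0 = z0 = (rho * np.eye(n)).flatten(order="F")            # X0 = Z0 = rho*I
y0 = np.zeros(m)
b = rng.standard_normal(m)
Rp0 = A @ x0 - b                                          # (2.6)
Rd0 = -(Qmat @ x0 + c) + A.T @ y0 + z0                    # (2.7)

rp = 0.01 * rng.standard_normal(m)                        # inexactness residuals
rd = 0.01 * rng.standard_normal(nsq)

Apinv = A.T @ np.linalg.solve(A @ A.T, np.eye(m))
xt = x0 + Apinv @ rp                                      # vec(X~)
zt = z0 + rd + Qmat @ (xt - x0)                           # vec(Z~), with y~ = y0

assert np.allclose(-(Qmat @ xt + c) + A.T @ y0 + zt, Rd0 + rd)   # (2.11)
assert np.allclose(A @ xt - b, Rp0 + rp)                         # (2.12)
```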

Lemma 2.2. Given the initial conditions (2.1), (2.3) and (2.4), for any $(\theta,\nu,X,y,Z) \in \mathcal{N}$, we have

\[ \theta\,\mathrm{Tr}(X) \le \frac{6\nu\rho n}{1-(\gamma_d+L\gamma_p)}, \qquad \theta\,\mathrm{Tr}(Z) \le \frac{6\nu\rho n}{1-\gamma_p}. \]

Proof. This proof is adapted from that of Lemma 2 in [19]. For $(\theta,\nu,X,y,Z) \in \mathcal{N}$, we have

\[ -\mathrm{vec}\nabla f(X) + \mathcal{A}^Ty + \mathrm{vec}Z = \theta(\mathrm{vec}R^d_0 + r^d),\quad \|r^d\| \le \gamma_d\rho, \tag{2.15} \]
\[ \mathcal{A}(\mathrm{vec}X) - b = \theta(R^p_0 + r^p),\quad \|\mathcal{A}^+r^p\| \le \gamma_p\rho. \tag{2.16} \]

By Lemma 2.1, there exists $(\tilde X, \tilde y, \tilde Z)$ satisfying conditions (2.11)-(2.14). Also, a solution $(X^*, y^*, Z^*)$ of (P) and (D) satisfies the following equations:

\[ \mathcal{A}(\mathrm{vec}X^*) - b = 0, \qquad -\mathrm{vec}\nabla f(X^*) + \mathcal{A}^Ty^* + \mathrm{vec}Z^* = 0. \]

Let

\[ \hat X = (1-\theta)X^* + \theta\tilde X - X,\qquad \hat y = (1-\theta)y^* + \theta\tilde y - y,\qquad \hat Z = (1-\theta)Z^* + \theta\tilde Z - Z. \]

Then we have $\mathcal{A}(\mathrm{vec}\hat X) = 0$ and $\mathcal{A}^T\hat y + \mathrm{vec}\hat Z = \mathbf{Q}\,\mathrm{vec}\hat X$. Hence $\langle \hat X, \hat Z\rangle = \langle \hat X, \mathcal{Q}(\hat X)\rangle$. Together with the fact that $\mathcal{Q}$ is positive semidefinite, we have

\[
\begin{aligned}
\langle (1-\theta)X^* + \theta\tilde X,\ Z\rangle + \langle X,\ (1-\theta)Z^* + \theta\tilde Z\rangle
&= \langle (1-\theta)X^* + \theta\tilde X,\ (1-\theta)Z^* + \theta\tilde Z\rangle + \langle X, Z\rangle - \langle \hat X, \mathcal{Q}(\hat X)\rangle\\
&\le \langle (1-\theta)X^* + \theta\tilde X,\ (1-\theta)Z^* + \theta\tilde Z\rangle + \langle X, Z\rangle. \tag{2.17}
\end{aligned}
\]

By using (2.4), (2.10), (2.13), (2.14), (2.17), and the facts that $\langle X^*, Z^*\rangle = 0$ and $\langle X^*, Z\rangle, \langle X, Z^*\rangle \ge 0$, we have

\[
\begin{aligned}
\theta\rho\big[(1-(\gamma_d+L\gamma_p))\langle I, X\rangle + (1-\gamma_p)\langle I, Z\rangle\big]
&\le \theta\big(\langle \tilde Z, X\rangle + \langle \tilde X, Z\rangle\big)\\
&\le \langle (1-\theta)X^* + \theta\tilde X,\ Z\rangle + \langle X,\ (1-\theta)Z^* + \theta\tilde Z\rangle\\
&\le \langle (1-\theta)X^* + \theta\tilde X,\ (1-\theta)Z^* + \theta\tilde Z\rangle + \langle X, Z\rangle\\
&\le \theta(1-\theta)\big(\langle X^*, \tilde Z\rangle + \langle \tilde X, Z^*\rangle\big) + \theta^2\langle \tilde X, \tilde Z\rangle + \langle X, Z\rangle\\
&\le \theta(1-\theta)(1+\gamma_d+L\gamma_p)\rho\big(\langle X^*, I\rangle + \langle I, Z^*\rangle\big) + \theta^2(1+\gamma_p)(1+\gamma_d+L\gamma_p)\rho^2 n + (1+\gamma)\nu\mu_0 n\\
&\le 6\nu\rho^2 n.
\end{aligned}
\]

From here, the required results follow.

Remark. The set $\{(X, y, Z) : (\theta, \nu, X, y, Z) \in \mathcal{N}\}$ is bounded if $\theta = \nu$, since from Lemma 2.2 we then have $\|X\| \le \mathrm{Tr}(X) = O(\rho n)$ and $\|Z\| \le \mathrm{Tr}(Z) = O(\rho n)$. Suppose we generate a sequence $\{(\theta_k, \nu_k, X_k, y_k, Z_k)\} \subset \mathcal{N}$ such that $\nu_k \ge \theta_k$ for all $k$, and $1 = \nu_0 \ge \nu_1 \ge \cdots \ge \nu_k \ge \nu_{k+1} \ge \cdots \ge 0$. If $\nu_k \to 0$ as $k \to \infty$, then any limit point of the sequence $\{(X_k, y_k, Z_k)\}$ is a solution of (P) and (D). In particular, if $\theta_k = \nu_k$, then the sequence $\{(X_k, Z_k)\}$ is also bounded.

3 An inexact infeasible interior point algorithm

Let $\eta_1, \eta_2 \in (0,1]$ be given constants such that $\eta_1 \ge \eta_2$. Given a current iterate $(\theta_k, \nu_k, X_k, y_k, Z_k) \in \mathcal{N}$, we want to construct a new iterate which remains in $\mathcal{N}$ with respect to smaller $\theta$ and $\nu$. To this end, we consider the search direction $(\Delta X_k, \Delta y_k, \Delta Z_k)$ determined by the following linear system:

\[
\begin{bmatrix} -\mathbf{Q} & \mathcal{A}^T & I\\ \mathcal{A} & 0 & 0\\ \mathcal{E} & 0 & \mathcal{F} \end{bmatrix}
\begin{bmatrix} \mathrm{vec}\Delta X_k\\ \Delta y_k\\ \mathrm{vec}\Delta Z_k \end{bmatrix}
= \begin{bmatrix} -\eta_1(\mathrm{vec}R^d_k + r^d_k)\\ -\eta_1(R^p_k + r^p_k)\\ \mathrm{vec}R^c_k + r^c_k \end{bmatrix}, \tag{3.1}
\]

where, for $P = W^{-1/2}$ ($W$ is the NT scaling matrix satisfying $WZ_kW = X_k$),

\[ \mathcal{E} = \tfrac{1}{2}\big(P \otimes P^{-1}Z_k + P^{-1}Z_k \otimes P\big), \qquad \mathcal{F} = \tfrac{1}{2}\big(P^{-1} \otimes PX_k + PX_k \otimes P^{-1}\big), \]
\[ \mathrm{vec}R^d_k = -\mathrm{vec}\nabla f(X_k) + \mathcal{A}^Ty_k + \mathrm{vec}Z_k, \qquad R^p_k = \mathcal{A}(\mathrm{vec}X_k) - b, \qquad R^c_k = (1-\eta_2)\nu_k\mu_0 I - H_P(X_kZ_k). \]

The notation $A \otimes B$ refers to the standard Kronecker product. The last equation of (3.1) is equivalent to

\[ H_P\big(X_kZ_k + \Delta X_kZ_k + X_k\Delta Z_k\big) = (1-\eta_2)\nu_k\mu_0 I + \mathrm{mat}\,r^c_k. \tag{3.2} \]

The search direction $(\Delta X_k, \Delta y_k, \Delta Z_k)$ is just an inexact Newton direction for the perturbed KKT system (2.8). On the right-hand side of (3.1), $R^d_k$, $R^p_k$ and $R^c_k$ are the residual components for the infeasibilities and complementarity, whereas the vectors $r^d_k$, $r^p_k$, $r^c_k$ are the residual components accounting for the inexactness in the computed search direction. Let $\{\sigma_k\}_{k=0}^\infty$ be a given sequence in $(0,1]$ such that $\bar\sigma := \sum_{k=0}^\infty \sigma_k < \infty$. We require the residual components for the inexactness in (3.1) to satisfy the following accuracy conditions:

\[ \|\mathcal{A}^+r^p_k\| \le \gamma_p\rho\,\theta_k\sigma_k, \qquad \|r^d_k\| \le \gamma_d\rho\,\theta_k\sigma_k, \qquad \|r^c_k\| \le 0.5(1-\eta_2)\gamma\nu_k\mu_0. \tag{3.3} \]
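As a sanity check on the reconstructed operators $\mathcal{E}$ and $\mathcal{F}$ above, the sketch below (my own) verifies that they indeed linearize the symmetrized complementarity: $\mathcal{E}\,\mathrm{vec}(\Delta X) + \mathcal{F}\,\mathrm{vec}(\Delta Z) = \mathrm{vec}\big(H_P(\Delta X\,Z + X\,\Delta Z)\big)$.

```python
# Check that E vec(DX) + F vec(DZ) = vec(H_P(DX Z + X DZ)) for the NT P.
import numpy as np

def psd_sqrt(M):
    w, V = np.linalg.eigh(M)
    return (V * np.sqrt(w)) @ V.T

def H(P, M):
    B = P @ M @ np.linalg.inv(P)
    return 0.5 * (B + B.T)

rng = np.random.default_rng(3)
n = 4
def rand_sym(): S = rng.standard_normal((n, n)); return 0.5 * (S + S.T)
def rand_pd(): B = rng.standard_normal((n, n)); return B @ B.T + n * np.eye(n)

X, Z, DX, DZ = rand_pd(), rand_pd(), rand_sym(), rand_sym()
Zh = psd_sqrt(Z); Zhinv = np.linalg.inv(Zh)
W = Zhinv @ psd_sqrt(Zh @ X @ Zh) @ Zhinv       # NT scaling, W Z W = X
P = np.linalg.inv(psd_sqrt(W)); Pinv = np.linalg.inv(P)

E = 0.5 * (np.kron(P, Pinv @ Z) + np.kron(Pinv @ Z, P))
F = 0.5 * (np.kron(Pinv, P @ X) + np.kron(P @ X, Pinv))

lhs = E @ DX.flatten(order="F") + F @ DZ.flatten(order="F")
rhs = H(P, DX @ Z + X @ DZ).flatten(order="F")
assert np.allclose(lhs, rhs)
```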

Remark. In practice, we can solve (3.1) by the following procedure:

1. Compute $\Delta y_k$ and $\Delta X_k$ from the augmented system

\[
\begin{bmatrix} -\mathbf{Q} - \mathcal{F}^{-1}\mathcal{E} & \mathcal{A}^T\\ \mathcal{A} & 0 \end{bmatrix}
\begin{bmatrix} \mathrm{vec}\Delta X_k\\ \Delta y_k \end{bmatrix}
= \begin{bmatrix} -\eta_1(\mathrm{vec}R^d_k + r^d_k) - \mathcal{F}^{-1}\mathrm{vec}R^c_k\\ -\eta_1(R^p_k + r^p_k) \end{bmatrix} \tag{3.4a}
\]

with the residual vectors $r^d_k$ and $r^p_k$ satisfying the conditions in (3.3).

2. Compute $\Delta Z_k$ from

\[ \mathrm{vec}\Delta Z_k = -\mathcal{F}^{-1}\mathcal{E}\,\mathrm{vec}\Delta X_k + \mathcal{F}^{-1}\mathrm{vec}R^c_k. \tag{3.4b} \]

Here, we can see that $\Delta Z_k$ is obtained directly from (3.2) with $r^c_k = 0$; thus $r^c_k$ can be ignored in the system (3.1). The dimension of the augmented system (3.4a) is $n^2 + m$, which is typically a large number even for $n = 100$. The computational cost and memory requirement for solving (3.4a) by a direct solver are about $O((n^2+m)^3)$ and $O((n^2+m)^2)$, respectively, which are prohibitively expensive for large scale problems. An iterative solver would not require the storage or manipulation of the full coefficient matrix. However, the disadvantage of using an iterative solver is the need for good preconditioners to accelerate its convergence. In practice, constructing cheap and effective preconditioners can be the most challenging task in the implementation of an inexact interior-point algorithm for solving QSDP; see [16] for details.

After computing the search direction in (3.1), we consider the following trial point for determining the new iterate:

\[
(\theta_k(\alpha), \nu_k(\alpha), X_k(\alpha), y_k(\alpha), Z_k(\alpha)) = \big((1-\alpha\eta_1)\theta_k,\ (1-\alpha\eta_2)\nu_k,\ X_k+\alpha\Delta X_k,\ y_k+\alpha\Delta y_k,\ Z_k+\alpha\Delta Z_k\big),\quad \alpha \in [0,1]. \tag{3.5}
\]

To find the new iterate, we need to choose an appropriate step length $\alpha$ to keep the new iterate in $\mathcal{N}$. The precise choice of $\alpha$ will be discussed shortly. Before that, we present our inexact primal-dual infeasible path-following algorithm.

Algorithm IPC. Let $\theta_0 = \nu_0 = 1$. Choose parameters $\eta_1, \eta_2 \in (0,1]$ with $\eta_1 \ge \eta_2$, and $\gamma, \gamma_p, \gamma_d \in (0,1)$ such that $\gamma_p \le \gamma_d$ and $\gamma_d + L\gamma_p < 1$. Pick a sequence $\{\sigma_k\}_{k=0}^\infty$ in $(0,1]$ such that $\bar\sigma := \sum_{k=0}^\infty\sigma_k < \infty$. Choose $(X_0, y_0, Z_0)$ satisfying (2.1), (2.3), (2.4). Note that $(\theta_0, \nu_0, X_0, y_0, Z_0) \in \mathcal{N}$. For $k = 0, 1, \dots$:

1. Terminate when $\nu_k < \epsilon$.
2. Find an inexact search direction $(\Delta X_k, \Delta y_k, \Delta Z_k)$ from the linear system (3.1).
3. Let $\alpha_k \in [0,1]$ be chosen appropriately so that $(\theta_{k+1}, \nu_{k+1}, X_{k+1}, y_{k+1}, Z_{k+1}) := (\theta_k(\alpha_k), \nu_k(\alpha_k), X_k(\alpha_k), y_k(\alpha_k), Z_k(\alpha_k)) \in \mathcal{N}$.
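To illustrate step 2 and the Remark above: for the NT scaling, $\mathcal{F}^{-1}\mathcal{E}$ reduces to $W^{-1} \otimes W^{-1}$ (a consequence of $\hat{\mathcal{E}} = \hat{\mathcal{F}}$ established in section 4), so the coefficient matrix of (3.4a) is symmetric and a Krylov method such as MINRES can be applied matrix-free. The sketch below is my own, with hypothetical names; it is not the implementation of [16]. Capping the iteration count is one crude way of producing an inexact direction, whose residual must then be tested against (3.3).

```python
# Matrix-free inexact solve of the augmented system (3.4a) with MINRES.
import numpy as np
from scipy.sparse.linalg import LinearOperator, minres

def solve_augmented(Qmat, A, Winv, rhs_top, rhs_bot, maxiter=50):
    """Approximately solve
        [ -(Qmat + Winv (x) Winv)   A^T ] [vec(DX)]   [rhs_top]
        [          A                 0  ] [  dy   ] = [rhs_bot].
    """
    m, nsq = A.shape
    n = int(round(np.sqrt(nsq)))

    def matvec(u):
        dx, dy = u[:nsq], u[nsq:]
        DX = dx.reshape((n, n), order="F")
        # (Winv (x) Winv) vec(DX) = vec(Winv DX Winv)
        top = -(Qmat @ dx + (Winv @ DX @ Winv).flatten(order="F")) + A.T @ dy
        return np.concatenate([top, A @ dx])

    K = LinearOperator((nsq + m, nsq + m), matvec=matvec, dtype=float)
    u, info = minres(K, np.concatenate([rhs_top, rhs_bot]), maxiter=maxiter)
    return u[:nsq], u[nsq:], info   # info > 0 signals early termination

```

In a real implementation, the residual of the returned direction would be mapped into $(r^p_k, r^d_k)$ and accepted only if the accuracy conditions (3.3) hold, with the solver tolerance tightened otherwise; as noted above, a good preconditioner for this system is the crux in practice [16].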

Let $\alpha_0, \alpha_1, \dots, \alpha_{k-1}$ be the step lengths that have already been determined in the previous iterations. For reasons that will become apparent shortly, we assume that the step lengths $\alpha_i$, $i = 0, \dots, k$, are contained in the interval

\[ I := \Big[0,\ \min\big\{1,\ 1/(\eta_1(1+\bar\sigma))\big\}\Big]. \tag{3.6} \]

Let the primal and dual infeasibilities associated with $(\theta_k(\alpha), \nu_k(\alpha), X_k(\alpha), y_k(\alpha), Z_k(\alpha))$ be

\[ R^p_k(\alpha) = \mathcal{A}(\mathrm{vec}X_k(\alpha)) - b, \qquad \mathrm{vec}R^d_k(\alpha) = -\mathrm{vec}\nabla f(X_k(\alpha)) + \mathcal{A}^Ty_k(\alpha) + \mathrm{vec}Z_k(\alpha). \]

We will show that $R^p_k(\alpha)$ and $R^d_k(\alpha)$ satisfy the first two conditions in $\mathcal{N}$ when $\alpha$ is restricted to be in the interval $I$ given in (3.6).

Lemma 3.1. Suppose the step lengths $\alpha_i$ associated with the iterates $(\theta_i, \nu_i, X_i, y_i, Z_i)$ are restricted to lie in the interval $I$ for $i = 0, \dots, k$. Then we have

\[ R^p_k(\alpha) = \theta_k(\alpha)\big(R^p_0 + \xi^p_k(\alpha)\big), \tag{3.7} \]
\[ \mathrm{vec}R^d_k(\alpha) = \theta_k(\alpha)\big(\mathrm{vec}R^d_0 + \xi^d_k(\alpha)\big), \tag{3.8} \]

where $\|\mathcal{A}^+\xi^p_k(\alpha)\| \le \gamma_p\rho$ and $\|\xi^d_k(\alpha)\| \le \gamma_d\rho$ for all $\alpha \in I$.

Proof. Note that $R^p_k(\alpha)$ has exactly the same form as in the inexact interior-point algorithm considered in [19] for linear SDP. By the same argument as in [19], we have $R^p_k(\alpha) = \theta_k(\alpha)(R^p_0 + \xi^p_k(\alpha))$, where

\[ \xi^p_k(\alpha) = \xi^p_k - \frac{\alpha\eta_1\, r^p_k}{(1-\alpha\eta_1)\theta_k} = -\sum_{i=0}^{k-1}\frac{\alpha_i\eta_1\, r^p_i}{(1-\alpha_i\eta_1)\theta_i} - \frac{\alpha\eta_1\, r^p_k}{(1-\alpha\eta_1)\theta_k}. \tag{3.9} \]

The quantity $R^d_k(\alpha)$ is different from its counterpart for linear SDP, as it contains an extra term coming from the quadratic term in the objective function; thus we need to examine the details. Given that the current iterate belongs to $\mathcal{N}$, we have

\[
\begin{aligned}
\mathrm{vec}R^d_k(\alpha) &= -\mathrm{vec}\nabla f(X_k(\alpha)) + \mathcal{A}^Ty_k(\alpha) + \mathrm{vec}Z_k(\alpha)\\
&= -\mathrm{vec}\nabla f(X_k) + \mathcal{A}^Ty_k + \mathrm{vec}Z_k + \alpha\big[-\mathbf{Q}(\mathrm{vec}\Delta X_k) + \mathcal{A}^T\Delta y_k + \mathrm{vec}\Delta Z_k\big]\\
&= \mathrm{vec}R^d_k - \alpha\eta_1\big(\mathrm{vec}R^d_k + r^d_k\big)\\
&= (1-\alpha\eta_1)\theta_k\big(\mathrm{vec}R^d_0 + \xi^d_k\big) - \alpha\eta_1 r^d_k
\end{aligned}
\]

\[
= (1-\alpha\eta_1)\theta_k\Big(\mathrm{vec}R^d_0 + \xi^d_k - \frac{\alpha\eta_1\, r^d_k}{(1-\alpha\eta_1)\theta_k}\Big) = \theta_k(\alpha)\big(\mathrm{vec}R^d_0 + \xi^d_k(\alpha)\big),
\]

where

\[ \xi^d_k(\alpha) = \xi^d_k - \frac{\alpha\eta_1\, r^d_k}{(1-\alpha\eta_1)\theta_k} = -\sum_{i=0}^{k-1}\frac{\alpha_i\eta_1\, r^d_i}{(1-\alpha_i\eta_1)\theta_i} - \frac{\alpha\eta_1\, r^d_k}{(1-\alpha\eta_1)\theta_k}. \tag{3.10} \]

From (3.9) and (3.10), and since $\frac{\alpha_i\eta_1}{1-\alpha_i\eta_1} \le \frac{1}{\bar\sigma}$ for $\alpha_i \in I$, the accuracy conditions (3.3) give, for all $\alpha \in I$,

\[ \|\mathcal{A}^+\xi^p_k(\alpha)\| \le \gamma_p\rho\sum_{i=0}^{k}\frac{\sigma_i}{\bar\sigma} \le \gamma_p\rho, \qquad \|\xi^d_k(\alpha)\| \le \gamma_d\rho\sum_{i=0}^{k}\frac{\sigma_i}{\bar\sigma} \le \gamma_d\rho. \]

Let

\[ \bar\alpha_k = \min\Big\{1,\ \frac{1}{\eta_1(1+\bar\sigma)},\ \frac{0.5(1-\eta_2)\gamma\nu_k\mu_0}{\|H_P(\Delta X_k\Delta Z_k)\|}\Big\}. \tag{3.11} \]

Next, we check the last condition in $\mathcal{N}$. The following lemma generalizes the result of Lemma 4.2 in [20].

Lemma 3.2. For $(\theta_k, \nu_k, X_k, y_k, Z_k) \in \mathcal{N}$ and $(\Delta X_k, \Delta y_k, \Delta Z_k)$ satisfying (3.1), we have

(a) $H_P(X_k(\alpha)Z_k(\alpha)) = (1-\alpha)H_P(X_kZ_k) + \alpha(1-\eta_2)\nu_k\mu_0 I + \alpha\,\mathrm{mat}\,r^c_k + \alpha^2 H_P(\Delta X_k\Delta Z_k)$;

(b) $(1-\gamma)\nu_k(\alpha)\mu_0 \le \lambda_i(X_k(\alpha)Z_k(\alpha)) \le (1+\gamma)\nu_k(\alpha)\mu_0$ for all $\alpha \in [0, \bar\alpha_k]$.

Proof. (a) The proof of part (a) is quite standard and uses equation (3.2).

(b) The proof uses the fact that for any matrix $B \in \mathbb{R}^{n\times n}$, the real part of its spectrum is contained in the interval $[\lambda_{\min}((B+B^T)/2),\ \lambda_{\max}((B+B^T)/2)]$. In particular, for any nonsingular matrix $P$, we have

\[ \mathrm{Re}\,\lambda_i(B) = \mathrm{Re}\,\lambda_i(PBP^{-1}) \in \big[\lambda_{\min}(H_P(B)),\ \lambda_{\max}(H_P(B))\big],\qquad i = 1,\dots,n. \]

Using the above fact, we have, for any $i = 1,\dots,n$,

\[
\begin{aligned}
\lambda_i(X_k(\alpha)Z_k(\alpha)) - (1-\gamma)\nu_k(\alpha)\mu_0
&\ge \lambda_{\min}\big(H_P(X_k(\alpha)Z_k(\alpha))\big) - (1-\gamma)\nu_k(\alpha)\mu_0\\
&\ge (1-\alpha)(1-\gamma)\nu_k\mu_0 + \alpha(1-\eta_2)\nu_k\mu_0 - \alpha\|r^c_k\| - \alpha^2\|H_P(\Delta X_k\Delta Z_k)\| - (1-\gamma)\nu_k(\alpha)\mu_0\\
&= \alpha\gamma(1-\eta_2)\nu_k\mu_0 - \alpha\|r^c_k\| - \alpha^2\|H_P(\Delta X_k\Delta Z_k)\|\\
&\ge 0.5\alpha(1-\eta_2)\gamma\nu_k\mu_0 - \alpha^2\|H_P(\Delta X_k\Delta Z_k)\|\ \ge\ 0
\end{aligned}
\]

for $\alpha \in [0, \bar\alpha_k]$. The proof that $\lambda_i(X_k(\alpha)Z_k(\alpha)) \le (1+\gamma)\nu_k(\alpha)\mu_0$ for all $\alpha \in [0, \bar\alpha_k]$ is similar, and we shall omit it.
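The step-length rule (3.11) is cheap to evaluate once $\|H_P(\Delta X_k\Delta Z_k)\|$ is known. A direct transcription (my sketch, with hypothetical names):

```python
# Step length (3.11): alpha_bar = min{1, 1/(eta1*(1+sigma_bar)), cap}, where
# cap keeps the eigenvalues of X(alpha)Z(alpha) inside the neighborhood N.
def step_length(eta1, eta2, gamma, nu_k, mu0, sigma_bar, norm_HP_dXdZ):
    cap = 0.5 * (1.0 - eta2) * gamma * nu_k * mu0 / max(norm_HP_dXdZ, 1e-16)
    return min(1.0, 1.0 / (eta1 * (1.0 + sigma_bar)), cap)
```

By Lemma 3.1 and Lemma 3.2(b), any $\alpha \in [0, \bar\alpha_k]$ keeps the trial point in $\mathcal{N}$, which is the content of Lemma 3.3 below.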

Lemma 3.3. Under the conditions in Lemmas 3.1 and 3.2, for any $\alpha \in [0, \bar\alpha_k]$, we have $(\theta_k(\alpha), \nu_k(\alpha), X_k(\alpha), y_k(\alpha), Z_k(\alpha)) \in \mathcal{N}$.

Proof. The result follows from Lemmas 3.1 and 3.2.

Lemma 3.4. Suppose the conditions in (2.1), (2.3) and (2.4) hold. Then

\[ \|H_P(\Delta X_k\Delta Z_k)\| = \frac{O(1)}{\big(1-(\gamma_d+L\gamma_p)\big)^2}\, n^2\nu_k\mu_0. \tag{3.12} \]

The proof of Lemma 3.4 is non-trivial, and we devote the next section to it. We are now ready to present the main result of this paper, the polynomial iteration complexity of Algorithm IPC.

Theorem 1. Let $\epsilon > 0$ be a given tolerance. Suppose the conditions in (2.1), (2.3) and (2.4) hold. At each iteration of Algorithm IPC, set the step length $\alpha_k = \bar\alpha_k$. Then $\nu_k \le \epsilon$ for $k = O(n^2\ln(1/\epsilon))$.

Proof. With Lemma 3.4 in hand, the proof of Theorem 1 is straightforward; see [19].

4 Proof of Lemma 3.4

For a given $(\theta_k, \nu_k, X_k, y_k, Z_k) \in \mathcal{N}$, the purpose of Lemma 3.4 is to establish an upper bound for $\|H_P(\Delta X_k\Delta Z_k)\|$. Throughout this section, we consider only the NT direction, where $P = W^{-1/2}$, with $W \in S^n_{++}$ satisfying $WZ_kW = X_k$. It is easy to verify that

\[ W = P^{-2} = Z_k^{-1/2}\big(Z_k^{1/2}X_kZ_k^{1/2}\big)^{1/2}Z_k^{-1/2} = X_k^{1/2}\big(X_k^{1/2}Z_kX_k^{1/2}\big)^{-1/2}X_k^{1/2}, \tag{4.1} \]

and consequently

\[ \lambda_{\max}(W) \le \lambda_{\min}^{-1/2}\big(X_k^{1/2}Z_kX_k^{1/2}\big)\,\lambda_{\max}(X_k), \tag{4.2} \]
\[ \lambda_{\min}(W) \ge \lambda_{\min}^{1/2}\big(Z_k^{1/2}X_kZ_k^{1/2}\big)\,\lambda_{\max}^{-1}(Z_k). \tag{4.3} \]

To facilitate our analysis, we introduce the following notation:

\[ \Delta\hat X = P\Delta X_kP,\quad \hat X = PX_kP;\qquad \Delta\hat Z = P^{-1}\Delta Z_kP^{-1},\quad \hat Z = P^{-1}Z_kP^{-1}; \]
\[ \hat{\mathcal{E}} = \mathcal{E}\,(P^{-1}\otimes P^{-1}) = \tfrac{1}{2}\big(I\otimes\hat Z + \hat Z\otimes I\big),\qquad \hat{\mathcal{F}} = \mathcal{F}\,(P\otimes P) = \tfrac{1}{2}\big(I\otimes\hat X + \hat X\otimes I\big). \]

From the fact that $W^{-1/2}X_kW^{-1/2} = W^{1/2}Z_kW^{1/2}$, we have

\[ \hat Z = \hat X, \qquad \hat{\mathcal{E}} = \hat{\mathcal{F}}. \tag{4.4} \]

It is readily shown that $\hat{\mathcal{F}}, \hat{\mathcal{E}}, \hat{\mathcal{F}}\hat{\mathcal{E}} \in S^{\bar n}_{++}$. Let the eigenvalue decomposition of $\hat X = \hat Z$ be

\[ \hat X = \hat Z = Q_k\Lambda_kQ_k^T, \tag{4.5} \]

where $Q_k^TQ_k = I$, $\Lambda_k = \mathrm{diag}(\lambda^1_k, \dots, \lambda^n_k)$, and $\lambda^1_k \le \cdots \le \lambda^n_k$. From (2.9), we have

\[ (1-\gamma)\nu_k\mu_0 \le (\lambda^1_k)^2 \le (\lambda^n_k)^2 \le (1+\gamma)\nu_k\mu_0. \tag{4.6} \]

Let

\[ \hat S := \hat{\mathcal{F}}\hat{\mathcal{E}}^T = \tfrac{1}{4}\big(\hat Z\otimes\hat X + \hat X\otimes\hat Z + I\otimes\hat X\hat Z + \hat X\hat Z\otimes I\big). \tag{4.7} \]

The eigenvalues of $\hat S$ are given by

\[ \Lambda(\hat S) = \Big\{\tfrac{1}{4}\big(\lambda^i_k\lambda^j_k + \lambda^j_k\lambda^i_k + \lambda^i_k\lambda^i_k + \lambda^j_k\lambda^j_k\big) = \tfrac{1}{4}\big(\lambda^i_k + \lambda^j_k\big)^2 :\ i, j = 1, 2, \dots, n\Big\}. \]

From (4.5) and (4.6), we have

\[ (1-\gamma)\nu_k\mu_0 I \preceq \hat S \preceq (1+\gamma)\nu_k\mu_0 I, \tag{4.8} \]

and

\[ \|\hat S\|_2 \le (1+\gamma)\nu_k\mu_0, \qquad \|\hat S^{-1}\|_2 \le \frac{1}{(1-\gamma)\nu_k\mu_0}. \tag{4.9} \]
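The eigenvalue formula for $\hat S$ can be confirmed numerically. The sketch below (my own) builds $\hat{\mathcal{E}} = \hat{\mathcal{F}} = \frac12(I\otimes\hat X + \hat X\otimes I)$ for a random positive definite $\hat X = \hat Z$ and checks that $\Lambda(\hat S) = \{\frac14(\lambda^i + \lambda^j)^2\}$.

```python
# Check that the eigenvalues of S_hat = F_hat E_hat^T are (lambda_i+lambda_j)^2/4.
import numpy as np

rng = np.random.default_rng(4)
n = 4
B = rng.standard_normal((n, n))
Xh = B @ B.T + n * np.eye(n)                   # plays the role of X_hat = Z_hat
I = np.eye(n)
Eh = 0.5 * (np.kron(I, Xh) + np.kron(Xh, I))   # E_hat = F_hat
Sh = Eh @ Eh.T                                 # S_hat

lam = np.linalg.eigvalsh(Xh)
expected = np.sort([(li + lj) ** 2 / 4.0 for li in lam for lj in lam])
assert np.allclose(np.sort(np.linalg.eigvalsh(Sh)), expected)
```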

Now we state a few lemmas which lead to the proof of Lemma 3.4.

Lemma 4.1. For any $M \in \mathbb{R}^{n\times n}$,

\[ \|(P\otimes P)\,\mathrm{vec}M\|^2 \le \frac{\|Z_k\|^2\|M\|^2}{(1-\gamma)\nu_k\mu_0}, \qquad \|(P^{-1}\otimes P^{-1})\,\mathrm{vec}M\|^2 \le \frac{\|X_k\|^2\|M\|^2}{(1-\gamma)\nu_k\mu_0}. \]

Proof. First we note that $Z_k^{1/2}X_kZ_k^{1/2}$, $X_k^{1/2}Z_kX_k^{1/2}$, and $X_kZ_k$ are similar, and $\lambda_{\min}(X_kZ_k) \ge (1-\gamma)\nu_k\mu_0$. From (4.2) and (4.3), we have

\[ \lambda_{\max}(W) \le \frac{\|X_k\|}{\sqrt{(1-\gamma)\nu_k\mu_0}}, \qquad \frac{1}{\lambda_{\min}(W)} \le \frac{\|Z_k\|}{\sqrt{(1-\gamma)\nu_k\mu_0}}. \tag{4.10} \]

By (4.10), we have

\[ \|(P\otimes P)\,\mathrm{vec}M\|^2 \le \|P\otimes P\|_2^2\,\|M\|^2 \le \frac{\|M\|^2}{\lambda_{\min}^2(W)} \le \frac{\|Z_k\|^2\|M\|^2}{(1-\gamma)\nu_k\mu_0}. \]

Similarly, by (4.10), we have

\[ \|(P^{-1}\otimes P^{-1})\,\mathrm{vec}M\|^2 \le \|P^{-1}\otimes P^{-1}\|_2^2\,\|M\|^2 \le \lambda_{\max}^2(W)\,\|M\|^2 \le \frac{\|X_k\|^2\|M\|^2}{(1-\gamma)\nu_k\mu_0}. \]

Lemma 4.2. We have

\[ \|\mathrm{vec}\Delta\hat X\|^2 + \|\mathrm{vec}\Delta\hat Z\|^2 + 2\langle\Delta\hat X, \Delta\hat Z\rangle = \|\hat S^{-1/2}(\mathrm{vec}R^c_k + r^c_k)\|^2, \]
\[ \|H_P(\Delta X_k\Delta Z_k)\| \le \tfrac{1}{2}\big(\|\mathrm{vec}\Delta\hat X\|^2 + \|\mathrm{vec}\Delta\hat Z\|^2\big). \]

Proof. The last equation of (3.1) can be rewritten as

\[ \hat{\mathcal{E}}(\mathrm{vec}\Delta\hat X) + \hat{\mathcal{F}}(\mathrm{vec}\Delta\hat Z) = \mathrm{vec}R^c_k + r^c_k. \tag{4.11} \]

Multiplying (4.11) by $\hat S^{-1/2}$ from the left, and using $\hat{\mathcal{E}} = \hat{\mathcal{F}} = \hat S^{1/2}$, we obtain $\mathrm{vec}\Delta\hat X + \mathrm{vec}\Delta\hat Z = \hat S^{-1/2}(\mathrm{vec}R^c_k + r^c_k)$. From here, the first equation in the lemma follows. For the second inequality, by Lemma 4.6 of [10], we have

\[ \|H_P(\Delta X_k\Delta Z_k)\| = \tfrac{1}{2}\big\|P\Delta X_k\Delta Z_kP^{-1} + P^{-1}\Delta Z_k\Delta X_kP\big\| \le \big\|P\Delta X_k\Delta Z_kP^{-1}\big\| = \|\Delta\hat X\,\Delta\hat Z\| \le \|\Delta\hat X\|\,\|\Delta\hat Z\| \le \tfrac{1}{2}\big(\|\mathrm{vec}\Delta\hat X\|^2 + \|\mathrm{vec}\Delta\hat Z\|^2\big). \]

Lemma 4.3. We have

\[ \|\hat S^{-1/2}(\mathrm{vec}R^c_k + r^c_k)\|^2 = O(n\nu_k\mu_0). \]

Proof. From (3.3) and (4.9), we have

\[ \|\hat S^{-1/2}r^c_k\|^2 \le \|\hat S^{-1}\|_2\,\|r^c_k\|^2 \le \frac{\big[0.5(1-\eta_2)\gamma\nu_k\mu_0\big]^2}{(1-\gamma)\nu_k\mu_0} = \frac{\big[(1-\eta_2)\gamma\big]^2}{4(1-\gamma)}\,\nu_k\mu_0. \tag{4.12} \]

Observe that from (4.5),

\[ \mathrm{vec}R^c_k = (Q_k\otimes Q_k)\,\mathrm{vec}\big((1-\eta_2)\nu_k\mu_0 I - \Lambda_k^2\big). \]

Thus

\[ \|\hat S^{-1/2}\mathrm{vec}R^c_k\|^2 \le \|\hat S^{-1}\|_2\,\|\mathrm{vec}R^c_k\|^2 \le \frac{1}{(1-\gamma)\nu_k\mu_0}\sum_{i=1}^n\big((1-\eta_2)\nu_k\mu_0 - (\lambda^i_k)^2\big)^2 \le \frac{n\nu_k\mu_0}{1-\gamma}\big((\gamma-\eta_2)^2 + 4\gamma\eta_2\big), \tag{4.13} \]

by (4.6). The required result follows from (4.12) and (4.13). This completes the proof.

In the rest of our analysis, we introduce an auxiliary point $(\tilde X_k, \tilde y_k, \tilde Z_k)$ whose existence is ensured by Lemma 2.1. From Lemma 3.1, we have the following equations at the $k$-th iteration:

\[ -\mathrm{vec}\nabla f(X_k) + \mathcal{A}^Ty_k + \mathrm{vec}Z_k = \theta_k\big(\mathrm{vec}R^d_0 + \xi^d_k\big),\quad \|\xi^d_k\| \le \gamma_d\rho, \tag{4.14} \]
\[ \mathcal{A}(\mathrm{vec}X_k) - b = \theta_k\big(R^p_0 + \xi^p_k\big),\quad \|\mathcal{A}^+\xi^p_k\| \le \gamma_p\rho. \tag{4.15} \]

Thus by Lemma 2.1, there exists $(\tilde X_k, \tilde y_k, \tilde Z_k)$ such that

\[ -\mathrm{vec}\nabla f(\tilde X_k) + \mathcal{A}^T\tilde y_k + \mathrm{vec}\tilde Z_k = \mathrm{vec}R^d_0 + \xi^d_k, \tag{4.16} \]
\[ \mathcal{A}(\mathrm{vec}\tilde X_k) - b = R^p_0 + \xi^p_k, \tag{4.17} \]
\[ (1-\gamma_p)\rho I \preceq \tilde X_k \preceq (1+\gamma_p)\rho I, \tag{4.18} \]
\[ \big[1-(\gamma_d+L\gamma_p)\big]\rho I \preceq \tilde Z_k \preceq \big[1+(\gamma_d+L\gamma_p)\big]\rho I. \tag{4.19} \]

Lemma 4.4. Let

\[ \bar X_k = X_k - X^* - \theta_k(\tilde X_k - X^*), \qquad \bar Z_k = Z_k - Z^* - \theta_k(\tilde Z_k - Z^*). \]

The following equations hold:

\[ \langle \bar X_k, \bar Z_k\rangle = \langle \bar X_k, \mathcal{Q}(\bar X_k)\rangle, \tag{4.20} \]
\[
\big\langle \mathrm{vec}\big(\Delta X_k + \eta_1\theta_k(\tilde X_k - X^*)\big) + \eta_1\mathcal{A}^+r^p_k,\ \mathrm{vec}\big(\Delta Z_k + \eta_1\theta_k(\tilde Z_k - Z^*)\big) + \eta_1 r^d_k\big\rangle
= \big\langle \mathrm{vec}\big(\Delta X_k + \eta_1\theta_k(\tilde X_k - X^*)\big) + \eta_1\mathcal{A}^+r^p_k,\ \mathbf{Q}\,\mathrm{vec}\big(\Delta X_k + \eta_1\theta_k(\tilde X_k - X^*)\big)\big\rangle. \tag{4.21}
\]

Proof. By (4.14)-(4.17) and the fact that

\[ \mathcal{A}\,\mathrm{vec}X^* - b = 0, \qquad -\mathrm{vec}\nabla f(X^*) + \mathcal{A}^Ty^* + \mathrm{vec}Z^* = 0, \]

we have

\[ \mathcal{A}\,\mathrm{vec}\bar X_k = 0, \qquad \mathcal{A}^T\big(y_k - y^* - \theta_k(\tilde y_k - y^*)\big) + \mathrm{vec}\bar Z_k = \mathbf{Q}\,\mathrm{vec}\bar X_k, \]

which implies (4.20). Next, by (3.1) and (4.14)-(4.17), we have

\[ \mathcal{A}\Big(\mathrm{vec}\big(\Delta X_k + \eta_1\theta_k(\tilde X_k - X^*)\big) + \eta_1\mathcal{A}^+r^p_k\Big) = 0, \]
\[ \mathcal{A}^T\big(\Delta y_k + \eta_1\theta_k(\tilde y_k - y^*)\big) + \mathrm{vec}\big(\Delta Z_k + \eta_1\theta_k(\tilde Z_k - Z^*)\big) + \eta_1 r^d_k = \mathbf{Q}\,\mathrm{vec}\big(\Delta X_k + \eta_1\theta_k(\tilde X_k - X^*)\big), \]

which implies (4.21).

Let

\[ T_1 = \big(\|\mathrm{vec}\Delta\hat X\|^2 + \|\mathrm{vec}\Delta\hat Z\|^2\big)^{1/2}, \tag{4.22} \]
\[ T_2 = \big(\|(P\otimes P)\,\mathrm{vec}(\tilde X_k - X^*)\|^2 + \|(P^{-1}\otimes P^{-1})\,\mathrm{vec}(\tilde Z_k - Z^*)\|^2\big)^{1/2}, \tag{4.23} \]
\[ T_3 = \big(\|(P\otimes P)\,\mathcal{A}^+r^p_k\|^2 + \|(P^{-1}\otimes P^{-1})\,r^d_k\|^2\big)^{1/2}, \tag{4.24} \]
\[ T_4 = \|(P^{-1}\otimes P^{-1})\,\mathbf{Q}(\mathcal{A}^+r^p_k)\|. \tag{4.25} \]

Then we have the following lemma.

Lemma 4.5. $T_1 \le 2\eta_1(\theta_kT_2 + T_3 + T_4) + \sqrt{T_5}$, where

\[ T_5 = \|\hat S^{-1/2}(\mathrm{vec}R^c_k + r^c_k)\|^2 + 2\eta_1^2\theta_k^2\langle\tilde X_k - X^*,\ \tilde Z_k - Z^*\rangle + 2\eta_1^2\big(\theta_kT_2T_3 + T_3^2 + \theta_kT_2T_4\big). \]

Proof. By (4.21), we have

\[
\begin{aligned}
\langle\Delta\hat X, \Delta\hat Z\rangle = \langle\Delta X_k, \Delta Z_k\rangle
&= -\eta_1\theta_k\big[\langle\Delta X_k,\ \tilde Z_k - Z^*\rangle + \langle\tilde X_k - X^*,\ \Delta Z_k\rangle\big]
- \eta_1\big[\langle\mathrm{vec}\Delta X_k,\ r^d_k\rangle + \langle\mathcal{A}^+r^p_k,\ \mathrm{vec}\Delta Z_k\rangle\big]\\
&\quad - \eta_1^2\theta_k\big[\langle\mathrm{vec}(\tilde X_k - X^*),\ r^d_k\rangle + \langle\mathcal{A}^+r^p_k,\ \mathrm{vec}(\tilde Z_k - Z^*)\rangle\big]
- \eta_1^2\langle\mathcal{A}^+r^p_k,\ r^d_k\rangle - \eta_1^2\theta_k^2\langle\tilde X_k - X^*,\ \tilde Z_k - Z^*\rangle\\
&\quad + \eta_1\big\langle\mathcal{A}^+r^p_k,\ \mathbf{Q}\,\mathrm{vec}\big(\Delta X_k + \eta_1\theta_k(\tilde X_k - X^*)\big)\big\rangle
+ \big\langle\Delta X_k + \eta_1\theta_k(\tilde X_k - X^*),\ \mathcal{Q}\big(\Delta X_k + \eta_1\theta_k(\tilde X_k - X^*)\big)\big\rangle.
\end{aligned}
\]

Also, we have the following inequalities:

\[
\begin{aligned}
\langle\Delta X_k,\ \tilde Z_k - Z^*\rangle + \langle\tilde X_k - X^*,\ \Delta Z_k\rangle
&= \langle\Delta\hat X,\ P^{-1}(\tilde Z_k - Z^*)P^{-1}\rangle + \langle P(\tilde X_k - X^*)P,\ \Delta\hat Z\rangle \le T_1T_2,\\
\langle\mathrm{vec}\Delta X_k,\ r^d_k\rangle + \langle\mathcal{A}^+r^p_k,\ \mathrm{vec}\Delta Z_k\rangle &\le T_1T_3,\\
\langle\mathrm{vec}(\tilde X_k - X^*),\ r^d_k\rangle + \langle\mathcal{A}^+r^p_k,\ \mathrm{vec}(\tilde Z_k - Z^*)\rangle &\le T_2T_3,\\
\langle\mathcal{A}^+r^p_k,\ r^d_k\rangle &\le T_3^2,\\
\langle\mathcal{A}^+r^p_k,\ \mathbf{Q}\,\mathrm{vec}(\tilde X_k - X^*)\rangle &\le T_2T_4,\\
\langle\mathcal{A}^+r^p_k,\ \mathbf{Q}\,\mathrm{vec}\Delta X_k\rangle &\le T_1T_4,\\
\big\langle\Delta X_k + \eta_1\theta_k(\tilde X_k - X^*),\ \mathcal{Q}\big(\Delta X_k + \eta_1\theta_k(\tilde X_k - X^*)\big)\big\rangle &\ge 0.
\end{aligned}
\]

In the above, we used the Cauchy-Schwarz inequality and the fact that $ac + bd \le \sqrt{a^2+b^2}\sqrt{c^2+d^2}$ for $a, b, c, d \ge 0$. By Lemma 4.2 and the above inequalities, we have

\[
\begin{aligned}
T_1^2 &= \|\hat S^{-1/2}(\mathrm{vec}R^c_k + r^c_k)\|^2 - 2\langle\Delta\hat X, \Delta\hat Z\rangle\\
&\le 2\big(\eta_1\theta_kT_1T_2 + \eta_1T_1T_3 + \eta_1^2\theta_kT_2T_3 + \eta_1^2T_3^2 + \eta_1^2\theta_kT_2T_4 + \eta_1T_1T_4\big)
+ \|\hat S^{-1/2}(\mathrm{vec}R^c_k + r^c_k)\|^2 + 2\eta_1^2\theta_k^2\langle\tilde X_k - X^*,\ \tilde Z_k - Z^*\rangle\\
&= 2\eta_1T_1(\theta_kT_2 + T_3 + T_4) + T_5.
\end{aligned}
\]

The quadratic function $t^2 - 2\eta_1(\theta_kT_2 + T_3 + T_4)t - T_5$ has a unique positive root at $t_+ = \eta_1(\theta_kT_2 + T_3 + T_4) + \sqrt{\eta_1^2(\theta_kT_2 + T_3 + T_4)^2 + T_5}$, and it is positive for $t > t_+$; hence we must have

\[ T_1 \le t_+ \le 2\eta_1(\theta_kT_2 + T_3 + T_4) + \sqrt{T_5}. \]

Lemma 4.6. We have

\[ T_3^2 = \frac{O(1)}{\big(1-(\gamma_d+L\gamma_p)\big)^2}\, n^2\nu_k\mu_0. \]

Proof. By (3.3), we have

\[ \|\mathcal{A}^+r^p_k\| \le \theta_k\gamma_p\rho, \qquad \|r^d_k\| \le \theta_k\gamma_d\rho. \tag{4.26} \]

By Lemma 4.1 and the fact that $\|M\| \le \mathrm{Tr}(M)$ for $M \in S^n_+$, we have

\[
\|(P\otimes P)\,\mathcal{A}^+r^p_k\|^2 \le \frac{\|\mathcal{A}^+r^p_k\|^2\|Z_k\|^2}{(1-\gamma)\nu_k\mu_0}
\le \frac{\gamma_p^2\rho^2}{(1-\gamma)\nu_k\mu_0}\,\theta_k^2\|Z_k\|^2
\le \frac{\gamma_p^2\rho^2}{(1-\gamma)\nu_k\mu_0}\,\theta_k^2\big[\mathrm{Tr}(Z_k)\big]^2
\le \frac{\gamma_p^2\rho^2}{(1-\gamma)\nu_k\mu_0}\cdot\frac{36}{(1-\gamma_p)^2}\,n^2\nu_k^2\rho^2
= \frac{O(1)}{(1-\gamma_p)^2}\,n^2\nu_k\mu_0,
\]

by Lemma 2.2. Similarly, we have

\[
\|(P^{-1}\otimes P^{-1})\,r^d_k\|^2 \le \frac{\|r^d_k\|^2\|X_k\|^2}{(1-\gamma)\nu_k\mu_0}
\le \frac{\gamma_d^2\rho^2}{(1-\gamma)\nu_k\mu_0}\,\theta_k^2\big[\mathrm{Tr}(X_k)\big]^2
\le \frac{\gamma_d^2\rho^2}{(1-\gamma)\nu_k\mu_0}\cdot\frac{36}{\big(1-(\gamma_d+L\gamma_p)\big)^2}\,n^2\nu_k^2\rho^2
= \frac{O(1)}{\big(1-(\gamma_d+L\gamma_p)\big)^2}\,n^2\nu_k\mu_0,
\]

by Lemma 2.2. From here, the required result follows.

Lemma 4.7. Under the conditions (2.1), (2.3) and (2.4),

\[ \langle\tilde X_k - X^*,\ \tilde Z_k - Z^*\rangle \le 4n\mu_0. \]

Proof. See the corresponding lemma in [19].

Lemma 4.8. Under the conditions (2.1), (2.3), and (2.4),

\[ \theta_k^2T_2^2 = O(n^2\nu_k\mu_0). \]

Proof. By the fact that $0 \preceq \tilde X_k - X^* \preceq (1+\gamma_p)\rho I$, which follows from (2.3) and (4.18), we have

\[
\begin{aligned}
\|(P\otimes P)\,\mathrm{vec}(\tilde X_k - X^*)\| &= \|P(\tilde X_k - X^*)P\| \le \mathrm{Tr}\big(P(\tilde X_k - X^*)P\big) = \langle W^{-1},\ \tilde X_k - X^*\rangle\\
&= \big\langle \big(Z_k^{1/2}X_kZ_k^{1/2}\big)^{-1/2},\ Z_k^{1/2}(\tilde X_k - X^*)Z_k^{1/2}\big\rangle \qquad \text{by (4.1)}\\
&\le \lambda_{\max}\big(\big(Z_k^{1/2}X_kZ_k^{1/2}\big)^{-1/2}\big)\,\langle Z_k,\ \tilde X_k - X^*\rangle
\le \frac{\langle Z_k,\ \tilde X_k - X^*\rangle}{\sqrt{(1-\gamma)\nu_k\mu_0}}.
\end{aligned}
\]

Similarly, from $0 \preceq \tilde Z_k - Z^* \preceq (1+\gamma_d+L\gamma_p)\rho I$, we have

\[
\begin{aligned}
\|(P^{-1}\otimes P^{-1})\,\mathrm{vec}(\tilde Z_k - Z^*)\| &= \|P^{-1}(\tilde Z_k - Z^*)P^{-1}\| \le \mathrm{Tr}\big(P^{-1}(\tilde Z_k - Z^*)P^{-1}\big) = \langle W,\ \tilde Z_k - Z^*\rangle\\
&= \big\langle \big(X_k^{1/2}Z_kX_k^{1/2}\big)^{-1/2},\ X_k^{1/2}(\tilde Z_k - Z^*)X_k^{1/2}\big\rangle \qquad \text{by (4.1)}\\
&\le \lambda_{\max}\big(\big(X_k^{1/2}Z_kX_k^{1/2}\big)^{-1/2}\big)\,\langle X_k,\ \tilde Z_k - Z^*\rangle
\le \frac{\langle X_k,\ \tilde Z_k - Z^*\rangle}{\sqrt{(1-\gamma)\nu_k\mu_0}}.
\end{aligned}
\]

Therefore, we have

\[ \theta_k^2T_2^2 \le \theta_k^2\Big(\|(P\otimes P)\,\mathrm{vec}(\tilde X_k - X^*)\| + \|(P^{-1}\otimes P^{-1})\,\mathrm{vec}(\tilde Z_k - Z^*)\|\Big)^2 \le \frac{\theta_k^2}{(1-\gamma)\nu_k\mu_0}\Big(\langle Z_k,\ \tilde X_k - X^*\rangle + \langle X_k,\ \tilde Z_k - Z^*\rangle\Big)^2. \]

From (4.20) and the facts that $\langle X^*, Z^*\rangle = 0$ and $\langle X_k, Z^*\rangle, \langle X^*, Z_k\rangle, \langle\tilde X_k, Z^*\rangle, \langle X^*, \tilde Z_k\rangle \ge 0$, we have

\[
\begin{aligned}
\theta_k\langle\tilde X_k - X^*,\ Z_k\rangle + \theta_k\langle X_k,\ \tilde Z_k - Z^*\rangle
&= \langle X_k, Z_k\rangle - \langle X_k, Z^*\rangle - \langle X^*, Z_k\rangle + \theta_k\big(\langle X^*,\ \tilde Z_k - Z^*\rangle + \langle\tilde X_k - X^*,\ Z^*\rangle\big)\\
&\qquad + \theta_k^2\langle\tilde X_k - X^*,\ \tilde Z_k - Z^*\rangle - \big\langle \bar X_k,\ \mathcal{Q}(\bar X_k)\big\rangle\\
&\le \langle X_k, Z_k\rangle + \theta_k\big(\langle X^*, \tilde Z_k\rangle + \langle\tilde X_k, Z^*\rangle\big) + \theta_k^2\langle\tilde X_k, \tilde Z_k\rangle\\
&\le (1+\gamma)\nu_k\mu_0 n + \theta_k(1+\gamma_d+L\gamma_p)\rho\big(\langle X^*, I\rangle + \langle I, Z^*\rangle\big) + \theta_k^2(1+\gamma_p)(1+\gamma_d+L\gamma_p)\rho^2 n\\
&\le 8\nu_k\mu_0 n.
\end{aligned}
\]

Thus $\theta_k^2T_2^2 = O(n^2\nu_k\mu_0)$.

Lemma 4.9.

\[ T_4^2 = \frac{O(1)}{\big(1-(\gamma_d+L\gamma_p)\big)^2}\, n^2\nu_k\mu_0. \]

Proof. By Lemma 4.1, we have

\[
T_4^2 \le \frac{\|X_k\|^2\,\|\mathbf{Q}(\mathcal{A}^+r^p_k)\|^2}{(1-\gamma)\nu_k\mu_0}
\le \frac{\|X_k\|^2 L^2\,\|\mathcal{A}^+r^p_k\|^2}{(1-\gamma)\nu_k\mu_0}
\le \frac{\gamma_p^2\rho^2L^2}{(1-\gamma)\nu_k\mu_0}\,\theta_k^2\|X_k\|^2
\le \frac{\gamma_p^2L^2\rho^2}{(1-\gamma)\nu_k\mu_0}\cdot\frac{O(1)}{\big(1-(\gamma_d+L\gamma_p)\big)^2}\,n^2\nu_k^2\rho^2
= \frac{O(1)}{\big(1-(\gamma_d+L\gamma_p)\big)^2}\,n^2\nu_k\mu_0,
\]

by Lemma 2.2, where we also used $\gamma_pL < 1-\gamma_d < 1$ to absorb $\gamma_p^2L^2$ into the $O(1)$ constant.

The following lemma, together with Lemma 4.2, directly yields Lemma 3.4.

Lemma 4.10.

\[ T_1^2 = \frac{O(1)}{\big(1-(\gamma_d+L\gamma_p)\big)^2}\, n^2\nu_k\mu_0. \]

Proof. From Lemmas 4.5-4.9 and the fact that $(a+b)^2 \le 2a^2 + 2b^2$, we have

\[
\begin{aligned}
T_1^2 &\le \big(2\eta_1(\theta_kT_2 + T_3 + T_4) + \sqrt{T_5}\big)^2 \le 8(\theta_kT_2 + T_3 + T_4)^2 + 2T_5\\
&\le 8(\theta_kT_2 + T_3 + T_4)^2 + 2\|\hat S^{-1/2}(\mathrm{vec}R^c_k + r^c_k)\|^2 + 4\theta_k^2\langle\tilde X_k - X^*,\ \tilde Z_k - Z^*\rangle + 4\theta_kT_2T_3 + 4T_3^2 + 4\theta_kT_2T_4\\
&= \frac{O(1)}{\big(1-(\gamma_d+L\gamma_p)\big)^2}\,n^2\nu_k\mu_0 + O(n\nu_k\mu_0).
\end{aligned}
\]

Thus, by Lemma 4.2 and Lemma 4.10, we have

\[ \|H_P(\Delta X_k\Delta Z_k)\| \le \tfrac{1}{2}T_1^2 = \frac{O(1)}{\big(1-(\gamma_d+L\gamma_p)\big)^2}\,n^2\nu_k\mu_0, \]

which proves Lemma 3.4.

References

[1] A. Y. Alfakih, A. Khandani, and H. Wolkowicz. Solving Euclidean distance matrix completion problems via semidefinite programming, Comput. Optim. Appl., 12: 13-30, 1999.

[2] R. W. Freund, F. Jarre, and S. Mizuno. Convergence of a class of inexact interior-point algorithms for linear programs, Math. Oper. Res., 24(1): 50-71, 1999.

[3] N. J. Higham. Computing the nearest correlation matrix - a problem from finance, IMA J. Numer. Anal., 22: 329-343, 2002.

[4] R. A. Horn and C. R. Johnson. Topics in Matrix Analysis, Cambridge University Press, 1991.

[5] N. Krislock, J. Lang, J. Varah, D. K. Pai, and H.-P. Seidel. Local compliance estimation via positive semidefinite constrained least squares, IEEE Trans. Robotics, 20: 1007-1011, 2004.

[6] M. Kojima, M. Shida and S. Shindoh. Search directions in the SDP and the monotone SDLCP: generalization and inexact computation, Math. Program., 85: 51-80, 1999.

[7] J. Korzak. Convergence analysis of inexact infeasible-interior-point algorithms for solving linear programming problems, SIAM J. Optim., 11(1): 133-148, 2000.

[8] Z. Lu, R. D. C. Monteiro and J. W. O'Neal. An iterative solver-based infeasible primal-dual path-following algorithm for convex quadratic programming, SIAM J. Optim., 17: 287-310, 2006.

[9] J. Malick. A dual approach to semidefinite least-squares problems, SIAM J. Matrix Anal. Appl., 26(1): 272-284, 2004.

[10] R. D. C. Monteiro and Y. Zhang. A unified analysis for a class of long-step primal-dual path-following interior-point algorithms for semidefinite programming, Math. Program., 81(3): 281-299, 1998.

[11] R. D. C. Monteiro. Primal-dual path-following algorithms for semidefinite programming, SIAM J. Optim., 7(3): 663-678, 1997.

[12] S. Mizuno and F. Jarre. Global and polynomial-time convergence of an infeasible interior-point algorithm using inexact computation, Math. Program., 84: 105-122, 1999.

[13] S. Lele. Euclidean Distance Matrix Analysis (EDMA): estimation of mean form and mean form difference, Math. Geology, 25(5): 573-602, 1993.

[14] M. J. Todd, K. C. Toh and R. H. Tütüncü. On the Nesterov-Todd direction in semidefinite programming, SIAM J. Optim., 8: 769-796, 1998.

[15] R. H. Tütüncü, K. C. Toh, and M. J. Todd. Solving semidefinite-quadratic-linear programs using SDPT3, Math. Program. Ser. B, 95: 189-217, 2003.

[16] K. C. Toh. An inexact primal-dual path-following algorithm for convex quadratic SDP, Math. Program., 112: 221-254, 2008.

[17] K. C. Toh, R. H. Tütüncü, and M. J. Todd. Inexact primal-dual path-following algorithms for a special class of convex quadratic SDP and related problems, Pacific J. Optim., 3: 135-164, 2007.

[18] K. C. Toh and M. Kojima. Solving some large scale semidefinite programs via the conjugate residual method, SIAM J. Optim., 12: 669-691, 2002.

[19] G. Zhou and K. C. Toh. Polynomiality of an inexact infeasible interior point algorithm for semidefinite programming, Math. Program., 99(2): 261-282, 2004.

[20] Y. Zhang. On extending some primal-dual interior-point algorithms from linear programming to semidefinite programming, SIAM J. Optim., 8: 365-386, 1998.


More information

Infeasibility Detection and an Inexact Active-Set Method for Large-Scale Nonlinear Optimization

Infeasibility Detection and an Inexact Active-Set Method for Large-Scale Nonlinear Optimization Infeasibility Detection and an Inexact Active-Set Method for Large-Scale Nonlinear Optimization Frank E. Curtis, Lehigh University involving joint work with James V. Burke, University of Washington Daniel

More information

Lecture 6: Conic Optimization September 8

Lecture 6: Conic Optimization September 8 IE 598: Big Data Optimization Fall 2016 Lecture 6: Conic Optimization September 8 Lecturer: Niao He Scriber: Juan Xu Overview In this lecture, we finish up our previous discussion on optimality conditions

More information

Iteration-complexity of first-order penalty methods for convex programming

Iteration-complexity of first-order penalty methods for convex programming Iteration-complexity of first-order penalty methods for convex programming Guanghui Lan Renato D.C. Monteiro July 24, 2008 Abstract This paper considers a special but broad class of convex programing CP)

More information

INEXACT INTERIOR-POINT METHODS FOR LARGE SCALE LINEAR AND CONVEX QUADRATIC SEMIDEFINITE PROGRAMMING LI LU. (B.Sc., SJTU, China)

INEXACT INTERIOR-POINT METHODS FOR LARGE SCALE LINEAR AND CONVEX QUADRATIC SEMIDEFINITE PROGRAMMING LI LU. (B.Sc., SJTU, China) INEXACT INTERIOR-POINT METHODS FOR LARGE SCALE LINEAR AND CONVEX QUADRATIC SEMIDEFINITE PROGRAMMING LI LU (B.Sc., SJTU, China) A THESIS SUBMITTED FOR THE DEGREE OF DOCTOR OF PHILOSOPHY DEPARTMENT OF MATHEMATICS

More information

Room 225/CRL, Department of Electrical and Computer Engineering, McMaster University,

Room 225/CRL, Department of Electrical and Computer Engineering, McMaster University, SUPERLINEAR CONVERGENCE OF A SYMMETRIC PRIMAL-DUAL PATH FOLLOWING ALGORITHM FOR SEMIDEFINITE PROGRAMMING ZHI-QUAN LUO, JOS F. STURM y, AND SHUZHONG ZHANG z Abstract. This paper establishes the superlinear

More information

Lecture: Duality of LP, SOCP and SDP

Lecture: Duality of LP, SOCP and SDP 1/33 Lecture: Duality of LP, SOCP and SDP Zaiwen Wen Beijing International Center For Mathematical Research Peking University http://bicmr.pku.edu.cn/~wenzw/bigdata2017.html wenzw@pku.edu.cn Acknowledgement:

More information

Convex Optimization Boyd & Vandenberghe. 5. Duality

Convex Optimization Boyd & Vandenberghe. 5. Duality 5. Duality Convex Optimization Boyd & Vandenberghe Lagrange dual problem weak and strong duality geometric interpretation optimality conditions perturbation and sensitivity analysis examples generalized

More information

On the acceleration of augmented Lagrangian method for linearly constrained optimization

On the acceleration of augmented Lagrangian method for linearly constrained optimization On the acceleration of augmented Lagrangian method for linearly constrained optimization Bingsheng He and Xiaoming Yuan October, 2 Abstract. The classical augmented Lagrangian method (ALM plays a fundamental

More information

Lagrange Duality. Daniel P. Palomar. Hong Kong University of Science and Technology (HKUST)

Lagrange Duality. Daniel P. Palomar. Hong Kong University of Science and Technology (HKUST) Lagrange Duality Daniel P. Palomar Hong Kong University of Science and Technology (HKUST) ELEC5470 - Convex Optimization Fall 2017-18, HKUST, Hong Kong Outline of Lecture Lagrangian Dual function Dual

More information

Self-Concordant Barrier Functions for Convex Optimization

Self-Concordant Barrier Functions for Convex Optimization Appendix F Self-Concordant Barrier Functions for Convex Optimization F.1 Introduction In this Appendix we present a framework for developing polynomial-time algorithms for the solution of convex optimization

More information

Rachid Benouahboun 1 and Abdelatif Mansouri 1

Rachid Benouahboun 1 and Abdelatif Mansouri 1 RAIRO Operations Research RAIRO Oper. Res. 39 25 3 33 DOI:.5/ro:252 AN INTERIOR POINT ALGORITHM FOR CONVEX QUADRATIC PROGRAMMING WITH STRICT EQUILIBRIUM CONSTRAINTS Rachid Benouahboun and Abdelatif Mansouri

More information

An Augmented Lagrangian Dual Approach for the H-Weighted Nearest Correlation Matrix Problem

An Augmented Lagrangian Dual Approach for the H-Weighted Nearest Correlation Matrix Problem An Augmented Lagrangian Dual Approach for the H-Weighted Nearest Correlation Matrix Problem Houduo Qi and Defeng Sun March 3, 2008 Abstract In [15], Higham considered two types of nearest correlation matrix

More information

Inexact primal-dual path-following algorithms for a special class of convex quadratic SDP and related problems

Inexact primal-dual path-following algorithms for a special class of convex quadratic SDP and related problems Inexact primal-dual path-following algorithms for a special class of convex quadratic SDP and related problems K. C. Toh, R. H. Tütüncü, and M. J. Todd May 26, 2006 Dedicated to Masakazu Kojima on the

More information

2.098/6.255/ Optimization Methods Practice True/False Questions

2.098/6.255/ Optimization Methods Practice True/False Questions 2.098/6.255/15.093 Optimization Methods Practice True/False Questions December 11, 2009 Part I For each one of the statements below, state whether it is true or false. Include a 1-3 line supporting sentence

More information

Part 4: Active-set methods for linearly constrained optimization. Nick Gould (RAL)

Part 4: Active-set methods for linearly constrained optimization. Nick Gould (RAL) Part 4: Active-set methods for linearly constrained optimization Nick Gould RAL fx subject to Ax b Part C course on continuoue optimization LINEARLY CONSTRAINED MINIMIZATION fx subject to Ax { } b where

More information

Exploiting Sparsity in Primal-Dual Interior-Point Methods for Semidefinite Programming.

Exploiting Sparsity in Primal-Dual Interior-Point Methods for Semidefinite Programming. Research Reports on Mathematical and Computing Sciences Series B : Operations Research Department of Mathematical and Computing Sciences Tokyo Institute of Technology 2-12-1 Oh-Okayama, Meguro-ku, Tokyo

More information

Convex Optimization. Newton s method. ENSAE: Optimisation 1/44

Convex Optimization. Newton s method. ENSAE: Optimisation 1/44 Convex Optimization Newton s method ENSAE: Optimisation 1/44 Unconstrained minimization minimize f(x) f convex, twice continuously differentiable (hence dom f open) we assume optimal value p = inf x f(x)

More information

Introduction to Nonlinear Optimization Paul J. Atzberger

Introduction to Nonlinear Optimization Paul J. Atzberger Introduction to Nonlinear Optimization Paul J. Atzberger Comments should be sent to: atzberg@math.ucsb.edu Introduction We shall discuss in these notes a brief introduction to nonlinear optimization concepts,

More information

CSC Linear Programming and Combinatorial Optimization Lecture 10: Semidefinite Programming

CSC Linear Programming and Combinatorial Optimization Lecture 10: Semidefinite Programming CSC2411 - Linear Programming and Combinatorial Optimization Lecture 10: Semidefinite Programming Notes taken by Mike Jamieson March 28, 2005 Summary: In this lecture, we introduce semidefinite programming

More information

Big Data Analytics: Optimization and Randomization

Big Data Analytics: Optimization and Randomization Big Data Analytics: Optimization and Randomization Tianbao Yang Tutorial@ACML 2015 Hong Kong Department of Computer Science, The University of Iowa, IA, USA Nov. 20, 2015 Yang Tutorial for ACML 15 Nov.

More information

12. Interior-point methods

12. Interior-point methods 12. Interior-point methods Convex Optimization Boyd & Vandenberghe inequality constrained minimization logarithmic barrier function and central path barrier method feasibility and phase I methods complexity

More information

A VARIATIONAL METHOD FOR THE ANALYSIS OF A MONOTONE SCHEME FOR THE MONGE-AMPÈRE EQUATION 1. INTRODUCTION

A VARIATIONAL METHOD FOR THE ANALYSIS OF A MONOTONE SCHEME FOR THE MONGE-AMPÈRE EQUATION 1. INTRODUCTION A VARIATIONAL METHOD FOR THE ANALYSIS OF A MONOTONE SCHEME FOR THE MONGE-AMPÈRE EQUATION GERARD AWANOU AND LEOPOLD MATAMBA MESSI ABSTRACT. We give a proof of existence of a solution to the discrete problem

More information

INTERIOR-POINT METHODS FOR NONCONVEX NONLINEAR PROGRAMMING: CONVERGENCE ANALYSIS AND COMPUTATIONAL PERFORMANCE

INTERIOR-POINT METHODS FOR NONCONVEX NONLINEAR PROGRAMMING: CONVERGENCE ANALYSIS AND COMPUTATIONAL PERFORMANCE INTERIOR-POINT METHODS FOR NONCONVEX NONLINEAR PROGRAMMING: CONVERGENCE ANALYSIS AND COMPUTATIONAL PERFORMANCE HANDE Y. BENSON, ARUN SEN, AND DAVID F. SHANNO Abstract. In this paper, we present global

More information