PRIMAL-DUAL ALGORITHMS FOR SEMIDEFINITE OPTIMIZATION PROBLEMS BASED ON GENERALIZED TRIGONOMETRIC BARRIER FUNCTION


International Journal of Pure and Applied Mathematics
Volume 114, No. 4, 2017, 797-818
ISSN: 1311-8080 (printed version); ISSN: 1314-3395 (on-line version)
url: http://www.ijpam.eu

PRIMAL-DUAL ALGORITHMS FOR SEMIDEFINITE OPTIMIZATION PROBLEMS BASED ON GENERALIZED TRIGONOMETRIC BARRIER FUNCTION

M. El Ghami
Faculty of Education and Arts, Mathematics Section
Nord University-Nesna, 8700 Nesna, NORWAY

Abstract: Recently, M. Bouafia et al. [5] (Journal of Optimization Theory and Applications, August 2016) investigated a new kernel function which differs from the self-regular kernel functions; its barrier term is trigonometric. In this paper we generalize the analysis presented in that paper to semidefinite optimization problems (SDO). It is shown that for interior-point methods based on this function the iteration bound for large-update methods is improved significantly, while for small-update methods the iteration bound is the best currently known bound for primal-dual interior-point methods. The analysis for SDO deviates significantly from the analysis for linear optimization, and several new tools and techniques are derived in this paper.

AMS Subject Classification: 90C22, 90C31

Key Words: interior-point methods, kernel function, primal-dual methods, semidefinite optimization, large update, small update

Received: January 2017; Revised: March 2017; Published: June 2017. (c) 2017 Academic Publications, Ltd.

1. Introduction

We consider the standard semidefinite optimization problem (SDO)

\[
(SDP)\qquad p^* = \inf_{X}\ \{\operatorname{Tr}(CX) : \operatorname{Tr}(A_iX) = b_i\ (1 \le i \le m),\ X \succeq 0\},
\]

and its dual problem

\[
(SDD)\qquad d^* = \sup_{y,S}\ \Big\{b^{T}y : \sum_{i=1}^{m} y_iA_i + S = C,\ S \succeq 0\Big\},
\]

where $C$ and the $A_i$ are symmetric $n \times n$ matrices, $b, y \in \mathbb{R}^m$, $X \succeq 0$ means that $X$ is symmetric positive semidefinite, and $\operatorname{Tr}(A)$ denotes the trace of $A$ (i.e., the sum of its diagonal elements).
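To make the primal-dual pair concrete, here is a tiny numeric instance with $n = 2$, $m = 1$; the data $C$, $A_1$, $b$ and the feasible points below are illustrative choices made for this sketch, not taken from the paper.

```python
import numpy as np

# A tiny instance of (SDP)/(SDD) with n = 2, m = 1 (illustrative data).
C = np.array([[2.0, 0.5],
              [0.5, 1.0]])
A1 = np.eye(2)                     # one constraint: Tr(A1 X) = b1
b = np.array([2.0])

# A strictly feasible primal-dual pair: X = E, y1 = 0.5, S = C - y1 A1.
X = np.eye(2)
y = np.array([0.5])
S = C - y[0] * A1

assert np.isclose(np.trace(A1 @ X), b[0])       # primal feasibility
assert np.all(np.linalg.eigvalsh(S) > 0)        # S is positive definite

# The duality gap of a feasible pair is Tr(XS) = Tr(CX) - b^T y.
print("Tr(XS) =", np.trace(X @ S), "=", np.trace(C @ X) - b @ y)
```

At the $\mu$-center introduced below this gap equals $n\mu$, which is why the algorithm stops once $n\mu$ is small.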

Without loss of generality the matrices $A_i$ are assumed to be linearly independent. Recall that for any two $n \times n$ matrices $A$ and $B$ their natural inner product is given by

\[
\operatorname{Tr}(A^{T}B) = \sum_{i=1}^{n}\sum_{j=1}^{n} A_{ij}B_{ij}.
\]

IPMs provide a powerful approach for solving SDO problems. A comprehensive list of publications on SDO can be found in the SDO homepage maintained by Alizadeh [1]. Pioneering works are due to Alizadeh [1, 2] and Nesterov and Nemirovskii [11]. Most IPMs for SDO can be viewed as natural extensions of IPMs for linear optimization (LO), and have similar polynomial complexity results. However, obtaining valid search directions is much more difficult than in the LO case. In the sequel we describe how the usual search directions are obtained for primal-dual methods for solving SDO problems. Our aim is to show that the kernel-function-based approach that we presented for LO in [7] can be generalized and applied also to SDO problems.

1.1. Classical search direction

We assume that (SDP) and (SDD) satisfy the interior-point condition (IPC), i.e., there exist $X^0 \succ 0$ and $(y^0, S^0)$ with $S^0 \succ 0$ such that $X^0$ is feasible for (SDP) and $(y^0, S^0)$ is feasible for (SDD). Moreover, we may assume that $X^0 = S^0 = E$, where $E$ is the $n \times n$ identity matrix [12]. Assuming the IPC, one can write the optimality conditions for the primal-dual pair of problems as follows:

\[
\operatorname{Tr}(A_iX) = b_i,\ i = 1,\ldots,m,\qquad
\sum_{i=1}^{m} y_iA_i + S = C,\qquad
XS = 0,\qquad X, S \succeq 0. \tag{1}
\]

The basic idea of primal-dual IPMs is to replace the complementarity condition

$XS = 0$ by the parameterized equation

\[
XS = \mu E,\qquad X, S \succ 0,
\]

where $\mu > 0$. The resulting system has a unique solution for each $\mu > 0$. This solution is denoted by $(X(\mu), y(\mu), S(\mu))$ for each $\mu > 0$; $X(\mu)$ is called the $\mu$-center of (SDP) and $(y(\mu), S(\mu))$ is the $\mu$-center of (SDD). The set of $\mu$-centers (with $\mu > 0$) defines a homotopy path, which is called the central path of (SDP) and (SDD) [12, 13]. The principal idea of IPMs is to follow this central path and approach the optimal set as $\mu$ goes to zero. Newton's method amounts to linearizing this system, thus yielding the following system of equations:

\[
\operatorname{Tr}(A_i\Delta X) = 0,\ i = 1,\ldots,m,\qquad
\sum_{i=1}^{m} \Delta y_iA_i + \Delta S = 0,\qquad
X\Delta S + \Delta X S = \mu E - XS. \tag{2}
\]

This so-called Newton system has a unique solution $(\Delta X, \Delta y, \Delta S)$. Note that $\Delta S$ is symmetric, due to the second equation in (2). However, a crucial point is that $\Delta X$ may not be symmetric. Many researchers have proposed various ways of symmetrizing the third equation in the Newton system so that the new system has a unique symmetric solution. All these proposals can be described by using a symmetric nonsingular scaling matrix $P$ and by replacing (2) by the system

\[
\operatorname{Tr}(A_i\Delta X) = 0,\ i = 1,\ldots,m,\qquad
\sum_{i=1}^{m} \Delta y_iA_i + \Delta S = 0,\qquad
\Delta X + P\Delta S P^{T} = \mu S^{-1} - X. \tag{3}
\]

Now $\Delta X$ is automatically a symmetric matrix.

1.2. Nesterov-Todd direction

In this paper we consider the symmetrization scheme of Nesterov and Todd [14]. So we use

\[
P = X^{\frac12}\big(X^{\frac12} S X^{\frac12}\big)^{-\frac12} X^{\frac12}
  = S^{-\frac12}\big(S^{\frac12} X S^{\frac12}\big)^{\frac12} S^{-\frac12},
\]

where the last equality can easily be verified. Let $D = P^{\frac12}$, where $P^{\frac12}$ denotes the symmetric square root of $P$. The matrix $D$ can be used to scale $X$ and $S$ to the same matrix $V$, namely [12, 15]:

\[
V := \frac{1}{\sqrt{\mu}}\, D^{-1}XD^{-1} = \frac{1}{\sqrt{\mu}}\, DSD. \tag{4}
\]

Obviously the matrices $D$ and $V$ are symmetric and positive definite. Let us further define

\[
\bar A_i := \sqrt{\mu}\, DA_iD,\ i = 1,\ldots,m;\qquad
D_X := \frac{1}{\sqrt{\mu}}\, D^{-1}\Delta X D^{-1};\qquad
D_S := \frac{1}{\sqrt{\mu}}\, D\Delta S D. \tag{5}
\]

We refer to $D_X$ and $D_S$ as the scaled search directions. Now (3) can be rewritten as follows:

\[
\operatorname{Tr}(\bar A_i D_X) = 0,\ i = 1,\ldots,m,\qquad
\sum_{i=1}^{m} \Delta y_i \bar A_i + D_S = 0,\qquad
D_X + D_S = V^{-1} - V. \tag{6}
\]

In the sequel, we use the following notational conventions. Throughout this paper, $\|\cdot\|$ denotes the 2-norm of a vector (the Frobenius norm of a matrix). The nonnegative and the positive orthants are denoted by $\mathbb{R}^n_+$ and $\operatorname{int}\mathbb{R}^n_+$, respectively, and $S^n$, $S^n_+$, and $\operatorname{int}S^n_+$ denote the cone of symmetric, symmetric positive semidefinite, and symmetric positive definite $n \times n$ matrices, respectively. For any $V \in S^n$, we denote by $\lambda(V)$ the vector of eigenvalues of $V$ arranged in increasing order, $\lambda_1(V) \le \lambda_2(V) \le \cdots \le \lambda_n(V)$. For any square matrix $A$, we denote by $\eta_1(A) \le \eta_2(A) \le \cdots \le \eta_n(A)$ the singular values of $A$; if $A$ is symmetric, then $\eta_i(A) = |\lambda_i(A)|$, $i = 1,\ldots,n$. If $z \in \mathbb{R}^n$ and $f : \mathbb{R} \to \mathbb{R}$, then $f(z)$ denotes the vector in $\mathbb{R}^n$ whose $i$-th component is $f(z_i)$, $1 \le i \le n$, and if $D$ is a diagonal matrix, then $f(D)$ denotes the diagonal matrix with $f(D_{ii})$ as $i$-th diagonal component. For $X \in S^n$ with $X = Q^{T}\Lambda Q$, where $Q$ is orthogonal and $\Lambda$ diagonal, $f(X) = Q^{T}f(\Lambda)Q$. Finally, if $v$ is a vector, $\operatorname{diag}(v)$ denotes the diagonal matrix with diagonal elements $v_i$.
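Returning to the scaling (4): the scaling point $P$ and the matrix $V$ are easy to compute numerically from eigendecompositions. The following sketch (function and variable names are ours, chosen for illustration) forms $P$, $D = P^{1/2}$ and $V$, and checks the two expressions in (4) against each other.

```python
import numpy as np

def sym_sqrt(M):
    # Symmetric square root of a symmetric positive definite matrix.
    w, Q = np.linalg.eigh(M)
    return Q @ np.diag(np.sqrt(w)) @ Q.T

def nt_scaling(X, S, mu):
    # Form P as in Section 1.2, D = P^{1/2}, and V from (4).
    Xh = sym_sqrt(X)
    P = Xh @ np.linalg.inv(sym_sqrt(Xh @ S @ Xh)) @ Xh
    D = sym_sqrt(P)
    Dinv = np.linalg.inv(D)
    V = Dinv @ X @ Dinv / np.sqrt(mu)
    # The two expressions for V in (4) must agree.
    assert np.allclose(V, D @ S @ D / np.sqrt(mu), atol=1e-8)
    return D, V
```

For $X = S = E$ and $\mu = 1$ this returns $V = E$, matching the initialization of the algorithm in Figure 1 below.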

2. New search direction

In this section we introduce the new search direction. We start with the definition of a matrix function [16, 17].

Definition 1. Let $X$ be a symmetric matrix, and let

\[
X = Q_X \operatorname{diag}(\lambda_1(X), \lambda_2(X), \ldots, \lambda_n(X))\, Q_X^{T}
\]

be an eigenvalue decomposition of $X$, where $\lambda_i(X)$, $1 \le i \le n$, denotes the $i$-th eigenvalue of $X$ and $Q_X$ is orthogonal. If $\psi(t)$ is any univariate function whose domain contains $\{\lambda_i(X);\ 1 \le i \le n\}$, then the matrix function $\psi(X)$ is defined by

\[
\psi(X) = Q_X \operatorname{diag}(\psi(\lambda_1(X)), \psi(\lambda_2(X)), \ldots, \psi(\lambda_n(X)))\, Q_X^{T},
\]

and the scalar function $\Psi(X)$ is defined as follows [13]:

\[
\Psi(X) := \sum_{i=1}^{n} \psi(\lambda_i(X)) = \operatorname{Tr}(\psi(X)). \tag{7}
\]

The univariate function $\psi$ is called the kernel function of the scalar function $\Psi$. In this paper, when we use the function $\psi(\cdot)$ and its first three derivatives $\psi'(\cdot)$, $\psi''(\cdot)$, and $\psi'''(\cdot)$ without any specification, it denotes a matrix function if the argument is a matrix and a univariate function (from $\mathbb{R}$ to $\mathbb{R}$) if the argument is in $\mathbb{R}$.

Analogous to the case of LO, the kernel-function-based approach to SDO is obtained by modifying the Nesterov-Todd direction [13]. The observation underlying our approach is that the right-hand side $V^{-1} - V$ in the third equation of (6) is precisely $-\psi'(V)$ if $\psi(t) = (t^2-1)/2 - \log t$, the latter being the kernel function of the well-known logarithmic barrier function. Note that this kernel function is strictly convex and nonnegative, its domain contains all positive reals, and it vanishes at $t = 1$. As we will now show, any continuously differentiable kernel function $\psi(t)$ with these properties gives rise to a primal-dual algorithm for SDO. Given such a kernel function $\psi(t)$, we replace the right-hand side $V^{-1} - V$ in the third equation of (6) by $-\psi'(V)$, with $\psi'(V)$ defined according to Definition 1. Thus we use the following system to define the (scaled) search directions $D_X$ and $D_S$:

\[
\operatorname{Tr}(\bar A_i D_X) = 0,\ i = 1,\ldots,m,\qquad
\sum_{i=1}^{m} \Delta y_i \bar A_i + D_S = 0,\qquad
D_X + D_S = -\psi'(V). \tag{8}
\]
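System (8) reduces to an $m \times m$ linear system: eliminating $D_S = -\sum_i \Delta y_i \bar A_i$ and $D_X = -\psi'(V) - D_S$ and substituting into the first equation gives $M\Delta y = r$ with $M_{ij} = \operatorname{Tr}(\bar A_i \bar A_j)$ and $r_i = \operatorname{Tr}(\bar A_i \psi'(V))$. A sketch of this reduction follows (the helper names are ours; the reduction itself is elementary linear algebra, not spelled out in the paper).

```python
import numpy as np

def matrix_function(V, f):
    # Definition 1: apply the univariate f to V via an eigendecomposition.
    lam, Q = np.linalg.eigh(V)
    return Q @ np.diag(f(lam)) @ Q.T

def scaled_directions(Abar, V, dpsi):
    # Solve (8): Tr(Abar_i D_X) = 0, sum_i dy_i Abar_i + D_S = 0,
    # D_X + D_S = -psi'(V), by eliminating D_X and D_S.
    G = matrix_function(V, dpsi)                       # psi'(V)
    M = np.array([[np.trace(Ai @ Aj) for Aj in Abar] for Ai in Abar])
    r = np.array([np.trace(Ai @ G) for Ai in Abar])
    dy = np.linalg.solve(M, r)                         # M is a Gram matrix,
    DS = -sum(d * Ai for d, Ai in zip(dy, Abar))       # positive definite since
    DX = -G - DS                                       # the A_i are independent
    return dy, DX, DS
```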

Having $D_X$ and $D_S$, $\Delta X$ and $\Delta S$ can be calculated from (5). Due to the orthogonality of $\Delta X$ and $\Delta S$, it is trivial to see that $D_X \perp D_S$, and so

\[
\operatorname{Tr}(D_XD_S) = \operatorname{Tr}(D_SD_X) = 0. \tag{9}
\]

The algorithm considered in this paper is described in Figure 1.

Generic Primal-Dual Algorithm for SDO

Input:
  a kernel function $\psi(t)$;
  a threshold parameter $\tau \ge 1$;
  an accuracy parameter $\epsilon > 0$;
  a barrier update parameter $\theta$, $0 < \theta < 1$;
begin
  $X := E$; $S := E$; $\mu := 1$; $V := E$;
  while $n\mu \ge \epsilon$ do
  begin
    $\mu := (1-\theta)\mu$;
    $V := \dfrac{V}{\sqrt{1-\theta}}$;
    while $\Psi(V) \ge \tau$ do
    begin
      Find search directions by solving system (8);
      Determine a step size $\alpha$;
      $X := X + \alpha\Delta X$; $y := y + \alpha\Delta y$; $S := S + \alpha\Delta S$;
      Compute $V$ from (4);
    end
  end
end

Figure 1: Generic primal-dual interior-point algorithm for SDO.

The inner while loop in the algorithm is called an inner iteration and the outer while loop an outer iteration. So each outer iteration consists of an update of the barrier parameter and a sequence of one or more inner iterations.
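Putting the pieces together, a schematic transcription of Figure 1 might look as follows. It reuses the hypothetical helpers nt_scaling and scaled_directions sketched above and takes the kernel derivative dpsi and a step-size rule as inputs; this is an illustrative skeleton under those assumptions, not the paper's implementation.

```python
import numpy as np

def generic_ipm(A, C, psi, dpsi, step_size, theta=0.5, tau=3.0, eps=1e-8):
    # Assumes the embedding strategy [12]: start at X = S = E, mu = 1.
    n = C.shape[0]
    X, S, y = np.eye(n), np.eye(n), np.zeros(len(A))
    mu = 1.0
    Psi = lambda V: float(np.sum(psi(np.linalg.eigvalsh(V))))   # (7)
    while n * mu >= eps:                        # outer iteration
        mu *= 1.0 - theta                       # barrier update
        D, V = nt_scaling(X, S, mu)
        while Psi(V) >= tau:                    # inner iteration
            Abar = [np.sqrt(mu) * D @ Ai @ D for Ai in A]
            dy, DX, DS = scaled_directions(Abar, V, dpsi)
            Dinv = np.linalg.inv(D)
            dX = np.sqrt(mu) * D @ DX @ D       # unscale via (5)
            dS = np.sqrt(mu) * Dinv @ DS @ Dinv
            alpha = step_size(V)                # e.g. the default step (28)
            X, y, S = X + alpha * dX, y + alpha * dy, S + alpha * dS
            D, V = nt_scaling(X, S, mu)         # recompute V from (4)
    return X, y, S
```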

Note that by using the embedding strategy [12], we can initialize the algorithm with $X = S = E$. Since then $XS = \mu E$ for $\mu = 1$, it follows from (4) that $V = E$ at the start of the algorithm, whence $\Psi(V) = 0$. We then decrease $\mu$ to $\mu := (1-\theta)\mu$, for some $\theta \in (0,1)$. In general this will increase the value of $\Psi(V)$ above the threshold value $\tau$. To get this value small again, thus coming closer to the current $\mu$-center, we solve the scaled search directions from (8), and unscale these directions by using (5). By choosing an appropriate step size $\alpha$, we move along the search direction and construct a new triple $(X_+, y_+, S_+)$ with

\[
X_+ = X + \alpha\Delta X,\qquad y_+ = y + \alpha\Delta y,\qquad S_+ = S + \alpha\Delta S. \tag{10}
\]

If necessary, we repeat the procedure until we find iterates such that $\Psi(V)$ no longer exceeds the threshold value $\tau$, which means that the iterates are in a small enough neighborhood of $(X(\mu), y(\mu), S(\mu))$. Then $\mu$ is again reduced by the factor $1-\theta$ and we apply the same procedure targeting the new $\mu$-centers. This process is repeated until $\mu$ is small enough, i.e. until $n\mu \le \epsilon$. At this stage we have found an $\epsilon$-solution of (SDP) and (SDD).

Just as in the LO case, the parameters $\tau$, $\theta$, and the step size $\alpha$ should be chosen in such a way that the algorithm is optimized, in the sense that the number of iterations required by the algorithm is as small as possible. Obviously, the resulting iteration bound will depend on the kernel function underlying the algorithm, and our main task becomes to find a kernel function that minimizes the iteration bound.

The rest of the paper is organized as follows. In Section 3 we introduce the kernel function $\psi(t)$ considered in this paper and discuss some of its properties that are needed in the analysis of the corresponding algorithm. In Section 4 we derive the properties of the barrier function $\Psi(V)$. The step size $\alpha$ and the resulting decrease of the barrier function are discussed in Section 5. The total iteration bound of the algorithm and the complexity results are derived in Section 6. Finally, some concluding remarks follow in Section 7.

3. Our kernel function and some of its properties

Recently, kernel functions with a trigonometric barrier term for linear optimization were investigated in [5]; the extension to the $P_*(\kappa)$-linear complementarity problem was presented in [10]. The complexity obtained for large-update methods improves significantly upon the complexity obtained in [6, 7]. In this paper we consider kernel functions of the form

\[
\psi(t) = \frac{t^{2}-1}{2} + \frac{4}{p\pi}\left(\tan^{p}(h(t)) - 1\right),\qquad
h(t) = \frac{\pi}{2t+2},\qquad p \ge 2, \tag{11}
\]

and show that the interior-point methods for SDO based on these functions have favorable complexity results. Note that the growth term of our kernel function is quadratic. However, the function (11) deviates from all other kernel functions, since its barrier term $\frac{4}{p\pi}\tan^{p}\big(\frac{\pi}{2t+2}\big)$ is trigonometric. In order to study the new kernel function, several new arguments had to be developed for the analysis. This section starts with a technical lemma; then some properties of the new kernel function are derived.

3.1. Some technical results

In the analysis of the algorithm based on $\psi(t)$ we need its first three derivatives. These are given by

\[
\psi'(t) = t + \frac{4h'(t)}{\pi}\sec^{2}(h(t))\tan^{p-1}(h(t)), \tag{12}
\]
\[
\psi''(t) = 1 + \frac{4}{\pi}\sec^{2}(h(t))\, g(t), \tag{13}
\]
\[
\psi'''(t) = \frac{4}{\pi}\sec^{2}(h(t))\left(k(t)\,h'(t)^{3} + r(t)\,h'(t)h''(t) + \tan^{p-1}(h(t))\,h'''(t)\right), \tag{14}
\]

with

\[
g(t) := \big((p-1)\tan^{p-2}(h(t)) + (p+1)\tan^{p}(h(t))\big)h'(t)^{2} + h''(t)\tan^{p-1}(h(t)), \tag{15}
\]
\[
k(t) := (p-1)(p-2)\tan^{p-3}(h(t)) + 2p^{2}\tan^{p-1}(h(t)) + (p+1)(p+2)\tan^{p+1}(h(t)),
\]
\[
r(t) := 3(p-1)\tan^{p-2}(h(t)) + 3(p+1)\tan^{p}(h(t)), \tag{16}
\]

and the first three derivatives of $h$ are given by

\[
h'(t) = -\frac{\pi}{2(t+1)^{2}};\qquad
h''(t) = \frac{\pi}{(t+1)^{3}};\qquad
h'''(t) = -\frac{3\pi}{(t+1)^{4}}.
\]
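For reference, here is a direct numeric transcription of (11)-(13) (a sketch of ours; the function names are chosen for illustration, and p is the kernel parameter), together with a check of the basic kernel properties $\psi(1) = \psi'(1) = 0$ and $\psi''(t) > 1$:

```python
import numpy as np

def h(t):   return np.pi / (2.0 * (t + 1.0))
def dh(t):  return -np.pi / (2.0 * (t + 1.0) ** 2)
def d2h(t): return np.pi / (t + 1.0) ** 3

def psi(t, p=2):
    # The kernel (11).
    return (t**2 - 1.0) / 2.0 + (4.0 / (p * np.pi)) * (np.tan(h(t))**p - 1.0)

def dpsi(t, p=2):
    # First derivative (12).
    return t + (4.0 * dh(t) / np.pi) / np.cos(h(t))**2 * np.tan(h(t))**(p - 1)

def d2psi(t, p=2):
    # Second derivative (13), with g(t) from (15).
    th = np.tan(h(t))
    g = ((p - 1) * th**(p - 2) + (p + 1) * th**p) * dh(t)**2 + d2h(t) * th**(p - 1)
    return 1.0 + (4.0 / np.pi) / np.cos(h(t))**2 * g

# psi(1) = psi'(1) = 0 and psi''(t) > 1, as Lemma 2 below requires.
assert abs(psi(1.0)) < 1e-12 and abs(dpsi(1.0)) < 1e-12
assert d2psi(0.5) > 1.0 and d2psi(2.0) > 1.0
```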

The next lemma serves to prove that the new kernel function (11) is eligible.

Lemma 2 ([5]). Let $\psi$ be as defined in (11) and $t > 0$. Then

\[
\psi''(t) > 1, \tag{17-a}
\]
\[
t\psi''(t) + \psi'(t) > 0, \tag{17-b}
\]
\[
t\psi''(t) - \psi'(t) > 0, \tag{17-c}
\]
\[
\psi'''(t) < 0. \tag{17-d}
\]

Since $\psi(1) = \psi'(1) = 0$ and $\psi''(t) > 0$, $\psi$ is completely determined by its second derivative:

\[
\psi(t) = \int_{1}^{t}\int_{1}^{\xi}\psi''(\zeta)\,d\zeta\,d\xi. \tag{18}
\]

The second property (17-b) in Lemma 2 is related to the notion of exponential convexity studied in [13]. This property is equivalent to convexity of the composed function $z \mapsto \psi(e^{z})$, and this holds if and only if $\psi(\sqrt{t_1t_2}) \le \frac12(\psi(t_1) + \psi(t_2))$ for any $t_1, t_2 > 0$. Following [13], we therefore say that $\psi$ is exponentially convex, or shortly e-convex, whenever $t > 0$.

Lemma 3. Let $\psi$ be as defined in (11). One has

\[
\psi(t) < \frac{\psi''(1)}{2}(t-1)^{2},\qquad \text{if } t > 1.
\]

Proof. By Taylor's theorem and $\psi(1) = \psi'(1) = 0$, we obtain

\[
\psi(t) = \frac{\psi''(1)}{2}(t-1)^{2} + \frac{1}{6}\psi'''(\xi)(t-1)^{3},
\]

where $1 < \xi < t$ if $t > 1$. Since $\psi'''(\xi) < 0$, the lemma follows. $\square$

Lemma 4. Let $\psi$ be as defined in (11). One has

\[
t\psi'(t) \ge \psi(t),\qquad \text{if } t \ge 1.
\]

Proof. Defining $g(t) := t\psi'(t) - \psi(t)$, one has $g(1) = 0$ and $g'(t) = t\psi''(t) \ge 0$. Hence $g(t) \ge 0$ for $t \ge 1$ and the lemma follows. $\square$

At some places below we apply the function $\Psi$ to a positive vector $v$. The interpretation of $\Psi(v)$ is compatible with Definition 1 when identifying the vector $v$ with the diagonal matrix $\operatorname{diag}(v)$. When applying $\Psi$ to this matrix we obtain

\[
\Psi(v) = \sum_{i=1}^{n}\psi(v_i),\qquad v \in \operatorname{int}\mathbb{R}^{n}_{+}.
\]
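The e-convexity inequality is easy to probe numerically with the psi sketched above (a sanity check of the property, of course not a proof):

```python
import numpy as np

# Spot-check psi(sqrt(t1*t2)) <= (psi(t1) + psi(t2)) / 2 on random points.
rng = np.random.default_rng(0)
for _ in range(1000):
    t1, t2 = rng.uniform(0.05, 10.0, size=2)
    assert psi(np.sqrt(t1 * t2)) <= 0.5 * (psi(t1) + psi(t2)) + 1e-12
```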

4. Properties of $\Psi(V)$ and $\delta(V)$

In this section we extend Theorem 4.9 in [4] to the cone of positive definite matrices. The next theorem gives a lower bound on the norm-based proximity measure $\delta(V)$, defined by

\[
\delta(V) := \frac12\,\|\psi'(V)\| = \frac12\sqrt{\sum_{i=1}^{n}\psi'(\lambda_i(V))^{2}} = \frac12\,\|D_X + D_S\|, \tag{19}
\]

in terms of $\Psi(V)$. Since $\Psi(V)$ is strictly convex and attains its minimal value zero at $V = E$, we have

\[
\Psi(V) = 0 \iff \delta(V) = 0 \iff V = E.
\]

We denote by $\varrho : [0,\infty) \to [1,\infty)$ the inverse function of $\psi(t)$ for $t \ge 1$. In other words,

\[
s = \psi(t) \iff t = \varrho(s),\qquad t \ge 1. \tag{20}
\]

Theorem 5. Let $\varrho$ be as defined in (20). Then

\[
\delta(V) \ge \tfrac12\,\psi'(\varrho(\Psi(V))).
\]

Proof. If $V = E$ then $\delta(V) = \Psi(V) = 0$. Since $\varrho(0) = 1$ and $\psi'(1) = 0$, the inequality holds with equality if $V = E$. Otherwise, by the definitions of $\delta(V)$ in (19) and $\Psi(V)$ in (7), we have $\delta(V) > 0$ and $\Psi(V) > 0$. Let $v_i := \lambda_i(V)$, $1 \le i \le n$. Then $v > 0$ and

\[
\delta(V) = \frac12\sqrt{\sum_{i=1}^{n}\psi'(\lambda_i(V))^{2}} = \frac12\sqrt{\sum_{i=1}^{n}\psi'(v_i)^{2}}.
\]

Since $\psi(t)$ satisfies (17-d), we may apply Theorem 4.9 in [4] to the vector $v$. This gives

\[
\delta(V) \ge \frac12\,\psi'\Big(\varrho\Big(\sum_{i=1}^{n}\psi(v_i)\Big)\Big).
\]

Since $\sum_{i=1}^{n}\psi(v_i) = \sum_{i=1}^{n}\psi(\lambda_i(V)) = \Psi(V)$, the proof of the theorem is complete. $\square$
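Both quantities reduce to eigenvalue computations. A sketch (helper names ours), reusing the psi and dpsi functions from the kernel sketch above:

```python
import numpy as np

def Psi(V, psi):
    # Barrier value (7): sum of psi over the eigenvalues of V.
    return float(np.sum(psi(np.linalg.eigvalsh(V))))

def delta(V, dpsi):
    # Proximity measure (19): half the norm of the eigenvalues of psi'(V).
    return 0.5 * float(np.linalg.norm(dpsi(np.linalg.eigvalsh(V))))

# Both vanish exactly at V = E, consistent with (19) and (20).
V = np.eye(3)
assert abs(Psi(V, psi)) < 1e-10 and abs(delta(V, dpsi)) < 1e-10
```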

Lemma 6. If $\Psi(V) \ge 1$, then

\[
\delta(V) \ge \tfrac16\sqrt{\Psi(V)}. \tag{21}
\]

Proof. The proof of this lemma uses Theorem 5 and Lemma 4. Putting $s = \Psi(V)$, we obtain from Theorem 5 that

\[
\delta(V) \ge \tfrac12\,\psi'(\varrho(s)).
\]

Putting $t = \varrho(s)$, we have by (20),

\[
\psi(t) = \frac{t^{2}-1}{2} + \frac{4}{p\pi}\left(\tan^{p}(h(t)) - 1\right) = s,\qquad
h(t) = \frac{\pi}{2t+2},\qquad t \ge 1.
\]

We derive an upper bound for $t$, as this suffices for our goal. One has, from (18) and $\psi''(t) \ge 1$,

\[
s = \psi(t) = \int_{1}^{t}\int_{1}^{\xi}\psi''(\zeta)\,d\zeta\,d\xi
 \ge \int_{1}^{t}\int_{1}^{\xi}d\zeta\,d\xi = \frac{(t-1)^{2}}{2},
\]

which implies

\[
t = \varrho(s) \le 1 + \sqrt{2s}. \tag{22}
\]

Assuming $s \ge 1$, we get $t = \varrho(s) \le \sqrt{s} + \sqrt{2s} \le 3\sqrt{s}$. Now applying Lemma 4 we may write

\[
\delta(V) \ge \tfrac12\,\psi'(\varrho(s)) \ge \frac{\psi(\varrho(s))}{2\varrho(s)}
 = \frac{s}{2\varrho(s)} \ge \frac{s}{6\sqrt{s}} = \frac16\sqrt{s} = \frac16\sqrt{\Psi(V)}.
\]

This proves the lemma. $\square$

Note that since $\tau \ge 1$, we have at the start of each inner iteration $\Psi(V) \ge 1$. Substitution in (21) gives

\[
\delta(V) \ge \tfrac16. \tag{23}
\]

5. Analysis of the algorithm

In the analysis of the algorithm the concept of exponential convexity [4, 8] is again a crucial ingredient. In this section we derive a default value for the step size and we obtain an upper bound for the decrease in $\Psi(V)$ during a Newton step.

5.1. Three technical lemmas

The next lemma is cited from [16].

Lemma 7. Let $A, B \in S^{n}$ be two nonsingular matrices and let $f(t)$ be a given real-valued function such that $f(e^{t})$ is a convex function. One has

\[
\sum_{i=1}^{n} f(\eta_i(AB)) \le \sum_{i=1}^{n} f(\eta_i(A)\,\eta_i(B)),
\]

where $\eta_i(A)$ and $\eta_i(B)$, $i = 1,\ldots,n$, denote the singular values of $A$ and $B$, respectively.

Lemma 8. Let $A, A+B \in S^{n}_{+}$. Then one has

\[
\lambda_i(A+B) \ge \lambda_1(A) - \lambda_n(B),\qquad i = 1,\ldots,n.
\]

Proof. It is obvious that $\lambda_i(A+B) \ge \lambda_1(A+B)$. By the Rayleigh-Ritz theorem (see [18]), there exists a nonzero $X_0 \in \mathbb{R}^{n}$ such that

\[
\lambda_1(A+B) = \frac{X_0^{T}(A+B)X_0}{X_0^{T}X_0}
 = \frac{X_0^{T}AX_0}{X_0^{T}X_0} + \frac{X_0^{T}BX_0}{X_0^{T}X_0}.
\]

We therefore may write

\[
\lambda_1(A+B) \ge \min_{X \ne 0}\frac{X^{T}AX}{X^{T}X} - \max_{X \ne 0}\frac{X^{T}BX}{X^{T}X}
 = \lambda_1(A) - \lambda_n(B).
\]

This completes the proof of the lemma. $\square$

A consequence of condition (17-b) is that any eligible kernel function is exponentially convex [13]:

\[
\psi(\sqrt{t_1t_2}) \le \tfrac12\big(\psi(t_1) + \psi(t_2)\big),\qquad t_1 > 0,\ t_2 > 0. \tag{24}
\]

This implies the following lemma, which is crucial for our purpose.

Lemma 9. Let $V_1$ and $V_2$ be two symmetric positive definite matrices. Then

\[
\Psi\Big(\big(V_1^{\frac12} V_2 V_1^{\frac12}\big)^{\frac12}\Big) \le \tfrac12\big(\Psi(V_1) + \Psi(V_2)\big).
\]

Proof. For any nonsingular matrix $U$, we have

\[
\eta_i(U) = \lambda_i(U^{T}U)^{\frac12} = \lambda_i(UU^{T})^{\frac12},\qquad i = 1,\ldots,n.
\]

Taking $U = V_2^{\frac12}V_1^{\frac12}$, we may write

\[
\eta_i\big(V_2^{\frac12}V_1^{\frac12}\big)
 = \lambda_i\big(V_1^{\frac12}V_2V_1^{\frac12}\big)^{\frac12},\qquad i = 1,\ldots,n.
\]

Since $V_1$ and $V_2$ are symmetric positive definite, using Lemma 7 one has

\[
\Psi\Big(\big(V_1^{\frac12}V_2V_1^{\frac12}\big)^{\frac12}\Big)
 = \sum_{i=1}^{n}\psi\big(\eta_i(V_2^{\frac12}V_1^{\frac12})\big)
 \le \sum_{i=1}^{n}\psi\big(\sqrt{\eta_i(V_1)\,\eta_i(V_2)}\big).
\]

Since $\eta_i(V_1), \eta_i(V_2) > 0$, we may use that $\psi(t)$ satisfies (17-b) for $t > 0$. Using (24), we hence obtain

\[
\Psi\Big(\big(V_1^{\frac12}V_2V_1^{\frac12}\big)^{\frac12}\Big)
 \le \frac12\sum_{i=1}^{n}\big(\psi(\eta_i(V_1)) + \psi(\eta_i(V_2))\big)
 = \frac12\sum_{i=1}^{n}\big(\psi(\lambda_i(V_1)) + \psi(\lambda_i(V_2))\big)
 = \frac12\big(\Psi(V_1) + \Psi(V_2)\big).
\]

This completes the proof. $\square$

5.2. The decrease of the proximity in the inner iteration

In this subsection we are going to compute a default value for the step size $\alpha$ in order to yield a new triple $(X_+, y_+, S_+)$ as defined in (10). After a damped step, using (5) we have

\[
X_+ = X + \alpha\Delta X = X + \alpha\sqrt{\mu}\,DD_XD = \sqrt{\mu}\,D(V + \alpha D_X)D,
\]
\[
y_+ = y + \alpha\Delta y,
\]
\[
S_+ = S + \alpha\Delta S = S + \alpha\sqrt{\mu}\,D^{-1}D_SD^{-1} = \sqrt{\mu}\,D^{-1}(V + \alpha D_S)D^{-1}.
\]

Denoting the matrix $V$ after the step as $V_+$, we have

\[
V_+ = \frac{1}{\sqrt{\mu}}\big(D^{-1}X_+S_+D\big)^{\frac12}.
\]

Note that $V_+^{2}$ is unitarily similar to the matrix $\frac{1}{\mu}X_+^{\frac12}S_+X_+^{\frac12}$ and hence also to $(V + \alpha D_X)^{\frac12}(V + \alpha D_S)(V + \alpha D_X)^{\frac12}$. This implies that the eigenvalues of $V_+$ are the same as those of the matrix

\[
\tilde V_+ := \Big((V + \alpha D_X)^{\frac12}(V + \alpha D_S)(V + \alpha D_X)^{\frac12}\Big)^{\frac12}.
\]

The definition of $\Psi(V)$ implies that its value depends only on the eigenvalues of $V$. Hence we have $\Psi(\tilde V_+) = \Psi(V_+)$. Our aim is to find $\alpha$ such that the decrement

\[
f(\alpha) := \Psi(V_+) - \Psi(V) = \Psi(\tilde V_+) - \Psi(V) \tag{25}
\]

is as small as possible. Due to Lemma 9, it follows that

\[
\Psi(\tilde V_+) = \Psi\Big(\big((V+\alpha D_X)^{\frac12}(V+\alpha D_S)(V+\alpha D_X)^{\frac12}\big)^{\frac12}\Big)
 \le \tfrac12\big(\Psi(V+\alpha D_X) + \Psi(V+\alpha D_S)\big).
\]

From the definition (25) of $f(\alpha)$, we now have $f(\alpha) \le f_1(\alpha)$, where

\[
f_1(\alpha) := \tfrac12\big(\Psi(V+\alpha D_X) + \Psi(V+\alpha D_S)\big) - \Psi(V).
\]

Note that $f_1(\alpha)$ is convex in $\alpha$, since $\Psi$ is convex. Obviously, $f(0) = f_1(0) = 0$. Taking the derivative with respect to $\alpha$, we get

\[
f_1'(\alpha) = \tfrac12\operatorname{Tr}\big(\psi'(V+\alpha D_X)D_X + \psi'(V+\alpha D_S)D_S\big).
\]

Using the last equality in (8) and also (9), this gives

\[
f_1'(0) = \tfrac12\operatorname{Tr}\big(\psi'(V)(D_X + D_S)\big)
 = -\tfrac12\operatorname{Tr}\big(\psi'(V)^{2}\big) = -2\delta(V)^{2}.
\]

Differentiating once more, we obtain

\[
f_1''(\alpha) = \tfrac12\operatorname{Tr}\big(\psi''(V+\alpha D_X)D_X^{2} + \psi''(V+\alpha D_S)D_S^{2}\big). \tag{26}
\]

In the sequel we use the following notation:

\[
\lambda_1 := \lambda_1(V) = \min_i \lambda_i(V),\qquad \delta := \delta(V).
\]

Lemma 10. One has

\[
f_1''(\alpha) \le 2\delta^{2}\,\psi''(\lambda_1 - 2\alpha\delta).
\]

Proof. The last equality in (8) and (9) imply that

\[
\|D_X + D_S\|^{2} = \|D_X\|^{2} + \|D_S\|^{2} = 4\delta^{2}.
\]

Thus we have $\lambda_n(D_X) \le 2\delta$ and $\lambda_n(D_S) \le 2\delta$. Using Lemma 8 and $V + \alpha D_X \succ 0$, as a consequence we have, for each $i$,

\[
\lambda_i(V + \alpha D_X) \ge \lambda_1 - \alpha\lambda_n(D_X) \ge \lambda_1 - 2\alpha\delta,
\]
\[
\lambda_i(V + \alpha D_S) \ge \lambda_1 - \alpha\lambda_n(D_S) \ge \lambda_1 - 2\alpha\delta.
\]

Due to (17-d), $\psi''$ is monotonically decreasing. So the above inequalities imply that

\[
\psi''(\lambda_i(V + \alpha D_X)) \le \psi''(\lambda_1 - 2\alpha\delta),\qquad
\psi''(\lambda_i(V + \alpha D_S)) \le \psi''(\lambda_1 - 2\alpha\delta).
\]

Substitution into (26) gives

\[
f_1''(\alpha) \le \tfrac12\,\psi''(\lambda_1 - 2\alpha\delta)\operatorname{Tr}\big(D_X^{2} + D_S^{2}\big)
 = \tfrac12\,\psi''(\lambda_1 - 2\alpha\delta)\big(\|D_X\|^{2} + \|D_S\|^{2}\big).
\]

Now, using that $D_X$ and $D_S$ are orthogonal by (9), and $\|D_X + D_S\|^{2} = 4\delta^{2}$ by (19), we obtain

\[
f_1''(\alpha) \le 2\delta^{2}\,\psi''(\lambda_1(V) - 2\alpha\delta).
\]

This proves the lemma. $\square$

Using the notation $v_i = \lambda_i(V)$, $1 \le i \le n$, again, we have

\[
f_1''(\alpha) \le 2\delta^{2}\,\psi''(v_1 - 2\alpha\delta), \tag{27}
\]

which is exactly the same inequality as in the LO case in [7]. This means that our analysis closely resembles the analysis of the LO case in [7]. From this stage on we can apply similar arguments as in the LO case. In particular, the following two lemmas can be stated without proof.

Lemma 11 ([9]). Let $\rho : [0,\infty) \to (0,1]$ denote the inverse of the function $-\tfrac12\psi'(t)$ for $t \in (0,1]$. Then the largest value of the step size $\alpha$ satisfying (27) is given by

\[
\hat\alpha := \frac{1}{2\delta}\big(\rho(\delta) - \rho(2\delta)\big).
\]

Moreover,

\[
\hat\alpha \ge \frac{1}{\psi''(\rho(2\delta))}.
\]

For future use we define

\[
\bar\alpha := \frac{1}{\psi''(\rho(2\delta))}. \tag{28}
\]

By Lemma 11 this step size satisfies $\bar\alpha \le \hat\alpha$.
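Numerically, $\rho$ can be obtained by bisection, since $-\tfrac12\psi'$ is strictly decreasing on $(0,1]$, equals $0$ at $t = 1$, and is unbounded as $t \to 0^{+}$; the default step (28) then follows directly. A sketch under these observations, assuming the dpsi and d2psi helpers from the kernel sketch above:

```python
import numpy as np

def rho(s, dpsi, tol=1e-12):
    # Invert -psi'(t)/2 = s on (0, 1] by bisection (Lemma 11's rho).
    lo, hi = 1e-12, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if -0.5 * dpsi(mid) > s:
            lo = mid                   # value still too large: move right
        else:
            hi = mid
    return 0.5 * (lo + hi)

def default_step(delta_V, dpsi, d2psi):
    # Default step size (28): alpha-bar = 1 / psi''(rho(2 delta)).
    return 1.0 / d2psi(rho(2.0 * delta_V, dpsi))
```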

Lemma 12. If the step size $\alpha$ is such that $\alpha \le \hat\alpha$, then

\[
f(\alpha) \le -\alpha\delta^{2}.
\]

Using the above lemmas we proceed as follows.

Theorem 13. Let $\rho$ be as defined in Lemma 11 and $\bar\alpha$ as in (28), and let $\Psi(V) \ge 1$. Then

\[
f(\bar\alpha) \le -\frac{\delta^{2}}{\psi''(\rho(2\delta))} \le -\frac{\delta^{\frac{p}{p+1}}}{1320\,p}.
\]

Proof. Since $\bar\alpha \le \hat\alpha$, Lemma 12 gives $f(\bar\alpha) \le -\bar\alpha\delta^{2}$, where $\bar\alpha = 1/\psi''(\rho(2\delta))$ as defined in (28). Thus the first inequality follows. To obtain the inverse function $t = \rho(s)$ of $-\tfrac12\psi'(t)$ for $t \in (0,1]$, we need to solve $t$ from the equation

\[
-\tfrac12\Big(t + \frac{4h'(t)}{\pi}\sec^{2}(h(t))\tan^{p-1}(h(t))\Big) = s.
\]

Since $\sec^{2}(h(t))\tan^{p-1}(h(t)) = \csc^{2}(h(t))\tan^{p+1}(h(t))$, this implies

\[
\csc^{2}(h(t))\tan^{p+1}(h(t)) = \frac{\pi}{-4h'(t)}\,(2s + t) = \frac{(t+1)^{2}}{2}\,(2s+t).
\]

For $t \le 1$, we get $\csc^{2}(h(t))\tan^{p+1}(h(t)) \le 2(2s+1)$. Hence, putting $t = \rho(2\delta)$, which is equivalent to $-\psi'(t) = 4\delta$, and using $\csc^{2}(h(t)) \ge 1$, we get

\[
\tan(h(t)) \le (8\delta + 2)^{\frac{1}{p+1}}. \tag{29}
\]

Since $\sec^{2}(h(t)) = 1 + \tan^{2}(h(t))$, by (29) we thus have $\sec^{2}(h(t)) \le 1 + (8\delta+2)^{\frac{2}{p+1}}$, and

\[
\tan^{p}(h(t)) \le (8\delta+2)^{\frac{p}{p+1}},\qquad
\tan^{p-1}(h(t)) \le (8\delta+2)^{\frac{p-1}{p+1}},\qquad
\tan^{p-2}(h(t)) \le (8\delta+2)^{\frac{p-2}{p+1}}.
\]

Since $h'(t)^{2} = \frac{\pi^{2}}{4(t+1)^{4}} \le \frac{\pi^{2}}{4}$ and $h''(t) = \frac{\pi}{(t+1)^{3}} \le \pi$ for all $0 \le t \le 1$, and using also $8\delta + 2 \ge 1$, this implies

\[
\psi''(t) \le 1 + \frac{4}{\pi}\cdot 2\Big(2p\,\frac{\pi^{2}}{4} + \pi\Big)(8\delta+2)^{\frac{p+2}{p+1}}
 = (9 + 4p\pi)(8\delta+2)^{\frac{p+2}{p+1}}.
\]

By (28), we thus have

\[
\bar\alpha = \frac{1}{\psi''(\rho(2\delta))} \ge \frac{1}{(9+4p\pi)(8\delta+2)^{\frac{p+2}{p+1}}}.
\]

Also using (23) (i.e., $6\delta \ge 1$) and $p \ge 2$, we get

\[
\bar\alpha \ge \frac{1}{(9+4p\pi)(8\delta + 12\delta)^{\frac{p+2}{p+1}}}
 = \frac{1}{(9+4p\pi)(20\delta)^{\frac{p+2}{p+1}}}
 \ge \frac{1}{1320\,p\,\delta^{\frac{p+2}{p+1}}}.
\]

Thus

\[
f(\bar\alpha) \le -\bar\alpha\,\delta^{2}
 \le -\frac{\delta^{2}}{1320\,p\,\delta^{\frac{p+2}{p+1}}}
 = -\frac{\delta^{\frac{p}{p+1}}}{1320\,p},
\]

and the theorem follows. $\square$

Substitution of (21) gives

\[
f(\bar\alpha) \le -\frac{\delta^{\frac{p}{p+1}}}{1320\,p}
 \le -\frac{\Psi^{\frac{p}{2(p+1)}}}{1320\,p\cdot 6^{\frac{p}{p+1}}}
 \le -\frac{\Psi^{\frac{p}{2(p+1)}}}{7920\,p}.
\]

5.3. A uniform upper bound for $\Psi$

In this subsection we extend Theorem 3.2 in [4] to the cone of positive definite matrices. As we will see, the proof of the next theorem easily follows from that result.

Theorem 14. Let $\varrho$ be as defined in (20). Then for any positive definite matrix $V$ and any $\beta \ge 1$ we have

\[
\Psi(\beta V) \le n\,\psi\Big(\beta\,\varrho\Big(\frac{\Psi(V)}{n}\Big)\Big).
\]

Proof. Let $v_i := \lambda_i(V)$, $1 \le i \le n$. Then $v > 0$ and

\[
\Psi(\beta V) = \sum_{i=1}^{n}\psi(\lambda_i(\beta V)) = \sum_{i=1}^{n}\psi(\beta\lambda_i(V)) = \sum_{i=1}^{n}\psi(\beta v_i) = \Psi(\beta v).
\]

Due to the fact that $\psi(t)$ satisfies (17-c), at this stage we may use Theorem 3.2 in [4], which gives

\[
\Psi(\beta v) \le n\,\psi\Big(\beta\,\varrho\Big(\frac{\Psi(v)}{n}\Big)\Big).
\]

Since $\Psi(v) = \sum_{i=1}^{n}\psi(v_i) = \sum_{i=1}^{n}\psi(\lambda_i(V)) = \Psi(V)$, the theorem follows. $\square$

Before the update of $\mu$ we have $\Psi(V) \le \tau$, and after the update of $\mu$ to $(1-\theta)\mu$ we have $V_+ = \frac{V}{\sqrt{1-\theta}}$. Application of Theorem 14, with $\beta = \frac{1}{\sqrt{1-\theta}}$, yields

\[
\Psi(V_+) \le n\,\psi\bigg(\frac{\varrho(\tau/n)}{\sqrt{1-\theta}}\bigg).
\]

Therefore we define

\[
L = L(n,\theta,\tau) := n\,\psi\bigg(\frac{\varrho(\tau/n)}{\sqrt{1-\theta}}\bigg). \tag{30}
\]

In the sequel the value $L(n,\theta,\tau)$ is simply denoted as $L$. A crucial (but trivial) observation is that during the course of the algorithm the value of $\Psi(V)$ will never exceed $L$, because during the inner iterations the value of $\Psi$ always decreases.
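Like $\rho$ above, the inverse $\varrho$ of (20) is monotone and can be bracketed and bisected, after which (30) is a one-liner. The following sketch (helper names ours, reusing the psi from the kernel sketch) evaluates it:

```python
import numpy as np

def varrho(s, psi, tol=1e-12):
    # Invert psi on [1, oo): find t = varrho(s) with psi(t) = s, cf. (20).
    lo, hi = 1.0, 2.0
    while psi(hi) < s:
        hi *= 2.0                      # grow the bracket until it contains t
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if psi(mid) < s else (lo, mid)
    return 0.5 * (lo + hi)

def L_bound(n, theta, tau, psi):
    # The uniform upper bound (30): L = n * psi(varrho(tau/n) / sqrt(1-theta)).
    return n * psi(varrho(tau / n, psi) / np.sqrt(1.0 - theta))
```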

6. Complexity

We are now ready to derive the iteration bounds for large-update methods. An upper bound for the total number of (inner) iterations is obtained by multiplying an upper bound for the number of inner iterations between two successive updates of $\mu$ by the number of barrier parameter updates. The latter number is bounded above by (cf. [19, Lemma II.17])

\[
\frac{1}{\theta}\log\frac{n}{\epsilon}.
\]

To obtain an upper bound $K$ for the number of inner iterations between two successive updates we need a few more technical lemmas. The following lemma is taken from Proposition 1.3.2 in [13]. Its relevance is due to the fact that the barrier function values between two successive updates of $\mu$ yield a decreasing sequence of positive numbers. We will denote this sequence by $\Psi_0, \Psi_1, \ldots$

Lemma 15. Let $t_0, t_1, \ldots, t_K$ be a sequence of positive numbers such that

\[
t_{k+1} \le t_k - \kappa\,t_k^{1-\gamma},\qquad k = 0, 1, \ldots, K-1,
\]

where $\kappa > 0$ and $0 < \gamma \le 1$. Then $K \le \Big\lceil \dfrac{t_0^{\gamma}}{\kappa\gamma} \Big\rceil$.

Lemma 16. If $K$ denotes the number of inner iterations between two successive updates of $\mu$, then

\[
K \le 7920\,p\,\Psi_0^{\frac{p+2}{2(p+1)}}.
\]

Proof. The definition of $K$ implies $\Psi_{K-1} > \tau$ and $\Psi_K \le \tau$ and, according to Theorem 13,

\[
\Psi_{k+1} \le \Psi_k - \kappa\,\Psi_k^{1-\gamma},\qquad k = 0, 1, \ldots, K-1,
\]

with $\kappa = \frac{1}{7920\,p}$ and $\gamma = \frac{p+2}{2(p+1)}$. Application of Lemma 15, with $t_k = \Psi_k$, yields the desired inequality. $\square$

Using $\Psi_0 \le L$, where the number $L$ is as given in (30), and Lemma 16 we obtain the following upper bound on the total number of iterations:

\[
\frac{7920\,p\,L^{\frac{p+2}{2(p+1)}}}{\theta}\,\log\frac{n}{\epsilon}. \tag{31}
\]

6.1. Large-update

We just established that (31) is an upper bound for the total number of iterations. Using $\psi(t) \le \frac{t^{2}-1}{2}$ for $t \ge 1$ (which holds by (11), since $\tan^{p}(h(t)) \le 1$ for $t \ge 1$) and $\varrho(\tau/n) \le 1 + \sqrt{2\tau/n}$ by (22), substitution in (30) gives

\[
L \le \frac{n}{2}\Bigg(\frac{\big(1+\sqrt{2\tau/n}\big)^{2}}{1-\theta} - 1\Bigg)
 = \frac{\theta n + 2\sqrt{2\tau n} + 2\tau}{2(1-\theta)}.
\]
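Before specializing the parameters, the bound can be sanity-checked numerically. The sketch below plugs a large-update choice $\tau = n$, $\theta = 1/2$, $p = 2$ (illustrative values, not prescribed by the paper) into (30)-(31), reusing L_bound and psi from the earlier sketches.

```python
import numpy as np

# Illustrative evaluation of the iteration bound (31); the parameter
# values below are example choices, not taken from the paper.
n, theta, tau, p, eps = 100, 0.5, 100.0, 2, 1e-6
L = L_bound(n, theta, tau, psi)            # Psi after a mu-update, cf. (30)
gamma = (p + 2) / (2 * (p + 1))
total = 7920 * p * L**gamma / theta * np.log(n / eps)
print(f"(31) gives at most {total:.3e} inner iterations in total")
```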

Using (31), the total number of iterations is therefore bounded above by

\[
\frac{K}{\theta}\log\frac{n}{\epsilon}
 \le \frac{7920\,p}{\theta\,\big(2(1-\theta)\big)^{\frac{p+2}{2(p+1)}}}
 \Big(\theta n + 2\sqrt{2\tau n} + 2\tau\Big)^{\frac{p+2}{2(p+1)}}\log\frac{n}{\epsilon}.
\]

A large-update method uses $\tau = O(n)$ and $\theta = \Theta(1)$. The right-hand side expression is then

\[
O\Big(p\,n^{\frac{p+2}{2(p+1)}}\log\frac{n}{\epsilon}\Big),
\]

as may easily be verified.

6.2. Small-update

For small-update methods one has $\tau = O(1)$ and $\theta = \Theta\big(\frac{1}{\sqrt{n}}\big)$. Using Lemma 3, with $\psi''(1) = \frac{p\pi+8}{4}$, we then obtain

\[
L \le \frac{n(p\pi+8)}{8}\Bigg(\frac{\varrho(\tau/n)}{\sqrt{1-\theta}} - 1\Bigg)^{2}.
\]

Using (22), then

\[
L \le \frac{n(p\pi+8)}{8}\Bigg(\frac{1+\sqrt{2\tau/n}}{\sqrt{1-\theta}} - 1\Bigg)^{2}.
\]

Using $1 - \sqrt{1-\theta} = \frac{\theta}{1+\sqrt{1-\theta}} \le \theta$, this leads to

\[
L \le \frac{p\pi+8}{8(1-\theta)}\Big(\theta\sqrt{n} + \sqrt{2\tau}\Big)^{2}.
\]

We conclude that the total number of iterations is bounded above by

\[
\frac{K}{\theta}\log\frac{n}{\epsilon}
 \le \frac{7920\,p}{\theta}\bigg(\frac{p\pi+8}{8(1-\theta)}\bigg)^{\frac{p+2}{2(p+1)}}
 \Big(\theta\sqrt{n} + \sqrt{2\tau}\Big)^{\frac{p+2}{p+1}}\log\frac{n}{\epsilon}.
\]

The right-hand side expression is then $O\big(\sqrt{n}\log\frac{n}{\epsilon}\big)$.

7. Concluding Remarks

In this paper we extended the results obtained for kernel-function-based IPMs in [5] for LO to semidefinite optimization problems. The analysis in this paper is new and different from the one used for LO, and several new tools and techniques are derived. The proposed function has a trigonometric barrier term, but the function is not logarithmic and not self-regular.

For this parametric kernel function, we have shown that the best known iteration bounds for large-update and small-update methods can be achieved, namely $O\big(\sqrt{n}\,(\log n)\log\frac{n}{\epsilon}\big)$ for large-update methods (taking $p = O(\log n)$ in the bound above) and $O\big(\sqrt{n}\log\frac{n}{\epsilon}\big)$ for small-update methods.

References

[1] F. Alizadeh, Combinatorial Optimization with Interior Point Methods and Semi-Definite Matrices, PhD thesis, University of Minnesota, Minneapolis, Minnesota, USA (1991).

[2] F. Alizadeh, Interior point methods in semidefinite programming with applications to combinatorial optimization, SIAM Journal on Optimization, 5, No 1 (1995), 13-51.

[3] Y.Q. Bai, M. El Ghami, C. Roos, A new efficient large-update primal-dual interior-point method based on a finite barrier, SIAM Journal on Optimization, 13, No 3 (2003).

[4] Y.Q. Bai, M. El Ghami, C. Roos, A comparative study of kernel functions for primal-dual interior-point algorithms in linear optimization, SIAM Journal on Optimization, 15, No 1 (2004), 101-128.

[5] M. Bouafia, D. Benterki, A. Yassine, An efficient primal-dual interior point method for linear programming problems based on a new kernel function with a trigonometric barrier term, Journal of Optimization Theory and Applications, 170 (2016).

[6] X.Z. Cai, G.Q. Wang, M. El Ghami, Y.J. Yue, Complexity analysis of primal-dual interior-point methods for linear optimization based on a parametric kernel function with a trigonometric barrier term, Abstract and Applied Analysis, 2014 (2014), Article ID 710158, doi: 10.1155/2014/710158.

[7] M. El Ghami, Z.A. Guennoun, S. Bouali, T. Steihaug, Primal-dual interior-point methods for linear optimization based on a kernel function with trigonometric barrier term, Journal of Computational and Applied Mathematics, 236, No 15 (2012).

[8] M. El Ghami, A Kernel Function Approach for Interior Point Methods: Analysis and Implementation, LAP Lambert Academic Publishing, Germany.

[9] M. El Ghami, C. Roos, Generic primal-dual interior point methods based on a new kernel function, RAIRO-Operations Research, 42, No 2 (2008), 199-213.

[10] M. El Ghami, G.Q. Wang, Interior-point methods for $P_*(\kappa)$-linear complementarity problem based on generalized trigonometric barrier function, International Journal of Applied Mathematics, 30 (2017), 11-33.

[11] Y.E. Nesterov, A.S. Nemirovskii, Interior Point Polynomial Methods in Convex Programming: Theory and Algorithms, SIAM, Philadelphia, USA (1993).

[12] E. de Klerk, Aspects of Semidefinite Programming, Applied Optimization, Vol. 65, Kluwer Academic Publishers, Dordrecht, The Netherlands (2002).

[13] J. Peng, C. Roos, T. Terlaky, Self-Regularity: A New Paradigm for Primal-Dual Interior-Point Algorithms, Princeton University Press (2002).

[14] Y.E. Nesterov, M.J. Todd, Self-scaled barriers and interior-point methods for convex programming, Mathematics of Operations Research, 22, No 1 (1997), 1-42.

[15] J.F. Sturm, S. Zhang, Symmetric primal-dual path following algorithms for semidefinite programming, Applied Numerical Mathematics, 29 (1999), 301-315.

[16] R.A. Horn, C.R. Johnson, Matrix Analysis, Cambridge University Press, Cambridge, UK (1985).

[17] W. Rudin, Principles of Mathematical Analysis, McGraw-Hill Book Company, New York (1978).

[18] R.A. Horn, C.R. Johnson, Topics in Matrix Analysis, Cambridge University Press (1991).

[19] C. Roos, T. Terlaky, J.-Ph. Vial, Theory and Algorithms for Linear Optimization: An Interior-Point Approach, Springer Science (2005).


More information

Lecture: Algorithms for LP, SOCP and SDP

Lecture: Algorithms for LP, SOCP and SDP 1/53 Lecture: Algorithms for LP, SOCP and SDP Zaiwen Wen Beijing International Center For Mathematical Research Peking University http://bicmr.pku.edu.cn/~wenzw/bigdata2018.html wenzw@pku.edu.cn Acknowledgement:

More information

ELE539A: Optimization of Communication Systems Lecture 15: Semidefinite Programming, Detection and Estimation Applications

ELE539A: Optimization of Communication Systems Lecture 15: Semidefinite Programming, Detection and Estimation Applications ELE539A: Optimization of Communication Systems Lecture 15: Semidefinite Programming, Detection and Estimation Applications Professor M. Chiang Electrical Engineering Department, Princeton University March

More information

Primal-Dual Symmetric Interior-Point Methods from SDP to Hyperbolic Cone Programming and Beyond

Primal-Dual Symmetric Interior-Point Methods from SDP to Hyperbolic Cone Programming and Beyond Primal-Dual Symmetric Interior-Point Methods from SDP to Hyperbolic Cone Programming and Beyond Tor Myklebust Levent Tunçel September 26, 2014 Convex Optimization in Conic Form (P) inf c, x A(x) = b, x

More information

c 2005 Society for Industrial and Applied Mathematics

c 2005 Society for Industrial and Applied Mathematics SIAM J. OPTIM. Vol. 15, No. 4, pp. 1147 1154 c 2005 Society for Industrial and Applied Mathematics A NOTE ON THE LOCAL CONVERGENCE OF A PREDICTOR-CORRECTOR INTERIOR-POINT ALGORITHM FOR THE SEMIDEFINITE

More information

Lecture 17: Primal-dual interior-point methods part II

Lecture 17: Primal-dual interior-point methods part II 10-725/36-725: Convex Optimization Spring 2015 Lecture 17: Primal-dual interior-point methods part II Lecturer: Javier Pena Scribes: Pinchao Zhang, Wei Ma Note: LaTeX template courtesy of UC Berkeley EECS

More information

The Q Method for Symmetric Cone Programmin

The Q Method for Symmetric Cone Programmin The Q Method for Symmetric Cone Programming The Q Method for Symmetric Cone Programmin Farid Alizadeh and Yu Xia alizadeh@rutcor.rutgers.edu, xiay@optlab.mcma Large Scale Nonlinear and Semidefinite Progra

More information

Primal-dual relationship between Levenberg-Marquardt and central trajectories for linearly constrained convex optimization

Primal-dual relationship between Levenberg-Marquardt and central trajectories for linearly constrained convex optimization Primal-dual relationship between Levenberg-Marquardt and central trajectories for linearly constrained convex optimization Roger Behling a, Clovis Gonzaga b and Gabriel Haeser c March 21, 2013 a Department

More information

IMPLEMENTING THE NEW SELF-REGULAR PROXIMITY BASED IPMS

IMPLEMENTING THE NEW SELF-REGULAR PROXIMITY BASED IPMS IMPLEMENTING THE NEW SELF-REGULAR PROXIMITY BASED IPMS IMPLEMENTING THE NEW SELF-REGULAR PROXIMITY BASED IPMS By Xiaohang Zhu A thesis submitted to the School of Graduate Studies in Partial Fulfillment

More information

Nonlinear Programming

Nonlinear Programming Nonlinear Programming Kees Roos e-mail: C.Roos@ewi.tudelft.nl URL: http://www.isa.ewi.tudelft.nl/ roos LNMB Course De Uithof, Utrecht February 6 - May 8, A.D. 2006 Optimization Group 1 Outline for week

More information

Central Paths in Semidefinite Programming, Generalized Proximal Point Method and Cauchy Trajectories in Riemannian Manifolds

Central Paths in Semidefinite Programming, Generalized Proximal Point Method and Cauchy Trajectories in Riemannian Manifolds Central Paths in Semidefinite Programming, Generalized Proximal Point Method and Cauchy Trajectories in Riemannian Manifolds da Cruz Neto, J. X., Ferreira, O. P., Oliveira, P. R. and Silva, R. C. M. February

More information

4TE3/6TE3. Algorithms for. Continuous Optimization

4TE3/6TE3. Algorithms for. Continuous Optimization 4TE3/6TE3 Algorithms for Continuous Optimization (Algorithms for Constrained Nonlinear Optimization Problems) Tamás TERLAKY Computing and Software McMaster University Hamilton, November 2005 terlaky@mcmaster.ca

More information

The extreme rays of the 5 5 copositive cone

The extreme rays of the 5 5 copositive cone The extreme rays of the copositive cone Roland Hildebrand March 8, 0 Abstract We give an explicit characterization of all extreme rays of the cone C of copositive matrices. The results are based on the

More information

b jσ(j), Keywords: Decomposable numerical range, principal character AMS Subject Classification: 15A60

b jσ(j), Keywords: Decomposable numerical range, principal character AMS Subject Classification: 15A60 On the Hu-Hurley-Tam Conjecture Concerning The Generalized Numerical Range Che-Man Cheng Faculty of Science and Technology, University of Macau, Macau. E-mail: fstcmc@umac.mo and Chi-Kwong Li Department

More information

Lecture: Introduction to LP, SDP and SOCP

Lecture: Introduction to LP, SDP and SOCP Lecture: Introduction to LP, SDP and SOCP Zaiwen Wen Beijing International Center For Mathematical Research Peking University http://bicmr.pku.edu.cn/~wenzw/bigdata2015.html wenzw@pku.edu.cn Acknowledgement:

More information

c 2000 Society for Industrial and Applied Mathematics

c 2000 Society for Industrial and Applied Mathematics SIAM J. OPIM. Vol. 10, No. 3, pp. 750 778 c 2000 Society for Industrial and Applied Mathematics CONES OF MARICES AND SUCCESSIVE CONVEX RELAXAIONS OF NONCONVEX SES MASAKAZU KOJIMA AND LEVEN UNÇEL Abstract.

More information

Nonsymmetric potential-reduction methods for general cones

Nonsymmetric potential-reduction methods for general cones CORE DISCUSSION PAPER 2006/34 Nonsymmetric potential-reduction methods for general cones Yu. Nesterov March 28, 2006 Abstract In this paper we propose two new nonsymmetric primal-dual potential-reduction

More information

Convex Optimization Theory. Chapter 5 Exercises and Solutions: Extended Version

Convex Optimization Theory. Chapter 5 Exercises and Solutions: Extended Version Convex Optimization Theory Chapter 5 Exercises and Solutions: Extended Version Dimitri P. Bertsekas Massachusetts Institute of Technology Athena Scientific, Belmont, Massachusetts http://www.athenasc.com

More information

Suppose that the approximate solutions of Eq. (1) satisfy the condition (3). Then (1) if η = 0 in the algorithm Trust Region, then lim inf.

Suppose that the approximate solutions of Eq. (1) satisfy the condition (3). Then (1) if η = 0 in the algorithm Trust Region, then lim inf. Maria Cameron 1. Trust Region Methods At every iteration the trust region methods generate a model m k (p), choose a trust region, and solve the constraint optimization problem of finding the minimum of

More information

א K ٢٠٠٢ א א א א א א E٤

א K ٢٠٠٢ א א א א א א E٤ المراجع References المراجع العربية K ١٩٩٠ א א א א א E١ K ١٩٩٨ א א א E٢ א א א א א E٣ א K ٢٠٠٢ א א א א א א E٤ K ٢٠٠١ K ١٩٨٠ א א א א א E٥ المراجع الا جنبية [AW] [AF] [Alh] [Ali1] [Ali2] S. Al-Homidan and

More information