A path following interior-point algorithm for the semidefinite optimization problem based on a new kernel function


Journal of Mathematical Modeling, Vol. 4, No. 1, 2016, pp. 35-58 (JMM)

A path following interior-point algorithm for the semidefinite optimization problem based on a new kernel function

El Amir Djeffal (a) and Lakhdar Djeffal (b)

(a) Department of Mathematics, University of Batna 2, Batna, Algeria, djeffal_elamir@yahoo.fr
(b) Department of Mathematics, University of Batna 2, Batna, Algeria, lakdar_djeffal@yahoo.fr

Abstract. In this paper, we obtain new complexity results for solving the semidefinite optimization (SDO) problem by interior-point methods (IPMs). We define a new proximity function for SDO by means of a new kernel function. Furthermore, we formulate a primal-dual interior-point algorithm for SDO based on this proximity function and give its complexity analysis, showing that the worst-case iteration bound of our IPM is
$$O\Big(6(m+1)^{\frac{3m+4}{2m+2}}\,\frac{\Psi_0^{\frac{m+2}{2m+2}}}{\theta}\,\log\frac{n\mu^0}{\varepsilon}\Big),\qquad m>4.$$

Keywords: quadratic programming, convex nonlinear programming, interior point methods.

AMS Subject Classification: 90C25, 90C30, 90C51, 90C20.

1 Introduction

Infeasible interior-point methods (IIPMs) start with an arbitrary positive point, and feasibility is reached as optimality is approached. The choice of the starting point is crucial for the performance of IPMs. Lustig [8] and Tanabe [16] were the first to present IIPMs for linear optimization (LO). Kojima et al. [6] were the first to prove the global convergence of a primal-dual IPM for LO.

Corresponding author. Received: 3 December 2015 / Revised: 28 May 2016 / Accepted: 29 May 2016.
(c) 2016 University of Guilan

Zhang [18] was the first to present a primal-dual IPM with polynomial iteration complexity for LO. Primal-dual interior-point methods for semidefinite optimization have been widely studied; the reader is referred to de Klerk [4]. Recently, a full-Newton step infeasible interior-point algorithm for linear programming (LP) was presented by Roos [14]; its polynomial complexity coincides with the best known bound for IIPMs. Mansouri and Roos [9, 10] extended this algorithm to semidefinite optimization by using a specific feasibility step. The barrier function is determined by a simple univariate function, called its kernel function. Bai et al. [2] introduced a new barrier function which is not a barrier function in the usual sense.

In this paper, we define a new proximity function for SDO via a new kernel function. We formulate an interior-point algorithm for SDO based on this proximity function and give its complexity analysis, showing that the worst-case iteration bound of our IPM is
$$O\Big(6(m+1)^{\frac{3m+4}{2m+2}}\,\frac{\Psi_0^{\frac{m+2}{2m+2}}}{\theta}\,\log\frac{n\mu^0}{\varepsilon}\Big).$$
Furthermore, the complexity bounds obtained by the algorithm are
$$O\Big(m^{\frac{3m+4}{2m+2}}\,n^{\frac{m+2}{2m+2}}\,\log\frac{\mathrm{Tr}(X^0S^0)}{\varepsilon}\Big)\quad\text{and}\quad O\Big(m^{\frac{3m+4}{2m+2}}\,\sqrt{n}\,\log\frac{\mathrm{Tr}(X^0S^0)}{\varepsilon}\Big),$$
for large- and small-update methods, respectively.

The paper is organized as follows. In Section 2, we recall the preliminaries. In Section 3, we define the new kernel function and give the properties which are essential for the complexity analysis. In Section 4, we derive the complexity results for both large-update and small-update methods. Finally, concluding remarks are given in Section 5.

Some of the notations used throughout the paper are as follows. $\mathbb{R}^n$, $\mathbb{R}^n_+$ and $\mathbb{R}^n_{++}$ denote the set of vectors with $n$ components, the set of nonnegative vectors and the set of positive vectors, respectively. $\mathbb{R}^{n\times n}$ denotes the set of $n\times n$ real matrices. $\|\cdot\|_F$ and $\|\cdot\|_2$ denote the Frobenius norm and the spectral norm of a matrix, respectively. $S^n$, $S^n_+$ and $S^n_{++}$ denote the cone of symmetric, symmetric positive semidefinite and symmetric positive definite $n\times n$ matrices, respectively. $E$ denotes the $n\times n$ identity matrix. The Löwner partial order $\succeq$ (or $\succ$) on positive semidefinite (or positive definite) matrices means $A\succeq B$ (or $A\succ B$) if $A-B$ is positive semidefinite (or positive definite). We use the matrix inner product $A\bullet B=\mathrm{Tr}(A^TB)$. For any $Q\in S^n_{++}$, $Q^{\frac12}$ (or $\sqrt{Q}$) denotes its symmetric square root. For any $V\in S^n_+$, $\lambda_{\min}(V)$ denotes the minimal eigenvalue of $V$.

2 Preliminaries

2.1 The central path

We consider the semidefinite optimization problem in the following form:
$$\min\;\{\mathrm{Tr}(CX)\;:\;\mathrm{Tr}(A_iX)=b_i,\ i=1,\dots,m,\ X\succeq 0\},\qquad\mathrm{(SDO)}$$
and its dual
$$\max\;\Big\{b^Ty\;:\;\sum_{i=1}^m y_iA_i+S=C,\ S\succeq 0\Big\},\qquad\mathrm{(SDD)}$$
where the $A_i$, $i=1,\dots,m$, and $C$ are symmetric $n\times n$ matrices and $b,y\in\mathbb{R}^m$. Throughout the paper, we make the following assumptions.

Assumption 1: The matrices $A_i$, $i=1,\dots,m$, are linearly independent.

Assumption 2: The initial iterate $(X^0,y^0,S^0)$ is strictly feasible:
$$\mathrm{Tr}(A_iX^0)=b_i,\ i=1,\dots,m,\quad X^0\succ 0,\qquad \sum_{i=1}^m y_i^0A_i+S^0=C,\quad S^0\succ 0.$$

We have the following well-known lemma.

Lemma 1. [3] The following statements are equivalent:
(1) $X\succeq 0$, $S\succeq 0$ and $\mathrm{Tr}(XS)=0$;
(2) $X\succeq 0$, $S\succeq 0$ and $\|X^{\frac12}S^{\frac12}\|=0$;
(3) $X\succeq 0$, $S\succeq 0$ and $XS=0$.

It is well known that finding an optimal solution $(X,y,S)$ of (SDO) and (SDD) is equivalent to solving the following system:
$$\mathrm{Tr}(A_iX)=b_i,\ i=1,\dots,m,\ X\succeq 0,\qquad \sum_{i=1}^m y_iA_i+S=C,\ S\succeq 0,\qquad XS=0. \tag{1}$$

The basic idea of primal-dual IPMs is to replace the third equation in (1), the so-called complementarity condition for (SDO) and (SDD), by the

parameterized equation $XS=\mu E$ with $\mu>0$, where $E$ denotes the $n\times n$ identity matrix. Thus we consider the system:
$$\mathrm{Tr}(A_iX)=b_i,\ i=1,\dots,m,\ X\succeq 0,\qquad \sum_{i=1}^m y_iA_i+S=C,\ S\succeq 0,\qquad XS=\mu E. \tag{2}$$

If both (SDO) and (SDD) satisfy the interior-point condition, then for each $\mu>0$ the parameterized system (2) has a unique solution $(X(\mu),y(\mu),S(\mu))$ (see [7, 17]), which is called the $\mu$-center of (SDO) and (SDD). The set of $\mu$-centers, that is,
$$\Lambda=\{(X(\mu),y(\mu),S(\mu))\;:\;\mu>0\},$$
is called the central path of (SDO) and (SDD). The central path converges to an optimal solution pair of (SDO) and (SDD) as $\mu$ tends to zero [11, 15].

In general, IPMs for SDO consist of two strategies. The first one, called the inner iteration scheme, keeps the iterative sequence in a certain neighborhood of the central path (equivalently, in a certain neighborhood of the $\mu$-center); the second one, called the outer iteration scheme, decreases the parameter $\mu$ to $\mu^+=(1-\theta)\mu$, for some $\theta\in(0,1)$.

2.2 The search directions

IPMs follow the central path approximately. We briefly describe the usual approach. Without loss of generality, we assume that $(X(\mu),y(\mu),S(\mu))$ is known for some positive $\mu$. For example, due to Assumption 2, we may assume this for $\mu^0=\frac{\mathrm{Tr}(X^0S^0)}{n}$, with $X^0\succ 0$ and $S^0\succ 0$. We then decrease $\mu$ to $\mu^+=(1-\theta)\mu$, for some $\theta\in(0,1)$, and solve the following Newton system:
$$\mathrm{Tr}(A_i\Delta X)=0,\ i=1,\dots,m,\qquad \sum_{i=1}^m \Delta y_iA_i+\Delta S=0,\qquad \Delta X\,S+X\,\Delta S=\mu E-XS. \tag{3}$$

We consider the symmetrization scheme that yields the Nesterov-Todd (NT) direction. Let us define the matrix
$$P=X^{\frac12}\big(X^{\frac12}SX^{\frac12}\big)^{-\frac12}X^{\frac12}=S^{-\frac12}\big(S^{\frac12}XS^{\frac12}\big)^{\frac12}S^{-\frac12}, \tag{4}$$
and also define $D=P^{\frac12}$. The matrix $D$ can be used to scale $X$ and $S$ to the same matrix $V$ because
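The NT scaling above can be checked numerically. The following sketch (illustrative only, using NumPy; the function names are our own) builds $P$ from eigendecomposition-based matrix square roots and verifies that the two expressions defining the scaled matrix $V$ coincide:

```python
import numpy as np

def sqrtm_spd(M):
    """Symmetric square root of a symmetric positive definite matrix."""
    w, Q = np.linalg.eigh(M)
    return (Q * np.sqrt(w)) @ Q.T

def nt_scaling(X, S, mu):
    """NT scaling: P as in (4), D = P^{1/2}, and V = D^{-1} X D^{-1} / sqrt(mu)."""
    Xh = sqrtm_spd(X)                       # X^{1/2}
    inner = sqrtm_spd(Xh @ S @ Xh)          # (X^{1/2} S X^{1/2})^{1/2}
    P = Xh @ np.linalg.inv(inner) @ Xh      # satisfies P S P = X
    D = sqrtm_spd(P)
    Dinv = np.linalg.inv(D)
    V = Dinv @ X @ Dinv / np.sqrt(mu)       # equals D S D / sqrt(mu)
    return P, D, V
```

Since $PSP=X$, one indeed gets $D^{-1}XD^{-1}=DSD$, so both formulas for $V$ agree.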

$$V=\frac{1}{\sqrt{\mu}}\,D^{-1}XD^{-1}=\frac{1}{\sqrt{\mu}}\,DSD. \tag{5}$$

Note that the matrices $D$ and $V$ are symmetric and positive definite, similarly to the LO case [10]. We can conclude that system (3) has a unique symmetric solution. Let us define
$$\bar A_i:=\frac{1}{\sqrt{\mu}}\,DA_iD,\ i=1,\dots,m,\qquad D_X:=\frac{1}{\sqrt{\mu}}\,D^{-1}\Delta X\,D^{-1},\qquad D_S:=\frac{1}{\sqrt{\mu}}\,D\,\Delta S\,D. \tag{6}$$
The NT search direction is obtained from the system
$$\mathrm{Tr}(\bar A_iD_X)=0,\ i=1,\dots,m,\qquad \sum_{i=1}^m \Delta y_i\bar A_i+D_S=0,\qquad D_X+D_S=V^{-1}-V=-\nabla\Psi_l(V). \tag{7}$$
Clearly, $\mathrm{Tr}(D_XD_S)=0$, which follows from the first and second equations of (7), i.e., from the orthogonality of $\Delta X$ and $\Delta S$. The classical kernel function is defined by
$$\psi_l(t)=\frac{t^2-1}{2}-\log t.$$

2.3 The generic interior-point algorithm for SDO

We call $\psi_l(t)$ the kernel function of the logarithmic barrier function $\Psi_l(V)$. In this paper, we replace $\psi_l(t)$ by a new kernel function $\psi(t)$, defined in the next section, and assume that $\tau\geq 1$. The new interior-point algorithm works as follows. Assume that we are given a strictly feasible point $(X,y,S)$ which is in a $\tau$-neighborhood of the given $\mu$-center. We decrease $\mu$ to $\mu^+=(1-\theta)\mu$, for some fixed $\theta\in(0,1)$, and solve the Newton system (13) to obtain the unique search direction. The positivity condition of the new iterate is ensured by a suitable choice of the step size $\alpha$, defined by some line-search rule. This procedure is repeated until we find a new iterate $(X^+,y^+,S^+)$ that is in a $\tau$-neighborhood of the $\mu^+$-center. Then $\mu$ is again reduced by the factor $1-\theta$ and we solve the Newton system targeting the new $\mu^+$-center, and so on. This process is repeated until $\mu$ is small enough, i.e., $n\mu\leq\varepsilon$. The parameters $\tau$, $\theta$ and the step size $\alpha$ should be chosen in such a way that the algorithm is optimized in the sense that the number of iterations

required by the algorithm is as small as possible. The choice of the so-called barrier update parameter $\theta$ plays an important role both in the theory and in the practice of IPMs. Usually, if $\theta$ is a constant independent of the dimension $n$ of the problem, for instance $\theta=\frac12$, then the algorithm is called a large-update (or long-step) method. If $\theta$ depends on the dimension of the problem, such as $\theta=\frac{1}{\sqrt n}$, then the algorithm is called a small-update (or short-step) method. The algorithm for our primal-dual IPM for SDO is given as follows.

Primal-Dual IPM for SDO
Begin algorithm
  Input:
    an accuracy parameter ε > 0;
    an update parameter θ, 0 < θ < 1;
    a threshold parameter τ ≥ 1;
    a strictly feasible point (X⁰, y⁰, S⁰) and µ⁰ = Tr(X⁰S⁰)/n such that Ψ(X⁰S⁰, µ⁰) ≤ τ.
  begin
    X := X⁰; S := S⁰; µ := µ⁰;
    while nµ ≥ ε do
    begin
      µ := (1 − θ)µ;
      while Ψ(V) > τ do
      begin
        solve system (13) to obtain (ΔX, Δy, ΔS);
        determine a step size α;
        X := X + αΔX;
        y := y + αΔy;
        S := S + αΔS;
      end while
    end while
  end
End algorithm

In the next section, we define the new kernel function and give the properties which are essential to our complexity analysis.

3 The new kernel function

We call $\psi:\mathbb{R}_{++}\to\mathbb{R}_+$ a kernel function if $\psi$ is twice differentiable and satisfies the following conditions [1]:
$$\psi'(1)=\psi(1)=0,\qquad \psi''(t)>0,\qquad \lim_{t\to 0^+}\psi(t)=\lim_{t\to\infty}\psi(t)=\infty. \tag{8}$$

For our IPM, we use the following new kernel function:
$$\psi(t)=(m+1)t^2-(m+2)t+\frac{1}{t^m},\qquad t>0, \tag{9}$$
where $m>4$. Then we have
$$\psi'(t)=2(m+1)t-(m+2)-\frac{m}{t^{m+1}},\qquad \psi''(t)=2(m+1)+\frac{m(m+1)}{t^{m+2}},\qquad \psi'''(t)=-\frac{m(m+1)(m+2)}{t^{m+3}}. \tag{10}$$
From (8), $\psi(t)$ is clearly a kernel function, and
$$\psi''(t)>2(m+1),\qquad t>0. \tag{11}$$

In this paper, we replace the function $\Psi_l(V)$ in (7) by the function $\Psi(V)$ as follows:
$$D_X+D_S=-\nabla\Psi(V), \tag{12}$$
where $\Psi(V)=\mathrm{Tr}(\psi(V))=\sum_{i=1}^n\psi(\lambda_i(V))$ and $\psi(t)$ is defined in (9). Hence, the new search direction $(\Delta X,\Delta y,\Delta S)$ is obtained by solving the following modified Newton system:
$$\mathrm{Tr}(A_i\Delta X)=0,\ i=1,\dots,m,\qquad \sum_{i=1}^m \Delta y_iA_i+\Delta S=0,\qquad \Delta X\,S+X\,\Delta S=-\mu V\nabla\Psi(V). \tag{13}$$
Note that $D_X$ and $D_S$ are orthogonal, because $D_X$ belongs to the null space and $D_S$ to the row space of the matrices $\bar A_i$, $i=1,\dots,m$. Since $D_X$ and $D_S$ are orthogonal, we have
$$D_X=D_S=0\iff \nabla\Psi(V)=0\iff V=E\iff X=X(\mu),\ S=S(\mu). \tag{14}$$
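The defining conditions (8) and the derivative formulas (10) can be verified numerically. The following sketch (illustrative only; we fix m = 5, the smallest integer allowed by m > 4) implements $\psi$, $\psi'$ and $\psi''$:

```python
M = 5  # the paper requires m > 4

def psi(t, m=M):
    """New kernel function: psi(t) = (m+1)t^2 - (m+2)t + t^(-m)."""
    return (m + 1)*t**2 - (m + 2)*t + t**(-m)

def dpsi(t, m=M):
    """First derivative of psi."""
    return 2*(m + 1)*t - (m + 2) - m*t**(-(m + 1))

def d2psi(t, m=M):
    """Second derivative of psi; always exceeds 2(m+1)."""
    return 2*(m + 1) + m*(m + 1)*t**(-(m + 2))
```

One can check $\psi(1)=\psi'(1)=0$, $\psi''(t)>2(m+1)$, and the blow-up of $\psi$ at both $t\to 0^+$ and $t\to\infty$.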

We use $\Psi(V)$ as the proximity function to measure the distance between the current iterate and the $\mu$-center for given $\mu>0$. We also define the norm-based proximity measure $\delta(V)$ as follows:
$$\delta(XS,\mu)=\delta(V)=\frac12\,\|\nabla\Psi(V)\|=\frac12\,\|D_X+D_S\|. \tag{15}$$

Lemma 2. For $\psi(t)$, the following hold:
(i) $\psi(t)$ is exponentially convex for all $t>0$;
(ii) $\psi''(t)$ is monotonically decreasing for all $t>0$;
(iii) $t\psi''(t)-\psi'(t)>0$ for all $t>0$.

Proof. For (i), by the lemma in [1], it suffices to show that $\psi$ satisfies $t\psi''(t)+\psi'(t)\geq 0$ for all $t>0$. Using (10), we have
$$t\psi''(t)+\psi'(t)=4(m+1)t+\frac{m^2}{t^{m+1}}-(m+2).$$
Let
$$g(t)=4(m+1)t+\frac{m^2}{t^{m+1}}-(m+2).$$
Then
$$g'(t)=4(m+1)-\frac{m^2(m+1)}{t^{m+2}},\qquad g''(t)=\frac{m^2(m+1)(m+2)}{t^{m+3}}>0,\qquad t>0.$$
Setting $g'(t)=0$ gives $t=\big(\frac{m^2}{4}\big)^{\frac{1}{m+2}}$. Since $g$ is strictly convex and attains its global minimum at this point, and $g\big((\frac{m^2}{4})^{\frac{1}{m+2}}\big)>0$, we have $g(t)>0$ for all $t>0$. By the lemma in [13], we have the result.

For (ii), using (10), we have $\psi'''(t)<0$ for all $t>0$, so $\psi''$ is monotonically decreasing.

For (iii), using (10), we have
$$t\psi''(t)-\psi'(t)=\frac{m(m+2)}{t^{m+1}}+(m+2)>0,\qquad t>0.$$

Lemma 3. For $\psi(t)$, the following hold:
$$(m+1)(t-1)^2\leq\psi(t)\leq\frac{1}{4(m+1)}\,\psi'(t)^2,\qquad t>0, \tag{16}$$
$$\psi(t)\leq\frac{(m+1)(m+2)}{2}\,(t-1)^2,\qquad t\geq 1. \tag{17}$$
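For a diagonalizable $V$ with positive eigenvalues, $\Psi(V)$ and $\delta(V)$ depend only on the eigenvalues of $V$. The following sketch (illustrative only, with m = 5 and NumPy for the eigenvalues) computes both quantities:

```python
import numpy as np

M = 5  # m > 4

def psi(t, m=M):
    return (m + 1)*t**2 - (m + 2)*t + t**(-m)

def dpsi(t, m=M):
    return 2*(m + 1)*t - (m + 2) - m*t**(-(m + 1))

def proximity(V, m=M):
    """Psi(V) = sum_i psi(lambda_i(V)) and delta(V) = ||grad Psi(V)|| / 2."""
    lam = np.linalg.eigvalsh(V)
    Psi = sum(psi(l, m) for l in lam)
    delta = 0.5*np.sqrt(sum(dpsi(l, m)**2 for l in lam))
    return Psi, delta
```

At the $\mu$-center ($V=E$) both quantities vanish, and away from it $\delta(V)\geq\sqrt{(m+1)\Psi(V)}$ holds (Lemma 7 below in the paper).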

Proof. For (16), using (8) and (11), we have
$$\psi(t)=\int_1^t\!\!\int_1^\xi\psi''(\zeta)\,d\zeta\,d\xi\geq 2(m+1)\int_1^t\!\!\int_1^\xi d\zeta\,d\xi=(m+1)(t-1)^2;$$
also,
$$\psi(t)=\int_1^t\!\!\int_1^\xi\psi''(\zeta)\,d\zeta\,d\xi\leq\frac{1}{2(m+1)}\int_1^t\psi''(\xi)\psi'(\xi)\,d\xi=\frac{1}{2(m+1)}\int_1^t\psi'(\xi)\,d\psi'(\xi)=\frac{1}{4(m+1)}\,\psi'(t)^2.$$

For (17), using Taylor's theorem together with $\psi(1)=\psi'(1)=0$, $\psi''(1)=(m+1)(m+2)$ and $\psi'''<0$, we have
$$\psi(t)=\frac12\psi''(1)(t-1)^2+\frac16\psi'''(\xi)(t-1)^3\leq\frac12\psi''(1)(t-1)^2=\frac{(m+1)(m+2)}{2}\,(t-1)^2.$$

Now, let $\gamma:[0,\infty)\to[1,\infty)$ be the inverse function of $\psi(t)$ for $t\geq 1$, and let $\rho:[0,\infty)\to(0,1]$ be the inverse function of $-\frac12\psi'(t)$ for $t\in(0,1]$. Then we have the following lemma.

Lemma 4. For $\psi(t)$, we have
$$\sqrt{\frac{s}{m+1}+1}\;\leq\;\gamma(s)\;\leq\;1+\sqrt{\frac{s}{m+1}},\qquad s\geq 0, \tag{18}$$

and
$$\rho(s)\geq\Big(\frac{m}{2s+m}\Big)^{\frac{1}{m+1}},\qquad s\geq 0. \tag{19}$$

Proof. For (18), let $s=\psi(t)$, $t\geq 1$, i.e., $\gamma(s)=t$, $t\geq 1$. Then
$$(m+1)t^2=s+(m+2)t-\frac{1}{t^m}.$$
Because $(m+2)t-t^{-m}$ is monotone increasing in $t$ and $t\geq 1$, we have
$$(m+1)t^2\geq s+(m+2)-1=s+m+1,$$
which implies
$$t=\gamma(s)\geq\sqrt{\frac{s}{m+1}+1}.$$
By (16), $s=\psi(t)\geq(m+1)(t-1)^2$, so
$$t=\gamma(s)\leq 1+\sqrt{\frac{s}{m+1}}.$$

For (19), let $z=-\frac12\psi'(t)$ for $t\in(0,1]$. By the definition of $\rho$, we have $\rho(z)=t$ and $2z=-\psi'(t)$. Then
$$\frac{m}{t^{m+1}}=2z+2(m+1)t-(m+2).$$
Because $2(m+1)t-(m+2)$ is monotone increasing in $t\in(0,1]$, we have
$$\frac{m}{t^{m+1}}\leq 2z+2(m+1)-(m+2)=2z+m,$$
which implies
$$\rho(z)=t\geq\Big(\frac{m}{2z+m}\Big)^{\frac{1}{m+1}}.$$

Lemma 5. Let $\gamma:[0,\infty)\to[1,\infty)$ be the inverse function of $\psi(t)$ for $t\geq 1$. Then, for any $V\succ 0$ and $\beta\geq 1$,
$$\Psi(\beta V)\leq n\,\psi\Big(\beta\,\gamma\Big(\frac{\Psi(V)}{n}\Big)\Big). \tag{20}$$
Proof. Using Theorem 3.2 in [1], we get the result. This completes the proof.

Lemma 6. Let $0\leq\theta<1$ and $V_+=\frac{V}{\sqrt{1-\theta}}$. If $\Psi(V)\leq\tau$, then
$$\Psi(V_+)\leq\frac{(m+1)(m+2)}{2}\,\frac{n}{1-\theta}\Big(\sqrt{\frac{\tau}{(m+1)n}}+1-\sqrt{1-\theta}\Big)^2. \tag{21}$$
Proof. Since $\frac{1}{\sqrt{1-\theta}}\geq 1$ and $\gamma\big(\frac{\Psi(V)}{n}\big)\geq 1$, we have $\frac{1}{\sqrt{1-\theta}}\,\gamma\big(\frac{\Psi(V)}{n}\big)\geq 1$. Using Lemma 5 with $\beta=\frac{1}{\sqrt{1-\theta}}$, (17), (18) and $\Psi(V)\leq\tau$, we have
$$\Psi(V_+)\leq n\,\psi\Big(\frac{1}{\sqrt{1-\theta}}\,\gamma\Big(\frac{\Psi(V)}{n}\Big)\Big)
\leq\frac{(m+1)(m+2)}{2}\,\frac{n}{1-\theta}\Big(\gamma\Big(\frac{\Psi(V)}{n}\Big)-\sqrt{1-\theta}\Big)^2$$
$$\leq\frac{(m+1)(m+2)}{2}\,\frac{n}{1-\theta}\Big(1+\sqrt{\frac{\Psi(V)}{(m+1)n}}-\sqrt{1-\theta}\Big)^2
\leq\frac{(m+1)(m+2)}{2}\,\frac{n}{1-\theta}\Big(\sqrt{\frac{\tau}{(m+1)n}}+1-\sqrt{1-\theta}\Big)^2.$$

Denote
$$\Psi_0=L(n,\theta,\tau)=\frac{(m+1)(m+2)}{2}\,\frac{n}{1-\theta}\Big(\sqrt{\frac{\tau}{(m+1)n}}+1-\sqrt{1-\theta}\Big)^2; \tag{22}$$
then $\Psi_0$ is an upper bound for $\Psi(V)$ during the process of the algorithm.

4 Analysis of the algorithm

The aim of this paper is to define a new kernel function and to obtain new complexity results for the SDO problem using the proximity function defined by this kernel function, following the approach of Bai et al. [1]. Using the concept of a matrix function [3], the definition of the kernel function $\psi$ can be extended to any diagonalizable matrix with positive eigenvalues. In particular, for a given eigendecomposition

$$V=Q_V\,\mathrm{diag}\big(\lambda_1(V),\lambda_2(V),\dots,\lambda_n(V)\big)\,Q_V^{-1}$$
of $V$ with a nonsingular matrix $Q_V$, the matrix function $\psi(V)$ is defined by
$$\psi(V)=Q_V\,\mathrm{diag}\big(\psi(\lambda_1(V)),\psi(\lambda_2(V)),\dots,\psi(\lambda_n(V))\big)\,Q_V^{-1}. \tag{23}$$

In this section, we compute a proper step size $\alpha$, bound the decrease of the proximity function during an inner iteration, and give the complexity results of the algorithm.

4.1 Determining a default step size

Taking a step size $\alpha$, we have new iterates $X_+=X+\alpha\Delta X$, $y_+=y+\alpha\Delta y$ and $S_+=S+\alpha\Delta S$. Using (6),
$$X_+=X+\alpha\Delta X=\sqrt{\mu}\,D(V+\alpha D_X)D,\qquad S_+=S+\alpha\Delta S=\sqrt{\mu}\,D^{-1}(V+\alpha D_S)D^{-1}.$$
So we have
$$V_+=\Big((V+\alpha D_X)^{\frac12}(V+\alpha D_S)(V+\alpha D_X)^{\frac12}\Big)^{\frac12}.$$
The proximity after one step is therefore
$$\Psi(V_+)=\Psi\Big(\big((V+\alpha D_X)^{\frac12}(V+\alpha D_S)(V+\alpha D_X)^{\frac12}\big)^{\frac12}\Big).$$
By (i) in Lemma 2 (exponential convexity), we have
$$\Psi(V_+)\leq\frac12\big(\Psi(V+\alpha D_X)+\Psi(V+\alpha D_S)\big).$$
Define, for $\alpha>0$,
$$f(\alpha)=\Psi(V_+)-\Psi(V).$$
Therefore, we have $f(\alpha)\leq f_1(\alpha)$, where

$$f_1(\alpha)=\frac12\big(\Psi(V+\alpha D_X)+\Psi(V+\alpha D_S)\big)-\Psi(V). \tag{24}$$
Obviously,
$$f(0)=f_1(0)=0.$$

Now we recall some concepts concerning matrix functions from matrix theory [5].

Definition 1. [5] A matrix $X(t)$ is said to be a matrix of functions if each entry of $X(t)$ is a function of $t$, that is, $X(t)=[X_{ij}(t)]$.

The concepts of continuity, differentiability and integrability extend naturally to matrix-valued functions of a scalar by interpreting them componentwise. Thus we can write
$$\frac{d}{dt}X(t)=\Big[\frac{d}{dt}X_{ij}(t)\Big]=X'(t).$$
Suppose that the matrix-valued functions $H(t)$ and $G(t)$ are differentiable with respect to $t$. Then
$$\frac{d}{dt}\,\mathrm{Tr}(G(t))=\mathrm{Tr}\Big(\frac{d}{dt}G(t)\Big)=\mathrm{Tr}(G'(t)),\qquad \frac{d}{dt}\big(G(t)H(t)\big)=G'(t)H(t)+G(t)H'(t).$$

For any function $\psi(t)$, let us denote by $\psi[t_1,t_2]$ the divided difference of $\psi$:
$$\psi[t_1,t_2]=\frac{\psi(t_1)-\psi(t_2)}{t_1-t_2},\qquad t_1\neq t_2;\qquad \psi[t_1,t_1]=\psi'(t_1).$$

Let $Q_\alpha$ be an orthogonal matrix such that
$$V+\alpha D_X=Q_\alpha^T\,\mathrm{diag}\big(\lambda_1(V+\alpha D_X),\dots,\lambda_n(V+\alpha D_X)\big)\,Q_\alpha,$$
and let $D_i$ denote the diagonal matrix that has a one in its $(i,i)$ position and zeros elsewhere. It follows from [5, 12, 13] that

$$\frac{d}{d\alpha}\psi(V+\alpha D_X)=Q_\alpha^T\Big(\sum_{j,k=1}^n\psi[\lambda_j(V+\alpha D_X),\lambda_k(V+\alpha D_X)]\;D_j\big(Q_\alpha D_XQ_\alpha^T\big)D_k\Big)Q_\alpha. \tag{25}$$
Now, by the choice of the $D_i$, it holds that $\mathrm{Tr}\big(D_jQ_\alpha D_XQ_\alpha^TD_k\big)=0$ for $j\neq k$. Thus it follows that
$$\frac{d}{d\alpha}\,\mathrm{Tr}\,\psi(V+\alpha D_X)=\sum_{i=1}^n\psi'\big(\lambda_i(V+\alpha D_X)\big)\,\mathrm{Tr}\big(D_iQ_\alpha D_XQ_\alpha^TD_i\big)=\mathrm{Tr}\big(\psi'(V+\alpha D_X)\,D_X\big),$$
and, with $Q_\alpha$ now denoting the orthogonal matrix diagonalizing $V+\alpha D_S$,
$$\frac{d}{d\alpha}\psi(V+\alpha D_S)=Q_\alpha^T\Big(\sum_{j,k=1}^n\psi[\lambda_j(V+\alpha D_S),\lambda_k(V+\alpha D_S)]\;D_j\big(Q_\alpha D_SQ_\alpha^T\big)D_k\Big)Q_\alpha. \tag{26}$$
Again, by the choice of the $D_i$, $\mathrm{Tr}\big(D_jQ_\alpha D_SQ_\alpha^TD_k\big)=0$ for $j\neq k$, so
$$\frac{d}{d\alpha}\,\mathrm{Tr}\,\psi(V+\alpha D_S)=\mathrm{Tr}\big(\psi'(V+\alpha D_S)\,D_S\big).$$
Now we can write
$$f_1'(\alpha)=\frac12\,\mathrm{Tr}\big(\psi'(V+\alpha D_X)\,D_X+\psi'(V+\alpha D_S)\,D_S\big).$$
This gives
$$f_1'(0)=\frac12\,\mathrm{Tr}\big(\psi'(V)(D_X+D_S)\big)=-2\delta^2.$$

Furthermore,
$$\frac{d^2}{d\alpha^2}\,\mathrm{Tr}\,\psi(V+\alpha D_X)=\mathrm{Tr}\Big(\frac{d}{d\alpha}\psi'(V+\alpha D_X)\,D_X\Big)
=\sum_{j,k=1}^n\psi'[\lambda_j(V+\alpha D_X),\lambda_k(V+\alpha D_X)]\;\big(Q_\alpha D_XQ_\alpha^T\big)_{jk}^2$$
$$\leq\max_{j,k=1,\dots,n}\big\{\psi'[\lambda_j(V+\alpha D_X),\lambda_k(V+\alpha D_X)]\big\}\;\|D_X\|^2,$$
where we used $\sum_{j,k}\big(Q_\alpha D_XQ_\alpha^T\big)_{jk}^2=\|D_X\|^2$. Similarly, we have
$$\frac{d^2}{d\alpha^2}\,\mathrm{Tr}\,\psi(V+\alpha D_S)\leq\max_{j,k=1,\dots,n}\big\{\psi'[\lambda_j(V+\alpha D_S),\lambda_k(V+\alpha D_S)]\big\}\;\|D_S\|^2.$$

We denote
$$\omega_1=\max_{j,k}\big\{\psi'[\lambda_j(V+\alpha D_X),\lambda_k(V+\alpha D_X)]\big\},\qquad \omega_2=\max_{j,k}\big\{\psi'[\lambda_j(V+\alpha D_S),\lambda_k(V+\alpha D_S)]\big\}.$$
Therefore,
$$f_1''(\alpha)=\frac12\frac{d^2}{d\alpha^2}\,\mathrm{Tr}\big(\psi(V+\alpha D_X)+\psi(V+\alpha D_S)\big)\leq\frac12\big(\omega_1\|D_X\|^2+\omega_2\|D_S\|^2\big). \tag{27}$$

Lemma 7. Let $\delta(V)$ be as defined in (15). Then we have
$$\delta(V)\geq\sqrt{(m+1)\Psi(V)}.$$
Proof. Using (16), we have
$$\Psi(V)=\mathrm{Tr}(\psi(V))=\sum_{i=1}^n\psi(\lambda_i(V))\leq\sum_{i=1}^n\frac{\psi'(\lambda_i(V))^2}{4(m+1)}=\frac{4\delta(V)^2}{4(m+1)}=\frac{\delta(V)^2}{m+1}.$$
This gives $\delta(V)\geq\sqrt{(m+1)\Psi(V)}$.

Throughout the paper, we assume that $\tau\geq 1$. Using Lemma 7 and the assumption $\Psi(V)\geq\tau\geq 1$, we have $\delta(V)\geq\sqrt{m+1}$. We have the following lemma.

Lemma 8. Let $f_1(\alpha)$ be as defined in (24), $\delta(V)$ as defined in (15), and $\psi$ the kernel function defined in (9). Then
$$f_1''(\alpha)\leq 2\delta^2\,\psi''\big(\lambda_{\min}(V)-2\alpha\delta\big).$$
Proof. Since $\|D_X\|^2+\|D_S\|^2=\|D_X+D_S\|^2=4\delta^2$ (by orthogonality), (27) gives
$$f_1''(\alpha)\leq 2\max\{\omega_1,\omega_2\}\,\delta^2.$$
It suffices to prove that
$$\max\{\omega_1,\omega_2\}\leq\psi''\big(\lambda_{\min}(V)-2\alpha\delta\big).$$
We can choose $j,k\in\{1,2,\dots,n\}$ such that
$$\omega_1=\frac{\psi'(\lambda_j(V+\alpha D_X))-\psi'(\lambda_k(V+\alpha D_X))}{\lambda_j(V+\alpha D_X)-\lambda_k(V+\alpha D_X)}.$$

Using the mean value theorem, there exists
$$\eta\in\big[\min\{\lambda_j(V+\alpha D_X),\lambda_k(V+\alpha D_X)\},\ \max\{\lambda_j(V+\alpha D_X),\lambda_k(V+\alpha D_X)\}\big]$$
such that
$$\psi''(\eta)=\psi'[\lambda_j(V+\alpha D_X),\lambda_k(V+\alpha D_X)].$$
Since $D_X$ is symmetric, from the definition of $\delta$ and the Frobenius norm ($\|D_X\|\leq 2\delta$) we have
$$\eta\geq\min\{\lambda_j(V+\alpha D_X),\lambda_k(V+\alpha D_X)\}\geq\lambda_{\min}(V)-2\alpha\delta;$$
because $\psi''$ is monotonically decreasing, we obtain
$$\omega_1\leq\psi''\big(\lambda_{\min}(V)-2\alpha\delta\big).$$
By the same method, choosing $j,k$ such that
$$\omega_2=\frac{\psi'(\lambda_j(V+\alpha D_S))-\psi'(\lambda_k(V+\alpha D_S))}{\lambda_j(V+\alpha D_S)-\lambda_k(V+\alpha D_S)},$$
the mean value theorem gives $\eta$ between the two eigenvalues with $\psi''(\eta)=\psi'[\lambda_j(V+\alpha D_S),\lambda_k(V+\alpha D_S)]$; since $D_S$ is symmetric, $\eta\geq\lambda_{\min}(V)-2\alpha\delta$, and since $\psi''$ is monotonically decreasing,
$$\omega_2\leq\psi''\big(\lambda_{\min}(V)-2\alpha\delta\big).$$
Therefore
$$\max\{\omega_1,\omega_2\}\leq\psi''\big(\lambda_{\min}(V)-2\alpha\delta\big),$$
which gives
$$f_1''(\alpha)\leq 2\delta^2\,\psi''\big(\lambda_{\min}(V)-2\alpha\delta\big).$$

Since $f_1(0)=0$ and $f_1'(0)=-2\delta^2$, we have
$$f_1(\alpha)=f_1(0)+f_1'(0)\,\alpha+\int_0^\alpha\!\!\int_0^\xi f_1''(\zeta)\,d\zeta\,d\xi\leq f_2(\alpha):=-2\delta^2\alpha+2\delta^2\int_0^\alpha\!\!\int_0^\xi\psi''\big(\lambda_{\min}(V)-2\zeta\delta\big)\,d\zeta\,d\xi.$$
Note that $f_2(0)=0$. Furthermore, since
$$f_2'(\alpha)=-2\delta^2+\delta\big(\psi'(\lambda_{\min}(V))-\psi'(\lambda_{\min}(V)-2\alpha\delta)\big),$$
we have $f_2'(0)=-2\delta^2$, which is the same value as $f_1'(0)$, and
$$f_2''(\alpha)=2\delta^2\,\psi''\big(\lambda_{\min}(V)-2\alpha\delta\big),$$
which is increasing in $\alpha$ on $\big[0,\frac{\lambda_{\min}(V)}{2\delta}\big)$. So we can rewrite $f_2(\alpha)$ as
$$f_2(\alpha)=f_2(0)+f_2'(0)\,\alpha+2\delta^2\int_0^\alpha\!\!\int_0^\xi\psi''\big(\lambda_{\min}(V)-2\zeta\delta\big)\,d\zeta\,d\xi.$$
Now, using $f_1'(0)=f_2'(0)$ and $f_1''(\alpha)\leq f_2''(\alpha)$, we can easily check that
$$f_1'(\alpha)=f_1'(0)+\int_0^\alpha f_1''(\xi)\,d\xi\leq f_2'(\alpha).$$
This gives $f_1'(\alpha)\leq 0$ if $f_2'(\alpha)\leq 0$.

For each $\mu>0$, we compute a feasible iterate such that the proximity measure decreases. We want to compute a step size $\alpha$ satisfying $f_2'(\alpha)\leq 0$, with $\alpha$ as large as possible. Since $f_2''(\alpha)>0$, i.e., $f_2'(\alpha)$ is monotonically increasing in $\alpha$, the largest possible $\alpha$ satisfying $f_2'(\alpha)\leq 0$ occurs when $f_2'(\alpha)=0$, that is,
$$\psi'\big(\lambda_{\min}(V)\big)-\psi'\big(\lambda_{\min}(V)-2\alpha\delta\big)=2\delta. \tag{28}$$
Since $\psi''$ is monotonically decreasing, the derivative of the left-hand side of (28) with respect to $\lambda_{\min}(V)$ is

$$\psi''\big(\lambda_{\min}(V)\big)-\psi''\big(\lambda_{\min}(V)-2\alpha\delta\big)\leq 0.$$
So the left-hand side of (28) is decreasing in $\lambda_{\min}(V)$. This implies that if $\lambda_{\min}(V)$ gets smaller, then $\alpha$ gets smaller, for fixed $\delta$. Note that
$$\delta=\frac12\sqrt{\sum_{i=1}^n\psi'(\lambda_i(V))^2}\geq\frac12\big|\psi'(\lambda_{\min}(V))\big|\geq-\frac12\psi'(\lambda_{\min}(V)),$$
with equality if and only if $\lambda_{\min}(V)$ is the only coordinate in $(\lambda_1(V),\dots,\lambda_n(V))$ different from $1$ and $\lambda_{\min}(V)<1$, that is, $\psi'(\lambda_{\min}(V))<0$. Hence the worst situation for the largest step size $\alpha$ occurs when $\lambda_{\min}(V)$ satisfies
$$-\frac12\psi'\big(\lambda_{\min}(V)\big)=\delta. \tag{29}$$
In that case, the largest $\alpha$ satisfying (28) is minimal. For our purpose, we need to deal with the worst case, and so we assume that (29) holds. This implies
$$\lambda_{\min}(V)=\rho(\delta). \tag{30}$$
By using (28) and (29) we immediately obtain
$$-\frac12\psi'\big(\lambda_{\min}(V)-2\alpha\delta\big)=2\delta,$$
so that $\lambda_{\min}(V)-2\alpha\delta=\rho(2\delta)$. By the definition of $\rho$ and using (30), the largest step size in the worst case is given by
$$\bar\alpha=\frac{\rho(\delta)-\rho(2\delta)}{2\delta}. \tag{31}$$

Lemma 9. Let $\rho$ and $\bar\alpha$ be as defined in (31). Then we have
$$\bar\alpha\geq\frac{1}{3(m+1)(m+2)\,\delta^{\frac{m+2}{m+1}}}.$$
Proof. Using Lemma 4.4 in [1], the definition of $\psi''$ in (10), and (19), we have
$$\bar\alpha\geq\frac{1}{\psi''\big(\rho(2\delta)\big)}=\frac{1}{2(m+1)+\dfrac{m(m+1)}{\rho(2\delta)^{m+2}}}\geq\frac{1}{2(m+1)+m(m+1)\Big(\dfrac{4\delta+m}{m}\Big)^{\frac{m+2}{m+1}}}\geq\frac{1}{3(m+1)(m+2)\,\delta^{\frac{m+2}{m+1}}},$$

which completes the proof.

As the default step size in the algorithm, we define
$$\tilde\alpha=\frac{1}{3(m+1)(m+2)\,\delta^{\frac{m+2}{m+1}}}. \tag{32}$$

4.2 Decrease of the proximity function during an inner iteration

Now we show that our proximity function $\Psi$ decreases with the default step size $\tilde\alpha$. This can easily be established by using the following result.

Lemma 10. [12] Let $h(t)$ be a twice differentiable convex function with $h(0)=0$, $h'(0)<0$, attaining its global minimum at $t^*>0$. If $h''(t)$ is increasing for $t\in[0,t^*]$, then
$$h(t)\leq\frac{t\,h'(0)}{2},\qquad 0\leq t\leq t^*.$$

Let the univariate function $h$ be such that
$$h(0)=f_2(0)=0,\qquad h'(0)=f_2'(0)=-2\delta^2,\qquad h''(\alpha)=2\delta^2\,\psi''\big(\lambda_{\min}(V)-2\alpha\delta\big).$$
Since $f_2(\alpha)$ satisfies the conditions of the above lemma,
$$f(\alpha)\leq f_1(\alpha)\leq f_2(\alpha)\leq-\alpha\delta^2,\qquad 0\leq\alpha\leq\bar\alpha.$$
We can thus obtain an upper bound for the decrease of the proximity function in an inner iteration.

Theorem 1. Let $\tilde\alpha$ be the step size defined in (32) and let $\Psi(V)\geq\tau\geq 1$. Then we have
$$f(\tilde\alpha)\leq-\frac{(m+1)^{\frac{m}{2m+2}}}{3(m+1)(m+2)}\,\Psi(V)^{\frac{m}{2m+2}}.$$
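As a numerical sanity check of Lemma 9 (illustrative only, m = 5), the worst-case step size $\bar\alpha=\frac{\rho(\delta)-\rho(2\delta)}{2\delta}$ can be evaluated by inverting $\psi'$ with bisection and compared against the default step size $\tilde\alpha$:

```python
M = 5  # m > 4

def dpsi(t, m=M):
    return 2*(m + 1)*t - (m + 2) - m*t**(-(m + 1))

def rho(s, m=M, iters=200):
    """Inverse of -psi'(t)/2 on (0, 1], computed by bisection."""
    lo, hi = 0.5, 1.0
    while -0.5*dpsi(lo, m) < s:
        lo *= 0.5
    for _ in range(iters):
        mid = 0.5*(lo + hi)
        if -0.5*dpsi(mid, m) < s:
            hi = mid   # -psi'/2 decreases in t
        else:
            lo = mid
    return 0.5*(lo + hi)

def worst_case_step(delta, m=M):
    """Largest worst-case step size, eq. (31)."""
    return (rho(delta, m) - rho(2*delta, m)) / (2*delta)

def default_step(delta, m=M):
    """Default step size, eq. (32)."""
    return 1.0 / (3*(m + 1)*(m + 2) * delta**((m + 2)/(m + 1)))
```

For values $\delta\geq\sqrt{m+1}$ (the regime guaranteed by Lemma 7 while $\Psi(V)\geq\tau\geq 1$), the default step size stays below the worst-case step size, as Lemma 9 asserts.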

Proof. For all $\alpha\leq\bar\alpha$, $f(\alpha)\leq-\alpha\delta^2$; hence, with $\alpha=\tilde\alpha$ and using Lemma 7,
$$f(\tilde\alpha)\leq-\tilde\alpha\,\delta^2=-\frac{\delta^2}{3(m+1)(m+2)\,\delta^{\frac{m+2}{m+1}}}=-\frac{\delta^{\frac{m}{m+1}}}{3(m+1)(m+2)}\leq-\frac{\big((m+1)\Psi(V)\big)^{\frac{m}{2m+2}}}{3(m+1)(m+2)}=-\frac{(m+1)^{\frac{m}{2m+2}}}{3(m+1)(m+2)}\,\Psi(V)^{\frac{m}{2m+2}}.$$

4.3 Iteration bound

We need to count how many inner iterations are required to return to the situation $\Psi(V)\leq\tau$ after a $\mu$-update. We denote the value of $\Psi(V)$ after the $\mu$-update by $\Psi_0$; the subsequent values in the same outer iteration are denoted $\Psi_k$, $k=1,2,\dots$ If $K$ denotes the total number of inner iterations in the outer iteration, then we have
$$\Psi_0\leq L(n,\theta,\tau),\qquad \Psi_{K-1}>\tau,\qquad \Psi_K\leq\tau,$$
and, according to Theorem 1,
$$\Psi_{k+1}\leq\Psi_k-\frac{(m+1)^{\frac{m}{2m+2}}}{3(m+1)(m+2)}\,\Psi_k^{\frac{m}{2m+2}},\qquad k=0,1,\dots,K-1.$$
At this stage we invoke the following lemma from [12].

Lemma 11. [12] Let $t_0,t_1,\dots,t_K$ be a sequence of positive numbers such that
$$t_{k+1}\leq t_k-\beta\,t_k^{1-\nu},\qquad k=0,1,\dots,K-1,$$
where $\beta>0$ and $0<\nu\leq 1$. Then
$$K\leq\frac{t_0^\nu}{\beta\nu}.$$
Letting
$$t_k=\Psi_k,\qquad \beta=\frac{(m+1)^{\frac{m}{2m+2}}}{3(m+1)(m+2)},\qquad \nu=\frac{m+2}{2m+2},$$
we get the following lemma.

Lemma 12. Let $K$ be the total number of inner iterations in an outer iteration. Then we have
$$K\leq 6(m+1)^{\frac{3m+4}{2m+2}}\,\Psi_0^{\frac{m+2}{2m+2}}.$$
Proof. Using Lemma 11, we have
$$K\leq\frac{\Psi_0^\nu}{\beta\nu}=3(m+1)(m+2)\,(m+1)^{-\frac{m}{2m+2}}\cdot\frac{2m+2}{m+2}\,\Psi_0^{\frac{m+2}{2m+2}}=6(m+1)^{\frac{3m+4}{2m+2}}\,\Psi_0^{\frac{m+2}{2m+2}}.$$

Now we estimate the total number of iterations of our algorithm.

Theorem 2. If $\tau\geq 1$, the total number of iterations is not more than
$$6(m+1)^{\frac{3m+4}{2m+2}}\,\Psi_0^{\frac{m+2}{2m+2}}\,\frac{1}{\theta}\,\log\frac{n\mu^0}{\varepsilon}.$$
Proof. In the algorithm, the outer loop runs while $n\mu\geq\varepsilon$, with $\mu_k=(1-\theta)^k\mu^0$ and $\mu^0=\frac{\mathrm{Tr}(X^0S^0)}{n}$. By a simple computation, the number of outer iterations is bounded above by
$$\frac{1}{\theta}\,\log\frac{n\mu^0}{\varepsilon}.$$
Multiplying the number of outer iterations by the number of inner iterations, we get the stated upper bound on the total number of iterations.

5 Conclusion

We propose a new barrier function and a primal-dual interior-point algorithm for SDO problems, and analyze the iteration complexity of the algorithm based on this kernel function. We obtain
$$O\Big(m^{\frac{3m+4}{2m+2}}\,n^{\frac{m+2}{2m+2}}\,\log\frac{\mathrm{Tr}(X^0S^0)}{\varepsilon}\Big)$$
for large-update methods and
$$O\Big(m^{\frac{3m+4}{2m+2}}\,\sqrt{n}\,\log\frac{\mathrm{Tr}(X^0S^0)}{\varepsilon}\Big)$$
for small-update methods, which are the best known iteration bounds for such methods. Future research might focus on the extension to symmetric cone optimization. Finally, numerical testing would be an interesting direction for investigating the behavior of the algorithm in comparison with other existing
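The bound of Theorem 2 can be evaluated explicitly for given parameters. The following sketch (illustrative only; parameter values are our own choices) chains the $\Psi_0$ bound, the inner-iteration bound of Lemma 12, and the outer-iteration count:

```python
import math

def psi0_bound(n, theta, tau, m):
    """Upper bound on Psi(V) after a mu-update."""
    return (m + 1)*(m + 2)/2 * n/(1 - theta) \
        * (math.sqrt(tau/((m + 1)*n)) + 1 - math.sqrt(1 - theta))**2

def inner_bound(psi0, m):
    """Lemma 12: inner iterations per outer iteration."""
    return 6*(m + 1)**((3*m + 4)/(2*m + 2)) * psi0**((m + 2)/(2*m + 2))

def total_bound(n, theta, tau, m, mu0, eps):
    """Theorem 2: bound on the total number of iterations."""
    return inner_bound(psi0_bound(n, theta, tau, m), m) \
        * math.log(n*mu0/eps) / theta
```

As expected, the bound grows as the accuracy parameter $\varepsilon$ decreases.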

approaches.

Acknowledgements

The authors thank the referees for their careful reading and their precious comments. Their help is much appreciated.

References

[1] Y.Q. Bai, M. El Ghami and C. Roos, A comparative study of kernel functions for primal-dual interior-point algorithms in linear optimization, SIAM J. Optim.
[2] Y.Q. Bai, M. El Ghami and C. Roos, A new efficient large-update primal-dual interior-point method based on a finite barrier, SIAM J. Optim.
[3] R. Bellman, Introduction to Matrix Analysis, Classics in Applied Mathematics, SIAM, Philadelphia, 1995.
[4] E. de Klerk, Aspects of Semidefinite Programming: Interior Point Methods and Selected Applications, Kluwer Academic Publishers, Dordrecht, The Netherlands.
[5] R.A. Horn and C.R. Johnson, Topics in Matrix Analysis, Cambridge University Press, New York, 1991.
[6] M. Kojima, N. Megiddo and S. Mizuno, A primal-dual infeasible-interior-point algorithm for linear programming, Math. Program.
[7] M. Kojima, M. Shida and S. Hara, Interior-point methods for the monotone semidefinite linear complementarity problem in symmetric matrices, SIAM J. Optim.
[8] I.J. Lustig, Feasibility issues in a primal-dual interior-point method for linear programming, Math. Program.
[9] H. Mansouri and C. Roos, A new full-Newton step O(n) infeasible interior-point algorithm for semidefinite optimization, Numer. Algorithms.

[10] H. Mansouri and C. Roos, Simplified O(nL) infeasible interior-point algorithm for linear optimization using full Newton steps, Optim. Methods Softw.
[11] Y.E. Nesterov and A.S. Nemirovskii, Interior Point Polynomial Algorithms in Convex Programming, SIAM Stud. Appl. Math., SIAM, Philadelphia, 1994.
[12] J. Peng, C. Roos and T. Terlaky, Self-regular functions and new search directions for linear and semidefinite optimization, Math. Program.
[13] J. Peng, C. Roos and T. Terlaky, Self-Regularity: A New Paradigm for Primal-Dual Interior-Point Algorithms, Princeton University Press, Princeton, NJ.
[14] C. Roos, A full-Newton step O(n) infeasible interior-point algorithm for linear optimization, SIAM J. Optim.
[15] J. Sturm, Theory and algorithms of semidefinite programming, in: H. Frenk, C. Roos, T. Terlaky, S. Zhang (Eds.), High Performance Optimization, Kluwer Academic Publishers, Dordrecht, The Netherlands, 1999.
[16] K. Tanabe, Centered Newton method for linear programming: interior and exterior point method (in Japanese), in: K. Tone (Ed.), New Methods for Linear Programming 3, Inst. Statist. Math., Tokyo, Japan, 1990.
[17] L. Tuncel, Potential reduction and primal-dual methods, in: H. Wolkowicz, R. Saigal, L. Vandenberghe (Eds.), Handbook of Semidefinite Programming: Theory, Algorithms and Applications, Kluwer Academic Publishers, Boston, MA.
[18] Y. Zhang, On the convergence of a class of infeasible-interior-point methods for the horizontal linear complementarity problem, SIAM J. Optim.


Full Newton step polynomial time methods for LO based on locally self concordant barrier functions Full Newton step polynomial time methods for LO based on locally self concordant barrier functions (work in progress) Kees Roos and Hossein Mansouri e-mail: [C.Roos,H.Mansouri]@ewi.tudelft.nl URL: http://www.isa.ewi.tudelft.nl/

More information

A full-newton step feasible interior-point algorithm for P (κ)-lcp based on a new search direction

A full-newton step feasible interior-point algorithm for P (κ)-lcp based on a new search direction Croatian Operational Research Review 77 CRORR 706), 77 90 A full-newton step feasible interior-point algorithm for P κ)-lcp based on a new search direction Behrouz Kheirfam, and Masoumeh Haghighi Department

More information

A new primal-dual path-following method for convex quadratic programming

A new primal-dual path-following method for convex quadratic programming Volume 5, N., pp. 97 0, 006 Copyright 006 SBMAC ISSN 00-805 www.scielo.br/cam A new primal-dual path-following method for convex quadratic programming MOHAMED ACHACHE Département de Mathématiques, Faculté

More information

CCO Commun. Comb. Optim.

CCO Commun. Comb. Optim. Communications in Combinatorics and Optimization Vol. 3 No., 08 pp.5-70 DOI: 0.049/CCO.08.580.038 CCO Commun. Comb. Optim. An infeasible interior-point method for the P -matrix linear complementarity problem

More information

A Full-Newton Step O(n) Infeasible Interior-Point Algorithm for Linear Optimization

A Full-Newton Step O(n) Infeasible Interior-Point Algorithm for Linear Optimization A Full-Newton Step On) Infeasible Interior-Point Algorithm for Linear Optimization C. Roos March 4, 005 February 19, 005 February 5, 005 Faculty of Electrical Engineering, Computer Science and Mathematics

More information

Local Self-concordance of Barrier Functions Based on Kernel-functions

Local Self-concordance of Barrier Functions Based on Kernel-functions Iranian Journal of Operations Research Vol. 3, No. 2, 2012, pp. 1-23 Local Self-concordance of Barrier Functions Based on Kernel-functions Y.Q. Bai 1, G. Lesaja 2, H. Mansouri 3, C. Roos *,4, M. Zangiabadi

More information

A NEW PROXIMITY FUNCTION GENERATING THE BEST KNOWN ITERATION BOUNDS FOR BOTH LARGE-UPDATE AND SMALL-UPDATE INTERIOR-POINT METHODS

A NEW PROXIMITY FUNCTION GENERATING THE BEST KNOWN ITERATION BOUNDS FOR BOTH LARGE-UPDATE AND SMALL-UPDATE INTERIOR-POINT METHODS ANZIAM J. 49(007), 59 70 A NEW PROXIMITY FUNCTION GENERATING THE BEST KNOWN ITERATION BOUNDS FOR BOTH LARGE-UPDATE AND SMALL-UPDATE INTERIOR-POINT METHODS KEYVAN AMINI and ARASH HASELI (Received 6 December,

More information

2.1. Jordan algebras. In this subsection, we introduce Jordan algebras as well as some of their basic properties.

2.1. Jordan algebras. In this subsection, we introduce Jordan algebras as well as some of their basic properties. FULL NESTEROV-TODD STEP INTERIOR-POINT METHODS FOR SYMMETRIC OPTIMIZATION G. GU, M. ZANGIABADI, AND C. ROOS Abstract. Some Jordan algebras were proved more than a decade ago to be an indispensable tool

More information

A PRIMAL-DUAL INTERIOR POINT ALGORITHM FOR CONVEX QUADRATIC PROGRAMS. 1. Introduction Consider the quadratic program (PQ) in standard format:

A PRIMAL-DUAL INTERIOR POINT ALGORITHM FOR CONVEX QUADRATIC PROGRAMS. 1. Introduction Consider the quadratic program (PQ) in standard format: STUDIA UNIV. BABEŞ BOLYAI, INFORMATICA, Volume LVII, Number 1, 01 A PRIMAL-DUAL INTERIOR POINT ALGORITHM FOR CONVEX QUADRATIC PROGRAMS MOHAMED ACHACHE AND MOUFIDA GOUTALI Abstract. In this paper, we propose

More information

A WIDE NEIGHBORHOOD PRIMAL-DUAL INTERIOR-POINT ALGORITHM WITH ARC-SEARCH FOR LINEAR COMPLEMENTARITY PROBLEMS 1. INTRODUCTION

A WIDE NEIGHBORHOOD PRIMAL-DUAL INTERIOR-POINT ALGORITHM WITH ARC-SEARCH FOR LINEAR COMPLEMENTARITY PROBLEMS 1. INTRODUCTION J Nonlinear Funct Anal 08 (08), Article ID 3 https://doiorg/0395/jnfa083 A WIDE NEIGHBORHOOD PRIMAL-DUAL INTERIOR-POINT ALGORITHM WITH ARC-SEARCH FOR LINEAR COMPLEMENTARITY PROBLEMS BEIBEI YUAN, MINGWANG

More information

New Primal-dual Interior-point Methods Based on Kernel Functions

New Primal-dual Interior-point Methods Based on Kernel Functions New Primal-dual Interior-point Methods Based on Kernel Functions New Primal-dual Interior-point Methods Based on Kernel Functions PROEFSCHRIFT ter verkrijging van de graad van doctor aan de Technische

More information

Finding a Strict Feasible Dual Initial Solution for Quadratic Semidefinite Programming

Finding a Strict Feasible Dual Initial Solution for Quadratic Semidefinite Programming Applied Mathematical Sciences, Vol 6, 22, no 4, 229-234 Finding a Strict Feasible Dual Initial Solution for Quadratic Semidefinite Programming El Amir Djeffal Departement of Mathematics University of Batna,

More information

Semidefinite Programming

Semidefinite Programming Chapter 2 Semidefinite Programming 2.0.1 Semi-definite programming (SDP) Given C M n, A i M n, i = 1, 2,..., m, and b R m, the semi-definite programming problem is to find a matrix X M n for the optimization

More information

A priori bounds on the condition numbers in interior-point methods

A priori bounds on the condition numbers in interior-point methods A priori bounds on the condition numbers in interior-point methods Florian Jarre, Mathematisches Institut, Heinrich-Heine Universität Düsseldorf, Germany. Abstract Interior-point methods are known to be

More information

DEPARTMENT OF MATHEMATICS

DEPARTMENT OF MATHEMATICS A ISRN KTH/OPT SYST/FR 02/12 SE Coden: TRITA/MAT-02-OS12 ISSN 1401-2294 Characterization of the limit point of the central path in semidefinite programming by Göran Sporre and Anders Forsgren Optimization

More information

A Generalized Homogeneous and Self-Dual Algorithm. for Linear Programming. February 1994 (revised December 1994)

A Generalized Homogeneous and Self-Dual Algorithm. for Linear Programming. February 1994 (revised December 1994) A Generalized Homogeneous and Self-Dual Algorithm for Linear Programming Xiaojie Xu Yinyu Ye y February 994 (revised December 994) Abstract: A generalized homogeneous and self-dual (HSD) infeasible-interior-point

More information

Lecture 10. Primal-Dual Interior Point Method for LP

Lecture 10. Primal-Dual Interior Point Method for LP IE 8534 1 Lecture 10. Primal-Dual Interior Point Method for LP IE 8534 2 Consider a linear program (P ) minimize c T x subject to Ax = b x 0 and its dual (D) maximize b T y subject to A T y + s = c s 0.

More information

Limiting behavior of the central path in semidefinite optimization

Limiting behavior of the central path in semidefinite optimization Limiting behavior of the central path in semidefinite optimization M. Halická E. de Klerk C. Roos June 11, 2002 Abstract It was recently shown in [4] that, unlike in linear optimization, the central path

More information

Enlarging neighborhoods of interior-point algorithms for linear programming via least values of proximity measure functions

Enlarging neighborhoods of interior-point algorithms for linear programming via least values of proximity measure functions Enlarging neighborhoods of interior-point algorithms for linear programming via least values of proximity measure functions Y B Zhao Abstract It is well known that a wide-neighborhood interior-point algorithm

More information

Interior Point Methods for Nonlinear Optimization

Interior Point Methods for Nonlinear Optimization Interior Point Methods for Nonlinear Optimization Imre Pólik 1 and Tamás Terlaky 2 1 School of Computational Engineering and Science, McMaster University, Hamilton, Ontario, Canada, imre@polik.net 2 School

More information

A semidefinite relaxation scheme for quadratically constrained quadratic problems with an additional linear constraint

A semidefinite relaxation scheme for quadratically constrained quadratic problems with an additional linear constraint Iranian Journal of Operations Research Vol. 2, No. 2, 20, pp. 29-34 A semidefinite relaxation scheme for quadratically constrained quadratic problems with an additional linear constraint M. Salahi Semidefinite

More information

A tight iteration-complexity upper bound for the MTY predictor-corrector algorithm via redundant Klee-Minty cubes

A tight iteration-complexity upper bound for the MTY predictor-corrector algorithm via redundant Klee-Minty cubes A tight iteration-complexity upper bound for the MTY predictor-corrector algorithm via redundant Klee-Minty cubes Murat Mut Tamás Terlaky Department of Industrial and Systems Engineering Lehigh University

More information

A Second-Order Path-Following Algorithm for Unconstrained Convex Optimization

A Second-Order Path-Following Algorithm for Unconstrained Convex Optimization A Second-Order Path-Following Algorithm for Unconstrained Convex Optimization Yinyu Ye Department is Management Science & Engineering and Institute of Computational & Mathematical Engineering Stanford

More information

Lecture 5. Theorems of Alternatives and Self-Dual Embedding

Lecture 5. Theorems of Alternatives and Self-Dual Embedding IE 8534 1 Lecture 5. Theorems of Alternatives and Self-Dual Embedding IE 8534 2 A system of linear equations may not have a solution. It is well known that either Ax = c has a solution, or A T y = 0, c

More information

A SECOND ORDER MEHROTRA-TYPE PREDICTOR-CORRECTOR ALGORITHM FOR SEMIDEFINITE OPTIMIZATION

A SECOND ORDER MEHROTRA-TYPE PREDICTOR-CORRECTOR ALGORITHM FOR SEMIDEFINITE OPTIMIZATION J Syst Sci Complex (01) 5: 1108 111 A SECOND ORDER MEHROTRA-TYPE PREDICTOR-CORRECTOR ALGORITHM FOR SEMIDEFINITE OPTIMIZATION Mingwang ZHANG DOI: 10.1007/s1144-01-0317-9 Received: 3 December 010 / Revised:

More information

Infeasible Interior-Point Methods for Linear Optimization Based on Large Neighborhood

Infeasible Interior-Point Methods for Linear Optimization Based on Large Neighborhood J Optim Theory Appl 2016 170:562 590 DOI 10.1007/s10957-015-0826-5 Infeasible Interior-Point Methods for Linear Optimization Based on Large Neighborhood Alireza Asadi 1 Cornelis Roos 1 Published online:

More information

An Infeasible Interior Point Method for the Monotone Linear Complementarity Problem

An Infeasible Interior Point Method for the Monotone Linear Complementarity Problem Int. Journal of Math. Analysis, Vol. 1, 2007, no. 17, 841-849 An Infeasible Interior Point Method for the Monotone Linear Complementarity Problem Z. Kebbiche 1 and A. Keraghel Department of Mathematics,

More information

Lecture 1. 1 Conic programming. MA 796S: Convex Optimization and Interior Point Methods October 8, Consider the conic program. min.

Lecture 1. 1 Conic programming. MA 796S: Convex Optimization and Interior Point Methods October 8, Consider the conic program. min. MA 796S: Convex Optimization and Interior Point Methods October 8, 2007 Lecture 1 Lecturer: Kartik Sivaramakrishnan Scribe: Kartik Sivaramakrishnan 1 Conic programming Consider the conic program min s.t.

More information

Interior Point Methods in Mathematical Programming

Interior Point Methods in Mathematical Programming Interior Point Methods in Mathematical Programming Clóvis C. Gonzaga Federal University of Santa Catarina, Brazil Journées en l honneur de Pierre Huard Paris, novembre 2008 01 00 11 00 000 000 000 000

More information

Interior-Point Methods

Interior-Point Methods Interior-Point Methods Stephen Wright University of Wisconsin-Madison Simons, Berkeley, August, 2017 Wright (UW-Madison) Interior-Point Methods August 2017 1 / 48 Outline Introduction: Problems and Fundamentals

More information

Primal-dual IPM with Asymmetric Barrier

Primal-dual IPM with Asymmetric Barrier Primal-dual IPM with Asymmetric Barrier Yurii Nesterov, CORE/INMA (UCL) September 29, 2008 (IFOR, ETHZ) Yu. Nesterov Primal-dual IPM with Asymmetric Barrier 1/28 Outline 1 Symmetric and asymmetric barriers

More information

An O(nL) Infeasible-Interior-Point Algorithm for Linear Programming arxiv: v2 [math.oc] 29 Jun 2015

An O(nL) Infeasible-Interior-Point Algorithm for Linear Programming arxiv: v2 [math.oc] 29 Jun 2015 An O(nL) Infeasible-Interior-Point Algorithm for Linear Programming arxiv:1506.06365v [math.oc] 9 Jun 015 Yuagang Yang and Makoto Yamashita September 8, 018 Abstract In this paper, we propose an arc-search

More information

Improved Full-Newton-Step Infeasible Interior- Point Method for Linear Complementarity Problems

Improved Full-Newton-Step Infeasible Interior- Point Method for Linear Complementarity Problems Georgia Southern University Digital Commons@Georgia Southern Mathematical Sciences Faculty Publications Mathematical Sciences, Department of 4-2016 Improved Full-Newton-Step Infeasible Interior- Point

More information

Largest dual ellipsoids inscribed in dual cones

Largest dual ellipsoids inscribed in dual cones Largest dual ellipsoids inscribed in dual cones M. J. Todd June 23, 2005 Abstract Suppose x and s lie in the interiors of a cone K and its dual K respectively. We seek dual ellipsoidal norms such that

More information

Primal-dual relationship between Levenberg-Marquardt and central trajectories for linearly constrained convex optimization

Primal-dual relationship between Levenberg-Marquardt and central trajectories for linearly constrained convex optimization Primal-dual relationship between Levenberg-Marquardt and central trajectories for linearly constrained convex optimization Roger Behling a, Clovis Gonzaga b and Gabriel Haeser c March 21, 2013 a Department

More information

Interior Point Methods for Linear Programming: Motivation & Theory

Interior Point Methods for Linear Programming: Motivation & Theory School of Mathematics T H E U N I V E R S I T Y O H F E D I N B U R G Interior Point Methods for Linear Programming: Motivation & Theory Jacek Gondzio Email: J.Gondzio@ed.ac.uk URL: http://www.maths.ed.ac.uk/~gondzio

More information

Interior Point Methods. We ll discuss linear programming first, followed by three nonlinear problems. Algorithms for Linear Programming Problems

Interior Point Methods. We ll discuss linear programming first, followed by three nonlinear problems. Algorithms for Linear Programming Problems AMSC 607 / CMSC 764 Advanced Numerical Optimization Fall 2008 UNIT 3: Constrained Optimization PART 4: Introduction to Interior Point Methods Dianne P. O Leary c 2008 Interior Point Methods We ll discuss

More information

א K ٢٠٠٢ א א א א א א E٤

א K ٢٠٠٢ א א א א א א E٤ المراجع References المراجع العربية K ١٩٩٠ א א א א א E١ K ١٩٩٨ א א א E٢ א א א א א E٣ א K ٢٠٠٢ א א א א א א E٤ K ٢٠٠١ K ١٩٨٠ א א א א א E٥ المراجع الا جنبية [AW] [AF] [Alh] [Ali1] [Ali2] S. Al-Homidan and

More information

On the Local Quadratic Convergence of the Primal-Dual Augmented Lagrangian Method

On the Local Quadratic Convergence of the Primal-Dual Augmented Lagrangian Method Optimization Methods and Software Vol. 00, No. 00, Month 200x, 1 11 On the Local Quadratic Convergence of the Primal-Dual Augmented Lagrangian Method ROMAN A. POLYAK Department of SEOR and Mathematical

More information

Penalty and Barrier Methods General classical constrained minimization problem minimize f(x) subject to g(x) 0 h(x) =0 Penalty methods are motivated by the desire to use unconstrained optimization techniques

More information

IMPLEMENTING THE NEW SELF-REGULAR PROXIMITY BASED IPMS

IMPLEMENTING THE NEW SELF-REGULAR PROXIMITY BASED IPMS IMPLEMENTING THE NEW SELF-REGULAR PROXIMITY BASED IPMS IMPLEMENTING THE NEW SELF-REGULAR PROXIMITY BASED IPMS By Xiaohang Zhu A thesis submitted to the School of Graduate Studies in Partial Fulfillment

More information

Convergence Analysis of Inexact Infeasible Interior Point Method. for Linear Optimization

Convergence Analysis of Inexact Infeasible Interior Point Method. for Linear Optimization Convergence Analysis of Inexact Infeasible Interior Point Method for Linear Optimization Ghussoun Al-Jeiroudi Jacek Gondzio School of Mathematics The University of Edinburgh Mayfield Road, Edinburgh EH9

More information

arxiv: v1 [math.oc] 26 Sep 2015

arxiv: v1 [math.oc] 26 Sep 2015 arxiv:1509.08021v1 [math.oc] 26 Sep 2015 Degeneracy in Maximal Clique Decomposition for Semidefinite Programs Arvind U. Raghunathan and Andrew V. Knyazev Mitsubishi Electric Research Laboratories 201 Broadway,

More information

AN INTERIOR POINT METHOD, BASED ON RANK-ONE UPDATES, Jos F. Sturm 1 and Shuzhong Zhang 2. Erasmus University Rotterdam ABSTRACT

AN INTERIOR POINT METHOD, BASED ON RANK-ONE UPDATES, Jos F. Sturm 1 and Shuzhong Zhang 2. Erasmus University Rotterdam ABSTRACT October 13, 1995. Revised November 1996. AN INTERIOR POINT METHOD, BASED ON RANK-ONE UPDATES, FOR LINEAR PROGRAMMING Jos F. Sturm 1 Shuzhong Zhang Report 9546/A, Econometric Institute Erasmus University

More information

On well definedness of the Central Path

On well definedness of the Central Path On well definedness of the Central Path L.M.Graña Drummond B. F. Svaiter IMPA-Instituto de Matemática Pura e Aplicada Estrada Dona Castorina 110, Jardim Botânico, Rio de Janeiro-RJ CEP 22460-320 Brasil

More information

On Generalized Primal-Dual Interior-Point Methods with Non-uniform Complementarity Perturbations for Quadratic Programming

On Generalized Primal-Dual Interior-Point Methods with Non-uniform Complementarity Perturbations for Quadratic Programming On Generalized Primal-Dual Interior-Point Methods with Non-uniform Complementarity Perturbations for Quadratic Programming Altuğ Bitlislioğlu and Colin N. Jones Abstract This technical note discusses convergence

More information

New Interior Point Algorithms in Linear Programming

New Interior Point Algorithms in Linear Programming AMO - Advanced Modeling and Optimization, Volume 5, Number 1, 2003 New Interior Point Algorithms in Linear Programming Zsolt Darvay Abstract In this paper the abstract of the thesis New Interior Point

More information

Improved Full-Newton-Step Infeasible Interior- Point Method for Linear Complementarity Problems

Improved Full-Newton-Step Infeasible Interior- Point Method for Linear Complementarity Problems Georgia Southern University Digital Commons@Georgia Southern Electronic Theses & Dissertations Graduate Studies, Jack N. Averitt College of Summer 2015 Improved Full-Newton-Step Infeasible Interior- Point

More information

12. Interior-point methods

12. Interior-point methods 12. Interior-point methods Convex Optimization Boyd & Vandenberghe inequality constrained minimization logarithmic barrier function and central path barrier method feasibility and phase I methods complexity

More information

Lecture 15 Newton Method and Self-Concordance. October 23, 2008

Lecture 15 Newton Method and Self-Concordance. October 23, 2008 Newton Method and Self-Concordance October 23, 2008 Outline Lecture 15 Self-concordance Notion Self-concordant Functions Operations Preserving Self-concordance Properties of Self-concordant Functions Implications

More information

w Kluwer Academic Publishers Boston/Dordrecht/London HANDBOOK OF SEMIDEFINITE PROGRAMMING Theory, Algorithms, and Applications

w Kluwer Academic Publishers Boston/Dordrecht/London HANDBOOK OF SEMIDEFINITE PROGRAMMING Theory, Algorithms, and Applications HANDBOOK OF SEMIDEFINITE PROGRAMMING Theory, Algorithms, and Applications Edited by Henry Wolkowicz Department of Combinatorics and Optimization Faculty of Mathematics University of Waterloo Waterloo,

More information

The Q Method for Symmetric Cone Programmin

The Q Method for Symmetric Cone Programmin The Q Method for Symmetric Cone Programming The Q Method for Symmetric Cone Programmin Farid Alizadeh and Yu Xia alizadeh@rutcor.rutgers.edu, xiay@optlab.mcma Large Scale Nonlinear and Semidefinite Progra

More information

Interior Point Methods for Mathematical Programming

Interior Point Methods for Mathematical Programming Interior Point Methods for Mathematical Programming Clóvis C. Gonzaga Federal University of Santa Catarina, Florianópolis, Brazil EURO - 2013 Roma Our heroes Cauchy Newton Lagrange Early results Unconstrained

More information

1 Introduction Semidenite programming (SDP) has been an active research area following the seminal work of Nesterov and Nemirovski [9] see also Alizad

1 Introduction Semidenite programming (SDP) has been an active research area following the seminal work of Nesterov and Nemirovski [9] see also Alizad Quadratic Maximization and Semidenite Relaxation Shuzhong Zhang Econometric Institute Erasmus University P.O. Box 1738 3000 DR Rotterdam The Netherlands email: zhang@few.eur.nl fax: +31-10-408916 August,

More information

Second-order cone programming

Second-order cone programming Outline Second-order cone programming, PhD Lehigh University Department of Industrial and Systems Engineering February 10, 2009 Outline 1 Basic properties Spectral decomposition The cone of squares The

More information

15. Conic optimization

15. Conic optimization L. Vandenberghe EE236C (Spring 216) 15. Conic optimization conic linear program examples modeling duality 15-1 Generalized (conic) inequalities Conic inequality: a constraint x K where K is a convex cone

More information

Real Symmetric Matrices and Semidefinite Programming

Real Symmetric Matrices and Semidefinite Programming Real Symmetric Matrices and Semidefinite Programming Tatsiana Maskalevich Abstract Symmetric real matrices attain an important property stating that all their eigenvalues are real. This gives rise to many

More information

Full-Newton-Step Interior-Point Method for the Linear Complementarity Problems

Full-Newton-Step Interior-Point Method for the Linear Complementarity Problems Georgia Southern University Digital Commons@Georgia Southern Electronic Theses and Dissertations Graduate Studies, Jack N. Averitt College of Summer 2011 Full-Newton-Step Interior-Point Method for the

More information

Using Schur Complement Theorem to prove convexity of some SOC-functions

Using Schur Complement Theorem to prove convexity of some SOC-functions Journal of Nonlinear and Convex Analysis, vol. 13, no. 3, pp. 41-431, 01 Using Schur Complement Theorem to prove convexity of some SOC-functions Jein-Shan Chen 1 Department of Mathematics National Taiwan

More information

CONVERGENCE OF A SHORT-STEP PRIMAL-DUAL ALGORITHM BASED ON THE GAUSS-NEWTON DIRECTION

CONVERGENCE OF A SHORT-STEP PRIMAL-DUAL ALGORITHM BASED ON THE GAUSS-NEWTON DIRECTION CONVERGENCE OF A SHORT-STEP PRIMAL-DUAL ALGORITHM BASED ON THE GAUSS-NEWTON DIRECTION SERGE KRUK AND HENRY WOLKOWICZ Received 3 January 3 and in revised form 9 April 3 We prove the theoretical convergence

More information

A Simpler and Tighter Redundant Klee-Minty Construction

A Simpler and Tighter Redundant Klee-Minty Construction A Simpler and Tighter Redundant Klee-Minty Construction Eissa Nematollahi Tamás Terlaky October 19, 2006 Abstract By introducing redundant Klee-Minty examples, we have previously shown that the central

More information

New stopping criteria for detecting infeasibility in conic optimization

New stopping criteria for detecting infeasibility in conic optimization Optimization Letters manuscript No. (will be inserted by the editor) New stopping criteria for detecting infeasibility in conic optimization Imre Pólik Tamás Terlaky Received: March 21, 2008/ Accepted:

More information

Interior-Point Methods for Linear Optimization

Interior-Point Methods for Linear Optimization Interior-Point Methods for Linear Optimization Robert M. Freund and Jorge Vera March, 204 c 204 Robert M. Freund and Jorge Vera. All rights reserved. Linear Optimization with a Logarithmic Barrier Function

More information

Self-Concordant Barrier Functions for Convex Optimization

Self-Concordant Barrier Functions for Convex Optimization Appendix F Self-Concordant Barrier Functions for Convex Optimization F.1 Introduction In this Appendix we present a framework for developing polynomial-time algorithms for the solution of convex optimization

More information

We describe the generalization of Hazan s algorithm for symmetric programming

We describe the generalization of Hazan s algorithm for symmetric programming ON HAZAN S ALGORITHM FOR SYMMETRIC PROGRAMMING PROBLEMS L. FAYBUSOVICH Abstract. problems We describe the generalization of Hazan s algorithm for symmetric programming Key words. Symmetric programming,

More information

New Infeasible Interior Point Algorithm Based on Monomial Method

New Infeasible Interior Point Algorithm Based on Monomial Method New Infeasible Interior Point Algorithm Based on Monomial Method Yi-Chih Hsieh and Dennis L. Bricer Department of Industrial Engineering The University of Iowa, Iowa City, IA 52242 USA (January, 1995)

More information

On Mehrotra-Type Predictor-Corrector Algorithms

On Mehrotra-Type Predictor-Corrector Algorithms On Mehrotra-Type Predictor-Corrector Algorithms M. Salahi, J. Peng, T. Terlaky April 7, 005 Abstract In this paper we discuss the polynomiality of Mehrotra-type predictor-corrector algorithms. We consider

More information

Department of Social Systems and Management. Discussion Paper Series

Department of Social Systems and Management. Discussion Paper Series Department of Social Systems and Management Discussion Paper Series No. 1262 Complementarity Problems over Symmetric Cones: A Survey of Recent Developments in Several Aspects by Akiko YOSHISE July 2010

More information

On the Sandwich Theorem and a approximation algorithm for MAX CUT

On the Sandwich Theorem and a approximation algorithm for MAX CUT On the Sandwich Theorem and a 0.878-approximation algorithm for MAX CUT Kees Roos Technische Universiteit Delft Faculteit Electrotechniek. Wiskunde en Informatica e-mail: C.Roos@its.tudelft.nl URL: http://ssor.twi.tudelft.nl/

More information

Complementarity properties of Peirce-diagonalizable linear transformations on Euclidean Jordan algebras

Complementarity properties of Peirce-diagonalizable linear transformations on Euclidean Jordan algebras Optimization Methods and Software Vol. 00, No. 00, January 2009, 1 14 Complementarity properties of Peirce-diagonalizable linear transformations on Euclidean Jordan algebras M. Seetharama Gowda a, J. Tao

More information

Uniqueness of the Solutions of Some Completion Problems

Uniqueness of the Solutions of Some Completion Problems Uniqueness of the Solutions of Some Completion Problems Chi-Kwong Li and Tom Milligan Abstract We determine the conditions for uniqueness of the solutions of several completion problems including the positive

More information

On self-concordant barriers for generalized power cones

On self-concordant barriers for generalized power cones On self-concordant barriers for generalized power cones Scott Roy Lin Xiao January 30, 2018 Abstract In the study of interior-point methods for nonsymmetric conic optimization and their applications, Nesterov

More information

Chapter 6 Interior-Point Approach to Linear Programming

Chapter 6 Interior-Point Approach to Linear Programming Chapter 6 Interior-Point Approach to Linear Programming Objectives: Introduce Basic Ideas of Interior-Point Methods. Motivate further research and applications. Slide#1 Linear Programming Problem Minimize

More information

A Redundant Klee-Minty Construction with All the Redundant Constraints Touching the Feasible Region

A Redundant Klee-Minty Construction with All the Redundant Constraints Touching the Feasible Region A Redundant Klee-Minty Construction with All the Redundant Constraints Touching the Feasible Region Eissa Nematollahi Tamás Terlaky January 5, 2008 Abstract By introducing some redundant Klee-Minty constructions,

More information

Interior Point Methods: Second-Order Cone Programming and Semidefinite Programming
