New Primal-dual Interior-point Methods Based on Kernel Functions


New Primal-dual Interior-point Methods Based on Kernel Functions

DISSERTATION

for the degree of doctor at the Technische Universiteit Delft, under the authority of the Rector Magnificus, Prof. dr. ir. J. Fokkema, chairman of the Board for Doctorates, to be defended in public on Tuesday 5 October 2005 at 5.30, by Mohamed EL GHAMI, Certificat d'Etudes Approfondies Mathematiques (C.E.A.), Universite Mohammed V, Rabat, Morocco, born in Tamsamane, Morocco.

This dissertation has been approved by the promotor: Prof. dr. ir. C. Roos
Copromotor: Dr. J.B.M. Melissen

Composition of the doctoral committee:
Rector Magnificus, chairman
Prof. dr. ir. C. Roos, Technische Universiteit Delft, promotor
Dr. J.B.M. Melissen, Technische Universiteit Delft, copromotor
Prof. dr. G.J. Olsder, Technische Universiteit Delft
Prof. dr. ir. D. den Hertog, Tilburg University, Netherlands
Prof. dr. T. Terlaky, McMaster University, Hamilton, Canada
Prof. dr. Y. Nesterov, Universite Catholique de Louvain, Louvain-la-Neuve, Belgium
Prof. dr. F. Glineur, Universite Catholique de Louvain, Louvain-la-Neuve, Belgium
Prof. dr. ir. P. Wesseling, Technische Universiteit Delft, reserve member

This dissertation was completed under the auspices of:

Copyright (c) 2005 by M. El Ghami
All rights reserved. No part of the material protected by this copyright notice may be reproduced or utilized in any form or by any means, electronic or mechanical, including photocopying, recording, or by any information storage and retrieval system, without the prior permission of the author.
ISBN:
Author: m.elghami@ewi.tudelft.nl

To the memory of my grandmother and grandfathers


Acknowledgements

I am very grateful to my supervisors. Firstly, I would like to acknowledge Prof. dr. ir. C. Roos for his invaluable contributions and support during the period in which this thesis was written, and Dr. J.B.M. Melissen for his help and discussions during the last two years of my PhD. I would also like to express my gratitude to Prof. Z.A. Guennoun for his encouragement and for the authorization that led to my research and stay in The Netherlands. There are three former members of the group of Algorithms whom I would specifically like to thank for the support they have given me over the years: Y. Bai, F.D. Barb and E. de Klerk. Thanks are also due to all other members of our group. A special acknowledgement is in place for Ivan Ivanov, because of his help with the implementation of the algorithm presented in this thesis. My research was financially supported by the Dutch Organization for Scientific Research (NWO grant), which is kindly acknowledged. Moving towards more personal acknowledgements, I would like to express collective thanks to all the members of my family and my friends. I wish to thank in particular Ahmed Agharbi (for his friendship); Said Hamdioui (for changing my life from worse to bad); and Hassan Maatok (he knows the way). Finally, I wish to thank my parents A. El Ghami and F. Bouri, my sister H. El Ghami, my brothers Omar and Makki, and my wife S. Belgami, who gave me enduring support during all those years; I also want to honor my daughter Dounia in these acknowledgements. You know who you are.


Contents

Acknowledgements
List of Figures
List of Tables
1 Introduction
  1.1 Introduction
  1.2 A short history of Linear Optimization
  1.3 Primal-dual interior point methods for LO
    1.3.1 Primal-dual interior point methods based on kernel functions
  1.4 The scope of this thesis
2 Primal-Dual IPMs for LO Based on Kernel Functions
  2.1 Introduction
  2.2 A new class of kernel functions
    2.2.1 Properties of kernel functions
    2.2.2 Ten kernel functions
  2.3 Algorithm
    2.3.1 Upper bound for Ψ(v) after each outer iteration
    2.3.2 Decrease of the barrier function during an inner iteration
    2.3.3 Bound on δ(v) in terms of Ψ(v)
  2.4 Complexity
  2.5 Application to the ten kernel functions
    2.5.1 Some technical lemmas
    2.5.2 Analysis of the ten kernel functions
    2.5.3 Summary of results
  2.6 A kernel function with finite barrier term
    2.6.1 Properties
    2.6.2 Fixing the value of σ
    2.6.3 Lower bound for δ(v) in terms of Ψ(v)
    2.6.4 Decrease of the proximity during a (damped) Newton step
    2.6.5 Complexity
3 Primal-Dual IPMs for SDO based on kernel functions
  3.1 Introduction
  3.2 Classical search direction
  3.3 Nesterov-Todd direction
  3.4 New search direction
  3.5 Properties of Ψ(V) and δ(V)
  3.6 Analysis of the algorithm
    3.6.1 Decrease of the barrier function during a (damped) Newton step
    3.6.2 Iteration bounds
  3.7 Application to the ten kernel functions
4 Numerical results
5 Conclusions
  5.1 Conclusions and Remarks
  5.2 Directions for further research
A Technical Lemmas
  A.1 Three technical lemmas
B The Netlib-Standard Problems
Summary
Samenvatting
Curriculum Vitae
Index

List of Figures

1.1 The algorithm
1.2 Performance of a large-update IPM (θ = 0.99)
1.3 Performance of a small-update IPM (θ = 1/(2√n))
2.1 Three different kernel functions
2.2 Scheme for analyzing a kernel-function-based algorithm
2.3 Figure of ψ and ψ
3.1 Generic primal-dual interior-point algorithm for SDO

List of Tables

2.1 The conditions (2.2.3-a)-(2.2.3-d) are logically independent
2.2 Ten kernel functions
2.3 First two derivatives of the ten kernel functions
2.4 The conditions (2.2.3-a) and (2.2.3-b)
2.5 The conditions (2.2.3-c) and (2.2.3-e)
2.6 The condition (2.2.3-d)
2.7 Use of conditions (2.2.3-a)-(2.2.3-e)
2.8 Justification of the validity of the scheme in Figure 2.2
2.9 Complexity results for small-update methods
2.10 Complexity results for large-update methods
3.1 Choices for the scaling matrix P
3.2 Complexity results for large- and small-update methods for SDO
4.1 Choice of parameters
4.2 Iteration numbers for ψ1, ψ2, and ψ3
4.3 Iteration numbers for ψ4, ψ5, ψ6 and ψ7
4.4 Iteration numbers for ψ8 and ψ9
4.5 Iteration numbers for ψ10
4.6 Iteration numbers for some finite barrier functions
4.7 Smallest iteration numbers (and corresponding kernel functions)
4.8 Iteration numbers for the five best kernel functions (I)
4.9 Iteration numbers for the five best kernel functions (II)
4.10 Iteration numbers for the five best kernel functions (III)
B.1 The Netlib-Standard Problems (I)
B.2 The Netlib-Standard Problems (II)
B.3 The Netlib-Standard Problems (III)

Chapter 1

Introduction

1.1 Introduction

The study of Interior-Point Methods (IPMs) is currently one of the most active research areas in optimization. The name interior-point methods originates from the fact that the points generated by an IPM lie in the interior of the feasible region. This is in contrast with the famous and well-established simplex method, where the iterates move along the boundary of the feasible region from one extreme point to another. Nowadays, IPMs for Linear Optimization (LO) have become quite mature in theory, and have been applied to practical LO problems with extraordinary success. In this chapter, a short survey of the fields of linear optimization and interior point methods is presented. Based on the simple model of standard linear optimization problems, some basic concepts of interior point methods and various strategies used in the algorithm are introduced. The scope of this thesis follows at the end of the chapter.

1.2 A short history of Linear Optimization

Linear optimization is one of the most widely applied mathematical techniques. The last 25 years gave rise to revolutionary developments, both in computer technology and in algorithms for LO. As a consequence, LO problems that 25 years ago required a computational time of one year can now be solved within a couple of minutes. The achieved acceleration is due partly to advances in computer technology, but for a significant part also to the new IPMs for LO. This section is based on a historical review in [Roo0].

During the 1940s it became clear that an effective computational method was required to solve the many linear optimization problems that originated from logistical questions that had to be solved during World War II. The first practical method for solving LO problems was the simplex method, proposed by Dantzig [Dan63] in 1947. This algorithm explicitly explores the combinatorial structure of the feasible region to locate a solution by moving from a vertex of the feasible set to an adjacent vertex while improving the value of the objective function. Since then, the method has been routinely used to solve problems in business, logistics, economics, and engineering. In an effort to explain the remarkable efficiency of the simplex method using the theory of complexity, one has tried very hard to prove that the computational effort to solve an LO problem via the simplex method is polynomially bounded in terms of the size of a problem instance. Klee and Minty [Kle72] have shown in 1972 that in the process of solving the problem

$$\begin{array}{lll} \mbox{maximize} & \displaystyle\sum_{j=1}^{n} 10^{\,n-j}\, x_j & \\ \mbox{s.t.} & \displaystyle 2\sum_{j=1}^{i-1} 10^{\,i-j}\, x_j + x_i \le 100^{\,i-1}, & i = 1, \ldots, n, \\ & x_j \ge 0, & \end{array} \qquad (1.2.1)$$

the simplex method goes through $2^n$ vertices. This shows that the worst-case behavior of the simplex method is exponential. (Recently, in [Dez04a; Dez04b], the authors proved that by adding an exponential number of redundant inequalities, central-path-following interior point methods visit small neighborhoods of all the vertices of the Klee-Minty cube.)
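To make the construction concrete, here is a minimal sketch (ours, not from the thesis) that builds the data $c$, $A$, $b$ of the Klee-Minty problem (1.2.1) in the form $\max\{c^T x : Ax \le b,\ x \ge 0\}$:

```python
import numpy as np

def klee_minty(n):
    """Data of the Klee-Minty problem (1.2.1): max c^T x s.t. A x <= b, x >= 0."""
    c = np.array([10.0 ** (n - j) for j in range(1, n + 1)])
    b = np.array([100.0 ** (i - 1) for i in range(1, n + 1)])
    A = np.zeros((n, n))
    for i in range(1, n + 1):
        for j in range(1, i):
            A[i - 1, j - 1] = 2 * 10.0 ** (i - j)   # 2 * 10^(i-j) for j < i
        A[i - 1, i - 1] = 1.0                        # coefficient of x_i
    return c, A, b

c, A, b = klee_minty(3)
print(b)   # [1, 100, 10000]: the feasible set is a deformed cube with 2^3 vertices
```

For $n = 3$ the feasible region is a deformed cube with $2^3 = 8$ vertices, all of which the simplex method may visit.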

The first polynomial method for solving LO problems was proposed by Khachiyan in 1979. It is the so-called ellipsoid method [Kha79]. It is based on the ellipsoid technique for nonlinear optimization developed by Shor [Sho87]. With this technique, Khachiyan proved that LO belongs to the class of polynomially solvable problems. Although this result had a great theoretical impact, it failed to keep up its promises in actual computational efficiency. A second proposal was made in 1984 by Karmarkar [Kar84]. Karmarkar's algorithm is also polynomial, with a better complexity bound than Khachiyan's, but it has the further advantage of being highly efficient in practice. After an initial controversy it has been established that for very large, sparse problems, subsequent variants of Karmarkar's method often outperform the simplex method. Though the field of LO was then considered more or less mature, after Karmarkar's paper it suddenly surfaced as one of the most active areas of research in optimization. In the period 1984-1989 more than 1300 papers were published on the subject. Originally, the aim of the research was to get a better understanding of the so-called projective method of Karmarkar. Soon it became apparent that this method was related to classical methods, like the affine scaling method of Dikin [Dik67; Dik74; Dik88], the logarithmic barrier method of Frisch [Fri55; Fri56], and the center method of Huard [Hua67], and that the last two methods, when tuned properly, could also be proved to be polynomial. Moreover, it turned out that the IPM approach to LO has a natural generalization to the related field of convex nonlinear optimization, which resulted in a new stream of research and an excellent monograph of Nesterov and Nemirovski [Nes93]. This monograph opened the way into other new subfields of optimization, like semidefinite optimization and second order cone optimization, with important applications in system theory, discrete optimization, and many other areas. For a survey of these developments the reader may consult Vandenberghe and Boyd [Boy96], and the book of Ben-Tal and Nemirovski [BT01].

1.3 Primal-dual interior point methods for LO

In this section we proceed by describing primal-dual interior point methods for LO and some recent results [Pen02; Pen02a; Pen02b]. There are many different ways to represent an LO problem. The two most popular and widely used representations are the standard and the canonical forms. (In the canonical form of the LO problem all constraints are inequality constraints, e.g., $\min\{c^T x : Ax \ge b,\ x \ge 0\}$.) It is well known [Gol89] that any LO problem can be converted into standard or canonical form. In this thesis we consider the standard linear optimization problem

$$(P) \qquad \min\left\{ c^T x : Ax = b,\ x \ge 0 \right\},$$

where $A \in \mathbb{R}^{m \times n}$ is a real $m \times n$ matrix of rank $m$, and $x, c \in \mathbb{R}^n$, $b \in \mathbb{R}^m$. The dual problem of (P) is given by

$$(D) \qquad \max\left\{ b^T y : A^T y + s = c,\ s \ge 0 \right\},$$

with $y \in \mathbb{R}^m$ and $s \in \mathbb{R}^n$. The two problems (P) and (D) share the matrix $A$ and the vectors $b$ and $c$ in their description, but the roles of $b$ and $c$ have been interchanged: the objective vector $c$ of (P) is the right-hand side vector of (D), and, similarly, the right-hand side vector $b$ of (P) is the objective vector of (D). Moreover, the constraint matrix in (D) is the transposed matrix $A^T$, where $A$ is the constraint matrix in (P). It is well known [Roo05] that finding an optimal solution of (P) and (D) is equivalent to solving the non-linear system of equations

$$\begin{array}{rl} Ax = b, & x \ge 0, \\ A^T y + s = c, & s \ge 0, \\ xs = 0. \end{array} \qquad (1.3.1)$$

The first equation requires that $x$ is feasible for (P), and the second equation that the pair $(y, s)$ is feasible for (D), whereas the third equation is the so-called complementarity condition for (P) and (D); here $xs$ denotes the coordinatewise product of the vectors $x$ and $s$, i.e.,

$$xs = [x_1 s_1;\ x_2 s_2;\ \ldots;\ x_n s_n].$$

We shall also use the notation

$$\frac{x}{s} = \left[ \frac{x_1}{s_1};\ \frac{x_2}{s_2};\ \ldots;\ \frac{x_n}{s_n} \right],$$

for each pair of vectors $x$ and $s$ such that $s_i \neq 0$, for all $1 \le i \le n$. For an arbitrary function $f : \mathbb{R} \to \mathbb{R}$ and an arbitrary vector $x$ we will use the notation $f(x) = [f(x_1); f(x_2); \ldots; f(x_n)]$. The basic idea underlying primal-dual IPMs is to replace the third (non-linear) equation in (1.3.1) by the nonlinear equation $xs = \mu e$, with parameter $\mu > 0$ and with $e$ denoting the all-one vector $(1; 1; \ldots; 1)$. The system (1.3.1) now becomes:

$$\begin{array}{rl} Ax = b, & x \ge 0, \\ A^T y + s = c, & s \ge 0, \\ xs = \mu e. \end{array} \qquad (1.3.2)$$

Note that if $x$ and $s$ solve this system then these vectors are necessarily positive. Therefore, in order for (1.3.2) to be solvable there needs to exist a triple $(x^0, y^0, s^0)$ such that

$$Ax^0 = b,\quad x^0 > 0, \qquad A^T y^0 + s^0 = c,\quad s^0 > 0. \qquad (1.3.3)$$

We assume throughout that both (P) and (D) satisfy this condition, which is known as the interior-point condition (IPC). For this and some of the properties mentioned below, see, e.g., [Roo05]. Satisfaction of the IPC can be assumed without loss of generality. In fact we may, and will, even assume that $x^0 = s^0 = e$ [Roo05]. From (1.3.3) we observe that these $x^0$ and $s^0$, for some appropriate $y^0$, solve (1.3.2) when $\mu = 1$. If the IPC holds, the parameterized system (1.3.2) has a unique solution $(x(\mu), y(\mu), s(\mu))$ for each $\mu > 0$; $x(\mu)$ is called the $\mu$-center of (P) and $(y(\mu), s(\mu))$ is the $\mu$-center of (D). The set of $\mu$-centers (with $\mu > 0$) defines a homotopy path, which is called the central path of (P) and (D) [Meg89; Son86]. If $\mu \to 0$ then the limit of the central path exists. This limit satisfies the complementarity condition, and hence yields optimal solutions for (P) and (D) [Roo05]. IPMs follow the central path approximately. Let us briefly indicate how this works. Without loss of generality we assume that $(x(\mu), y(\mu), s(\mu))$ is known for some positive $\mu$. For example, due to the above choice, we may assume this to be

the case for $\mu = 1$, with $x(1) = s(1) = e$. We then decrease $\mu$ to $\mu^+ := (1 - \theta)\mu$, for some $\theta \in (0, 1)$, and apply Newton's method to iteratively solve the non-linear system (1.3.2). So for each step we have to solve the following Newton system:

$$\begin{array}{rcl} A \Delta x &=& 0, \\ A^T \Delta y + \Delta s &=& 0, \\ s \Delta x + x \Delta s &=& \mu^+ e - xs. \end{array} \qquad (1.3.4)$$

Because $A$ has full row rank, the system (1.3.4) uniquely defines a search direction $(\Delta x, \Delta s, \Delta y)$ for any $x > 0$ and $s > 0$; this is the so-called Newton direction, and this direction is used in all existing implementations of the primal-dual method. The first two equations take care of primal and dual feasibility (after a small enough step along the Newton direction), whereas the third equation serves to drive the new iterates to the $\mu^+$-centers. The third equation is called the centering equation. By taking a step along the search direction, with the step size defined by a line search rule, one constructs a new triple $(x, y, s)$, with $x > 0$ and $s > 0$. If necessary, we repeat the procedure until we find iterates that are close enough to $(x(\mu), y(\mu), s(\mu))$. Then $\mu$ is again reduced by the factor $1 - \theta$ and we apply Newton's method targeting the new $\mu$-centers, and so on. This process is repeated until $\mu$ is small enough, say until $n\mu \le \epsilon$; at this stage we have found $\epsilon$-solutions of the problems (P) and (D). In this thesis we follow [Bai04a; Pen00a; Pen00b; Pen02; Roo05; Ye97] and reformulate this approach by defining the same search direction in a different way. To make this clear we associate to any triple $(x, s, \mu)$, with $x > 0$, $s > 0$ and $\mu > 0$, the vector

$$v := \sqrt{\frac{xs}{\mu}}. \qquad (1.3.5)$$

Note that if $x$ is primal feasible and $s$ is dual feasible, then the pair $(x, s)$ coincides with the $\mu$-center $(x(\mu), s(\mu))$ if and only if $v = e$. Introducing the notations

$$\bar{A} := \frac{1}{\mu}\, A V^{-1} X = A S^{-1} V, \qquad (1.3.6)$$

$$V := \mathrm{diag}(v), \qquad X := \mathrm{diag}(x), \qquad S := \mathrm{diag}(s), \qquad (1.3.7)$$

and defining the scaled search directions $d_x$ and $d_s$ according to

$$d_x := \frac{v \Delta x}{x}, \qquad d_s := \frac{v \Delta s}{s}, \qquad (1.3.8)$$

the system (1.3.4) can be rewritten as

$$\begin{array}{rcl} \bar{A} d_x &=& 0, \\ \bar{A}^T \Delta y + d_s &=& 0, \\ d_x + d_s &=& v^{-1} - v. \end{array} \qquad (1.3.9)$$
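Going back to the unscaled system (1.3.4), here is a minimal dense sketch (ours; the function name is hypothetical, not from the thesis) of how it can be solved for $(\Delta x, \Delta y, \Delta s)$. Production solvers would instead eliminate $\Delta s$ and $\Delta x$ and solve the much smaller normal equations, exploiting sparsity:

```python
import numpy as np

def newton_direction(A, x, y, s, mu_plus):
    """Solve the Newton system (1.3.4) for (dx, dy, ds), given x, s > 0.
    Dense illustration only."""
    m, n = A.shape
    # Assemble the (2n + m) x (2n + m) system:
    #   A dx            = 0
    #   A^T dy + ds     = 0
    #   S dx + X ds     = mu_plus * e - x*s
    K = np.zeros((2 * n + m, 2 * n + m))
    K[:m, :n] = A
    K[m:m + n, n:n + m] = A.T
    K[m:m + n, n + m:] = np.eye(n)
    K[m + n:, :n] = np.diag(s)
    K[m + n:, n + m:] = np.diag(x)
    rhs = np.concatenate([np.zeros(m), np.zeros(n),
                          mu_plus * np.ones(n) - x * s])
    sol = np.linalg.solve(K, rhs)
    return sol[:n], sol[n:n + m], sol[n + m:]
```

Since $A$ has full row rank and $x, s > 0$, the assembled matrix is nonsingular, so `np.linalg.solve` returns the unique Newton direction.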

Note that $d_x$ and $d_s$ are orthogonal vectors, since $d_x$ belongs to the null space of the matrix $\bar{A}$ and $d_s$ to its row space. Hence, we will have $d_x = d_s = 0$ if and only if $v^{-1} - v = 0$, which is equivalent to $v = e$. We conclude that $d_x = d_s = 0$ holds if and only if the pair $(x, s)$ coincides with the $\mu$-center $(x(\mu), s(\mu))$. We make another crucial observation. The third equation in (1.3.9) is called the scaled centering equation. The right-hand side $v^{-1} - v$ in the scaled centering equation equals minus the gradient of the function

$$\Psi_c(v) := \sum_{i=1}^{n} \left( \frac{v_i^2 - 1}{2} - \log v_i \right). \qquad (1.3.10)$$

Note that $\nabla^2 \Psi_c(v) = \mathrm{diag}(e + v^{-2})$, and that this matrix is positive definite, so $\Psi_c(v)$ is strictly convex. Moreover, since $\nabla\Psi_c(e) = 0$, it follows that $\Psi_c(v)$ attains its minimal value at $v = e$, with $\Psi_c(e) = 0$. Thus it follows that $\Psi_c(v)$ is nonnegative everywhere and vanishes if and only if $v = e$, i.e., if and only if $x = x(\mu)$ and $s = s(\mu)$. The $\mu$-centers $x(\mu)$ and $s(\mu)$ can therefore be characterized as the minimizers of $\Psi_c(v)$.

1.3.1 Primal-dual interior point methods based on kernel functions

Now we are ready to describe the idea underlying the approach in this thesis. In the scaled centering equation, the last equation of (1.3.9), we replace the scaled barrier function $\Psi_c(v)$ by an arbitrary strictly convex function $\Psi(v)$, $v \in \mathbb{R}^n_{++}$, such that $\Psi(v)$ is minimal at $v = e$ and $\Psi(e) = 0$, where $\mathbb{R}^n_{++}$ denotes the positive orthant. Thus the new scaled centering equation becomes

$$d_x + d_s = -\nabla\Psi(v). \qquad (1.3.11)$$

As before, we will have $d_x = 0$ and $d_s = 0$ if and only if $v = e$, i.e., if and only if $x = x(\mu)$ and $s = s(\mu)$, as it should be. To simplify matters we restrict ourselves to the case where $\Psi(v)$ is separable with identical coordinate functions. Thus, letting $\psi$ denote the function on the coordinates, we write

$$\Psi(v) = \sum_{i=1}^{n} \psi(v_i), \qquad (1.3.12)$$

where $\psi(t) : D \to \mathbb{R}_+$, with $\mathbb{R}_{++} \subseteq D$, is strictly convex and minimal at $t = 1$, with $\psi(1) = 0$. In the present context we call the univariate function $\psi(t)$ the kernel function of $\Psi(v)$. We will always assume that the kernel function is twice differentiable. Observe that $\psi_c(t)$, given by

$$\psi_c(t) := \frac{t^2 - 1}{2} - \log t, \qquad t > 0, \qquad (1.3.13)$$

is the kernel function yielding the Newton direction, as defined by (1.3.9). In this general framework we call $\Psi(v)$ a scaled barrier function. An unscaled barrier function, whose domain is the $(x, s, \mu)$-space, can be obtained via the definition

$$\Phi(x, s, \mu) = \Psi(v) = \sum_{i=1}^{n} \psi(v_i) = \sum_{i=1}^{n} \psi\left( \sqrt{\frac{x_i s_i}{\mu}} \right). \qquad (1.3.14)$$

One may easily verify that by application of this definition to the kernel function in (1.3.13) we obtain, up to a constant factor and a constant term, the classical logarithmic barrier function. Any proximity function $\Psi(v)$ gives rise to a primal-dual IPM, as described below in Figure 1.1. With $\bar{A}$ as defined in (1.3.6), the search direction in the algorithm is obtained by solving the system

$$\begin{array}{rcl} \bar{A} d_x &=& 0, \\ \bar{A}^T \Delta y + d_s &=& 0, \\ d_x + d_s &=& -\nabla\Psi(v), \end{array} \qquad (1.3.15)$$

for $d_x$, $\Delta y$ and $d_s$, and then computing $\Delta x$ and $\Delta s$ from

$$\Delta x = \frac{x\, d_x}{v}, \qquad \Delta s = \frac{s\, d_s}{v}, \qquad (1.3.16)$$

according to (1.3.8). The inner while loop in the algorithm is called an inner iteration and the outer while loop an outer iteration. So each outer iteration consists of an update of the barrier parameter and a sequence of one or more inner iterations. It is generally agreed that the total number of inner iterations required by the algorithm is an appropriate measure for the efficiency of the algorithm. This number will be referred to as the iteration complexity of the algorithm. Usually the iteration complexity is described as a function of the dimension $n$ of the problem and the accuracy parameter $\epsilon$. A crucial question is, of course, how to choose the parameters that control the algorithm, i.e., the proximity function $\Psi(v)$, the threshold parameter $\tau$, the barrier update parameter $\theta$, and the step size $\alpha$, so as to minimize the iteration complexity. In practice one distinguishes between large-update methods [Ans9; Gon9; Gon9; Her9; Jan94a; Koj93a; Koj93b; Tod96; Roo89], with $\theta = \Theta(1)$, and small-update methods, with $\theta = \Theta(1/\sqrt{n})$ [And96; Her94; Tod89]. Figures 1.2 and 1.3 exhibit the behavior of IPMs with large-update and small-update for a specific two-dimensional LO problem. These figures are drawn in $xs$-space. Note that in the $xs$-space the central path is represented by the straight line consisting of all vectors $\mu e$, $\mu > 0$. In these figures we have drawn the iterates for a simple problem and also the level curves of $\Psi(v)$ around the target points on the central path that are used during the algorithm.

Generic Primal-Dual Algorithm for LO

Input:
  A kernel function ψ(t);
  a threshold parameter τ > 0;
  an accuracy parameter ε > 0;
  a fixed barrier update parameter θ, 0 < θ < 1;
begin
  x := e; s := e; µ := 1;
  while nµ > ε do
  begin
    µ := (1 − θ)µ;
    v := √(xs/µ);
    while Ψ(v) > τ do
    begin
      x := x + αΔx;
      s := s + αΔs;
      y := y + αΔy;
      v := √(xs/µ);
    end
  end
end

Figure 1.1: The algorithm.
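The following Python sketch (ours, under the simplifying assumptions that the iterates start at $x^0 = s^0 = e$ with $\mu = 1$ and that the step size is found by simple halving to preserve positivity) mirrors the structure of Figure 1.1 for the log-barrier kernel $\psi_c$; `newton_direction` is the hypothetical helper from Section 1.3 above. The thesis instead derives a default step size from the kernel function (Section 2.3.2):

```python
import numpy as np

def psi(t):                 # log-barrier kernel (1.3.13)
    return (t**2 - 1) / 2 - np.log(t)

def Psi(v):                 # separable barrier (1.3.12)
    return np.sum(psi(v))

def generic_ipm(A, x, y, s, theta=0.5, tau=1.0, eps=1e-6):
    n = len(x)
    mu = 1.0
    while n * mu > eps:
        mu *= (1 - theta)                      # mu-update (outer iteration)
        v = np.sqrt(x * s / mu)
        while Psi(v) > tau:                    # inner iterations
            dx, dy, ds = newton_direction(A, x, y, s, mu)
            alpha = 1.0
            # damp the step so that x and s stay strictly positive
            while np.any(x + alpha * dx <= 0) or np.any(s + alpha * ds <= 0):
                alpha *= 0.5
            x, y, s = x + alpha * dx, y + alpha * dy, s + alpha * ds
            v = np.sqrt(x * s / mu)
    return x, y, s
```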

Until recently, only algorithms based on the logarithmic barrier function were considered. In this case, where the proximity function is the scaled logarithmic barrier function as given by (1.3.10), the algorithm has been well investigated (see, e.g., [Gon9; Her94; Jan94b; Koj89; Mon89; Tod89]). The corresponding complexity results can be summarized as follows.

Theorem 1.3.1 (cf. [Roo05]). If the kernel function is given by (1.3.13) and $\tau = O(1)$, then the algorithm requires

$$O\left( \sqrt{n}\, \log\frac{n}{\epsilon} \right) \qquad (1.3.17)$$

inner iterations if $\theta = \Theta(1/\sqrt{n})$, and

$$O\left( n\, \log\frac{n}{\epsilon} \right)$$

inner iterations if $\theta = \Theta(1)$. The output is a positive feasible pair $(x, s)$ such that $n\mu \le \epsilon$ and $\Psi(v) = O(1)$.

[Figure 1.2: Performance of a large-update IPM (θ = 0.99), drawn in xs-space; shown are the iterates, the central path, and the targets µe and µ⁺e = (1 − θ)µe.]

As Theorem 1.3.1 makes clear, small-update methods theoretically have the best iteration complexity. Despite this, large-update methods are in practice much more efficient than small-update methods [And96]. This has been called the irony of IPMs [Ren01]. In fact, the observed iteration complexity of large-update methods is about $O\left( \log n\, \log\frac{n}{\epsilon} \right)$ in practice. This unpleasant gap between theory and practice has motivated many researchers to search for variants of large-update methods whose theoretical iteration complexity comes closer to what is observed in practice. As pointed out below, some progress has recently been made in this respect but, regrettably, it has to be admitted that we are still far from the desired result. We proceed by describing some recent results. Note that if $\psi(t)$ is a kernel function then $\psi(1) = \psi'(1) = 0$, and hence $\psi(t)$ is completely determined by its

[Figure 1.3: Performance of a small-update IPM (θ = 1/(2√n)), drawn in xs-space.]

second derivative:

$$\psi(t) = \int_1^t \int_1^\xi \psi''(\zeta)\, d\zeta\, d\xi. \qquad (1.3.18)$$

In [Pen02a] the iteration complexity for large-update methods was improved to

$$O\left( \sqrt{n}\, (\log n)\, \log\frac{n}{\epsilon} \right), \qquad (1.3.19)$$

which is currently the best result for such methods. This result was obtained by considering kernel functions that satisfy

$$\psi''(t) = \Theta\left( t^{p-1} + t^{-1-q} \right), \qquad t \in (0, \infty). \qquad (1.3.20)$$

The analysis of an algorithm based on such a kernel function is greatly simplified if the kernel function also satisfies the following property:

$$\psi\left( \sqrt{t_1 t_2} \right) \le \frac{1}{2}\left[ \psi(t_1) + \psi(t_2) \right], \qquad t_1, t_2 > 0. \qquad (1.3.21)$$

The latter property has been given the name of exponential convexity (or shortly e-convexity) [Bai03a; Pen02]. In [Pen02a] kernel functions satisfying (1.3.20) and (1.3.21) were named self-regular. The best iteration complexity for large-update

methods based on self-regular kernel functions is as given by (1.3.19) [Pen02a]. Subsequently, the same iteration complexity was obtained in [Pen02] in a simpler way for the specific self-regular function

$$\psi(t) = \frac{t^2 - 1}{2} + \frac{t^{1-q} - 1}{q - 1}, \qquad q = \log n.$$

1.4 The scope of this thesis

In this thesis we further explore the idea of IPMs based on kernel functions as described before. In Chapter 2 we present a new class of barrier functions which are not necessarily self-regular. This chapter is based on [Bai04a; Bai03a; Bai03b; Bai0b; Gha04b; Gha04a; Gha05a]. The proposed class is defined by some simple conditions on the kernel function and its first three derivatives. The best iteration bounds for small- and large-update methods, as given by (1.3.17) and (1.3.19) respectively, are also achieved for kernel functions in this class. In Chapter 3 we investigate the extension of primal-dual IPMs based on the kernel functions studied in Chapter 2 to semidefinite optimization (SDO). The chapter is based on [Gha05b]. In Chapter 4 we report some numerical experiments. The aim of this chapter is to investigate the computational performance of IPMs based on various kernel functions. These tests indicate that the computational efficiency of an algorithm highly depends on the kernel function underlying the algorithm. Finally, Chapter 5 contains some conclusions and recommendations for further research.


Chapter 2

Primal-Dual IPMs for LO Based on Kernel Functions

2.1 Introduction

As pointed out in Chapter 1, Peng, Roos, and Terlaky [Pen00a; Pen00b; Pen02; Pen02a; Pen02b] recently introduced so-called self-regular barrier functions for primal-dual interior point methods (IPMs) for linear optimization. Each such barrier function is determined by its univariate self-regular kernel function. In this chapter we present a new class of barrier functions. The proposed class is defined by some simple conditions on the kernel function and its first three derivatives. As we will show, the currently best known bounds for both small- and large-update primal-dual IPMs are achieved by functions in the new class.

2.2 A new class of kernel functions

We call $\psi : (0, \infty) \to [0, \infty)$ a kernel function if $\psi$ is twice differentiable and the following conditions are satisfied:

(i) $\psi'(1) = \psi(1) = 0$;
(ii) $\psi''(t) > 0$, for all $t > 0$.

In this chapter we restrict ourselves to functions that are coercive, i.e.,

(iii) $\lim_{t \downarrow 0} \psi(t) = \lim_{t \to \infty} \psi(t) = \infty$.

Clearly, (i) and (ii) say that $\psi(t)$ is a nonnegative strictly convex function such that $\psi(1) = 0$. Recall from (1.3.18) that this implies that $\psi(t)$ is completely determined by its second derivative:

$$\psi(t) = \int_1^t \int_1^\xi \psi''(\zeta)\, d\zeta\, d\xi. \qquad (2.2.1)$$

Moreover, by (iii), $\psi(t)$ has the so-called barrier property. Having such a function $\psi(t)$, its definition is extended to positive $n$-dimensional vectors $v$ by (1.3.12), thus yielding the induced (scaled) barrier function $\Psi(v)$. The barrier function induces primal-dual barrier search directions, by using (1.3.11) as the centering equation. In the sequel we also use the norm-based proximity measure $\delta(v)$ defined by

$$\delta(v) = \frac{1}{2}\, \|\nabla\Psi(v)\| = \frac{1}{2}\, \|d_x + d_s\|. \qquad (2.2.2)$$

Note that

$$\nabla\Psi(v) = 0 \iff \delta(v) = 0 \iff v = e.$$

In this chapter we consider more conditions on the kernel function, namely $\psi \in C^3$ and

$$t\,\psi''(t) + \psi'(t) > 0, \qquad t < 1, \qquad (2.2.3\text{-a})$$
$$t\,\psi''(t) - \psi'(t) > 0, \qquad t > 1, \qquad (2.2.3\text{-b})$$
$$\psi'''(t) < 0, \qquad t > 0, \qquad (2.2.3\text{-c})$$
$$2\,\psi''(t)^2 - \psi'(t)\,\psi'''(t) > 0, \qquad t < 1, \qquad (2.2.3\text{-d})$$
$$\psi''(t)\,\psi'(\beta t) - \beta\,\psi'(t)\,\psi''(\beta t) > 0, \qquad t > 1,\ \beta > 1. \qquad (2.2.3\text{-e})$$

Condition (2.2.3-a) is obviously satisfied if $t \ge 1$, since then $\psi'(t) \ge 0$. Similarly, condition (2.2.3-b) is satisfied if $t \le 1$, since then $\psi'(t) \le 0$. Also (2.2.3-d) is satisfied if $t \ge 1$, since then $\psi'(t) \ge 0$, whereas $\psi'''(t) < 0$. We conclude that conditions (2.2.3-a) and (2.2.3-d) are conditions on the barrier behavior of $\psi(t)$. On the other hand, condition (2.2.3-b) deals only with $t \ge 1$ and hence concerns the growth behavior of $\psi(t)$. Condition (2.2.3-e) is technically more involved; we will discuss it later.

Remark 2.2.1. It is worth pointing out that the conditions (2.2.3-a)-(2.2.3-d) are logically independent. Table 2.1 shows five kernel functions, and the signs indicate whether a condition is satisfied (+) or not (-).
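Since the conditions (2.2.3-a)-(2.2.3-e) are simple pointwise inequalities in $\psi'$, $\psi''$ and $\psi'''$, they are easy to spot-check numerically. The helper below is ours (hypothetical, not from the thesis) and samples the conditions on a grid, here for the log-barrier kernel $\psi(t) = \frac{t^2-1}{2} - \log t$:

```python
import numpy as np

def check_conditions(dpsi, ddpsi, dddpsi, ts=np.linspace(0.05, 5.0, 200)):
    """Numerically spot-check conditions (2.2.3-a)-(2.2.3-e) on a grid."""
    a = all(t * ddpsi(t) + dpsi(t) > 0 for t in ts if t < 1)
    b = all(t * ddpsi(t) - dpsi(t) > 0 for t in ts if t > 1)
    c = all(dddpsi(t) < 0 for t in ts)
    d = all(2 * ddpsi(t)**2 - dpsi(t) * dddpsi(t) > 0 for t in ts if t < 1)
    e = all(ddpsi(t) * dpsi(beta * t) - beta * dpsi(t) * ddpsi(beta * t) > 0
            for t in ts if t > 1 for beta in (1.5, 2.0, 4.0))
    return a, b, c, d, e

# psi_1'(t) = t - 1/t, psi_1''(t) = 1 + 1/t^2, psi_1'''(t) = -2/t^3
print(check_conditions(lambda t: t - 1/t,
                       lambda t: 1 + 1/t**2,
                       lambda t: -2/t**3))    # expected: (True,)*5
```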

[Table 2.1: The conditions (2.2.3-a)-(2.2.3-d) are logically independent: five kernel functions, with + and - entries indicating which of the conditions (2.2.3-a)-(2.2.3-e) each function satisfies.]

The next two lemmas make clear that conditions (2.2.3-a) and (2.2.3-b) admit a nice interpretation.

Lemma 2.2.2 (Lemma 2.1.2 in [Pen02b]). The following three properties are equivalent:

(i) $\psi\left( \sqrt{t_1 t_2} \right) \le \frac{1}{2}\left( \psi(t_1) + \psi(t_2) \right)$, for all $t_1, t_2 > 0$;
(ii) $\psi'(t) + t\,\psi''(t) \ge 0$, $t > 0$;
(iii) $\psi(e^\xi)$ is convex.

Proof. (iii) ⟺ (i): From the definition of convexity, we know that $\psi(\exp(\zeta))$ is convex if and only if for any $\zeta_1, \zeta_2 \in \mathbb{R}$ the following inequality holds:

$$\psi\left( \exp\left( \frac{\zeta_1 + \zeta_2}{2} \right) \right) \le \frac{1}{2}\left( \psi(\exp(\zeta_1)) + \psi(\exp(\zeta_2)) \right).$$

Letting $t_1 = \exp(\zeta_1)$, $t_2 = \exp(\zeta_2)$, obviously one has $t_1, t_2 \in (0, +\infty)$, and the above relation can be rewritten as

$$\psi\left( \sqrt{t_1 t_2} \right) \le \frac{1}{2}\left( \psi(t_1) + \psi(t_2) \right).$$

(iii) ⟺ (ii): The function $\psi(\exp(\zeta))$ is convex if and only if its second derivative with respect to $\zeta$ is nonnegative. This gives

$$\exp(\zeta)\,\psi'(\exp(\zeta)) + \exp(2\zeta)\,\psi''(\exp(\zeta)) \ge 0.$$

Substituting $t = \exp(\zeta)$, one gets $t\,\psi'(t) + t^2\,\psi''(t) \ge 0$, which is equivalent to $\psi'(t) + t\,\psi''(t) \ge 0$ for $t > 0$. This completes the proof of the lemma.

Lemma 2.2.3. Let $\psi(t)$ be a twice differentiable function for $t > 0$. Then the following three properties are equivalent:

(i) $\psi\left( \sqrt{\frac{t_1^2 + t_2^2}{2}} \right) \le \frac{1}{2}\left( \psi(t_1) + \psi(t_2) \right)$, for $t_1, t_2 > 0$;

(ii) $t\,\psi''(t) - \psi'(t) \ge 0$, $t > 0$;
(iii) $\psi\left( \sqrt{\xi} \right)$ is convex.

Proof. (iii) ⟺ (i): We know that $\psi(\sqrt{\xi})$ is convex if and only if for any $\xi_1, \xi_2 \in \mathbb{R}_+$ the following inequality holds:

$$\psi\left( \sqrt{\frac{\xi_1 + \xi_2}{2}} \right) \le \frac{1}{2}\left( \psi\left( \sqrt{\xi_1} \right) + \psi\left( \sqrt{\xi_2} \right) \right).$$

By letting $t_1 = \sqrt{\xi_1}$, $t_2 = \sqrt{\xi_2}$, the above relation can be equivalently rewritten as

$$\psi\left( \sqrt{\frac{t_1^2 + t_2^2}{2}} \right) \le \frac{1}{2}\left( \psi(t_1) + \psi(t_2) \right).$$

(iii) ⟺ (ii): The second derivative of $\psi(\sqrt{\xi})$ is nonnegative if and only if

$$\frac{1}{4\xi^{3/2}}\left( \sqrt{\xi}\,\psi''\left( \sqrt{\xi} \right) - \psi'\left( \sqrt{\xi} \right) \right) \ge 0.$$

Substituting $t = \sqrt{\xi}$ gives $\frac{1}{4t^3}\left( t\,\psi''(t) - \psi'(t) \right) \ge 0$, which is equivalent to $t\,\psi''(t) - \psi'(t) \ge 0$, for $t > 0$.

Following [Bai03a], we call the property described in Lemma 2.2.2 exponential convexity, or shortly e-convexity. This property will turn out to be very useful in the analysis of primal-dual algorithms based on kernel functions. In the next lemma we show that if $\psi(t)$ satisfies (2.2.3-b) and (2.2.3-c), then $\psi(t)$ also satisfies condition (2.2.3-e).

Lemma 2.2.4. If $\psi(t)$ satisfies (2.2.3-b) and (2.2.3-c), then $\psi(t)$ satisfies (2.2.3-e).

Proof. For $t > 1$ we consider

$$f(\beta) := \psi''(t)\,\psi'(\beta t) - \beta\,\psi'(t)\,\psi''(\beta t), \qquad \beta \ge 1.$$

Note that $f(1) = 0$. Moreover,

$$f'(\beta) = t\,\psi''(t)\,\psi''(\beta t) - \psi'(t)\,\psi''(\beta t) - \beta t\,\psi'(t)\,\psi'''(\beta t) = \psi''(\beta t)\left( t\,\psi''(t) - \psi'(t) \right) - \beta t\,\psi'(t)\,\psi'''(\beta t) > 0.$$

The last inequality follows since $\psi''(\beta t) > 0$ and $t\,\psi''(t) - \psi'(t) > 0$, by (2.2.3-b), and $-\beta t\,\psi'(t)\,\psi'''(\beta t) > 0$, since $t > 1$, which implies $\psi'(t) > 0$, and $\psi'''(\beta t) < 0$, by (2.2.3-c). Thus it follows that $f(\beta) > 0$ for $\beta > 1$, proving the lemma.

As a preparation for later, we present in the next section some technical results for the new class of kernel functions.

2.2.1 Properties of kernel functions

Lemma 2.2.5. One has $t\,\psi'(t) \ge \psi(t)$, if $t \ge 1$.

Proof. Defining $g(t) := t\,\psi'(t) - \psi(t)$, one has $g(1) = 0$ and $g'(t) = t\,\psi''(t) \ge 0$. Hence $g(t) \ge 0$ for $t \ge 1$ and the lemma follows.

Lemma 2.2.6. If $\psi$ is a kernel function that satisfies (2.2.3-c), then

$$\psi(t) > \tfrac{1}{2}(t-1)\,\psi'(t) \ \mbox{ and } \ \psi'(t) > (t-1)\,\psi''(t), \qquad \mbox{if } t > 1,$$
$$\psi(t) < \tfrac{1}{2}(t-1)\,\psi'(t) \ \mbox{ and } \ \psi'(t) > (t-1)\,\psi''(t), \qquad \mbox{if } t < 1.$$

Proof. Consider the function $f(t) = 2\,\psi(t) - (t-1)\,\psi'(t)$. Then $f(1) = 0$ and $f'(t) = \psi'(t) - (t-1)\,\psi''(t)$. Hence $f'(1) = 0$ and $f''(t) = -(t-1)\,\psi'''(t)$. Using that $\psi'''(t) < 0$, it follows that if $t > 1$ then $f''(t) > 0$, whence $f'(t) > 0$ and $f(t) > 0$, and if $t < 1$ then $f''(t) < 0$, so $f'(t) > 0$ and $f(t) < 0$. From this the lemma follows.

Lemma 2.2.7. If $\psi(t)$ satisfies (2.2.3-c), then

$$\tfrac{1}{2}\,\psi''(t)\,(t-1)^2 < \psi(t) < \tfrac{1}{2}\,\psi''(1)\,(t-1)^2, \qquad t > 1,$$
$$\tfrac{1}{2}\,\psi''(1)\,(t-1)^2 < \psi(t) < \tfrac{1}{2}\,\psi''(t)\,(t-1)^2, \qquad t < 1.$$

Proof. Using Taylor's theorem and $\psi(1) = \psi'(1) = 0$, we obtain

$$\psi(t) = \tfrac{1}{2}\,\psi''(1)\,(t-1)^2 + \tfrac{1}{3!}\,\psi'''(\xi)\,(t-1)^3,$$

for some $\xi$ between $1$ and $t$. Since $\psi'''(t) < 0$, the second inequality for $t > 1$ and the first inequality for $t < 1$ in the lemma follow. The remaining two inequalities are an immediate consequence of Lemma 2.2.6.

Lemma 2.2.8. Suppose that $\psi(t_1) = \psi(t_2)$, with $t_1 \le 1 \le t_2$ and $\beta \ge 1$. Then $\psi(\beta t_1) \le \psi(\beta t_2)$. Equality holds if and only if $\beta = 1$ or $t_1 = t_2 = 1$.

Proof. Consider

$$f(\beta) := \psi(\beta t_2) - \psi(\beta t_1).$$

One has $f(1) = 0$ and

$$f'(\beta) = t_2\,\psi'(\beta t_2) - t_1\,\psi'(\beta t_1).$$

Since $\psi''(t) \ge 0$ for all $t > 0$, $\psi'(t)$ is monotonically non-decreasing. Hence $\psi'(\beta t_1) \le \psi'(\beta t_2)$. Substitution gives

$$f'(\beta) = t_2\,\psi'(\beta t_2) - t_1\,\psi'(\beta t_1) \ge t_2\,\psi'(\beta t_2) - t_1\,\psi'(\beta t_2) = \psi'(\beta t_2)\,(t_2 - t_1) \ge 0.$$

The last inequality holds since $t_2 \ge t_1$, and $\psi'(t) \ge 0$ for $t \ge 1$. This proves that $f(\beta) \ge 0$ for $\beta \ge 1$, and hence the inequality in the lemma follows. If $\beta = 1$ then we obviously have equality. Otherwise, if $\beta > 1$ and $f(\beta) = 0$, then the mean value theorem implies $f'(\xi) = 0$ for some $\xi \in (1, \beta)$. But this implies $\psi'(\xi t_1) = \psi'(\xi t_2)$. Since $\psi'(t)$ is strictly monotonic, this implies $\xi t_1 = \xi t_2$, whence $t_1 = t_2$. Since also $t_1 \le 1 \le t_2$, we obtain $t_1 = t_2 = 1$.

Lemma 2.2.9. Suppose that $\psi(t_1) = \psi(t_2)$, with $t_1 \le 1 \le t_2$. Then $\psi'(t_1) \le 0$ and $\psi'(t_2) \ge 0$, whereas $\psi''(t_1) \ge \psi''(t_2)$.

Proof. The lemma is obvious if $t_1 = 1$ or if $t_2 = 1$, because then $\psi(t_1) = \psi(t_2) = 0$ implies $t_1 = t_2 = 1$. We may therefore assume that $t_1 < 1 < t_2$. Since $\psi(t_1) = \psi(t_2)$, Lemma 2.2.7 implies:

$$\tfrac{1}{2}\,(1-t_1)^2\,\psi''(1) < \psi(t_1) = \psi(t_2) < \tfrac{1}{2}\,(t_2-1)^2\,\psi''(1).$$

Hence, since $\psi''(1) > 0$, it follows that $t_2 - 1 > 1 - t_1$. Using this and Lemma 2.2.7, while assuming $\psi''(t_1) < \psi''(t_2)$, we may write

$$\psi(t_2) > \tfrac{1}{2}\,(t_2-1)^2\,\psi''(t_2) > \tfrac{1}{2}\,(1-t_1)^2\,\psi''(t_2) > \tfrac{1}{2}\,(1-t_1)^2\,\psi''(t_1) > \psi(t_1) = \psi(t_2).$$

This contradiction proves the lemma.

2.2.2 Ten kernel functions

By way of example we consider in this thesis ten kernel functions, as listed in Table 2.2. Note that some of these kernel functions depend on a parameter (e.g., $\psi_2(t)$ depends on the parameter $q > 1$), and hence when the parameter is not specified, it represents a whole class of kernel functions.

Table 2.2: Ten kernel functions.

  i    kernel function $\psi_i(t)$
  1    $\frac{t^2-1}{2} - \log t$
  2    $\frac{t^2-1}{2} + \frac{t^{1-q}-1}{q(q-1)} - \frac{q-1}{q}(t-1), \quad q > 1$
  3    $\frac{t^2-1}{2} + \frac{(e-1)^2}{e}\,\frac{1}{e^t-1} - \frac{e-1}{e}$
  4    $\frac{1}{2}\left( t - \frac{1}{t} \right)^2$
  5    $\frac{t^2-1}{2} + e^{1/t-1} - 1$
  6    $\frac{t^2-1}{2} - \int_1^t e^{1/\xi-1}\, d\xi$
  7    $\frac{t^2-1}{2} + \frac{t^{1-q}-1}{q-1}, \quad q > 1$
  8    $t - 1 + \frac{t^{1-q}-1}{q-1}, \quad q > 1$
  9    $\frac{t^{1+p}-1}{1+p} - \log t, \quad p \in [0,1]$
  10   $\frac{t^{1+p}-1}{1+p} + \frac{t^{1-q}-1}{q-1}, \quad p \in [0,1],\ q > 1$

The first proximity function, $\psi_1(t)$, gives rise to the classical primal-dual logarithmic barrier function and is a special case of $\psi_9(t)$, for $p = 1$. The second kernel function $\psi_2$ is the special case of the prototype self-regular kernel function [Pen02b],

$$\Upsilon_{p,q}(t) = \frac{t^{1+p}-1}{p(p+1)} + \frac{t^{1-q}-1}{q(q-1)} + \frac{p-q}{pq}(t-1), \qquad p, q \ge 1, \qquad (2.2.4)$$

for $p = 1$. The third kernel function has been studied in [Bai03b]. The fourth kernel function has been studied in [Pen00a]; one may easily verify that it is a special case of $\psi_7(t)$, when taking $q = 3$. The fifth and sixth kernel functions have been studied in [Bai04a]. The seventh kernel function has been studied in [Pen02; Pen02b]. Also note that $\psi_1(t)$ is the limiting value of $\psi_7(t)$ when $q$ approaches $1$. In each of the first seven cases we can write $\psi(t)$ as

$$\psi(t) = \frac{t^2 - 1}{2} + \psi_b(t), \qquad (2.2.5)$$

where $\frac{t^2-1}{2}$ is the so-called growth term and $\psi_b(t)$ the barrier term of the kernel function. The growth term dominates the behavior of $\psi(t)$ when $t$ goes to infinity, whereas the barrier term dominates its behavior when $t$ approaches zero. Note that in all cases the barrier term is monotonically decreasing in $t$. The three last kernel functions in the table differ from the first seven in that their growth terms, i.e., $t - 1$, $\frac{t^{1+p}-1}{1+p}$ and $\frac{t^{1+p}-1}{1+p}$, respectively, are not quadratic in $t$. $\psi_8$ was first introduced and analyzed in [Bai04a], $\psi_9$ was analyzed in [Gha04a] and $\psi_{10}$ has been studied for second order cone optimization in [Bai04b].

Table 2.3: First two derivatives of the ten kernel functions.

  i    $\psi_i'(t)$                                       $\psi_i''(t)$
  1    $t - \frac{1}{t}$                                  $1 + \frac{1}{t^2}$
  2    $t - \frac{t^{-q}}{q} - \frac{q-1}{q}$             $1 + t^{-q-1}$
  3    $t - \frac{(e-1)^2 e^t}{e\,(e^t-1)^2}$             $1 + \frac{(e-1)^2 e^t (e^t+1)}{e\,(e^t-1)^3}$
  4    $t - \frac{1}{t^3}$                                $1 + \frac{3}{t^4}$
  5    $t - \frac{e^{1/t-1}}{t^2}$                        $1 + \frac{(1+2t)\,e^{1/t-1}}{t^4}$
  6    $t - e^{1/t-1}$                                    $1 + \frac{e^{1/t-1}}{t^2}$
  7    $t - t^{-q}$                                       $1 + q\,t^{-q-1}$
  8    $1 - t^{-q}$                                       $q\,t^{-q-1}$
  9    $t^p - \frac{1}{t}$                                $p\,t^{p-1} + \frac{1}{t^2}$
  10   $t^p - t^{-q}$                                     $p\,t^{p-1} + q\,t^{-q-1}$
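As a sanity check on Tables 2.2 and 2.3, the following sketch (ours, not from the thesis) encodes $\psi_7$ and $\psi_8$ for $q = 3$ and verifies the tabulated second derivatives against central finite differences, as well as the normalization $\psi(1) = 0$:

```python
import numpy as np

q = 3.0

def psi7(t):   return (t**2 - 1) / 2 + (t**(1 - q) - 1) / (q - 1)
def ddpsi7(t): return 1 + q * t**(-q - 1)

def psi8(t):   return t - 1 + (t**(1 - q) - 1) / (q - 1)
def ddpsi8(t): return q * t**(-q - 1)

for psi, ddpsi in ((psi7, ddpsi7), (psi8, ddpsi8)):
    t, h = 1.7, 1e-4
    fd = (psi(t + h) - 2 * psi(t) + psi(t - h)) / h**2   # central difference
    assert abs(fd - ddpsi(t)) < 1e-5                     # matches Table 2.3
    assert abs(psi(1.0)) < 1e-12                         # psi(1) = 0
```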

[Figure 2.1: Three different kernel functions: ψ1, ψ4 and ψ9 (with p = 0).]

Figure 2.1 demonstrates the growth and barrier behavior of the three kernel functions $\psi_1$, $\psi_4$ and $\psi_9$ (with $p = 0$). From this figure we can see that the growth behaviors of $\psi_1$ and $\psi_4$ are quite similar as $t \to \infty$, and that $\psi_9$ (with $p = 0$) grows much slower. However, when $t \downarrow 0$, $\psi_1$ and $\psi_9$ (with $p = 0$) are quite similar, whereas $\psi_4$ grows much faster. Now we proceed by showing that the ten kernel functions satisfy conditions (2.2.3-a), (2.2.3-c), (2.2.3-d), and (2.2.3-e). By using the information from Table 2.3 one may easily construct the entries in Table 2.4. It is almost obvious that all ten functions satisfy the condition (2.2.3-a), and from the second column in Table 2.5 we can see that the ten functions satisfy the condition (2.2.3-c). Also, from the third column in Table 2.4 it is immediately seen that the first seven functions satisfy (2.2.3-b). Lemma 2.2.4 implies that the first seven functions then also satisfy (2.2.3-e). The last column in Table 2.5 makes clear that $\psi_8$, $\psi_9$ and $\psi_{10}$ also satisfy (2.2.3-e). It remains to deal with (2.2.3-d). For this we use Table 2.6.

Table 2.4: The conditions (2.2.3-a) and (2.2.3-b).

  i    $t\psi_i''(t) + \psi_i'(t)$                                              $t\psi_i''(t) - \psi_i'(t)$
  1    $2t$                                                                     $\frac{2}{t}$
  2    $2t + \frac{q-1}{q}\left( t^{-q} - 1 \right)$                            $\frac{(q+1)\,t^{-q} + q - 1}{q}$
  3    $2t + \frac{(e-1)^2 e^t \left( (t-1)e^t + 1 + t \right)}{e\,(e^t-1)^3}$  $\frac{(e-1)^2 e^t \left( (t+1)e^t + t - 1 \right)}{e\,(e^t-1)^3}$
  4    $2t + \frac{2}{t^3}$                                                     $\frac{4}{t^3}$
  5    $2t + \frac{(1+t)\,e^{1/t-1}}{t^3}$                                      $\frac{(1+3t)\,e^{1/t-1}}{t^3}$
  6    $2t + \frac{(1-t)\,e^{1/t-1}}{t}$                                        $\frac{(1+t)\,e^{1/t-1}}{t}$
  7    $2t + (q-1)\,t^{-q}$                                                     $(q+1)\,t^{-q}$
  8    $1 + (q-1)\,t^{-q}$                                                      $(q+1)\,t^{-q} - 1$
  9    $(1+p)\,t^p$                                                             $(p-1)\,t^p + \frac{2}{t}$
  10   $(p+1)\,t^p + (q-1)\,t^{-q}$                                             $(p-1)\,t^p + (q+1)\,t^{-q}$

Table 2.6 immediately shows that $\psi_1$, $\psi_4$, $\psi_8$, $\psi_9$, and $\psi_{10}$ satisfy (2.2.3-d). The five remaining kernel functions also satisfy (2.2.3-d), as can be shown by simple, but rather technical, arguments. It may be noted that in [Pen02b] the kernel function $\psi(t)$ is defined to be self-regular if $\psi(t)$ is e-convex and, moreover, $\psi''(t) = \Theta\left( \Upsilon''_{p,q}(t) \right)$, where $\Upsilon_{p,q}(t)$ was defined in (2.2.4). Since

$$\Upsilon''_{p,q}(t) = t^{p-1} + t^{-q-1}, \qquad \Upsilon'''_{p,q}(t) = (p-1)\,t^{p-2} - (q+1)\,t^{-q-2},$$

the prototype self-regular kernel function satisfies (2.2.3-c) only if $p \le 1$. Note that the kernel functions $\psi_2$, $\psi_4$ and $\psi_7$ are self-regular. It was observed in [Sal04a] that $\psi_5$ in Table 2.2 is the limit of the following sequence of functions:

$$\psi^{(k)}(t) = \frac{t^2-1}{2} + \left( 1 + \frac{1-t}{kt} \right)^k - 1, \qquad k = 1, 2, \ldots
$$

Table 2.5: The conditions (2.2.3-c) and (2.2.3-e).

  i    $\psi_i'''(t)$                                                     $\psi_i''(t)\psi_i'(\beta t) - \beta\psi_i'(t)\psi_i''(\beta t)$
  1    $-\frac{2}{t^3}$                                                   $\frac{2(\beta^2-1)}{\beta t}$
  2    $-(q+1)\,t^{-q-2}$
  3    $-\frac{(e-1)^2 e^t \left( e^{2t} + 4e^t + 1 \right)}{e\,(e^t-1)^4}$
  4    $-\frac{12}{t^5}$                                                  $\frac{4(\beta^4-1)}{\beta^3 t^3}$
  5    $-\frac{(1+6t+6t^2)\,e^{1/t-1}}{t^6}$
  6    $-\frac{(1+2t)\,e^{1/t-1}}{t^4}$
  7    $-q(q+1)\,t^{-q-2}$                                                $\frac{(q+1)(\beta^{q+1}-1)}{\beta^q t^q}$
  8    $-q(q+1)\,t^{-q-2}$                                                $\frac{q(\beta^q-1)}{\beta^q t^{q+1}}$
  9    $-p(1-p)\,t^{p-2} - \frac{2}{t^3}$                                 $\frac{(1+p)\left( \beta^{p+1}-1 \right)t^p}{\beta t^2}$
  10   $-p(1-p)\,t^{p-2} - q(q+1)\,t^{-q-2}$                              $\frac{(p+q)\left( \beta^p - \beta^{-q} \right)}{t^{q+1-p}}$

By using Lemma 2.1.2 from [Pen02b], one can show that $\psi^{(k)}(t)$ is a S-R function for every $k$. Furthermore, for any fixed $t > 0$, one has

$$\lim_{k \to \infty} \psi^{(k)}(t) = \frac{t^2-1}{2} + e^{1/t-1} - 1 = \psi_5(t).$$

This result implies that $\psi_5$ is the limit point of a sequence of S-R functions. Since $\psi_5$ itself is not S-R, it follows that the set of S-R functions is not closed. Note also that in our table only the first four kernel functions are S-R, and the two kernel functions $\psi_8$ and $\psi_9$ lie outside the closure of the set of S-R functions if $p < 1$.

2.3 Algorithm

In principle any kernel function gives rise to a primal-dual algorithm. The generic form of this algorithm is shown in Figure 1.1. The parameters $\tau$, $\theta$, and the step size $\alpha$ should be chosen in such a way that the algorithm is optimized, in the sense that the number of iterations required by the algorithm is as small as possible. Obviously, the resulting iteration bound will depend on the kernel function underlying the algorithm, and our main task becomes to find a kernel

function that minimizes the iteration bound.

Table 2.6: The condition (2.2.3-d).

  i    $2\psi_i''(t)^2 - \psi_i'(t)\psi_i'''(t)$
  1    $2 + \frac{6}{t^2}$
  2    $2\left( 1 + \frac{1}{t^{q+1}} \right)^2 + \frac{q+1}{t^{q+2}}\left( t - \frac{t^{-q}}{q} - \frac{q-1}{q} \right)$
  3    $2\left( 1 + \frac{(e-1)^2 e^t (e^t+1)}{e(e^t-1)^3} \right)^2 + \left( t - \frac{(e-1)^2 e^t}{e(e^t-1)^2} \right)\frac{(e-1)^2 e^t \left( e^{2t}+4e^t+1 \right)}{e(e^t-1)^4}$
  4    $2 + \frac{24}{t^4} + \frac{6}{t^8}$
  5    $2\left( 1 + \frac{(1+2t)\,e^{1/t-1}}{t^4} \right)^2 + \left( t - \frac{e^{1/t-1}}{t^2} \right)\frac{(1+6t+6t^2)\,e^{1/t-1}}{t^6}$
  6    $2\left( 1 + \frac{e^{1/t-1}}{t^2} \right)^2 + \left( t - e^{1/t-1} \right)\frac{(1+2t)\,e^{1/t-1}}{t^4}$
  7    $2\left( 1 + \frac{q}{t^{q+1}} \right)^2 + \frac{q(q+1)}{t^{q+2}}\left( t - \frac{1}{t^q} \right)$
  8    $\frac{q\left( q + 1 + (q-1)\,t^{-q} \right)}{t^{q+2}}$
  9    $\frac{(p^2+3p+2)\,t^{1+p} + p(p+1)\,t^{2+2p}}{t^4}$
  10   $p(p+1)\,t^{2p-2} + \left( p^2 + 4pq + q^2 - p + q \right)t^{p-q-2} + q(q-1)\,t^{-2q-2}$

2.3.1 Upper bound for Ψ(v) after each outer iteration

Note that at the start of each outer iteration of the algorithm, just before the update of $\mu$, we have $\Psi(v) \le \tau$. By updating $\mu$, the vector $v$ is divided by the factor $\sqrt{1-\theta}$, which generally leads to an increase in the value of $\Psi(v)$. Then, during the subsequent inner iterations, $\Psi(v)$ decreases until it passes the threshold $\tau$ again. Hence, during the course of the algorithm the largest values of $\Psi(v)$ occur just after the updates of $\mu$. That is why in this section we derive an estimate for the effect of a $\mu$-update on the value of $\Psi(v)$. In other words, with $\beta = \frac{1}{\sqrt{1-\theta}}$, we want to find an upper bound for $\Psi(\beta v)$ in terms of $\Psi(v)$. It will become clear that in the analysis of the algorithm some inverse functions related to the underlying kernel function and its first derivative play a crucial role. We introduce these inverse functions here.

We denote by $\varrho : [0, \infty) \to [1, \infty)$ and $\rho : [0, \infty) \to (0, 1]$ the inverse functions of $\psi(t)$ for $t \ge 1$, and of $-\frac{1}{2}\psi'(t)$ for $t \le 1$, respectively. In other words,

$$s = \psi(t) \iff t = \varrho(s), \qquad t \ge 1, \qquad (2.3.1)$$
$$s = -\tfrac{1}{2}\psi'(t) \iff t = \rho(s), \qquad t \le 1. \qquad (2.3.2)$$

We have the following result.

Theorem 2.3.1. For any positive vector $v$ and any $\beta > 1$, we have

$$\Psi(\beta v) \le n\,\psi\left( \beta\,\varrho\left( \frac{\Psi(v)}{n} \right) \right).$$

Proof. We consider the following maximization problem:

$$\max_v \left\{ \Psi(\beta v) : \Psi(v) = z \right\},$$

where $z$ is any nonnegative number. The first order optimality conditions for this problem are

$$\beta\,\psi'(\beta v_i) = \lambda\,\psi'(v_i), \qquad i = 1, \ldots, n, \qquad (2.3.3)$$

where $\lambda$ denotes the Lagrange multiplier. Since $\psi'(1) = 0$ and $\beta\psi'(\beta) > 0$, we must have $v_i \neq 1$ for all $i$. We even may assume that $v_i > 1$ for all $i$. To see this, let $z_i$ be such that $\psi(v_i) = z_i$. Given $z_i$, this equation has two solutions: $v_i = v_i^{(1)} < 1$ and $v_i = v_i^{(2)} > 1$. As a consequence of Lemma 2.2.8 we have $\psi(\beta v_i^{(1)}) \le \psi(\beta v_i^{(2)})$. Since we are maximizing $\Psi(\beta v)$, it follows that we may assume $v_i = v_i^{(2)} > 1$. This means that without loss of generality we may assume that $v_i > 1$ for all $i$. Note that then (2.3.3) implies $\beta\psi'(\beta v_i) > 0$ and $\psi'(v_i) > 0$, whence also $\lambda > 0$. Now defining

$$g(t) = \frac{\psi'(t)}{\psi'(\beta t)}, \qquad t > 1,$$

we deduce from (2.3.3) that $g(v_i) = \frac{\beta}{\lambda}$ for all $i$. We proceed by showing that this implies that all $v_i$'s are equal, by proving that $g(t)$ is strictly monotonic. One has

$$g'(t) = \frac{\psi''(t)\,\psi'(\beta t) - \beta\,\psi'(t)\,\psi''(\beta t)}{\psi'(\beta t)^2}.$$

Using that $\psi(t)$ satisfies condition (2.2.3-e), we see that $g'(t) > 0$ for $t > 1$, since $\beta > 1$. Thus we have shown that $g(t)$ is strictly increasing. It thus follows that all $v_i$'s are equal. Putting $v_i = t > 1$, for all $i$, we deduce from $\Psi(v) = z$ that

$n\psi(t) = z$. This implies that $t = \varrho\left( \frac{z}{n} \right)$. Hence the maximal value that $\Psi(\beta v)$ can attain is given by

$$\Psi(\beta t\, e) = n\,\psi(\beta t) = n\,\psi\left( \beta\,\varrho\left( \frac{z}{n} \right) \right) = n\,\psi\left( \beta\,\varrho\left( \frac{\Psi(v)}{n} \right) \right).$$

This proves the theorem.

Remark 2.3.2. Note that the bound of Theorem 2.3.1 is sharp: one may easily verify that if $v = \beta e$, with $\beta \ge 1$, then the bound holds with equality.

As a result of Theorem 2.3.1 we have that if $\Psi(v) \le \tau$ and $\beta = \frac{1}{\sqrt{1-\theta}}$, then

$$L_\psi(n, \theta, \tau) := n\,\psi\left( \frac{\varrho\left( \frac{\tau}{n} \right)}{\sqrt{1-\theta}} \right) \qquad (2.3.4)$$

is an upper bound for $\Psi\left( \frac{v}{\sqrt{1-\theta}} \right)$, the value of $\Psi(v)$ after the $\mu$-update.

Corollary 2.3.3. For any positive vector $v$ and any $\beta > 1$, we have

$$L_\psi(n, \theta, \tau) \le \frac{n}{2}\,\psi''(1)\left( \frac{\varrho\left( \frac{\tau}{n} \right)}{\sqrt{1-\theta}} - 1 \right)^2.$$

Proof. Since $\frac{1}{\sqrt{1-\theta}} > 1$ and $\varrho\left( \frac{\tau}{n} \right) \ge 1$, the corollary follows from Theorem 2.3.1 by using Lemma 2.2.7.
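The bound (2.3.4) is easy to evaluate numerically once $\varrho$ is computed by inverting $\psi$ on $[1, \infty)$. The sketch below is ours (it assumes SciPy's `brentq` root finder) and does this for the log-barrier kernel $\psi_1$:

```python
import numpy as np
from scipy.optimize import brentq

def psi(t):
    return (t**2 - 1) / 2 - np.log(t)        # psi_1

def varrho(s):
    """Inverse of psi on t >= 1, cf. (2.3.1); valid for s > 0."""
    return brentq(lambda t: psi(t) - s, 1.0, 1e8)

def L_psi(n, theta, tau):
    return n * psi(varrho(tau / n) / np.sqrt(1 - theta))   # bound (2.3.4)

# Large-update setting: tau = O(n), theta = Theta(1)
print(L_psi(n=100, theta=0.5, tau=100.0))
```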

2.3.2 Decrease of the barrier function during an inner iteration

In this section we compute a default step size $\alpha$ and the resulting decrease of the barrier function. After a damped step we have

$$x_+ = x + \alpha\Delta x, \qquad y_+ = y + \alpha\Delta y, \qquad s_+ = s + \alpha\Delta s.$$

Hence, recalling from (1.3.5) and (1.3.16) that

$$v := \sqrt{\frac{xs}{\mu}}, \qquad d_x := \frac{v\Delta x}{x}, \qquad d_s := \frac{v\Delta s}{s},$$

we have

$$x_+ = x\left( e + \alpha\,\frac{\Delta x}{x} \right) = x\left( e + \alpha\,\frac{d_x}{v} \right) = \frac{x}{v}\left( v + \alpha d_x \right),$$

and

$$s_+ = s\left( e + \alpha\,\frac{\Delta s}{s} \right) = s\left( e + \alpha\,\frac{d_s}{v} \right) = \frac{s}{v}\left( v + \alpha d_s \right).$$

Thus we obtain, using $xs = \mu v^2$,

$$v_+ = \sqrt{\frac{x_+ s_+}{\mu}} = \sqrt{(v + \alpha d_x)(v + \alpha d_s)}. \qquad (2.3.5)$$

Hence,

$$f(\alpha) := \Psi(v_+) - \Psi(v) = \Psi\left( \sqrt{(v + \alpha d_x)(v + \alpha d_s)} \right) - \Psi(v).$$

It is clear that $f(\alpha)$ is not necessarily convex in $\alpha$. (Example: let $\psi(t) = t + \frac{1}{t} - 2$, i.e., $\psi_8$ with $q = 2$, and $n = 1$. For $v = 1$, $d_x = 1$, $d_s = -1$, it is easy to verify that $\psi\left( \sqrt{(1+\alpha)(1-\alpha)} \right)$ is not convex in $\alpha$.) To simplify the analysis we use a convex upper bound for $f(\alpha)$. Such a bound is obtained by using that $\psi(t)$ is e-convex. This gives

$$\Psi(v_+) = \sum_{i=1}^n \psi\left( \sqrt{(v_i + \alpha d_{xi})(v_i + \alpha d_{si})} \right) \le \frac{1}{2}\left( \sum_{i=1}^n \psi(v_i + \alpha d_{xi}) + \sum_{i=1}^n \psi(v_i + \alpha d_{si}) \right) = \frac{1}{2}\left( \Psi(v + \alpha d_x) + \Psi(v + \alpha d_s) \right).$$

Therefore $f(\alpha) \le f_1(\alpha)$, where

$$f_1(\alpha) := \frac{1}{2}\left( \Psi(v + \alpha d_x) + \Psi(v + \alpha d_s) \right) - \Psi(v),$$

which is convex in $\alpha$, because $\Psi(v)$ is convex. Obviously, $f(0) = f_1(0) = 0$. Taking the derivative with respect to $\alpha$, we get

$$f_1'(\alpha) = \frac{1}{2}\sum_{i=1}^n \left( \psi'(v_i + \alpha d_{xi})\,d_{xi} + \psi'(v_i + \alpha d_{si})\,d_{si} \right).$$

This gives, using (1.3.11) and (2.2.2),

$$f_1'(0) = \frac{1}{2}\,\nabla\Psi(v)^T (d_x + d_s) = -\frac{1}{2}\,\nabla\Psi(v)^T \nabla\Psi(v) = -2\,\delta(v)^2. \qquad (2.3.6)$$

Differentiating once more, we obtain

$$f_1''(\alpha) = \frac{1}{2}\sum_{i=1}^n \left( \psi''(v_i + \alpha d_{xi})\,d_{xi}^2 + \psi''(v_i + \alpha d_{si})\,d_{si}^2 \right). \qquad (2.3.7)$$
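The upper bound $f(\alpha) \le f_1(\alpha)$ follows purely from e-convexity, and so it holds for any split $d_x + d_s = -\nabla\Psi(v)$. A quick numerical check (ours, for $\psi_1$ and an arbitrary non-orthogonal split) illustrates this:

```python
import numpy as np

def psi(t):  return (t**2 - 1) / 2 - np.log(t)    # psi_1
def Psi(v):  return np.sum(psi(v))

rng = np.random.default_rng(0)
v = rng.uniform(0.5, 2.0, 5)
g = v - 1 / v                       # gradient of Psi for psi_1
dx = -g / 2 + 0.1                   # any split with dx + ds = -grad
ds = -g / 2 - 0.1
for alpha in (0.01, 0.05, 0.1):     # small enough to keep v + alpha*d > 0
    f  = Psi(np.sqrt((v + alpha * dx) * (v + alpha * ds))) - Psi(v)
    f1 = 0.5 * (Psi(v + alpha * dx) + Psi(v + alpha * ds)) - Psi(v)
    assert f <= f1 + 1e-12          # e-convexity bound
```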

Below we use the following notation: $v_1 := \min(v)$, $\delta := \delta(v)$.

Lemma 2.3.4. One has

$$f_1''(\alpha) \le 2\,\delta^2\,\psi''(v_1 - 2\alpha\delta).$$

Proof. Since $d_x$ and $d_s$ are orthogonal, (2.2.2) implies that $\|(d_x; d_s)\| = 2\delta$. Therefore, $\|d_x\| \le 2\delta$ and $\|d_s\| \le 2\delta$, and

$$v_i + \alpha d_{xi} \ge v_1 - 2\alpha\delta, \qquad v_i + \alpha d_{si} \ge v_1 - 2\alpha\delta, \qquad 1 \le i \le n. \qquad (2.3.8)$$

Due to (2.2.3-c), $\psi''(t)$ is monotonically decreasing, so from (2.3.7) we obtain

$$f_1''(\alpha) \le \frac{1}{2}\,\psi''(v_1 - 2\alpha\delta)\sum_{i=1}^n \left( d_{xi}^2 + d_{si}^2 \right) = 2\,\delta^2\,\psi''(v_1 - 2\alpha\delta).$$

This proves the lemma.

Lemma 2.3.5. $f_1'(\alpha) \le 0$ holds if $\alpha$ satisfies the inequality

$$-\psi'(v_1 - 2\alpha\delta) + \psi'(v_1) \le 2\delta. \qquad (2.3.9)$$

Proof. We may write, using Lemma 2.3.4 and also (2.3.6),

$$f_1'(\alpha) = f_1'(0) + \int_0^\alpha f_1''(\xi)\,d\xi \le -2\delta^2 + 2\delta^2 \int_0^\alpha \psi''(v_1 - 2\xi\delta)\,d\xi = -2\delta^2 - \delta \int_0^\alpha \psi''(v_1 - 2\xi\delta)\,d(v_1 - 2\xi\delta) = -2\delta^2 - \delta\left( \psi'(v_1 - 2\alpha\delta) - \psi'(v_1) \right).$$

Hence, $f_1'(\alpha) \le 0$ will certainly hold if $\alpha$ satisfies $-\psi'(v_1 - 2\alpha\delta) + \psi'(v_1) \le 2\delta$, which proves the lemma.

The next lemma uses the inverse function $\rho : [0, \infty) \to (0, 1]$ of $-\frac{1}{2}\psi'(t)$ for $t \in (0, 1]$, as introduced in (2.3.2).

Lemma 2.3.6. The largest step size $\alpha$ that satisfies (2.3.9) is given by

$$\bar{\alpha} := \frac{\rho(\delta) - \rho(2\delta)}{2\delta}. \qquad (2.3.10)$$

Proof. We want $\alpha$ such that (2.3.9) holds, with $\alpha$ as large as possible. Since $\psi''(t)$ is decreasing, the derivative with respect to $v_1$ of the expression at the left in (2.3.9) (i.e., $-\psi''(v_1 - 2\alpha\delta) + \psi''(v_1)$) is negative. Hence, fixing $\delta$, the smaller $v_1$ is, the smaller $\alpha$ will be. One has

$$2\delta = \|\nabla\Psi(v)\| \ge |\psi'(v_1)| \ge -\psi'(v_1).$$

Equality holds if and only if $v_1$ is the only coordinate in $v$ that differs from $1$, and $v_1 \le 1$ (in which case $\psi'(v_1) \le 0$). Hence, the worst situation for the step size occurs when $v_1$ satisfies

$$-\tfrac{1}{2}\psi'(v_1) = \delta. \qquad (2.3.11)$$

The derivative with respect to $\alpha$ of the expression at the left in (2.3.9) equals $2\delta\,\psi''(v_1 - 2\alpha\delta) \ge 0$, and hence the left-hand side is increasing in $\alpha$. So the largest possible value of $\alpha$ satisfying (2.3.9) satisfies

$$-\tfrac{1}{2}\psi'(v_1 - 2\alpha\delta) = 2\delta. \qquad (2.3.12)$$

Due to the definition of $\rho$, (2.3.11) and (2.3.12) can be written as

$$v_1 = \rho(\delta), \qquad v_1 - 2\alpha\delta = \rho(2\delta).$$

This implies

$$\alpha = \frac{1}{2\delta}\left( v_1 - \rho(2\delta) \right) = \frac{\rho(\delta) - \rho(2\delta)}{2\delta},$$

proving the lemma.

Lemma 2.3.7. Let $\bar{\alpha}$ be as defined in Lemma 2.3.6. Then

$$\bar{\alpha} \ge \frac{1}{\psi''(\rho(2\delta))}. \qquad (2.3.13)$$

Proof. By the definition of $\rho$,

$$-\tfrac{1}{2}\psi'(\rho(\delta)) = \delta.$$

Taking the derivative with respect to $\delta$, we find

$$-\tfrac{1}{2}\psi''(\rho(\delta))\,\rho'(\delta) = 1,$$

which implies that

$$\rho'(\delta) = -\frac{2}{\psi''(\rho(\delta))} < 0. \qquad (2.3.14)$$

Hence $\rho$ is monotonically decreasing in $\delta$. An immediate consequence of (2.3.10) and (2.3.14) is

$$\bar{\alpha} = -\frac{1}{2\delta}\int_\delta^{2\delta} \rho'(\sigma)\,d\sigma = \frac{1}{\delta}\int_\delta^{2\delta} \frac{d\sigma}{\psi''(\rho(\sigma))}. \qquad (2.3.15)$$

To obtain a lower bound for $\bar{\alpha}$, we want to replace the argument of the last integral by its minimal value. So we want to know when $\psi''(\rho(\sigma))$ is maximal, for $\sigma \in [\delta, 2\delta]$. Due to (2.2.3-c), $\psi''$ is monotonically decreasing. So $\psi''(\rho(\sigma))$ is maximal when $\rho(\sigma)$ is minimal for $\sigma \in [\delta, 2\delta]$. Since $\rho$ is monotonically decreasing, this occurs when $\sigma = 2\delta$. Therefore

$$\bar{\alpha} = \frac{1}{\delta}\int_\delta^{2\delta} \frac{d\sigma}{\psi''(\rho(\sigma))} \ge \frac{1}{\delta}\cdot\frac{\delta}{\psi''(\rho(2\delta))} = \frac{1}{\psi''(\rho(2\delta))},$$

which proves the lemma.

In the sequel we use the notation

$$\tilde{\alpha} = \frac{1}{\psi''(\rho(2\delta))}, \qquad (2.3.16)$$

and we will use $\tilde{\alpha}$ as the default step size. By Lemma 2.3.7 we have $\bar{\alpha} \ge \tilde{\alpha}$.

Lemma 2.3.8. If the step size $\alpha$ is such that $\alpha \le \bar{\alpha}$, then

$$f(\alpha) \le -\alpha\,\delta^2. \qquad (2.3.17)$$

Proof. Let $h(\alpha)$ be defined by

$$h(\alpha) := -2\alpha\delta^2 + \alpha\delta\,\psi'(v_1) - \tfrac{1}{2}\psi(v_1) + \tfrac{1}{2}\psi(v_1 - 2\alpha\delta).$$

Then

$$h(0) = f_1(0) = 0, \qquad h'(0) = f_1'(0) = -2\delta^2, \qquad h''(\alpha) = 2\delta^2\,\psi''(v_1 - 2\alpha\delta).$$

Due to Lemma 2.3.4, $f_1''(\alpha) \le h''(\alpha)$. As a consequence, $f_1'(\alpha) \le h'(\alpha)$ and $f_1(\alpha) \le h(\alpha)$. Taking $\alpha \le \bar{\alpha}$, with $\bar{\alpha}$ as defined in Lemma 2.3.6, we have

$$h'(\alpha) = -2\delta^2 + 2\delta^2 \int_0^\alpha \psi''(v_1 - 2\xi\delta)\,d\xi = -2\delta^2 - \delta\left( \psi'(v_1 - 2\alpha\delta) - \psi'(v_1) \right) \le 0.$$

Since $h''(\alpha)$ is increasing in $\alpha$, using Lemma A.1.3 we may write

$$f_1(\alpha) \le h(\alpha) \le \tfrac{1}{2}\,\alpha\,h'(0) = -\alpha\,\delta^2.$$

Since $f(\alpha) \le f_1(\alpha)$, the proof is complete.

By combining the results of Lemmas 2.3.7 and 2.3.8 we obtain:

Theorem 2.3.9. With $\tilde{\alpha}$ being the default step size, as given by (2.3.16), one has

$$f(\tilde{\alpha}) \le -\frac{\delta^2}{\psi''(\rho(2\delta))}. \qquad (2.3.18)$$

Lemma 2.3.10. The right-hand side expression in (2.3.18) is monotonically decreasing in $\delta$.

Proof. Putting $t = \rho(2\delta)$, which implies $t \le 1$ and which is equivalent to $4\delta = -2\psi'(t)$, $t$ is monotonically decreasing if $\delta$ increases. Hence, the right-hand expression in (2.3.18) is monotonically decreasing in $\delta$ if and only if the function

$$g(t) := \frac{\psi'(t)^2}{16\,\psi''(t)}$$

is monotonically decreasing for $t \le 1$. Note that $g(1) = 0$ and

$$g'(t) = \frac{\psi'(t)\left( 2\,\psi''(t)^2 - \psi'(t)\,\psi'''(t) \right)}{16\,\psi''(t)^2}.$$

Hence, since $\psi'(t) \le 0$ for $t \le 1$, $g(t)$ is monotonically decreasing for $t \le 1$ if and only if

$$2\,\psi''(t)^2 - \psi'(t)\,\psi'''(t) \ge 0, \qquad t \le 1.$$

The last inequality is satisfied, due to condition (2.2.3-d). Hence the lemma is proved.

Theorem 2.3.9 expresses the decrease of the barrier function value during a damped step, with step size $\tilde{\alpha}$, as a function of $\delta$, and this function is monotonically decreasing in $\delta$. In the sequel we need to express the decrease as a function of $\Psi(v)$. To this end we need a lower bound on $\delta(v)$ in terms of $\Psi(v)$. Such a bound is provided in the following section.
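Numerically, $\bar{\alpha}$ from (2.3.10) and the default step size $\tilde{\alpha}$ from (2.3.16) are straightforward to compute once $\rho$ is obtained by inverting $-\frac{1}{2}\psi'$ on $(0, 1]$. The sketch below is ours (it assumes SciPy's `brentq` root finder) and does this for $\psi_1$, checking the inequality of Lemma 2.3.7 along the way:

```python
import numpy as np
from scipy.optimize import brentq

def dpsi(t):  return t - 1 / t          # psi_1'
def ddpsi(t): return 1 + 1 / t**2       # psi_1''

def rho(s):
    """Solve -psi'(t)/2 = s on (0, 1], cf. (2.3.2); valid for s > 0."""
    return brentq(lambda t: -0.5 * dpsi(t) - s, 1e-12, 1.0)

delta = 2.0
alpha_bar = (rho(delta) - rho(2 * delta)) / (2 * delta)   # (2.3.10)
alpha_def = 1.0 / ddpsi(rho(2 * delta))                   # (2.3.16)
assert alpha_bar >= alpha_def                             # Lemma 2.3.7
print(alpha_bar, alpha_def)
```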

A full-newton step infeasible interior-point algorithm for linear programming based on a kernel function

A full-newton step infeasible interior-point algorithm for linear programming based on a kernel function A full-newton step infeasible interior-point algorithm for linear programming based on a kernel function Zhongyi Liu, Wenyu Sun Abstract This paper proposes an infeasible interior-point algorithm with

More information

A path following interior-point algorithm for semidefinite optimization problem based on new kernel function. djeffal

A path following interior-point algorithm for semidefinite optimization problem based on new kernel function.   djeffal Journal of Mathematical Modeling Vol. 4, No., 206, pp. 35-58 JMM A path following interior-point algorithm for semidefinite optimization problem based on new kernel function El Amir Djeffal a and Lakhdar

More information

Interior-point algorithm for linear optimization based on a new trigonometric kernel function

Interior-point algorithm for linear optimization based on a new trigonometric kernel function Accepted Manuscript Interior-point algorithm for linear optimization based on a new trigonometric kernel function Xin Li, Mingwang Zhang PII: S0-0- DOI: http://dx.doi.org/./j.orl.0.0.0 Reference: OPERES

More information

A Second Full-Newton Step O(n) Infeasible Interior-Point Algorithm for Linear Optimization

A Second Full-Newton Step O(n) Infeasible Interior-Point Algorithm for Linear Optimization A Second Full-Newton Step On Infeasible Interior-Point Algorithm for Linear Optimization H. Mansouri C. Roos August 1, 005 July 1, 005 Department of Electrical Engineering, Mathematics and Computer Science,

More information

A new primal-dual path-following method for convex quadratic programming

A new primal-dual path-following method for convex quadratic programming Volume 5, N., pp. 97 0, 006 Copyright 006 SBMAC ISSN 00-805 www.scielo.br/cam A new primal-dual path-following method for convex quadratic programming MOHAMED ACHACHE Département de Mathématiques, Faculté

More information

An Infeasible Interior-Point Algorithm with full-newton Step for Linear Optimization

An Infeasible Interior-Point Algorithm with full-newton Step for Linear Optimization An Infeasible Interior-Point Algorithm with full-newton Step for Linear Optimization H. Mansouri M. Zangiabadi Y. Bai C. Roos Department of Mathematical Science, Shahrekord University, P.O. Box 115, Shahrekord,

More information

Full Newton step polynomial time methods for LO based on locally self concordant barrier functions

Full Newton step polynomial time methods for LO based on locally self concordant barrier functions Full Newton step polynomial time methods for LO based on locally self concordant barrier functions (work in progress) Kees Roos and Hossein Mansouri e-mail: [C.Roos,H.Mansouri]@ewi.tudelft.nl URL: http://www.isa.ewi.tudelft.nl/

More information

A Full Newton Step Infeasible Interior Point Algorithm for Linear Optimization

A Full Newton Step Infeasible Interior Point Algorithm for Linear Optimization A Full Newton Step Infeasible Interior Point Algorithm for Linear Optimization Kees Roos e-mail: C.Roos@tudelft.nl URL: http://www.isa.ewi.tudelft.nl/ roos 37th Annual Iranian Mathematics Conference Tabriz,

More information

Research Note. A New Infeasible Interior-Point Algorithm with Full Nesterov-Todd Step for Semi-Definite Optimization

Research Note. A New Infeasible Interior-Point Algorithm with Full Nesterov-Todd Step for Semi-Definite Optimization Iranian Journal of Operations Research Vol. 4, No. 1, 2013, pp. 88-107 Research Note A New Infeasible Interior-Point Algorithm with Full Nesterov-Todd Step for Semi-Definite Optimization B. Kheirfam We

More information

A NEW PROXIMITY FUNCTION GENERATING THE BEST KNOWN ITERATION BOUNDS FOR BOTH LARGE-UPDATE AND SMALL-UPDATE INTERIOR-POINT METHODS

A NEW PROXIMITY FUNCTION GENERATING THE BEST KNOWN ITERATION BOUNDS FOR BOTH LARGE-UPDATE AND SMALL-UPDATE INTERIOR-POINT METHODS ANZIAM J. 49(007), 59 70 A NEW PROXIMITY FUNCTION GENERATING THE BEST KNOWN ITERATION BOUNDS FOR BOTH LARGE-UPDATE AND SMALL-UPDATE INTERIOR-POINT METHODS KEYVAN AMINI and ARASH HASELI (Received 6 December,

More information

A PREDICTOR-CORRECTOR PATH-FOLLOWING ALGORITHM FOR SYMMETRIC OPTIMIZATION BASED ON DARVAY'S TECHNIQUE

A PREDICTOR-CORRECTOR PATH-FOLLOWING ALGORITHM FOR SYMMETRIC OPTIMIZATION BASED ON DARVAY'S TECHNIQUE Yugoslav Journal of Operations Research 24 (2014) Number 1, 35-51 DOI: 10.2298/YJOR120904016K A PREDICTOR-CORRECTOR PATH-FOLLOWING ALGORITHM FOR SYMMETRIC OPTIMIZATION BASED ON DARVAY'S TECHNIQUE BEHROUZ

More information

A FULL-NEWTON STEP INFEASIBLE-INTERIOR-POINT ALGORITHM COMPLEMENTARITY PROBLEMS

A FULL-NEWTON STEP INFEASIBLE-INTERIOR-POINT ALGORITHM COMPLEMENTARITY PROBLEMS Yugoslav Journal of Operations Research 25 (205), Number, 57 72 DOI: 0.2298/YJOR3055034A A FULL-NEWTON STEP INFEASIBLE-INTERIOR-POINT ALGORITHM FOR P (κ)-horizontal LINEAR COMPLEMENTARITY PROBLEMS Soodabeh

More information

A New Class of Polynomial Primal-Dual Methods for Linear and Semidefinite Optimization

A New Class of Polynomial Primal-Dual Methods for Linear and Semidefinite Optimization A New Class of Polynomial Primal-Dual Methods for Linear and Semidefinite Optimization Jiming Peng Cornelis Roos Tamás Terlaky August 8, 000 Faculty of Information Technology and Systems, Delft University

More information

A new Primal-Dual Interior-Point Algorithm for Second-Order Cone Optimization

A new Primal-Dual Interior-Point Algorithm for Second-Order Cone Optimization A new Primal-Dual Interior-Point Algorithm for Second-Order Cone Optimization Y Q Bai G Q Wang C Roos November 4, 004 Department of Mathematics, College Science, Shanghai University, Shanghai, 00436 Faculty

More information

A PRIMAL-DUAL INTERIOR POINT ALGORITHM FOR CONVEX QUADRATIC PROGRAMS. 1. Introduction Consider the quadratic program (PQ) in standard format:

A PRIMAL-DUAL INTERIOR POINT ALGORITHM FOR CONVEX QUADRATIC PROGRAMS. 1. Introduction Consider the quadratic program (PQ) in standard format: STUDIA UNIV. BABEŞ BOLYAI, INFORMATICA, Volume LVII, Number 1, 01 A PRIMAL-DUAL INTERIOR POINT ALGORITHM FOR CONVEX QUADRATIC PROGRAMS MOHAMED ACHACHE AND MOUFIDA GOUTALI Abstract. In this paper, we propose

More information

Improved Full-Newton Step O(nL) Infeasible Interior-Point Method for Linear Optimization

Improved Full-Newton Step O(nL) Infeasible Interior-Point Method for Linear Optimization J Optim Theory Appl 2010) 145: 271 288 DOI 10.1007/s10957-009-9634-0 Improved Full-Newton Step OnL) Infeasible Interior-Point Method for Linear Optimization G. Gu H. Mansouri M. Zangiabadi Y.Q. Bai C.

More information

Local Self-concordance of Barrier Functions Based on Kernel-functions

Local Self-concordance of Barrier Functions Based on Kernel-functions Iranian Journal of Operations Research Vol. 3, No. 2, 2012, pp. 1-23 Local Self-concordance of Barrier Functions Based on Kernel-functions Y.Q. Bai 1, G. Lesaja 2, H. Mansouri 3, C. Roos *,4, M. Zangiabadi

More information

PRIMAL-DUAL ALGORITHMS FOR SEMIDEFINIT OPTIMIZATION PROBLEMS BASED ON GENERALIZED TRIGONOMETRIC BARRIER FUNCTION

PRIMAL-DUAL ALGORITHMS FOR SEMIDEFINIT OPTIMIZATION PROBLEMS BASED ON GENERALIZED TRIGONOMETRIC BARRIER FUNCTION International Journal of Pure and Applied Mathematics Volume 4 No. 4 07, 797-88 ISSN: 3-8080 printed version); ISSN: 34-3395 on-line version) url: http://www.ijpam.eu doi: 0.73/ijpam.v4i4.0 PAijpam.eu

More information

A full-newton step feasible interior-point algorithm for P (κ)-lcp based on a new search direction

A full-newton step feasible interior-point algorithm for P (κ)-lcp based on a new search direction Croatian Operational Research Review 77 CRORR 706), 77 90 A full-newton step feasible interior-point algorithm for P κ)-lcp based on a new search direction Behrouz Kheirfam, and Masoumeh Haghighi Department

More information

Interior Point Methods for Nonlinear Optimization. Imre Pólik, Tamás Terlaky. School of Computational Engineering and Science, McMaster University, Hamilton, Ontario, Canada.
4TE3/6TE3: Algorithms for Continuous Optimization (Algorithms for Constrained Nonlinear Optimization Problems). Tamás Terlaky. Computing and Software, McMaster University, Hamilton, November 2005.
A Full-Newton Step O(n) Infeasible Interior-Point Algorithm for Linear Optimization. C. Roos. Faculty of Electrical Engineering, Computer Science and Mathematics, March 4, 2005.
A Redundant Klee-Minty Construction with All the Redundant Constraints Touching the Feasible Region. Eissa Nematollahi, Tamás Terlaky. January 5, 2008.
Interior Point Methods for Mathematical Programming. Clóvis C. Gonzaga. Federal University of Santa Catarina, Florianópolis, Brazil. EURO 2013, Roma.
A Simpler and Tighter Redundant Klee-Minty Construction. Eissa Nematollahi, Tamás Terlaky. October 19, 2006.
Full Nesterov-Todd Step Interior-Point Methods for Symmetric Optimization. G. Gu, M. Zangiabadi, C. Roos. (Section 2.1 introduces Jordan algebras and some of their basic properties.)
Primal-Dual Interior-Point Methods. Javier Peña. Lecture notes, Convex Optimization 10-725/36-725.
A Tight Iteration-Complexity Upper Bound for the MTY Predictor-Corrector Algorithm via Redundant Klee-Minty Cubes. Murat Mut, Tamás Terlaky. Department of Industrial and Systems Engineering, Lehigh University.
A Full-Newton Step Infeasible Interior-Point Algorithm for Linear Complementarity Problems Based on a Kernel Function. B. Kheirfam. Algorithmic Operations Research 7 (2013), 103-110.
Semidefinite Programming (Chapter 2): given C ∈ M_n, A_i ∈ M_n (i = 1, 2,..., m), and b ∈ R^m, find a matrix X ∈ M_n solving the semidefinite programming problem.
Lecture 5: Theorems of Alternatives and Self-Dual Embedding. Lecture notes, IE 8534.
An Infeasible Interior-Point Method for the P*-Matrix Linear Complementarity Problem. Communications in Combinatorics and Optimization 3 (2018), No. 1, 51-70.
Primal-Dual Interior-Point Methods. Ryan Tibshirani. Lecture notes, Convex Optimization 10-725/36-725.
Implementation of Interior Point Methods for Second Order Conic Optimization. Bixiang Wang. Thesis submitted to the School of Graduate Studies.
Largest Dual Ellipsoids Inscribed in Dual Cones. M. J. Todd. June 23, 2005.
New Interior Point Algorithms in Linear Programming. Zsolt Darvay. AMO - Advanced Modeling and Optimization 5 (2003), No. 1.
A Semidefinite Relaxation Scheme for Quadratically Constrained Quadratic Problems with an Additional Linear Constraint. M. Salahi. Iranian Journal of Operations Research 2 (2011), No. 2, 29-34.
Computational Experience with Self-Regular Based Interior Point Methods. Guoqing Zhang, Jiming Peng, Tamás Terlaky, Lois Zhu. AdvOl-Report, Advanced Optimization Laboratory, McMaster University.
12. Interior-Point Methods. Convex Optimization, Boyd & Vandenberghe: inequality constrained minimization, logarithmic barrier function and central path, barrier method, feasibility and phase I methods, complexity.
Infeasible Interior-Point Methods for Linear Optimization Based on Large Neighborhood. Alireza Asadi, Cornelis Roos. J. Optim. Theory Appl. 170 (2016), 562-590. DOI: 10.1007/s10957-015-0826-5.
10. Numerical Methods for Constrained Problems: min f(x) s.t. h(x) = 0 (l constraints), g(x) ≤ 0 (m constraints), x ∈ X.
Introduction to Optimization. Geir Dahl. CMA, Dept. of Mathematics and Dept. of Informatics, University of Oslo.
On Self-Concordant Barriers for Generalized Power Cones. Scott Roy, Lin Xiao. January 30, 2018.
Implementing the New Self-Regular Proximity Based IPMs. Xiaohang Zhu. Thesis submitted to the School of Graduate Studies.
Full-Newton-Step Interior-Point Method for the Linear Complementarity Problems. Electronic Theses and Dissertations, Georgia Southern University, Summer 2011.
On the Sandwich Theorem and a 0.878-Approximation Algorithm for MAX CUT. Kees Roos. Technische Universiteit Delft, Faculteit Electrotechniek, Wiskunde en Informatica.
Optimization: Then and Now. Lecture slides: why would a dynamicist be interested in linear programming? LP: max c^T x s.t. Ax ≤ b.
Interior-Point Methods for Linear Optimization. Robert M. Freund, Jorge Vera. March 2014.
Operations Research, Lecture 4: Linear Programming Interior Point Method. Notes taken by Kaiquan Xu, Business School, Nanjing University, April 14, 2016.
A Polynomial Column-wise Rescaling von Neumann Algorithm. Dan Li, Department of Industrial and Systems Engineering, Lehigh University; Cornelis Roos, Department of Information Systems and Algorithms.
Improved Full-Newton-Step Infeasible Interior-Point Method for Linear Complementarity Problems. Electronic Theses & Dissertations, Georgia Southern University, Summer 2015.
Curvature as a Complexity Bound in Interior-Point Methods. Murat Mut. Theses and Dissertations, Lehigh University, 2014.
Lecture 9: Sequential Unconstrained Minimization. S. Boyd, EE364: brief history of SUMT and IP methods, logarithmic barrier function, central path, complexity analysis, feasibility phase, generalized inequalities.
Interior Point Methods for Linear Programming: Motivation & Theory. Jacek Gondzio. School of Mathematics, The University of Edinburgh.
Introduction to Interior Point Methods. Dianne P. O'Leary. AMSC 607 / CMSC 764 Advanced Numerical Optimization, Fall 2008, Unit 3: Constrained Optimization, Part 4.
Interior Point Methods in Mathematical Programming. Clóvis C. Gonzaga. Federal University of Santa Catarina, Brazil. Journées en l'honneur de Pierre Huard, Paris, November 2008.
CS711008Z Algorithm Design and Analysis, Lecture 8: Linear Programming: Interior Point Method. Dongbo Bu. Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China.
Primal-Dual IPM with Asymmetric Barrier. Yurii Nesterov, CORE/INMA (UCL). IFOR, ETHZ, September 29, 2008.
Lecture 15: Newton Method and Self-Concordance. October 23, 2008: the notion of self-concordance, self-concordant functions, operations preserving self-concordance, properties and implications.
On Well Definedness of the Central Path. L.M. Graña Drummond, B.F. Svaiter. IMPA - Instituto de Matemática Pura e Aplicada, Rio de Janeiro, Brazil.
Lecture: Algorithms for LP, SOCP and SDP. Zaiwen Wen. Beijing International Center for Mathematical Research, Peking University.
A Strongly Polynomial Algorithm for Linear Systems Having a Binary Solution. Sergei Chubanov. Institute of Information Systems, University of Siegen, Germany.
Homework 4, Convex Optimization 10-725/36-725. Due Friday November 4.
Nuclear magnetic resonance force microscopy at millikelvin temperatures. Arthur den Haan. Leiden University dissertation, http://hdl.handle.net/1887/38444.
Lecture 6: Conic Optimization (September 8). IE 598: Big Data Optimization, Fall 2016. Lecturer: Niao He.
Periodic pulse solutions to slowly nonlinear reaction-diffusion systems. B. de Rijk. Leiden University dissertation, http://hdl.handle.net/1887/45233.
Self-Concordant Barrier Functions for Convex Optimization (Appendix F): a framework for developing polynomial-time algorithms for the solution of convex optimization problems.
Advances in Convex Optimization: Theory, Algorithms, and Applications. Stephen Boyd, Electrical Engineering Department, Stanford University (joint work with Lieven Vandenberghe, UCLA). ISIT 02, Lausanne.
Quadratic Maximization and Semidefinite Relaxation. Shuzhong Zhang. Econometric Institute, Erasmus University Rotterdam.
Chapter 6: Interior-Point Approach to Linear Programming. Objectives: introduce the basic ideas of interior-point methods; motivate further research and applications.
Nonsymmetric Potential-Reduction Methods for General Cones. Yu. Nesterov. CORE Discussion Paper 2006/34, March 28, 2006.
Interior-Point Methods. Stephen Wright. University of Wisconsin-Madison. Simons Institute, Berkeley, August 2017.
Convex Optimization and l1-Minimization. Sangwoon Yun. Computational Sciences, Korea Institute for Advanced Study. NIMS Thematic Winter School, December 11, 2009.
Relations between Semidefinite, Copositive, Semi-infinite and Integer Programming. Faizan Ahmed; supervisor: Georg Still. Master thesis, University of Twente, May 2010.
Selected Examples of Conic Duality at Work: Robust Linear Optimization, Synthesis of Linear Controllers, Matrix Cube Theorem. A. Nemirovski.
An Efficient Affine-Scaling Algorithm for Hyperbolic Programming. Jim Renegar, joint work with Mutiara Sondjaja.
CSCI 1951-G Optimization Methods in Finance, Part 01: Linear Programming. January 26, 2018.
Convex Optimization Theory, Chapter 5 Exercises and Solutions: Extended Version. Dimitri P. Bertsekas. Massachusetts Institute of Technology, Athena Scientific, Belmont, Massachusetts.
Convex Optimization: Newton's Method. ENSAE: Optimisation. Unconstrained minimization: minimize f(x), with f convex and twice continuously differentiable.
LP, Chapter 17: Interior-Point Methods. The simplex algorithm moves along the boundary of the polyhedron P of feasible solutions; interior-point methods instead find a path through its interior.
Enlarging Neighborhoods of Interior-Point Algorithms for Linear Programming via Least Values of Proximity Measure Functions. Y.B. Zhao.
Lecture 5: The Dual Cone and Dual Problem. IE 8534. For a convex cone K, the dual cone is defined as K* = {y : ⟨x, y⟩ ≥ 0 for all x ∈ K}.
Iterative Reweighted Minimization Methods for l_p Regularized Unconstrained Nonlinear Programming. Zhaosong Lu. October 5, 2012 (revised June 3, 2013 and September 17, 2013).
4.5 Simplex Method. LP in standard form: min z = c^T x s.t. Ax = b, x ≥ 0. The method (George Dantzig, 1914-2005) examines a sequence of basic feasible solutions with non-increasing objective values until an optimal solution is reached.
A Wide Neighborhood Primal-Dual Interior-Point Algorithm with Arc-Search for Linear Complementarity Problems. Beibei Yuan et al. J. Nonlinear Funct. Anal. 2018 (2018), Article ID 31. https://doi.org/10.23952/jnfa.2018.31.
Lecture 24: August 28. 10-725: Optimization, Fall 2012. Lecturers: Geoff Gordon, Ryan Tibshirani.
1. Gradient Method. L. Vandenberghe, EE236C (Spring 2016): gradient method, first-order methods, quadratic bounds on convex functions, analysis of the gradient method.
Random-matrix theory and stroboscopic models of topological insulators. Jan Patrick Dahlhaus. Leiden University dissertation, http://hdl.handle.net/1887/20139.
A Distributed Newton Method for Network Utility Maximization, II: Convergence. Ermin Wei, Asuman Ozdaglar, Ali Jadbabaie. October 31, 2012.
A Priori Bounds on the Condition Numbers in Interior-Point Methods. Florian Jarre. Mathematisches Institut, Heinrich-Heine-Universität Düsseldorf, Germany.
Newton's Method. Javier Peña. Convex Optimization 10-725/36-725.
Lecture 15: October 15. 10-725: Optimization, Fall 2012. Lecturer: Barnabas Poczos.
Mean-field Description of the Structure and Tension of Curved Fluid Interfaces. Joris Kuipers. 2009.
Primal-Dual Relationship between Levenberg-Marquardt and Central Trajectories for Linearly Constrained Convex Optimization. Roger Behling, Clovis Gonzaga, Gabriel Haeser. March 21, 2013.
A Quadratic Cone Relaxation-Based Algorithm for Linear Programming. Dissertation, Graduate School of Cornell University.
Lecture 17: Primal-Dual Interior-Point Methods, Part II. 10-725/36-725: Convex Optimization, Spring 2015. Lecturer: Javier Peña.
Interval Solutions for Interval Algebraic Equations. B.T. Polyak, S.A. Nazin. Mathematics and Computers in Simulation 66 (2004), 207-217.
Lecture 10: Primal-Dual Interior Point Method for LP. IE 8534. Considers the linear program (P) min c^T x s.t. Ax = b, x ≥ 0, together with its dual (D) max b^T y s.t. A^T y + s = c, s ≥ 0.
Introduction to Nonlinear Stochastic Programming. Jacek Gondzio. School of Mathematics, The University of Edinburgh.