October 13, Revised November

AN INTERIOR POINT METHOD, BASED ON RANK-ONE UPDATES, FOR LINEAR PROGRAMMING

Jos F. Sturm (1) and Shuzhong Zhang (2)

Report 9546/A, Econometric Institute, Erasmus University Rotterdam, P.O. Box 1738, 3000 DR Rotterdam, The Netherlands

ABSTRACT

We propose a polynomial time primal-dual potential reduction algorithm for linear programming. The algorithm generates sequences d^k and v^k rather than a primal-dual interior point (x^k, s^k), where d_i^k = √(x_i^k / s_i^k) and v_i^k = √(x_i^k s_i^k) for i = 1, ..., n. Only one element of d^k is changed in each iteration, so that the work per iteration is bounded by O(mn) using rank-one updating techniques. The usual primal-dual iterates x^k and s^k are not needed explicitly in the algorithm; instead, d^k and v^k are iterated so that interior primal-dual solutions can always be recovered via the aforementioned relations between (x^k, s^k) and (d^k, v^k), with improving primal-dual potential function values. Moreover, no approximation of d^k is needed in the computation of the projection directions.

Key words: linear programming, interior point method, potential function.

(1) Econometric Institute, Erasmus University Rotterdam, The Netherlands, sturm@few.eur.nl.
(2) Econometric Institute, Erasmus University Rotterdam, The Netherlands, zhang@few.eur.nl.
Sturm and Zhang: IPM based on rank-one updates

1. Introduction

In his seminal paper [8], Karmarkar proposed a projective potential reduction method that solves linear programs in O(nL) main iterations, where n stands for the number of inequality constraints and L is the input length of the problem. A single iteration of Karmarkar's algorithm requires solving a system of linear equations, which takes O(n^3) operations in the worst case. By using incomplete updates of this equation system, the average work per step can be reduced to O(n^{2.5}). Hence, Karmarkar's algorithm achieves an overall complexity of O(n^{3.5} L) for linear programming. The O(n^{3.5} L) complexity was further reduced to O(n^3 L) by Vaidya [17] and Gonzaga [5]. Kojima, Mizuno and Yoshise [9] and Monteiro and Adler [13] showed that this complexity can also be achieved by the primal-dual path-following method, if partial updates are used. However, the algorithms proposed in [8, 17, 5, 9, 13] use a fixed step length per iteration, which makes them inefficient for practical purposes. Adaptive step algorithms that are based on a partial updating scheme were proposed by Anstreicher [1], Anstreicher and Bosch [2], Den Hertog [6], and Den Hertog, Roos and Vial [7], among others. The aforementioned partial updating methods require O(n^{2.5}) operations on average per iteration. Algorithms that require on average only O(n^2) operations per iteration were proposed by Ye [19] and Mizuno [11]. Interestingly, Mizuno [12] also obtained O(n^3 L) complexity with a potential reduction algorithm that requires at most O(n^2) operations per iteration, as opposed to [19, 11], where each iteration requires O(n^2) operations on average. At each iteration of Mizuno's method [12], the amount of required operations is thus comparable to that of one simplex pivot step. A variant of Mizuno's algorithm that allows for up to any fixed number of updates is analysed by Bosch [3].
The algorithm proposed in this paper is similar to Mizuno's algorithm [12] in the sense that it requires only O(n^2) operations per main iteration. However, the way in which the new algorithm computes the iterative primal-dual solutions is different. In general, the major computational effort in interior point methods for linear programming lies in the computation of projections. Denote by (x^k, s^k) the primal-dual iterates at iteration k. We scale the linear program by a positive vector d^k, and then determine a vector v^k as the unique intersection point of the scaled primal and dual feasible sets. For primal methods, one needs to project onto the null space of AX^k, where X^k = diag(x_1^k, x_2^k, ..., x_n^k). For dual methods, the projection onto the range space of (S^k)^{-1} A^T is needed, with S^k = diag(s_1^k, s_2^k, ..., s_n^k). Finally, for primal-dual methods, one has to project onto the null space of AD^k with D^k = diag(d_1^k, d_2^k, ..., d_n^k). Previous partial updating methods work with approximations of either X^k, S^k or D^k, which are updated in such a way that the computational costs are reduced. In our new method, the scaling vector d^k is computed
exactly as (X^k)^{1/2} (S^k)^{-1/2} e. However, we change only one component of d^k at a time. As a result, only O(n^2) arithmetic operations are needed, using a rank-one updating scheme for the projection matrices. In this procedure, no approximation is involved. Another feature of our new algorithm is that the traditional primal-dual iterates x^k and s^k are invisible in the solution procedure. Their function is completely taken over by the new vectors d^k and v^k. Unfortunately, our algorithm seems to be less effective in reducing the potential function value than Mizuno's algorithm. To be more precise, we arrive at an O(n^{1.5} L) worst case iteration bound. Hence, the new algorithm needs O(n^{3.5} L) operations in total, just like Karmarkar's algorithm.

The paper is organized as follows. In Section 2, we briefly discuss the primal-dual geometry of linear programming. The effect of a rank-one update on the primal-dual solution pair is derived in Section 3. Subsequently, the new potential reduction algorithm is stated in Section 4, and we proceed in Section 5 by deriving the complexity result. We conclude the paper in Section 6.

2. Problem discussions

In this paper we consider the standard linear program

(P)  min_x { c^T x | Ax = b, x ≥ 0 }

and its dual

(D)  max_{y,s} { b^T y | A^T y + s = c, s ≥ 0 },

where A is an m×n matrix, and b and c are vectors of dimension m and n respectively. We make the standard assumptions that A has full row rank and that (P) and (D) have interior feasible solutions. For a given positive vector d ∈ R^n_{++}, let

F_P(d) := { x | ADx = b }, where D := diag(d_1, d_2, ..., d_n),

and let

F_D(d) := { s | DA^T y + s = Dc for some y ∈ R^m }.

Denoting the all-one vector by e, it is easily seen that primal feasibility amounts to

x ∈ F_P(e), x ≥ 0,
and dual feasibility is equivalent to

s ∈ F_D(e), s ≥ 0.

As F_P(d) and F_D(d) are orthogonal complementary affine spaces, they share a single intersection point, say v_d:

{ v_d } = F_P(d) ∩ F_D(d).

Let x_d := D v_d and s_d := D^{-1} v_d. Obviously, x_d ∈ F_P(e) and s_d ∈ F_D(e). Because of the positivity of d, we also have

x_d ≥ 0 and s_d ≥ 0 if and only if v_d ≥ 0.

A natural question arises: Is v_d always nonnegative for positive d? Or equivalently: Are x_d and s_d always feasible? As expected, the answer is negative. A simple example is:

Example 2.1. Let

A = [ 1 2 2 ; 1 1 2 ],  b = [ 4 ; 2 ],  c^T = [ 0 2 7 ].

Then, choosing d^T = [ 2 2 2 ] yields

v_d = ( -28/5, 1, 14/5 )^T,

which has a negative component.

If, on the other hand, interior feasible solutions x ∈ F_P(e), x > 0, and s ∈ F_D(e), s > 0, are known, then it is easy to construct d such that v_d > 0. Namely, letting

d_i = √(x_i / s_i) for i = 1, ..., n

yields

(v_d)_i = √(x_i s_i) for i = 1, ..., n.

This triggers the next question: Assuming that we have a pair of interior feasible solutions, is it possible to update the related vector d such that the feasibility is retained and, at the same time, the solutions are improved?
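The possible infeasibility of v_d can be checked numerically. The following sketch computes the intersection point from its defining projections for data of the kind used in Example 2.1 (the numbers here should be treated as illustrative):

```python
import numpy as np

# Illustrative data: v_d need not be nonnegative for an arbitrary positive d.
A = np.array([[1., 2., 2.],
              [1., 1., 2.]])
b = np.array([4., 2.])
c = np.array([0., 2., 7.])
d = np.array([2., 2., 2.])

D = np.diag(d)
M = A @ D**2 @ A.T                       # A D^2 A^T
Q = D @ A.T @ np.linalg.solve(M, A @ D)  # projection onto range(D A^T)
P = np.eye(3) - Q                        # projection onto null(A D)

# Unique intersection point of F_P(d) and F_D(d):
v = D @ A.T @ np.linalg.solve(M, b) + P @ (D @ c)
print(v.min() < 0)   # True: v_d has a negative component
```

Here v_d evaluates to (-28/5, 1, 14/5)^T, so x_d = D v_d is primal feasible in the affine sense but violates the nonnegativity constraint.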
The goal of this paper is to give an affirmative answer to the above question. Namely, we propose an algorithm that generates a sequence { d^k | k = 0, 1, ... } for which the associated sequence of primal-dual feasible pairs (x_{d^k}, s_{d^k}) converges to a pair of optimal solutions for (P) and (D).

First we derive formulas expressing v_d, x_d and s_d for a given vector d ∈ R^n_{++}. Let Q_d and P_d denote the projection matrices onto the image of DA^T and the kernel of AD respectively, i.e.

Q_d := DA^T (AD^2 A^T)^{-1} AD and P_d := I - Q_d.

By definition, v_d ∈ F_P(d) ∩ F_D(d), i.e.

AD v_d = b                  (1)
DA^T y_d + v_d = Dc         (2)

for some y_d ∈ R^m. Pre-multiplying (1) with DA^T (AD^2 A^T)^{-1} yields

Q_d v_d = DA^T (AD^2 A^T)^{-1} b,

and pre-multiplying (2) with P_d yields

P_d v_d = P_d Dc.

Hence,

v_d = Q_d v_d + P_d v_d = DA^T (AD^2 A^T)^{-1} b + (I - DA^T (AD^2 A^T)^{-1} AD) Dc,

which satisfies (1) and (2) with

y_d = (AD^2 A^T)^{-1} (AD^2 c - b).    (3)

3. Single component update

In this section, we analyze the effect of a change in a single component of d on the associated pair (x_d, s_d). Suppose that we update component d_j, for some j ∈ {1, ..., n}. For a step size t > -1, let the new vector of scalings d(t) be defined as follows:

d_j(t) = √(1+t) d_j
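The closed-form expressions for v_d and y_d can be verified numerically. A minimal sketch, using illustrative data with a known interior pair (x, y, s), also confirms that the scaling d_i = √(x_i/s_i) makes v_d = √(x s) componentwise:

```python
import numpy as np

# Illustrative data (not from the paper): a 2x4 LP with a known interior pair.
A = np.array([[1., 2., 0., 1.],
              [0., 1., 1., 2.]])
x = np.array([1., 2., 1., 1.])           # interior primal point
s = np.array([2., 1., 3., 1.])           # interior dual slack
y = np.array([0.5, -0.5])
b, c = A @ x, A.T @ y + s                # makes (x, y, s) feasible

d = np.sqrt(x / s)                       # scaling d_i = sqrt(x_i / s_i)
D = np.diag(d)
M = A @ D**2 @ A.T                       # A D^2 A^T
Q = D @ A.T @ np.linalg.solve(M, A @ D)  # Q_d: projection onto range(D A^T)
P = np.eye(4) - Q                        # P_d: projection onto null(A D)

v = D @ A.T @ np.linalg.solve(M, b) + P @ (D @ c)   # closed form for v_d
y_d = np.linalg.solve(M, A @ D**2 @ c - b)          # (3)

assert np.allclose(A @ D @ v, b)                    # (1)
assert np.allclose(D @ A.T @ y_d + v, D @ c)        # (2)
assert np.allclose(v, np.sqrt(x * s))               # v_d = sqrt(x s)
```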
and d_i(t) = d_i for i ∈ {1, ..., n} \ {j}, i.e. the components other than j remain unchanged. We define

D(t) := diag(d_1(t), d_2(t), ..., d_n(t))

and

u_d := (AD^2 A^T)^{-1} AD e_j,

where e_j is the j-th unit vector. Since

AD(t)^2 A^T = AD^2 A^T + t (AD e_j)(AD e_j)^T

and

(AD e_j)^T (AD^2 A^T)^{-1} (AD e_j) = e_j^T Q_d e_j = e_j^T Q_d^T Q_d e_j = ‖Q_d e_j‖^2,

using the rank-one updating formula of Sherman, Morrison and Woodbury, we obtain

(AD(t)^2 A^T)^{-1} = (AD^2 A^T)^{-1} - (t / (1 + t ‖Q_d e_j‖^2)) u_d u_d^T.

Furthermore,

AD(t)^2 c = AD^2 c + t (d_j c_j) AD e_j.

It follows using (3) that

y_{d(t)} = (AD(t)^2 A^T)^{-1} (AD(t)^2 c - b)
         = y_d + t (d_j c_j) u_d - t [ t (d_j c_j) ‖Q_d e_j‖^2 + u_d^T (AD^2 c - b) ] / (1 + t ‖Q_d e_j‖^2) u_d.

From (3) and (2) we have

u_d^T (AD^2 c - b) = e_j^T DA^T y_d = d_j c_j - (v_d)_j,

and therefore

y_{d(t)} = y_d + (t (v_d)_j / (1 + t ‖Q_d e_j‖^2)) u_d.

Now it follows that

D s_{d(t)} = Dc - DA^T y_{d(t)} = v_d - (t / (1 + t ‖Q_d e_j‖^2)) (v_d)_j Q_d e_j.    (4)
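The Sherman-Morrison-Woodbury update of (AD(t)^2 A^T)^{-1} can be checked against a direct inversion; a small sketch with illustrative data:

```python
import numpy as np

A = np.array([[1., 2., 0., 1.],
              [0., 1., 1., 2.]])
d = np.array([1., 0.5, 2., 1.])
D = np.diag(d)
j, t = 2, 0.7                            # update component j with step t > -1

M = A @ D**2 @ A.T
Minv = np.linalg.inv(M)
w = A @ D @ np.eye(4)[:, j]              # A D e_j, the rank-one direction
u = Minv @ w                             # u_d = (A D^2 A^T)^{-1} A D e_j
g = w @ u                                # equals ||Q_d e_j||^2

# Rank-one (Sherman-Morrison) update of the inverse:
Minv_t = Minv - (t / (1 + t * g)) * np.outer(u, u)

d_t = d.copy()
d_t[j] *= np.sqrt(1 + t)                 # d_j(t) = sqrt(1+t) d_j
Dt = np.diag(d_t)
assert np.allclose(Minv_t, np.linalg.inv(A @ Dt**2 @ A.T))
```

This is exactly why one iteration is cheap: the m×m inverse is refreshed by an outer-product correction instead of being recomputed from scratch.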
As x_{d(t)} = D(t) v_{d(t)} = D(t)^2 s_{d(t)}, we have

D^{-1} x_{d(t)} = D^{-1} D(t)^2 s_{d(t)}
              = (I + t e_j e_j^T) D s_{d(t)}
              = v_d - (t / (1 + t ‖Q_d e_j‖^2)) (v_d)_j Q_d e_j + t (v_d)_j e_j - (t^2 ‖Q_d e_j‖^2 / (1 + t ‖Q_d e_j‖^2)) (v_d)_j e_j
              = v_d + (t / (1 + t ‖Q_d e_j‖^2)) (v_d)_j P_d e_j.    (5)

We introduce θ(t),

θ(t) := t / (1 + t ‖Q_d e_j‖^2),

which is a strictly increasing function for t > -1. This function maps the interval (-1, ∞) onto the interval

{ θ | θ ‖P_d e_j‖^2 > -1, θ ‖Q_d e_j‖^2 < 1 }.

The inverse transformation of θ(t) is

t(θ) = θ / (1 - θ ‖Q_d e_j‖^2).    (6)

From now on, we consider θ as the step size parameter, and we compute t(θ) from (6). As a matter of notation, we write

x(θ) := x_{d(t(θ))}, s(θ) := s_{d(t(θ))}, v(θ) := v_{d(t(θ))}.

If no confusion is possible, we will dispense with the argument d and simply write P, Q, v, x, s and y to denote P_d, Q_d, v_d, x_d, s_d and y_d. Now we rewrite (5) and (4) as follows:

D^{-1} x(θ) = v + θ v_j P e_j    (7)
D s(θ) = v - θ v_j Q e_j.        (8)

These relations imply that

‖v(θ)‖^2 = x(θ)^T s(θ) = (v + θ v_j P e_j)^T (v - θ v_j Q e_j) = ‖v‖^2 + θ v_j e_j^T (P - Q) v.    (9)
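Relations (7)-(9) can be checked directly: the new pair stays feasible and the duality gap is affine in θ. A sketch with illustrative data:

```python
import numpy as np

A = np.array([[1., 2., 0., 1.],
              [0., 1., 1., 2.]])
x0 = np.array([1., 2., 1., 1.])
s0 = np.array([2., 1., 3., 1.])
y0 = np.array([0.5, -0.5])
b, c = A @ x0, A.T @ y0 + s0

d = np.sqrt(x0 / s0)
D = np.diag(d)
M = A @ D**2 @ A.T
Q = D @ A.T @ np.linalg.solve(M, A @ D)
P = np.eye(4) - Q
v = np.sqrt(x0 * s0)                     # v_d for this scaling
j, theta = 1, 0.1                        # component j, step parameter theta

x_t = D @ (v + theta * v[j] * P[:, j])   # (7)
s_t = (v - theta * v[j] * Q[:, j]) / d   # (8)
gap = v @ v + theta * v[j] * (P - Q)[j] @ v   # right-hand side of (9)

assert np.allclose(x_t @ s_t, gap)       # duality gap is linear in theta
assert np.allclose(A @ x_t, b)           # x(theta) stays primal feasible
```

The cross terms vanish because P Q = 0, which is what makes the gap exactly linear rather than quadratic in θ.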
Hence, the duality gap between x(θ) and s(θ) is linear in θ. As is common in the interior point methodology, we restrict ourselves to positive, i.e. interior, solutions x > 0, s > 0. Introducing the quantity

v_min := min { v_1, v_2, ..., v_n },

we obtain from (7) and (8) that x(θ) > 0 and s(θ) > 0 as long as |θ| < v_min / v_j.

4. A potential reduction algorithm

Consider for x > 0, s > 0 the Tanabe-Todd-Ye primal-dual potential function [16, 18]

φ(x, s) = (n + l) log(x^T s) - Σ_{i=1}^n log(x_i s_i) - n log n,

where we let l := √n in this paper. From the arithmetic-geometric mean inequality, it is easily seen that

φ(x, s) ≥ √n log(x^T s),

and equality holds only if v = X^{1/2} S^{1/2} e is a positive multiple of the all-one vector e. This observation is the essence of the proof of the following lemma, which is due to Todd and Ye [16] and Ye [18].

Lemma 4.1. If an algorithm reduces φ(x, s) by at least a fixed quantity δ > 0 in each iteration, then the algorithm can be used to solve the pair (P), (D) in O(√n L / δ) iterations.

Let

p_ij := e_i^T P e_j and q_ij := e_i^T Q e_j for i = 1, ..., n.

For |θ| < v_min / v_j, we have from (7), (8) and (9) that

φ(x(θ), s(θ)) = φ(x, s) + (n + √n) log(1 + θ v_j e_j^T (P - Q) v / ‖v‖^2)
              - Σ_{i=1}^n log(1 + θ (v_j / v_i) p_ij) - Σ_{i=1}^n log(1 - θ (v_j / v_i) q_ij).    (10)
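The potential function and its arithmetic-geometric lower bound are easy to evaluate; a minimal sketch (illustrative interior point):

```python
import numpy as np

n = 4
x = np.array([1., 2., 1., 1.])
s = np.array([2., 1., 3., 1.])
l = np.sqrt(n)                           # l = sqrt(n) as in the paper

# Tanabe-Todd-Ye primal-dual potential
phi = (n + l) * np.log(x @ s) - np.sum(np.log(x * s)) - n * np.log(n)

# AM-GM: n log(x^T s / n) >= sum log(x_i s_i), hence phi >= sqrt(n) log(x^T s)
assert phi >= l * np.log(x @ s)
```

The slack in the inequality measures how far v = X^{1/2} S^{1/2} e is from being a multiple of e, i.e. how badly the current pair is centered.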
Let

r := (P - Q) ( ((n + √n) / ‖v‖^2) v - V^{-1} e ),    (11)

where

V := diag(v_1, v_2, ..., v_n).

It is easily seen from (10) that

φ(x(θ), s(θ)) = φ(x, s) + θ v_j r_j + o(θ).

We propose to select the index j such that |r_j| = ‖r‖_∞. This choice results in the following algorithm.

Algorithm 1. (A, b, c, d^0)
The initial scaling d^0 > 0 is chosen such that v_{d^0} > 0.
Step 0 (Initialization). Set k = 0.
Step 1 (Optimality test). Let d = d^k. Stop if, based on (x_d, s_d), an optimal pair (x*, s*) can be found.
Step 2 (Choose the updating index). Compute r according to (11). Set j = arg max_{1 ≤ i ≤ n} |r_i|.
Step 3 (Step size choice). Compute θ such that

φ(x_d(θ), s_d(θ)) < φ(x_d, s_d) - 1/(6(n+2)).

Step 4 (Take step). Set d^{k+1} = d + (√(1 + t(θ)) - 1) d_j e_j.
Step 5. Set k = k + 1 and return to Step 1.

Notice that in order to perform the algorithm it is necessary to iteratively compute v and r. To do this, it is sufficient to update the matrices (AD^2 A^T)^{-1} and (AD^2 A^T)^{-1} AD. Using the rank-one updating formula, this will take O(mn) operations, whereas a main iteration of other interior point algorithms typically requires O(m^2 n) operations in the worst case. Also notice that the index selection rule, i.e. Step 2, is similar to the most negative pivot rule of the simplex method.
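One main iteration of Algorithm 1 can be sketched as follows. For clarity the projections are formed explicitly here (the paper instead maintains (AD^2 A^T)^{-1} and (AD^2 A^T)^{-1} AD by rank-one updates); the data and the particular step size constant are illustrative assumptions, chosen small enough to keep the iterates interior:

```python
import numpy as np

def potential(x, s, n):
    # Tanabe-Todd-Ye potential with l = sqrt(n)
    return (n + np.sqrt(n)) * np.log(x @ s) - np.sum(np.log(x * s)) - n * np.log(n)

# Illustrative data (not from the paper) with a known interior pair.
A = np.array([[1., 2., 0., 1.],
              [0., 1., 1., 2.]])
x = np.array([1., 2., 1., 1.])
s = np.array([2., 1., 3., 1.])
y = np.array([0.5, -0.5])
b, c = A @ x, A.T @ y + s
n = 4
d = np.sqrt(x / s)

D = np.diag(d)
M = A @ D**2 @ A.T
Q = D @ A.T @ np.linalg.solve(M, A @ D)     # projection onto range(D A^T)
P = np.eye(n) - Q                           # projection onto null(A D)
v = D @ A.T @ np.linalg.solve(M, b) + P @ (D @ c)

r = (P - Q) @ ((n + np.sqrt(n)) / (v @ v) * v - 1.0 / v)   # (11)
j = int(np.argmax(np.abs(r)))                              # Step 2: most negative rate
# Step 3: a conservative step against the sign of r_j; the constant is an
# illustrative safe choice, not necessarily the paper's exact rule.
theta = -np.sign(r[j]) * np.sqrt(2) * v.min() / ((3 * np.sqrt(n) + 2) * v[j])
t = theta / (1 - theta * (Q[:, j] @ Q[:, j]))              # (6)

x_new = D @ (v + theta * v[j] * P[:, j])                   # (7)
s_new = (v - theta * v[j] * Q[:, j]) / d                   # (8)
d_new = d.copy()
d_new[j] *= np.sqrt(1 + t)                                 # Step 4

assert potential(x_new, s_new, n) < potential(x, s, n)     # potential decreased
```

A practical implementation would replace the fixed θ by a line search over the interval in which x(θ) and s(θ) stay positive.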
5. Convergence analysis

In estimating the reduction φ(x, s) - φ(x(θ), s(θ)), we shall make use of the inequality

log(1 + δ) ≥ δ - δ^2 / (2 (1 - |δ|)) for any δ ∈ (-1, 1),    (12)

which is proven by Karmarkar [8]. Applying (12) to (10) yields, for |θ| < v_min / v_j,

φ(x, s) - φ(x(θ), s(θ)) ≥ -θ v_j r_j - θ^2 v_j^2 (‖P e_j‖^2 + ‖Q e_j‖^2) / (2 v_min^2 (1 - |θ| v_j / v_min))
                        = -θ v_j r_j - θ^2 (v_j / v_min)^2 / (2 (1 - |θ| v_j / v_min)).    (13)

Due to the index selection rule in Algorithm 1, there holds |r_j| = ‖r‖_∞. The following lemma provides a lower bound on ‖r‖_∞.

Lemma 5.1. It holds that

‖r‖_∞ > 1 / (√(2n) v_min).

Proof: Obviously, we have for any w ∈ R^n that

‖(P - Q) w‖ = ‖(P + Q) w‖ = ‖w‖.

Therefore, we have

‖r‖ = ‖ ((n + √n) / ‖v‖^2) v - V^{-1} e ‖ = ‖ (√n / ‖v‖^2) v + ((n / ‖v‖^2) v - V^{-1} e) ‖.

Using v ⊥ ((n / ‖v‖^2) v - V^{-1} e), it follows that

‖r‖^2 = n / ‖v‖^2 + ‖ (n / ‖v‖^2) v - V^{-1} e ‖^2
      ≥ n / ‖v‖^2 + ( n v_min / ‖v‖^2 - 1 / v_min )^2
      = 1 / v_min^2 - n / ‖v‖^2 + n^2 v_min^2 / ‖v‖^4
      > 1 / (2 v_min^2).    (14)

Since ‖r‖_∞ ≥ ‖r‖ / √n, (14) implies the desired result.
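The bound of Lemma 5.1 can be spot-checked numerically on any interior pair; a sketch with the same illustrative data as before:

```python
import numpy as np

A = np.array([[1., 2., 0., 1.],
              [0., 1., 1., 2.]])
x = np.array([1., 2., 1., 1.])
s = np.array([2., 1., 3., 1.])
y = np.array([0.5, -0.5])
b, c = A @ x, A.T @ y + s
n = 4

d = np.sqrt(x / s)
D = np.diag(d)
M = A @ D**2 @ A.T
Q = D @ A.T @ np.linalg.solve(M, A @ D)
P = np.eye(n) - Q
v = np.sqrt(x * s)                                     # v_d for this scaling

r = (P - Q) @ ((n + np.sqrt(n)) / (v @ v) * v - 1.0 / v)   # (11)
# Lemma 5.1: the largest component of r cannot be too small.
assert np.max(np.abs(r)) > 1.0 / (np.sqrt(2 * n) * v.min())
```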
Combining Lemma 5.1 and (13), we obtain a lower bound on the reduction φ(x, s) - φ(x(θ), s(θ)):

Lemma 5.2. For θ = -(r_j / |r_j|) √2 v_min / ((3√n + 2) v_j), we have

φ(x, s) - φ(x(θ), s(θ)) > 1 / (6(n+2)).

Proof: From (13) and the fact that |r_j| = ‖r‖_∞, it follows that

φ(x, s) - φ(x(θ), s(θ)) ≥ -θ (r_j / |r_j|) v_j ‖r‖_∞ - θ^2 (v_j / v_min)^2 / (2 (1 - |θ| v_j / v_min)).

Therefore, by choosing θ = -(r_j / |r_j|) √2 v_min / ((3√n + 2) v_j) and using Lemma 5.1, we obtain

φ(x, s) - φ(x(θ), s(θ)) > 1 / (√n (3√n + 2)) - 1 / ((3√n + 2)(3√n + 2 - √2)) > 1 / (6(n+2)),

where the last inequality can be verified for all n ≥ 1.

Lemma 5.2 provides a possible step size rule for Step 3 in Algorithm 1. A better step size may be obtained by means of some line search procedure. Combining Lemma 4.1 and Lemma 5.2, we obtain the main result of this paper:

Theorem 5.1. Algorithm 1 solves (P) and (D) in O(m n^{2.5} L) operations.

Proof: From Lemma 5.2 we know that in every iteration, the potential function is reduced by at least a fixed quantity of order 1/n. Using Lemma 4.1, this implies that Algorithm 1 can be used to solve (P) and (D) in O(n^{1.5} L) iterations. A single iteration involves O(mn) operations. Thus, in total the algorithm requires O(m n^{2.5} L) operations in the worst case.

6. Concluding Remarks

It is known that interior point algorithms usually require significantly fewer iterations for solving linear programs than simplex algorithms. Yet, the simplex method can outperform the interior point method in running time on a wide range of problems because of the low computational complexity per step. Similar to the algorithm of Mizuno [12], the interior point algorithm that we proposed in this paper has the property that the amount of work per step is comparable to the work in a simplex iteration. Using a new approach for computing the primal-dual iterates, we do not need to distinguish "real" and "approximate" scaling vectors as in [12, 3].
References

[1] K.M. Anstreicher, "A standard form variant, and safeguarded linesearch, for the modified Karmarkar algorithm," Mathematical Programming 47 (1990).
[2] K.M. Anstreicher and R.A. Bosch, "Long steps in an O(n^3 L) algorithm for linear programming," Mathematical Programming 54 (1992).
[3] R.A. Bosch, "On Mizuno's rank one updating algorithm for linear programming," SIAM Journal on Optimization 3 (1993).
[4] R.A. Bosch and K.M. Anstreicher, "On partial updating in a potential reduction linear programming algorithm of Kojima, Mizuno, and Yoshise," Algorithmica 9 (1993).
[5] C.C. Gonzaga, "An algorithm for solving linear programming problems in O(n^3 L) operations," in: N. Megiddo, ed., Progress in Mathematical Programming, Springer-Verlag (Berlin, 1989) 1-28.
[6] D. Den Hertog, Interior point approach to linear, quadratic and convex programming, algorithms and complexity. Kluwer Publishers, Dordrecht, The Netherlands.
[7] D. Den Hertog, C. Roos and J.-Ph. Vial, "A complexity reduction for the long-step path-following algorithm for linear programming," SIAM Journal on Optimization 2 (1992).
[8] N.K. Karmarkar, "A new polynomial-time algorithm for linear programming," Combinatorica 4 (1984).
[9] M. Kojima, S. Mizuno and A. Yoshise, "A polynomial-time algorithm for a class of linear complementarity problems," Mathematical Programming 44 (1989) 1-26.
[10] S. Mehrotra, "Deferred rank one updates in O(n^3 L) interior point algorithms," Technical Report 90-11, Dept. of Industrial Engineering and Management Sciences, Northwestern University (Evanston, IL, 1990).
[11] S. Mizuno, "O(n^2 L) iteration and O(n^3 L) potential reduction algorithms for linear programming," Linear Algebra and its Applications 152 (1991).
[12] S. Mizuno, "A rank one updating algorithm for linear programming," The Arabian Journal for Science and Engineering 15 (1990).
[13] R.D.C. Monteiro and I. Adler, "Interior path following primal-dual algorithms.
Part II: convex quadratic programming," Mathematical Programming 44 (1989).
[14] J. Renegar, "A polynomial time algorithm, based on Newton's method, for linear programming," Mathematical Programming 40 (1988).
[15] D.F. Shanno, "Computing Karmarkar projections quickly," Mathematical Programming 41 (1988).
[16] M.J. Todd and Y. Ye, "A centered projective algorithm for linear programming," Mathematics of Operations Research 15 (1990).
[17] P.M. Vaidya, "An algorithm for linear programming which requires O(((m + n)n^2 +
(m + n)^{1.5} n)L) arithmetic operations," Proceedings of the 19th ACM Symposium on the Theory of Computing (1987).
[18] Y. Ye, "An O(n^3 L) potential reduction algorithm for linear programming," Mathematical Programming 50 (1991).
[19] Y. Ye, "Further developments in potential reduction algorithm," Technical Report, Department of Management Sciences, University of Iowa, 1989.
More informationAdvances in Convex Optimization: Theory, Algorithms, and Applications
Advances in Convex Optimization: Theory, Algorithms, and Applications Stephen Boyd Electrical Engineering Department Stanford University (joint work with Lieven Vandenberghe, UCLA) ISIT 02 ISIT 02 Lausanne
More informationA Strongly Polynomial Simplex Method for Totally Unimodular LP
A Strongly Polynomial Simplex Method for Totally Unimodular LP Shinji Mizuno July 19, 2014 Abstract Kitahara and Mizuno get new bounds for the number of distinct solutions generated by the simplex method
More informationCS711008Z Algorithm Design and Analysis
CS711008Z Algorithm Design and Analysis Lecture 8 Linear programming: interior point method Dongbo Bu Institute of Computing Technology Chinese Academy of Sciences, Beijing, China 1 / 31 Outline Brief
More informationOn Generalized Primal-Dual Interior-Point Methods with Non-uniform Complementarity Perturbations for Quadratic Programming
On Generalized Primal-Dual Interior-Point Methods with Non-uniform Complementarity Perturbations for Quadratic Programming Altuğ Bitlislioğlu and Colin N. Jones Abstract This technical note discusses convergence
More informationLecture 1. 1 Conic programming. MA 796S: Convex Optimization and Interior Point Methods October 8, Consider the conic program. min.
MA 796S: Convex Optimization and Interior Point Methods October 8, 2007 Lecture 1 Lecturer: Kartik Sivaramakrishnan Scribe: Kartik Sivaramakrishnan 1 Conic programming Consider the conic program min s.t.
More informationCCO Commun. Comb. Optim.
Communications in Combinatorics and Optimization Vol. 3 No., 08 pp.5-70 DOI: 0.049/CCO.08.580.038 CCO Commun. Comb. Optim. An infeasible interior-point method for the P -matrix linear complementarity problem
More informationOn the Number of Solutions Generated by the Simplex Method for LP
Workshop 1 on Large Scale Conic Optimization IMS (NUS) On the Number of Solutions Generated by the Simplex Method for LP Tomonari Kitahara and Shinji Mizuno Tokyo Institute of Technology November 19 23,
More informationFollowing The Central Trajectory Using The Monomial Method Rather Than Newton's Method
Following The Central Trajectory Using The Monomial Method Rather Than Newton's Method Yi-Chih Hsieh and Dennis L. Bricer Department of Industrial Engineering The University of Iowa Iowa City, IA 52242
More informationCSCI 1951-G Optimization Methods in Finance Part 01: Linear Programming
CSCI 1951-G Optimization Methods in Finance Part 01: Linear Programming January 26, 2018 1 / 38 Liability/asset cash-flow matching problem Recall the formulation of the problem: max w c 1 + p 1 e 1 = 150
More informationA new ane scaling interior point algorithm for nonlinear optimization subject to linear equality and inequality constraints
Journal of Computational and Applied Mathematics 161 (003) 1 5 www.elsevier.com/locate/cam A new ane scaling interior point algorithm for nonlinear optimization subject to linear equality and inequality
More informationA WIDE NEIGHBORHOOD PRIMAL-DUAL INTERIOR-POINT ALGORITHM WITH ARC-SEARCH FOR LINEAR COMPLEMENTARITY PROBLEMS 1. INTRODUCTION
J Nonlinear Funct Anal 08 (08), Article ID 3 https://doiorg/0395/jnfa083 A WIDE NEIGHBORHOOD PRIMAL-DUAL INTERIOR-POINT ALGORITHM WITH ARC-SEARCH FOR LINEAR COMPLEMENTARITY PROBLEMS BEIBEI YUAN, MINGWANG
More informationLecture 15: October 15
10-725: Optimization Fall 2012 Lecturer: Barnabas Poczos Lecture 15: October 15 Scribes: Christian Kroer, Fanyi Xiao Note: LaTeX template courtesy of UC Berkeley EECS dept. Disclaimer: These notes have
More information462 Chapter 11. New LP Algorithms and Some Open Problems whether there exists a subset of fd 1 ::: d n g whose sum is equal to d, known as the subset
Chapter 11 NEW LINEAR PROGRAMMING ALGORITHMS, AND SOME OPEN PROBLEMS IN LINEAR COMPLEMENTARITY Some open research problems in linear complementarity have already been posed among the exercises in previous
More informationPrimal-Dual Interior-Point Methods for Linear Programming based on Newton s Method
Primal-Dual Interior-Point Methods for Linear Programming based on Newton s Method Robert M. Freund March, 2004 2004 Massachusetts Institute of Technology. The Problem The logarithmic barrier approach
More informationA numerical implementation of a predictor-corrector algorithm for sufcient linear complementarity problem
A numerical imlementation of a redictor-corrector algorithm for sufcient linear comlementarity roblem BENTERKI DJAMEL University Ferhat Abbas of Setif-1 Faculty of science Laboratory of fundamental and
More information1. Algebraic and geometric treatments Consider an LP problem in the standard form. x 0. Solutions to the system of linear equations
The Simplex Method Most textbooks in mathematical optimization, especially linear programming, deal with the simplex method. In this note we study the simplex method. It requires basically elementary linear
More information3. Vector spaces 3.1 Linear dependence and independence 3.2 Basis and dimension. 5. Extreme points and basic feasible solutions
A. LINEAR ALGEBRA. CONVEX SETS 1. Matrices and vectors 1.1 Matrix operations 1.2 The rank of a matrix 2. Systems of linear equations 2.1 Basic solutions 3. Vector spaces 3.1 Linear dependence and independence
More informationNew Interior Point Algorithms in Linear Programming
AMO - Advanced Modeling and Optimization, Volume 5, Number 1, 2003 New Interior Point Algorithms in Linear Programming Zsolt Darvay Abstract In this paper the abstract of the thesis New Interior Point
More informationOn Conically Ordered Convex Programs
On Conically Ordered Convex Programs Shuzhong Zhang May 003; revised December 004 Abstract In this paper we study a special class of convex optimization problems called conically ordered convex programs
More informationNew Infeasible Interior Point Algorithm Based on Monomial Method
New Infeasible Interior Point Algorithm Based on Monomial Method Yi-Chih Hsieh and Dennis L. Bricer Department of Industrial Engineering The University of Iowa, Iowa City, IA 52242 USA (January, 1995)
More informationA Polynomial Column-wise Rescaling von Neumann Algorithm
A Polynomial Column-wise Rescaling von Neumann Algorithm Dan Li Department of Industrial and Systems Engineering, Lehigh University, USA Cornelis Roos Department of Information Systems and Algorithms,
More information4TE3/6TE3. Algorithms for. Continuous Optimization
4TE3/6TE3 Algorithms for Continuous Optimization (Algorithms for Constrained Nonlinear Optimization Problems) Tamás TERLAKY Computing and Software McMaster University Hamilton, November 2005 terlaky@mcmaster.ca
More informationNote 3: LP Duality. If the primal problem (P) in the canonical form is min Z = n (1) then the dual problem (D) in the canonical form is max W = m (2)
Note 3: LP Duality If the primal problem (P) in the canonical form is min Z = n j=1 c j x j s.t. nj=1 a ij x j b i i = 1, 2,..., m (1) x j 0 j = 1, 2,..., n, then the dual problem (D) in the canonical
More informationMore First-Order Optimization Algorithms
More First-Order Optimization Algorithms Yinyu Ye Department of Management Science and Engineering Stanford University Stanford, CA 94305, U.S.A. http://www.stanford.edu/ yyye Chapters 3, 8, 3 The SDM
More informationא K ٢٠٠٢ א א א א א א E٤
المراجع References المراجع العربية K ١٩٩٠ א א א א א E١ K ١٩٩٨ א א א E٢ א א א א א E٣ א K ٢٠٠٢ א א א א א א E٤ K ٢٠٠١ K ١٩٨٠ א א א א א E٥ المراجع الا جنبية [AW] [AF] [Alh] [Ali1] [Ali2] S. Al-Homidan and
More informationLecture 5. The Dual Cone and Dual Problem
IE 8534 1 Lecture 5. The Dual Cone and Dual Problem IE 8534 2 For a convex cone K, its dual cone is defined as K = {y x, y 0, x K}. The inner-product can be replaced by x T y if the coordinates of the
More informationExample Bases and Basic Feasible Solutions 63 Let q = >: ; > and M = >: ;2 > and consider the LCP (q M). The class of ; ;2 complementary cones
Chapter 2 THE COMPLEMENTARY PIVOT ALGORITHM AND ITS EXTENSION TO FIXED POINT COMPUTING LCPs of order 2 can be solved by drawing all the complementary cones in the q q 2 - plane as discussed in Chapter.
More informationLecture 15 Newton Method and Self-Concordance. October 23, 2008
Newton Method and Self-Concordance October 23, 2008 Outline Lecture 15 Self-concordance Notion Self-concordant Functions Operations Preserving Self-concordance Properties of Self-concordant Functions Implications
More information3. THE SIMPLEX ALGORITHM
Optimization. THE SIMPLEX ALGORITHM DPK Easter Term. Introduction We know that, if a linear programming problem has a finite optimal solution, it has an optimal solution at a basic feasible solution (b.f.s.).
More informationMarch 18,1996 DUALITY AND SELF-DUALITY FOR CONIC CONVEX PROGRAMMING 1 Zhi-Quan Luo 2, Jos F. Sturm 3 and Shuzhong Zhang 3 ABSTRACT This paper consider
March 8,99 DUALITY AND SELF-DUALITY FOR CONIC CONVEX PROGRAMMING Zhi-Quan Luo, Jos F. Sturm and Shuzhong Zhang ABSTRACT This paper considers the problem of minimizing a linear function over the intersection
More informationy Ray of Half-line or ray through in the direction of y
Chapter LINEAR COMPLEMENTARITY PROBLEM, ITS GEOMETRY, AND APPLICATIONS. THE LINEAR COMPLEMENTARITY PROBLEM AND ITS GEOMETRY The Linear Complementarity Problem (abbreviated as LCP) is a general problem
More informationNonmonotonic back-tracking trust region interior point algorithm for linear constrained optimization
Journal of Computational and Applied Mathematics 155 (2003) 285 305 www.elsevier.com/locate/cam Nonmonotonic bac-tracing trust region interior point algorithm for linear constrained optimization Detong
More informationApproximation Algorithms for Maximum. Coverage and Max Cut with Given Sizes of. Parts? A. A. Ageev and M. I. Sviridenko
Approximation Algorithms for Maximum Coverage and Max Cut with Given Sizes of Parts? A. A. Ageev and M. I. Sviridenko Sobolev Institute of Mathematics pr. Koptyuga 4, 630090, Novosibirsk, Russia fageev,svirg@math.nsc.ru
More informationOptimization: Interior-Point Methods and. January,1995 USA. and Cooperative Research Centre for Robust and Adaptive Systems.
Innite Dimensional Quadratic Optimization: Interior-Point Methods and Control Applications January,995 Leonid Faybusovich John B. Moore y Department of Mathematics University of Notre Dame Mail Distribution
More informationLecture: Introduction to LP, SDP and SOCP
Lecture: Introduction to LP, SDP and SOCP Zaiwen Wen Beijing International Center For Mathematical Research Peking University http://bicmr.pku.edu.cn/~wenzw/bigdata2015.html wenzw@pku.edu.cn Acknowledgement:
More informationA Sublinear Parallel Algorithm for Stable Matching. network. This result is then applied to the stable. matching problem in Section 6.
A Sublinear Parallel Algorithm for Stable Matching Tomas Feder Nimrod Megiddo y Serge A. Plotkin z Parallel algorithms for various versions of the stable matching problem are presented. The algorithms
More informationDeterminant maximization with linear. S. Boyd, L. Vandenberghe, S.-P. Wu. Information Systems Laboratory. Stanford University
Determinant maximization with linear matrix inequality constraints S. Boyd, L. Vandenberghe, S.-P. Wu Information Systems Laboratory Stanford University SCCM Seminar 5 February 1996 1 MAXDET problem denition
More informationA priori bounds on the condition numbers in interior-point methods
A priori bounds on the condition numbers in interior-point methods Florian Jarre, Mathematisches Institut, Heinrich-Heine Universität Düsseldorf, Germany. Abstract Interior-point methods are known to be
More informationOn implementing a primal-dual interior-point method for conic quadratic optimization
On implementing a primal-dual interior-point method for conic quadratic optimization E. D. Andersen, C. Roos, and T. Terlaky December 18, 2000 Abstract Conic quadratic optimization is the problem of minimizing
More informationInterior Point Algorithm for Linear Programming Problem and Related inscribed Ellipsoids Applications.
International Journal of Computational Science and Mathematics. ISSN 0974-3189 Volume 4, Number 2 (2012), pp. 91-102 International Research Publication House http://www.irphouse.com Interior Point Algorithm
More informationSVM May 2007 DOE-PI Dianne P. O Leary c 2007
SVM May 2007 DOE-PI Dianne P. O Leary c 2007 1 Speeding the Training of Support Vector Machines and Solution of Quadratic Programs Dianne P. O Leary Computer Science Dept. and Institute for Advanced Computer
More informationSOME GENERALIZATIONS OF THE CRISS-CROSS METHOD. Emil Klafszky Tamas Terlaky 1. Mathematical Institut, Dept. of Op. Res.
SOME GENERALIZATIONS OF THE CRISS-CROSS METHOD FOR QUADRATIC PROGRAMMING Emil Klafszky Tamas Terlaky 1 Mathematical Institut, Dept. of Op. Res. Technical University, Miskolc Eotvos University Miskolc-Egyetemvaros
More informationA PRIMAL-DUAL INTERIOR POINT ALGORITHM FOR CONVEX QUADRATIC PROGRAMS. 1. Introduction Consider the quadratic program (PQ) in standard format:
STUDIA UNIV. BABEŞ BOLYAI, INFORMATICA, Volume LVII, Number 1, 01 A PRIMAL-DUAL INTERIOR POINT ALGORITHM FOR CONVEX QUADRATIC PROGRAMS MOHAMED ACHACHE AND MOUFIDA GOUTALI Abstract. In this paper, we propose
More informationA QUADRATIC CONE RELAXATION-BASED ALGORITHM FOR LINEAR PROGRAMMING
A QUADRATIC CONE RELAXATION-BASED ALGORITHM FOR LINEAR PROGRAMMING A Dissertation Presented to the Faculty of the Graduate School of Cornell University in Partial Fulfillment of the Requirements for the
More informationA Review of Linear Programming
A Review of Linear Programming Instructor: Farid Alizadeh IEOR 4600y Spring 2001 February 14, 2001 1 Overview In this note we review the basic properties of linear programming including the primal simplex
More informationA Potential Reduction Method for Harmonically Convex Programming 1
A Potential Reduction Method for Harmonically Convex Programming 1 J. F. Sturm 2 and S. Zhang 3 Communicated by Z.Q. Luo 1. The authors like to thank Dr. Hans Nieuwenhuis for carefully reading this paper
More informationA Bound for the Number of Different Basic Solutions Generated by the Simplex Method
ICOTA8, SHANGHAI, CHINA A Bound for the Number of Different Basic Solutions Generated by the Simplex Method Tomonari Kitahara and Shinji Mizuno Tokyo Institute of Technology December 12th, 2010 Contents
More informationSolving Obstacle Problems by Using a New Interior Point Algorithm. Abstract
Solving Obstacle Problems by Using a New Interior Point Algorithm Yi-Chih Hsieh Department of Industrial Engineering National Yunlin Polytechnic Institute Huwei, Yunlin 6308 Taiwan and Dennis L. Bricer
More informationInterior Point Methods for LP
11.1 Interior Point Methods for LP Katta G. Murty, IOE 510, LP, U. Of Michigan, Ann Arbor, Winter 1997. Simplex Method - A Boundary Method: Starting at an extreme point of the feasible set, the simplex
More informationLecture 6: Conic Optimization September 8
IE 598: Big Data Optimization Fall 2016 Lecture 6: Conic Optimization September 8 Lecturer: Niao He Scriber: Juan Xu Overview In this lecture, we finish up our previous discussion on optimality conditions
More information