A Generalized Homogeneous and Self-Dual Algorithm for Linear Programming. February 1994 (revised December 1994)


A Generalized Homogeneous and Self-Dual Algorithm for Linear Programming

Xiaojie Xu* and Yinyu Ye†

February 1994 (revised December 1994)

Abstract: A generalized homogeneous and self-dual (HSD) infeasible-interior-point algorithm for linear programming (LP) is proposed in this paper. The algorithm does not need to start from a big-M initial point, while achieving $O(\frac{\sqrt{n}}{\beta(1-\beta)}L)$-iteration complexity by following a certain central path on a central surface in a neighborhood $\mathcal{N}(\beta)$, where $\beta$ can be any number between 0 and 1, $n$ is the number of variables, and $L$ is the data length of the LP problem. In particular, an algorithm is developed where the searching direction is obtained by solving a Newton equation system without infeasible residual terms on its right-hand side.

Key words: Linear programming, homogeneous and self-dual linear feasibility model, interior-point algorithm

* Institute of Systems Science, Academia Sinica, Beijing 100080, China, and currently visiting the Department of Management Sciences, The University of Iowa, Iowa City, Iowa 52242, USA. Research supported in part by NSF Grant DDM.
† Department of Management Sciences, The University of Iowa, Iowa City, Iowa 52242, USA. Research supported in part by NSF Grant DDM.
1 Introduction

Consider a linear programming (LP) problem in the standard form:

(LP) minimize $c^T x$ subject to $Ax = b$, $x \ge 0$,

where $c \in R^n$, $A \in R^{m \times n}$, and $b \in R^m$ are given, $x \in R^n$, and $T$ denotes transpose. (LP) is said to be feasible if and only if its constraints are consistent; it is called unbounded if there is a sequence $\{x^k\}$ such that $x^k$ is feasible for all $k$ but $c^T x^k \to -\infty$. (LP) has a solution if and only if it is feasible and bounded. The dual problem of (LP) can be written as

(LD) maximize $b^T y$ subject to $A^T y \le c$,

where $y \in R^m$. We call $z = c - A^T y \in R^n$ the dual slacks. Denote by $\mathcal{F}$ the set of all $x$ and $(y, z)$ that are feasible for the primal and dual, respectively, and by $\mathcal{F}^0$ the set of points in $\mathcal{F}$ with $(x, z) > 0$. Assuming that the LP problem has a feasible interior point, Megiddo [8] and Bayer and Lagarias [1] defined the central path for a feasible LP problem as

$\mathcal{C}(\mu) = \{(y, x, z) \in \mathcal{F}^0 : Xz = \mu e,\ \mu = x^T z / n\},$

where $X = \mathrm{diag}(x)$. As $\mu \to 0$, this path goes to a strictly complementary solution of LP. Based on following the central path, Kojima et al. [4] developed a primal-dual interior-point algorithm in which the searching direction is generated by solving the following Newton equation system in iteration $k$:

$A d_x = 0,$
$-A^T d_y - d_z = 0,$
$Z^k d_x + X^k d_z = \gamma\mu^k e - X^k z^k,$

where $\mu^k = (x^k)^T z^k / n$, $X^k = \mathrm{diag}(x^k)$, $Z^k = \mathrm{diag}(z^k)$, and $\gamma$ is a scalar parameter. Kojima et al. [4] proved that their algorithm is $O(nL)$-iteration bounded, where $L$ is the data length of (LP) with integer numbers. Later, Kojima et al. [5] and Monteiro and Adler [12] gave an $O(\sqrt{n}L)$-iteration bound for such a primal-dual interior-point algorithm by restricting all iterates to a 2-norm neighborhood of the central path, i.e.,

$(y^k, x^k, z^k) \in \mathcal{N}(\beta) = \{(y, x, z) \in \mathcal{F}^0 : \|Xz - \mu e\| \le \beta\mu\}$

for some $\beta \in (0, 1/2]$. (Typically $\beta = 1/4$, or, for a predictor-corrector algorithm, $\beta = 1/2$ in the predictor step.) Throughout the paper, $\|\cdot\|$ represents the 2-norm.
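For concreteness, the feasible-case Newton system above can be assembled and solved directly as one block linear system. The following is a minimal sketch; the data $A$, $x$, $z$ and the parameter $\gamma$ are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Sketch: the Kojima et al. search direction solves
#   A dx = 0,   -A^T dy - dz = 0,   Z dx + X dz = gamma*mu*e - Xz.
def newton_direction(A, x, z, gamma):
    m, n = A.shape
    mu = x @ z / n
    # Unknowns ordered (dx, dy, dz); one block row per equation group.
    K = np.block([
        [A,                np.zeros((m, m)), np.zeros((m, n))],
        [np.zeros((n, n)), -A.T,             -np.eye(n)      ],
        [np.diag(z),       np.zeros((n, m)), np.diag(x)      ],
    ])
    rhs = np.concatenate([np.zeros(m), np.zeros(n),
                          gamma * mu * np.ones(n) - x * z])
    d = np.linalg.solve(K, rhs)
    return d[:n], d[n:n+m], d[n+m:]   # dx, dy, dz

A = np.array([[1.0, 1.0, 1.0]])       # m = 1, n = 3, full row rank
x = np.array([0.4, 0.3, 0.3])         # current primal interior iterate
z = np.array([0.5, 0.6, 0.7])         # current dual slacks
dx, dy, dz = newton_direction(A, x, z, gamma=0.25)
print(np.allclose(A @ dx, 0))         # the step keeps Ax = b -> True
```

The system is nonsingular whenever $x, z > 0$ and $A$ has full row rank, which is why the feasible algorithm can take this step at every interior iterate.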
Unless an LP problem has a feasible interior point and such a point is given, an interior-point algorithm has to start from an infeasible point, or from an interior point feasible for an artificial problem. In theory,
a big-M interior point suffices for establishing a complexity result. However, such a big-M approach is not practical at all. Furthermore, a robust algorithm has to be able to correct possible errors accumulated from computations, even when starting from a feasible interior point. Algorithms that are allowed to start from a non-big-M initial point in both theory and practice are called infeasible interior-point algorithms, and they are reported to perform very well in practice (see [6], [7], [9], [10], [14], [16], and [19]). Unlike for feasible algorithms (in which a feasible interior point is given as the initial point), the best-to-date $O(\sqrt{n}L)$-iteration complexity for infeasible interior-point algorithms was not established until Ye et al. [18] proposed a homogeneous and self-dual (HSD) algorithm.

Recently, Mizuno et al. [11] studied the trajectories followed by many primal-dual infeasible interior-point algorithms. For given $(y^0, x^0 > 0, z^0 > 0)$, they defined the two-dimensional central surface $\{Q(\mu, \nu) : \mu \ge 0, \nu \ge 0\}$ with

$Q(\mu, \nu) = \{(y, x > 0, z > 0) : Xz = \mu e,\ (r_P; r_D) = \nu\,(r_P^0; r_D^0)\},$

where $r_P^0 = b - Ax^0$ and $r_D^0 = c - A^T y^0 - z^0$; $r_P = b - Ax$ and $r_D = c - A^T y - z$ are the primal and dual residuals, respectively. If the LP problem possesses a solution, many primal-dual infeasible interior-point algorithms (e.g., Kojima et al. [6], Lustig et al. [7], Mehrotra [9]) follow some paths on this central surface and approach optimality and feasibility simultaneously:

for $t \to 0$: $\mu(t) \to 0$, $\nu(t) \to 0$.

Mizuno et al. [11] also discussed in detail the boundary behavior of the central surface for primal-dual type infeasible interior-point algorithms.

Very recently, Xu et al. [17] proposed a simplified version of the HSD algorithm of Ye et al. [18]. The algorithm deals with a homogeneous and self-dual linear feasibility model:

(HLF)
$Ax - b\tau = 0,$
$-A^T y + c\tau \ge 0,$   (1)
$b^T y - c^T x \ge 0,$
$y$ free, $x \ge 0$, $\tau \ge 0$.

Denote by $z$ the slack vector for the second (inequality) constraint and by $\kappa$ the slack scalar for the third (inequality) constraint.
Then, the problem is to find a strictly complementary point such that $x^T z = 0$ and $\tau\kappa = 0$.
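The self-duality of (HLF) comes from the skew-symmetry of its coefficient matrix. A small numerical illustration follows; the data $A$, $b$, $c$ and the trial point are assumptions for demonstration only.

```python
import numpy as np

# (HLF) in matrix form: with w = (y, x, tau) and
#     [ 0    A   -b ]
# M = [-A^T  0    c ],  the constraints read  (Mw)_1 = 0,
#     [ b^T -c^T  0 ]   (Mw)_2 = z >= 0,  (Mw)_3 = kappa >= 0.
A = np.array([[1.0, 2.0]]); b = np.array([3.0]); c = np.array([1.0, 1.0])
m, n = A.shape
M = np.block([
    [np.zeros((m, m)), A,                -b.reshape(m, 1)],
    [-A.T,             np.zeros((n, n)),  c.reshape(n, 1)],
    [b.reshape(1, m), -c.reshape(1, n),   np.zeros((1, 1))],
])
print(np.allclose(M, -M.T))    # skew-symmetric, hence self-dual -> True

# Skew-symmetry forces w^T M w = 0 for every w, so for feasible points
# x^T z + tau*kappa collapses infeasibility and duality gap into a single
# complementarity measure that the algorithm drives to zero.
w = np.array([0.5, 1.0, 2.0, 1.5])   # an arbitrary (y, x, tau)
print(abs(w @ M @ w) < 1e-12)        # -> True
```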
The $k$th iteration of the HSD algorithm solves the following system of linear equations for the direction $(d_y, d_x, d_\tau, d_z, d_\kappa)$:

$A d_x - b d_\tau = \eta r_P^k,$
$-A^T d_y + c d_\tau - d_z = -\eta r_D^k,$   (2)
$b^T d_y - c^T d_x - d_\kappa = \eta r_G^k,$

$X^k d_z + Z^k d_x = \gamma\mu^k e - X^k z^k,$   (3)
$\tau^k d_\kappa + \kappa^k d_\tau = \gamma\mu^k - \tau^k\kappa^k,$

where $\gamma$ and $\eta$ are scalar parameters, and

$\mu^k = ((x^k)^T z^k + \tau^k\kappa^k)/(n+1),$   (4)

$r_P^k = b\tau^k - Ax^k,\quad r_D^k = c\tau^k - A^T y^k - z^k,\quad r_G^k = c^T x^k - b^T y^k + \kappa^k.$   (5)

Xu et al. [17] showed that if we set $\eta = 1 - \gamma$ in each iteration, then the algorithm becomes the HSD algorithm of Ye et al. [18], which follows a path $\{Q(\mu^0 t, \nu^0 t) : 0 \le t \le 1\}$ on the central surface; more precisely, the iterates keep the fixed ratio $\mu^k/\nu^k = \mu^0/\nu^0$. The limit points of these paths are strictly complementary points for (HLF), according to Mizuno et al. [11]. If one sets $\eta < 1-\gamma$ or $\eta > 1-\gamma$ at each iteration, then the algorithm generates iterates converging to the all-zero solution or diverging, respectively.

In this paper, by introducing a simple update, we generalize the HSD algorithm of Xu et al. [17] so that a strictly complementary solution is obtained even when $\eta \ne 1-\gamma$. In Section 3, we prove that the generalized algorithm achieves $O(\frac{\sqrt{n}}{\beta(1-\beta)}L)$-iteration complexity by following a certain central path on the central surface in a neighborhood $\mathcal{N}(\beta)$, where $\beta$ can be any number in $(0, 1)$. By setting $\eta = 0$, we get an interesting algorithm in which the searching direction is obtained by solving a Newton equation system without infeasible residual terms on its right-hand side, as first proposed by de Ghellinck and Vial [2] and later by Nesterov [13]. This approach obviously saves the computation of these residual terms.

2 Generalized HSD algorithms

Generic HSD algorithm

Given an initial point $y^0$, $x^0 > 0$, $\tau^0 > 0$, $z^0 > 0$, $\kappa^0 > 0$; $k \leftarrow 0$.

While ( stopping criteria not satisfied ) do

1. Let $r_P^k = b\tau^k - Ax^k$, $r_D^k = c\tau^k - A^T y^k - z^k$, $r_G^k = c^T x^k - b^T y^k + \kappa^k$.
2. Solve (2) and (3) for $(d_y, d_x, d_\tau, d_z, d_\kappa)$.

3. Let

$\bar{x} = d_x + (1-\gamma-\eta)x^k,\quad \bar{y} = d_y + (1-\gamma-\eta)y^k,\quad \bar{z} = d_z + (1-\gamma-\eta)z^k,$   (6)
$\bar{\tau} = d_\tau + (1-\gamma-\eta)\tau^k,\quad \bar{\kappa} = d_\kappa + (1-\gamma-\eta)\kappa^k.$

4. Choose a step size $\alpha^k > 0$ and update

$x^{k+1} = x^k + \alpha^k\bar{x} > 0,\quad y^{k+1} = y^k + \alpha^k\bar{y},\quad z^{k+1} = z^k + \alpha^k\bar{z} > 0,$   (7)
$\tau^{k+1} = \tau^k + \alpha^k\bar{\tau} > 0,\quad \kappa^{k+1} = \kappa^k + \alpha^k\bar{\kappa} > 0.$

5. $k \leftarrow k + 1$.

Note that we have

$X^k\bar{z} + Z^k\bar{x} = X^k d_z + Z^k d_x + (1-\gamma-\eta)(X^k z^k + Z^k x^k) = \gamma\mu^k e - X^k z^k + 2(1-\gamma-\eta)X^k z^k = \gamma\mu^k e - (2\gamma+2\eta-1)X^k z^k,$   (8)
$\tau^k\bar{\kappa} + \kappa^k\bar{\tau} = \gamma\mu^k - (2\gamma+2\eta-1)\tau^k\kappa^k.$

Similar to the proofs in Xu et al. [17], we first establish the following lemmas.

Lemma 1. The direction resulting from (6) satisfies

$(\bar{x})^T\bar{z} + \bar{\tau}\bar{\kappa} = 0.$   (9)

Proof. Xu et al. [17] established the following result for the solution of system (2) and (3):

$d_x^T d_z + d_\tau d_\kappa = \eta(1-\gamma-\eta)(n+1)\mu^k.$

Thus

$(\bar{x})^T\bar{z} + \bar{\tau}\bar{\kappa} = d_x^T d_z + d_\tau d_\kappa + (1-\gamma-\eta)\left((x^k)^T d_z + (z^k)^T d_x + \tau^k d_\kappa + \kappa^k d_\tau\right) + (1-\gamma-\eta)^2\left((x^k)^T z^k + \tau^k\kappa^k\right)$
$= [\eta(1-\gamma-\eta) + (1-\gamma-\eta)(\gamma-1) + (1-\gamma-\eta)^2](n+1)\mu^k = 0.$ Q.E.D.
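The generic iteration can be sketched numerically: system (2)-(3) is assembled as one block linear system, and the update follows (6)-(7) with the coefficient $1-\gamma-\eta$. The sketch below assumes the reconstruction of the equations given above; the LP data, iterate, and parameter values are illustrative, not from the paper.

```python
import numpy as np

# One iteration of the generic HSD algorithm (a sketch, assuming the
# reconstructed system (2)-(3) and update (6)-(7) above).
def hsd_step(A, b, c, y, x, tau, z, kappa, gamma, eta, alpha):
    m, n = A.shape
    mu = (x @ z + tau * kappa) / (n + 1)
    rP = b * tau - A @ x
    rD = c * tau - A.T @ y - z
    rG = c @ x - b @ y + kappa
    # Assemble (2)-(3) in the unknowns (dy, dx, dtau, dz, dkappa).
    N = m + 2 * n + 2
    K, r = np.zeros((N, N)), np.zeros(N)
    K[:m, m:m+n] = A;     K[:m, m+n] = -b;              r[:m] = eta * rP
    K[m:m+n, :m] = -A.T;  K[m:m+n, m+n] = c
    K[m:m+n, m+n+1:m+2*n+1] = -np.eye(n);               r[m:m+n] = -eta * rD
    K[m+n, :m] = b; K[m+n, m:m+n] = -c; K[m+n, -1] = -1.0; r[m+n] = eta * rG
    K[m+n+1:m+2*n+1, m:m+n] = np.diag(z)        # Z dx + X dz = gamma*mu*e - Xz
    K[m+n+1:m+2*n+1, m+n+1:m+2*n+1] = np.diag(x)
    r[m+n+1:m+2*n+1] = gamma * mu - x * z
    K[-1, m+n] = kappa; K[-1, -1] = tau;        r[-1] = gamma * mu - tau * kappa
    d = np.linalg.solve(K, r)
    dy, dx, dtau = d[:m], d[m:m+n], d[m+n]
    dz, dkappa = d[m+n+1:m+2*n+1], d[-1]
    delta = 1.0 - gamma - eta                   # coefficient in update (6)
    yb, xb, taub = dy + delta * y, dx + delta * x, dtau + delta * tau
    zb, kb = dz + delta * z, dkappa + delta * kappa
    return (y + alpha * yb, x + alpha * xb, tau + alpha * taub,
            z + alpha * zb, kappa + alpha * kb)

# Illustrative data: n = 2 variables, m = 1 constraint.
A = np.array([[1.0, 2.0]]); b = np.array([3.0]); c = np.array([1.0, 1.0])
y = np.zeros(1); x = np.ones(2); z = np.ones(2); tau = kappa = 1.0
gamma, eta, alpha = 0.5, 0.6, 0.1      # gamma + 2*eta - 1 = 0.7 > 0, as in (12)
y1, x1, tau1, z1, kappa1 = hsd_step(A, b, c, y, x, tau, z, kappa, gamma, eta, alpha)

# Lemma 2 predicts mu (and every residual) shrinks by 1 + alpha*(1-gamma-2*eta):
factor = 1 + alpha * (1 - gamma - 2 * eta)
mu0 = (x @ z + tau * kappa) / 3
mu1 = (x1 @ z1 + tau1 * kappa1) / 3
print(abs(mu1 - factor * mu0) < 1e-9)
```

Checking the computed $\mu^{k+1}$ against the factor $1 + \alpha^k(1-\gamma-2\eta)$ is a useful consistency test of any implementation, since it holds exactly (up to roundoff) by Lemmas 1 and 2.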
Lemma 2. The generic algorithm generates $\{\mu^k\}$ and $\{\nu^k\}$ satisfying

$\mu^0 = [(x^0)^T z^0 + \tau^0\kappa^0]/(n+1),\quad \mu^{k+1} = (1 + \alpha^k(1-\gamma-2\eta))\mu^k$   (10)

and

$\nu^0 = 1,\quad \nu^{k+1} = (1 + \alpha^k(1-\gamma-2\eta))\nu^k$   (11)

such that $r_P^k = \nu^k r_P^0$, $r_D^k = \nu^k r_D^0$, $r_G^k = \nu^k r_G^0$.

Proof. By (9) and (8), we have

$\mu^{k+1} = [(x^{k+1})^T z^{k+1} + \tau^{k+1}\kappa^{k+1}]/(n+1)$
$= \{[(x^k)^T z^k + \tau^k\kappa^k] + \alpha^k[(x^k)^T\bar{z} + (\bar{x})^T z^k + \tau^k\bar{\kappa} + \bar{\tau}\kappa^k]\}/(n+1)$
$= [1 + \alpha^k(\gamma - 2\gamma - 2\eta + 1)]\mu^k = [1 + \alpha^k(1-\gamma-2\eta)]\mu^k.$

By (2), (6) and (7), we also have

$r_P^{k+1} = (\tau^k + \alpha^k\bar{\tau})b - A(x^k + \alpha^k\bar{x}) = r_P^k + \alpha^k(\bar{\tau}b - A\bar{x}) = [1 + \alpha^k(1-\gamma-2\eta)]r_P^k.$

Similarly, we have this relation for $r_D^{k+1}$ and $r_G^{k+1}$ as well. Q.E.D.

From Lemma 2, for any choice of $\gamma$ and $\eta$, our algorithm ensures that $\mu^k$ and $\nu^k$ keep a nice fixed ratio

$\mu^k/\nu^k = \mu^0/\nu^0.$

The nonnegativity of $(x^{k+1}, \tau^{k+1}, z^{k+1}, \kappa^{k+1})$ results in $\nu^{k+1} \ge 0$, which implies that the step size $\alpha^k$ must satisfy

$1 + \alpha^k(1-\gamma-2\eta) \ge 0.$

Therefore, letting

$\gamma + 2\eta - 1 > 0,$   (12)

yields

$0 \le 1 + \alpha^k(1-\gamma-2\eta) < 1.$

According to Mizuno et al. [11], we have the following corollary.
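Lemma 2 reduces the evolution of $\mu^k$ and $\nu^k$ to two identical scalar recursions, so the ratio $\mu^k/\nu^k$ stays fixed whenever condition (12) holds. A quick scalar check (the parameter values are illustrative):

```python
# mu and nu share the factor 1 + alpha*(1 - gamma - 2*eta) from (10)-(11).
gamma, eta, alpha = 0.5, 0.6, 0.2   # gamma + 2*eta - 1 = 0.7 > 0, as in (12)
mu, nu = 2.0, 1.0                   # mu0 from some initial point; nu0 = 1
for _ in range(50):
    factor = 1 + alpha * (1 - gamma - 2 * eta)
    assert 0 <= factor < 1          # guaranteed by (12) and the step bound
    mu, nu = factor * mu, factor * nu
print(abs(mu / nu - 2.0) < 1e-9)    # ratio mu_k/nu_k = mu0/nu0 preserved -> True
```

Because optimality ($\mu \to 0$) and feasibility ($\nu \to 0$) decay by the same factor, the iterates approach both at exactly the same rate.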
Corollary 3. If the generic algorithm generates $\{(y^k, x^k, \tau^k, z^k, \kappa^k)\}$ satisfying $\mu^k \to 0$ and

$\min[\min_i(x_i^k z_i^k),\ \tau^k\kappa^k] \ge \omega\mu^k$

for a certain $\omega > 0$, then every limit point of the sequence is a strictly complementary solution of (HLF).

3 $O(\frac{\sqrt{n}}{\beta(1-\beta)}L)$-iteration HSD algorithms

For (HLF) the two-dimensional central surface and its neighborhood are defined as

$Q(\mu, \nu) = \{(y, x>0, \tau>0, z>0, \kappa>0) : (Xz; \tau\kappa) = \mu e,\ (r_P; r_D; r_G) = \nu\,(r_P^0; r_D^0; r_G^0)\},$

$\mathcal{N}(\beta) = \{(y, x>0, \tau>0, z>0, \kappa>0) : \|(Xz; \tau\kappa) - \mu e\| \le \beta\mu,\ (r_P; r_D; r_G) = \nu\,(r_P^0; r_D^0; r_G^0)\}$

for some $\beta \in (0, 1)$, respectively.

Theorem 4. For a given $0 < \beta < 1$ and $(y^k, x^k, \tau^k, z^k, \kappa^k) \in \mathcal{N}(\beta)$, if

$\alpha^k \le \min\left\{\frac{1}{2\gamma+2\eta-1},\ \frac{2\gamma(1-\beta)}{\beta\,[(\gamma+2\eta-1)\sqrt{n+1}/\beta + (2\gamma+2\eta-1)]^2}\right\},$   (13)

then

$\|(X^{k+1}z^{k+1}; \tau^{k+1}\kappa^{k+1}) - \mu^{k+1}e\| \le \beta\mu^{k+1}.$

Proof. To simplify the notation, we use $\bar{x}$ and $\bar{z}$ to represent $(\bar{x}; \bar{\tau})$ and $(\bar{z}; \bar{\kappa})$; therefore $\|e\| = \sqrt{n+1}$. Note that this notation is only employed in the proof of this theorem. As usual, a capital letter denotes the diagonal matrix of a vector; thus $\bar{X} = \mathrm{diag}(\bar{x})$, $\bar{Z} = \mathrm{diag}(\bar{z})$. Consider

$\|X^{k+1}z^{k+1} - \mu^{k+1}e\| = \|X^k z^k + \alpha^k(X^k\bar{z} + Z^k\bar{x}) + (\alpha^k)^2\bar{X}\bar{z} - [1 + \alpha^k(1-\gamma-2\eta)]\mu^k e\|$
$\le \|X^k z^k + \alpha^k(X^k\bar{z} + Z^k\bar{x}) - [1 + \alpha^k(1-\gamma-2\eta)]\mu^k e\| + (\alpha^k)^2\|\bar{X}\bar{z}\|$
$= |1 - \alpha^k(2\gamma+2\eta-1)|\,\|X^k z^k - \mu^k e\| + (\alpha^k)^2\|\bar{X}\bar{z}\|.$
Using $(\bar{x})^T\bar{z} = 0$, we have

$\|\bar{X}\bar{z}\| = \|[(X^k)^{-1}Z^k]^{1/2}\bar{x} \circ [X^k(Z^k)^{-1}]^{1/2}\bar{z}\|$
$\le \frac{1}{2}\,\|[(X^k)^{-1}Z^k]^{1/2}\bar{x} + [X^k(Z^k)^{-1}]^{1/2}\bar{z}\|^2$
$= \frac{1}{2}\,\|(X^k Z^k)^{-1/2}(Z^k\bar{x} + X^k\bar{z})\|^2$
$\le \frac{1}{2\min_i x_i^k z_i^k}\,\|Z^k\bar{x} + X^k\bar{z}\|^2$
$= \frac{1}{2\min_i x_i^k z_i^k}\,\|\gamma\mu^k e - (2\gamma+2\eta-1)X^k z^k\|^2$
$\le \frac{1}{2\min_i x_i^k z_i^k}\,[\|(\gamma+2\eta-1)\mu^k e\| + \|(2\gamma+2\eta-1)(X^k z^k - \mu^k e)\|]^2$
$\le \frac{(\mu^k)^2}{2\min_i x_i^k z_i^k}\,[(\gamma+2\eta-1)\sqrt{n+1} + (2\gamma+2\eta-1)\beta]^2.$

By $\min_i x_i^k z_i^k \ge (1-\beta)\mu^k$, we have

$\|X^{k+1}z^{k+1} - \mu^{k+1}e\| \le |1-\alpha^k(2\gamma+2\eta-1)|\,\beta\mu^k + (\alpha^k)^2\,\frac{(\mu^k)[(\gamma+2\eta-1)\sqrt{n+1} + (2\gamma+2\eta-1)\beta]^2}{2(1-\beta)}$
$= \left\{|1-\alpha^k(2\gamma+2\eta-1)| + \frac{(\alpha^k)^2\beta}{2(1-\beta)}[(\gamma+2\eta-1)\sqrt{n+1}/\beta + (2\gamma+2\eta-1)]^2\right\}\beta\mu^k.$

Again, Lemma 2 tells us

$\mu^{k+1} = [1 - \alpha^k(\gamma+2\eta-1)]\mu^k.$   (14)

Therefore, $\|X^{k+1}z^{k+1} - \mu^{k+1}e\| \le \beta\mu^{k+1}$ if

$|1-\alpha^k(2\gamma+2\eta-1)| + \frac{(\alpha^k)^2\beta}{2(1-\beta)}[(\gamma+2\eta-1)\sqrt{n+1}/\beta + (2\gamma+2\eta-1)]^2 \le 1 - \alpha^k(\gamma+2\eta-1),$

or

$\frac{(\alpha^k)^2\beta}{2(1-\beta)}[(\gamma+2\eta-1)\sqrt{n+1}/\beta + (2\gamma+2\eta-1)]^2 \le [1 - \alpha^k(\gamma+2\eta-1)] - |1 - \alpha^k(2\gamma+2\eta-1)|.$

If we further assume

$1 - \alpha^k(2\gamma+2\eta-1) \ge 0,$

then it becomes

$\alpha^k \le \frac{2\gamma(1-\beta)}{\beta\,[(\gamma+2\eta-1)\sqrt{n+1}/\beta + (2\gamma+2\eta-1)]^2}.$

Thus we have proved the theorem. Q.E.D.

Using a simple continuity argument ([11]), we see from Theorem 4 that, as long as the step size $\alpha^k$ satisfies (13), the resulting point is still in the neighborhood of the central path: $(y^{k+1}, x^{k+1}, \tau^{k+1}, z^{k+1}, \kappa^{k+1}) \in \mathcal{N}(\beta)$.
Let us now consider the following optimization problem for a given $0 < \beta < 1$:

minimize $\mu^{k+1}/\mu^k$
subject to $\alpha^k, \gamma, \eta$ satisfying (12), (13).   (15)

Setting the step size according to (13), we obtain

$\mu^{k+1}/\mu^k = 1 - \alpha^k(\gamma+2\eta-1) = 1 - \min\left\{\frac{1}{2\gamma+2\eta-1},\ \frac{2\gamma(1-\beta)}{\beta[(\gamma+2\eta-1)\sqrt{n+1}/\beta + (2\gamma+2\eta-1)]^2}\right\}(\gamma+2\eta-1).$

Letting $\omega = \gamma+2\eta-1$, we can rewrite problem (15) as ($0 < \beta < 1$)

minimize $1 - \min\left\{\frac{\omega}{\gamma+\omega},\ \frac{2\gamma(1-\beta)\,\omega}{\beta[\omega\sqrt{n+1}/\beta + \gamma + \omega]^2}\right\}$
subject to $\omega > 0$, $\gamma > 0$.

By setting $\bar\omega = \omega/\gamma$, problem (15) further becomes

minimize $1 - \min\left\{\frac{\bar\omega}{1+\bar\omega},\ \frac{2(1-\beta)\,\bar\omega}{\beta[(\sqrt{n+1}/\beta+1)\bar\omega + 1]^2}\right\}$
subject to $\bar\omega > 0$.   (16)

It is easy to verify that the problem

minimize $1 - \frac{2(1-\beta)\,\bar\omega}{\beta[(\sqrt{n+1}/\beta+1)\bar\omega + 1]^2}$

has the optimal value $1 - \frac{1-\beta}{2(\sqrt{n+1}+\beta)}$ with the optimizer

$\bar\omega^* = \frac{1}{\sqrt{n+1}/\beta + 1} = \frac{\beta}{\sqrt{n+1}+\beta}.$

This implies that $\gamma$ and $\eta$ satisfy

$\gamma = (\sqrt{n+1}/\beta + 1)(\gamma + 2\eta - 1).$   (17)

Therefore, the optimal solution of (16) is clearly bounded by

$\mu^{k+1}/\mu^k \le \max\left\{1 - \frac{\beta}{\sqrt{n+1}+2\beta},\ 1 - \frac{1-\beta}{2(\sqrt{n+1}+\beta)}\right\}.$   (18)

The above analysis points out that, for a given $0 < \beta < 1$, using (13) the best reduction rate for $\mu$ that the algorithm can achieve is $1 - O(\frac{1}{\sqrt{n}})$. This results in the $O(\sqrt{n}L)$-iteration complexity. It also implies that a better complexity in the worst case is very hard to achieve if the 2-norm neighborhood is used. By setting $\gamma$ and $\eta$ according to (17), the reduction rate is $1 - \frac{\beta}{\sqrt{n+1}+2\beta}$ when $\beta$ is near 0, or $1 - \frac{1-\beta}{2(\sqrt{n+1}+\beta)}$ when $\beta$ is near 1. Therefore, the algorithm achieves $O(\frac{\sqrt{n}}{\beta(1-\beta)}L)$-iteration complexity.

A simple choice $(y^0 = 0, x^0 = e, \tau^0 = 1, z^0 = e, \kappa^0 = 1)$ ensures that the initial point lies in $Q(1, 1)$ on the central surface. In summary, we have the following theorem.
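The optimizer claimed above for the reduction-rate problem can be checked numerically. The formulas below follow the reconstruction in this section; $n$ and $\beta$ are illustrative choices.

```python
import numpy as np

# Second branch of the reduction-rate problem:
#   f(w) = 2*(1-beta)*w / (beta*(a*w + 1)**2),  with  a = sqrt(n+1)/beta + 1.
# Claim: the maximizer is w* = 1/a = beta/(sqrt(n+1)+beta), with
#   f(w*) = (1-beta)/(2*(sqrt(n+1)+beta)).
n, beta = 100, 0.25
a = np.sqrt(n + 1) / beta + 1.0
f = lambda w: 2 * (1 - beta) * w / (beta * (a * w + 1) ** 2)
w_star = 1.0 / a
claimed = (1 - beta) / (2 * (np.sqrt(n + 1) + beta))
print(abs(f(w_star) - claimed) < 1e-12)     # closed form matches -> True

# Grid search confirms w* is the global maximizer on (0, infinity):
grid = np.linspace(1e-6, 5.0, 200001)
print(f(w_star) >= f(grid).max() - 1e-12)   # -> True
```

Note that $f(\bar\omega^*) = \Theta(1/\sqrt{n})$, which is exactly the per-iteration reduction rate that yields the $O(\sqrt{n}L)$ bound.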
Theorem 5. Let (LP) have integer data with a total bit length $L$. Then (HLF) has integer data with a bit length $O(L)$. Furthermore, let $0 < \beta < 1$ and $(y^0, x^0, \tau^0, z^0, \kappa^0) \in \mathcal{N}(\beta)$ (for instance, $(y^0, x^0, \tau^0, z^0, \kappa^0) = (0, e, 1, e, 1)$), set $\gamma$ and $\eta$ to satisfy

$\gamma = (\sqrt{n+1}/\beta + 1)(\gamma + 2\eta - 1) > 0,$

and take the step size $\alpha^k > 0$ given by (13). Then, the generalized HSD algorithm generates a strictly complementary optimal solution of (HLF) in $O(\frac{\sqrt{n}}{\beta(1-\beta)}L)$ iterations.

As shown in Goldman and Tucker [3], Tucker [15], and Ye et al. [18], we have the following corollary.

Corollary 6. The algorithm specified in Theorem 5 obtains a strictly complementary optimal solution of (LP) and (LD), or detects infeasibility of either (LP) or (LD), in $O(\frac{\sqrt{n}}{\beta(1-\beta)}L)$ iterations.

4 An HSD algorithm

In this section, we consider a special case of the generalized HSD algorithms. Let $\eta = 0$. The modified Newton equation system becomes

$A d_x - b d_\tau = 0,$
$-A^T d_y + c d_\tau - d_z = 0,$
$b^T d_y - c^T d_x - d_\kappa = 0,$   (19)
$Z^k d_x + X^k d_z = \gamma\mu^k e - X^k z^k,$
$\tau^k d_\kappa + \kappa^k d_\tau = \gamma\mu^k - \tau^k\kappa^k.$

We observe that $r_P$, $r_D$, $r_G$ disappear in (19). From (12), we have to set $\gamma > 1$. Thus, we can avoid computing these residuals. Designing algorithms based on the homogeneous and self-dual linear feasibility model (HLF) seems to exploit the special properties of linear programming better than working on the original model. We observe that the introduction of the homogeneous variable $\tau$ and the updating of solutions in (6) play an important role in this algorithm: they remove any difference between feasible and infeasible interior-point algorithms. From Lemma 2, clearly, a large $\gamma$ is desired for this new algorithm, since $\eta$ is fixed at zero now. In practice, we can make use of a predictor-corrector strategy to choose a very large $\gamma$ in each iteration, similar to Xu et al. [17], where the strategy was used to choose a very small $\gamma$.
References

[1] D. A. Bayer and J. C. Lagarias, "The nonlinear geometry of linear programming: I. Affine and projective scaling trajectories, II. Legendre transform coordinates and central trajectories," Transactions of the American Mathematical Society 314 (1989).

[2] G. de Ghellinck and J.-P. Vial, "A polynomial Newton method for linear programming," Algorithmica 1 (1986).

[3] A. J. Goldman and A. W. Tucker, "Polyhedral convex cones," in: H. W. Kuhn and A. W. Tucker, eds., Linear Inequalities and Related Systems (Princeton University Press, Princeton, NJ, 1956).

[4] M. Kojima, S. Mizuno, and A. Yoshise, "A primal-dual interior point algorithm for linear programming," in: N. Megiddo, ed., Progress in Mathematical Programming, Interior Point and Related Methods (Springer-Verlag, New York, 1989).

[5] M. Kojima, S. Mizuno, and A. Yoshise, "A polynomial-time algorithm for a class of linear complementarity problems," Mathematical Programming 44 (1989) 1–26.

[6] M. Kojima, N. Megiddo, and S. Mizuno, "A primal-dual infeasible-interior-point algorithm for linear programming," Mathematical Programming 61 (1993).

[7] I. J. Lustig, R. E. Marsten, and D. F. Shanno, "Computational experience with a primal-dual interior point method for linear programming," Linear Algebra and Its Applications 152 (1991).

[8] N. Megiddo, "Pathways to the optimal set in linear programming," in: N. Megiddo, ed., Progress in Mathematical Programming, Interior Point and Related Methods (Springer-Verlag, New York, 1988).

[9] S. Mehrotra, "On the implementation of a (primal-dual) interior point method," SIAM Journal on Optimization 2 (1992).

[10] S. Mizuno, "Polynomiality of infeasible interior point algorithms for linear programming," Mathematical Programming 67 (1994).

[11] S. Mizuno, M. J. Todd, and Y. Ye, "A surface of analytic centers and infeasible-interior-point algorithms for linear programming," Technical Report, School of Operations Research and Industrial Engineering, Cornell University (Ithaca, New York, 1992), to appear in Mathematics of Operations Research.

[12] R. C. Monteiro and I. Adler, "Interior path following primal-dual algorithms, part I: linear programming," Mathematical Programming 44 (1989).

[13] Yu. Nesterov, "Long-step strategies in interior-point potential reduction methods," Central Economical and Mathematical Institute, Russian Academy of Science (Moscow, Russia, 1993).

[14] F. A. Potra, "An infeasible interior-point predictor-corrector algorithm for linear programming," Report No. 26, Department of Mathematics, University of Iowa (Iowa City, IA, 1992), to appear in SIAM Journal on Optimization.

[15] A. W. Tucker, "Dual systems of homogeneous linear relations," in: H. W. Kuhn and A. W. Tucker, eds., Linear Inequalities and Related Systems (Princeton University Press, Princeton, NJ, 1956) 3–18.

[16] S. J. Wright, "A path-following infeasible-interior-point algorithm for linear complementarity problems," Optimization Methods and Software 2 (1993).

[17] X. Xu, P. F. Hung, and Y. Ye, "A simplified homogeneous and self-dual linear programming algorithm and its implementation," College of Business Administration, The University of Iowa (Iowa City, IA, 1993).

[18] Y. Ye, M. J. Todd, and S. Mizuno, "An $O(\sqrt{n}L)$-iteration homogeneous and self-dual linear programming algorithm," Mathematics of Operations Research 19 (1994).

[19] Y. Zhang, "On the convergence of a class of infeasible interior-point methods for the horizontal linear complementarity problem," SIAM Journal on Optimization 4 (1994).
More informationA PREDICTORCORRECTOR PATHFOLLOWING ALGORITHM FOR SYMMETRIC OPTIMIZATION BASED ON DARVAY'S TECHNIQUE
Yugoslav Journal of Operations Research 24 (2014) Number 1, 3551 DOI: 10.2298/YJOR120904016K A PREDICTORCORRECTOR PATHFOLLOWING ALGORITHM FOR SYMMETRIC OPTIMIZATION BASED ON DARVAY'S TECHNIQUE BEHROUZ
More informationAn Infeasible Interior Point Method for the Monotone Linear Complementarity Problem
Int. Journal of Math. Analysis, Vol. 1, 2007, no. 17, 841849 An Infeasible Interior Point Method for the Monotone Linear Complementarity Problem Z. Kebbiche 1 and A. Keraghel Department of Mathematics,
More informationA PRIMALDUAL INTERIOR POINT ALGORITHM FOR CONVEX QUADRATIC PROGRAMS. 1. Introduction Consider the quadratic program (PQ) in standard format:
STUDIA UNIV. BABEŞ BOLYAI, INFORMATICA, Volume LVII, Number 1, 01 A PRIMALDUAL INTERIOR POINT ALGORITHM FOR CONVEX QUADRATIC PROGRAMS MOHAMED ACHACHE AND MOUFIDA GOUTALI Abstract. In this paper, we propose
More informationKey words. linear complementarity problem, noninteriorpoint algorithm, Tikhonov regularization, P 0 matrix, regularized central path
A GLOBALLY AND LOCALLY SUPERLINEARLY CONVERGENT NONINTERIORPOINT ALGORITHM FOR P 0 LCPS YUNBIN ZHAO AND DUAN LI Abstract Based on the concept of the regularized central path, a new noninteriorpoint
More informationPrimalDual InteriorPoint Methods for Linear Programming based on Newton s Method
PrimalDual InteriorPoint Methods for Linear Programming based on Newton s Method Robert M. Freund March, 2004 2004 Massachusetts Institute of Technology. The Problem The logarithmic barrier approach
More informationPrimaldual IPM with Asymmetric Barrier
Primaldual IPM with Asymmetric Barrier Yurii Nesterov, CORE/INMA (UCL) September 29, 2008 (IFOR, ETHZ) Yu. Nesterov Primaldual IPM with Asymmetric Barrier 1/28 Outline 1 Symmetric and asymmetric barriers
More informationOn Generalized PrimalDual InteriorPoint Methods with Nonuniform Complementarity Perturbations for Quadratic Programming
On Generalized PrimalDual InteriorPoint Methods with Nonuniform Complementarity Perturbations for Quadratic Programming Altuğ Bitlislioğlu and Colin N. Jones Abstract This technical note discusses convergence
More informationNew Infeasible Interior Point Algorithm Based on Monomial Method
New Infeasible Interior Point Algorithm Based on Monomial Method YiChih Hsieh and Dennis L. Bricer Department of Industrial Engineering The University of Iowa, Iowa City, IA 52242 USA (January, 1995)
More informationA Full Newton Step Infeasible Interior Point Algorithm for Linear Optimization
A Full Newton Step Infeasible Interior Point Algorithm for Linear Optimization Kees Roos email: C.Roos@tudelft.nl URL: http://www.isa.ewi.tudelft.nl/ roos 37th Annual Iranian Mathematics Conference Tabriz,
More informationA WIDE NEIGHBORHOOD PRIMALDUAL INTERIORPOINT ALGORITHM WITH ARCSEARCH FOR LINEAR COMPLEMENTARITY PROBLEMS 1. INTRODUCTION
J Nonlinear Funct Anal 08 (08), Article ID 3 https://doiorg/0395/jnfa083 A WIDE NEIGHBORHOOD PRIMALDUAL INTERIORPOINT ALGORITHM WITH ARCSEARCH FOR LINEAR COMPLEMENTARITY PROBLEMS BEIBEI YUAN, MINGWANG
More informationMore FirstOrder Optimization Algorithms
More FirstOrder Optimization Algorithms Yinyu Ye Department of Management Science and Engineering Stanford University Stanford, CA 94305, U.S.A. http://www.stanford.edu/ yyye Chapters 3, 8, 3 The SDM
More informationAdvances in Convex Optimization: Theory, Algorithms, and Applications
Advances in Convex Optimization: Theory, Algorithms, and Applications Stephen Boyd Electrical Engineering Department Stanford University (joint work with Lieven Vandenberghe, UCLA) ISIT 02 ISIT 02 Lausanne
More informationI.3. LMI DUALITY. Didier HENRION EECI Graduate School on Control Supélec  Spring 2010
I.3. LMI DUALITY Didier HENRION henrion@laas.fr EECI Graduate School on Control Supélec  Spring 2010 Primal and dual For primal problem p = inf x g 0 (x) s.t. g i (x) 0 define Lagrangian L(x, z) = g 0
More information4TE3/6TE3. Algorithms for. Continuous Optimization
4TE3/6TE3 Algorithms for Continuous Optimization (Duality in Nonlinear Optimization ) Tamás TERLAKY Computing and Software McMaster University Hamilton, January 2004 terlaky@mcmaster.ca Tel: 27780 Optimality
More informationA fullnewton step infeasible interiorpoint algorithm for linear programming based on a kernel function
A fullnewton step infeasible interiorpoint algorithm for linear programming based on a kernel function Zhongyi Liu, Wenyu Sun Abstract This paper proposes an infeasible interiorpoint algorithm with
More informationIntroduction to Mathematical Programming
Introduction to Mathematical Programming Ming Zhong Lecture 22 October 22, 2018 Ming Zhong (JHU) AMS Fall 2018 1 / 16 Table of Contents 1 The Simplex Method, Part II Ming Zhong (JHU) AMS Fall 2018 2 /
More informationA fullnewton step infeasible interiorpoint algorithm for linear complementarity problems based on a kernel function
Algorithmic Operations Research Vol7 03) 03 0 A fullnewton step infeasible interiorpoint algorithm for linear complementarity problems based on a kernel function B Kheirfam a a Department of Mathematics,
More informationON THE ARITHMETICGEOMETRIC MEAN INEQUALITY AND ITS RELATIONSHIP TO LINEAR PROGRAMMING, BAHMAN KALANTARI
ON THE ARITHMETICGEOMETRIC MEAN INEQUALITY AND ITS RELATIONSHIP TO LINEAR PROGRAMMING, MATRIX SCALING, AND GORDAN'S THEOREM BAHMAN KALANTARI Abstract. It is a classical inequality that the minimum of
More information10 Numerical methods for constrained problems
10 Numerical methods for constrained problems min s.t. f(x) h(x) = 0 (l), g(x) 0 (m), x X The algorithms can be roughly divided the following way: ˆ primal methods: find descent direction keeping inside
More informationA class of Smoothing Method for Linear SecondOrder Cone Programming
Columbia International Publishing Journal of Advanced Computing (13) 1: 94 doi:1776/jac1313 Research Article A class of Smoothing Method for Linear SecondOrder Cone Programming Zhuqing Gui *, Zhibin
More informationPrimalDual InteriorPoint Methods by Stephen Wright List of errors and typos, last updated December 12, 1999.
PrimalDual InteriorPoint Methods by Stephen Wright List of errors and typos, last updated December 12, 1999. 1. page xviii, lines 1 and 3: (x, λ, x) should be (x, λ, s) (on both lines) 2. page 6, line
More informationAnalytic Center CuttingPlane Method
Analytic Center CuttingPlane Method S. Boyd, L. Vandenberghe, and J. Skaf April 14, 2011 Contents 1 Analytic center cuttingplane method 2 2 Computing the analytic center 3 3 Pruning constraints 5 4 Lower
More informationinforms DOI /moor.xxxx.xxxx c 200x INFORMS
MATHEMATICS OF OPERATIONS RESEARCH Vol. xx, No. x, Xxxxxxx 2x, pp. xxx xxx ISSN 364765X EISSN 526547 x xxx xxx informs DOI.287/moor.xxxx.xxxx c 2x INFORMS A New Complexity Result on Solving the Markov
More informationLOWER BOUNDS FOR THE MAXIMUM NUMBER OF SOLUTIONS GENERATED BY THE SIMPLEX METHOD
Journal of the Operations Research Society of Japan Vol 54, No 4, December 2011, pp 191 200 c The Operations Research Society of Japan LOWER BOUNDS FOR THE MAXIMUM NUMBER OF SOLUTIONS GENERATED BY THE
More information2.098/6.255/ Optimization Methods Practice True/False Questions
2.098/6.255/15.093 Optimization Methods Practice True/False Questions December 11, 2009 Part I For each one of the statements below, state whether it is true or false. Include a 13 line supporting sentence
More informationIBM Almaden Research Center,650 Harry Road Sun Jose, Calijornia and School of Mathematical Sciences Tel Aviv University Tel Aviv, Israel
and Nimrod Megiddo IBM Almaden Research Center,650 Harry Road Sun Jose, Calijornia 951206099 and School of Mathematical Sciences Tel Aviv University Tel Aviv, Israel Submitted by Richard Tapia ABSTRACT
More informationLecture 6: Conic Optimization September 8
IE 598: Big Data Optimization Fall 2016 Lecture 6: Conic Optimization September 8 Lecturer: Niao He Scriber: Juan Xu Overview In this lecture, we finish up our previous discussion on optimality conditions
More informationConvergence Analysis of Inexact Infeasible Interior Point Method. for Linear Optimization
Convergence Analysis of Inexact Infeasible Interior Point Method for Linear Optimization Ghussoun AlJeiroudi Jacek Gondzio School of Mathematics The University of Edinburgh Mayfield Road, Edinburgh EH9
More informationCCO Commun. Comb. Optim.
Communications in Combinatorics and Optimization Vol. 3 No., 08 pp.570 DOI: 0.049/CCO.08.580.038 CCO Commun. Comb. Optim. An infeasible interiorpoint method for the P matrix linear complementarity problem
More informationImproved FullNewtonStep Infeasible Interior Point Method for Linear Complementarity Problems
Georgia Southern University Digital Commons@Georgia Southern Mathematical Sciences Faculty Publications Mathematical Sciences, Department of 42016 Improved FullNewtonStep Infeasible Interior Point
More informationA Simpler and Tighter Redundant KleeMinty Construction
A Simpler and Tighter Redundant KleeMinty Construction Eissa Nematollahi Tamás Terlaky October 19, 2006 Abstract By introducing redundant KleeMinty examples, we have previously shown that the central
More informationProperties of a Simple Variant of KleeMinty s LP and Their Proof
Properties of a Simple Variant of KleeMinty s LP and Their Proof Tomonari Kitahara and Shinji Mizuno December 28, 2 Abstract Kitahara and Mizuno (2) presents a simple variant of Klee Minty s LP, which
More informationConic Linear Programming. Yinyu Ye
Conic Linear Programming Yinyu Ye December 2004, revised October 2017 i ii Preface This monograph is developed for MS&E 314, Conic Linear Programming, which I am teaching at Stanford. Information, lecture
More information15780: LinearProgramming
15780: LinearProgramming J. Zico Kolter February 13, 2016 1 Outline Introduction Some linear algebra review Linear programming Simplex algorithm Duality and dual simplex 2 Outline Introduction Some linear
More informationIntroduction to Mathematical Programming IE406. Lecture 10. Dr. Ted Ralphs
Introduction to Mathematical Programming IE406 Lecture 10 Dr. Ted Ralphs IE406 Lecture 10 1 Reading for This Lecture Bertsimas 4.14.3 IE406 Lecture 10 2 Duality Theory: Motivation Consider the following
More informationAn E cient A nescaling Algorithm for Hyperbolic Programming
An E cient A nescaling Algorithm for Hyperbolic Programming Jim Renegar joint work with Mutiara Sondjaja 1 Euclidean space A homogeneous polynomial p : E!R is hyperbolic if there is a vector e 2E such
More informationא K ٢٠٠٢ א א א א א א E٤
المراجع References المراجع العربية K ١٩٩٠ א א א א א E١ K ١٩٩٨ א א א E٢ א א א א א E٣ א K ٢٠٠٢ א א א א א א E٤ K ٢٠٠١ K ١٩٨٠ א א א א א E٥ المراجع الا جنبية [AW] [AF] [Alh] [Ali1] [Ali2] S. AlHomidan and
More informationConvex Optimization : Conic Versus Functional Form
Convex Optimization : Conic Versus Functional Form Erling D. Andersen MOSEK ApS, Fruebjergvej 3, Box 16, DK 2100 Copenhagen, Blog: http://erlingdandersen.blogspot.com Linkedin: http://dk.linkedin.com/in/edandersen
More informationIMPLEMENTATION OF INTERIOR POINT METHODS
IMPLEMENTATION OF INTERIOR POINT METHODS IMPLEMENTATION OF INTERIOR POINT METHODS FOR SECOND ORDER CONIC OPTIMIZATION By Bixiang Wang, Ph.D. A Thesis Submitted to the School of Graduate Studies in Partial
More informationLecture 14: PrimalDual InteriorPoint Method
CSE 599: Interplay between Convex Optimization and Geometry Winter 218 Lecturer: Yin Tat Lee Lecture 14: PrimalDual InteriorPoint Method Disclaimer: Please tell me any mistake you noticed. In this and
More informationPrimaldual relationship between LevenbergMarquardt and central trajectories for linearly constrained convex optimization
Primaldual relationship between LevenbergMarquardt and central trajectories for linearly constrained convex optimization Roger Behling a, Clovis Gonzaga b and Gabriel Haeser c March 21, 2013 a Department
More informationApproximation Algorithms for Maximum. Coverage and Max Cut with Given Sizes of. Parts? A. A. Ageev and M. I. Sviridenko
Approximation Algorithms for Maximum Coverage and Max Cut with Given Sizes of Parts? A. A. Ageev and M. I. Sviridenko Sobolev Institute of Mathematics pr. Koptyuga 4, 630090, Novosibirsk, Russia fageev,svirg@math.nsc.ru
More informationLP Duality: outline. Duality theory for Linear Programming. alternatives. optimization I Idea: polyhedra
LP Duality: outline I Motivation and definition of a dual LP I Weak duality I Separating hyperplane theorem and theorems of the alternatives I Strong duality and complementary slackness I Using duality
More informationDetecting Infeasibility in InfeasibleInteriorPoint Methods for Optimization
Detecting Infeasibility in InfeasibleInteriorPoint Methods for Optimization M. J. Todd January 16, 2003 Abstract We study interiorpoint methods for optimization problems in the case of infeasibility
More informationthat nds a basis which is optimal for both the primal and the dual problems, given
On Finding Primal and DualOptimal Bases Nimrod Megiddo (revised June 1990) Abstract. We show that if there exists a strongly polynomial time algorithm that nds a basis which is optimal for both the primal
More informationA SUFFICIENTLY EXACT INEXACT NEWTON STEP BASED ON REUSING MATRIX INFORMATION
A SUFFICIENTLY EXACT INEXACT NEWTON STEP BASED ON REUSING MATRIX INFORMATION Anders FORSGREN Technical Report TRITAMAT2009OS7 Department of Mathematics Royal Institute of Technology November 2009 Abstract
More informationSolving Obstacle Problems by Using a New Interior Point Algorithm. Abstract
Solving Obstacle Problems by Using a New Interior Point Algorithm YiChih Hsieh Department of Industrial Engineering National Yunlin Polytechnic Institute Huwei, Yunlin 6308 Taiwan and Dennis L. Bricer
More informationA tight iterationcomplexity upper bound for the MTY predictorcorrector algorithm via redundant KleeMinty cubes
A tight iterationcomplexity upper bound for the MTY predictorcorrector algorithm via redundant KleeMinty cubes Murat Mut Tamás Terlaky Department of Industrial and Systems Engineering Lehigh University
More information