Operations Research Lecture 4: Linear Programming Interior Point Method
Notes taken by Kaiquan Xu@Business School, Nanjing University, April 14th 2016

1 The affine scaling algorithm

The affine scaling algorithm is one of the most efficient and yet simple interior point algorithms for solving the LP problem. The standard form of LP and its dual:

    min  c^T x                    max  p^T b
    s.t. Ax = b, x ≥ 0            s.t. p^T A ≤ c^T

Let P = {x | Ax = b, x ≥ 0} be the feasible set. {x ∈ P | x > 0} is called the interior of P, and its elements are called interior points.

It is difficult to minimize c^T x over P, but it is rather easy to optimize the same cost function over an ellipsoid, and the solution can be obtained in closed form. So we can solve a series of optimization problems over ellipsoids: starting with x^0 > 0, an interior point of P, we form an ellipsoid S_0 centered at x^0, which is contained in the interior of P. Then we minimize c^T x over all x ∈ S_0, and find a new interior point x^1. We draw another ellipsoid S_1 centered at x^1 and proceed in a similar way, as illustrated in Figure 1.

The first step: create an ellipsoid centered at a given y, such that all elements of the ellipsoid are positive.

Lemma 1. Let β ∈ (0, 1), y ∈ R^n with y > 0, and let Y = diag(y_1, ..., y_n),

    S = {x ∈ R^n | Σ_{i=1}^n (x_i − y_i)^2 / y_i^2 ≤ β^2} = {x ∈ R^n | ‖Y^{-1}(x − y)‖ ≤ β}.

Then x > 0 for every x ∈ S.

Proof: (skip)

The set S_0 = S ∩ {x | Ax = b} is a section of the ellipsoid S, and is itself an ellipsoid contained in the feasible set.
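Lemma 1 is easy to check numerically. The following sketch (my own illustration, with arbitrary choices of y and β, not part of the original notes) samples points of S and confirms that they stay strictly positive:

```python
import numpy as np

# Numerical check of Lemma 1: every x with ||Y^{-1}(x - y)|| <= beta < 1
# has strictly positive components. The data y and beta are made up.
rng = np.random.default_rng(0)
y = np.array([2.0, 0.5, 1.0, 3.0])                 # any y > 0
Y = np.diag(y)
Y_inv = np.diag(1.0 / y)
beta = 0.9

for _ in range(1000):
    u = rng.standard_normal(4)
    u *= beta * rng.random() / np.linalg.norm(u)   # random u with ||u|| <= beta
    x = y + Y @ u                                  # then ||Y^{-1}(x - y)|| = ||u|| <= beta
    assert np.linalg.norm(Y_inv @ (x - y)) <= beta + 1e-12
    assert np.all(x > 0)                           # Lemma 1: x is positive
```

Intuitively, x_i = y_i(1 + u_i) with |u_i| ≤ β < 1, so each component remains positive.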
Figure 1: Affine Scaling Algorithm

The second step: minimize over the ellipsoid S_0:

    min  c^T x
    s.t. Ax = b, ‖Y^{-1}(x − y)‖ ≤ β

Let d = x − y; then Ad = 0, since Ay = b and Ax = b. The above problem becomes

    min  c^T d
    s.t. Ad = 0, ‖Y^{-1} d‖ ≤ β

The closed form solution of this problem is:

Lemma 2. Assume that the rows of A are linearly independent and that c is not a linear combination of the rows of A. Let y be a positive vector. Then, an optimal solution d* to the problem is given by

    d* = −β Y^2 (c − A^T p) / ‖Y(c − A^T p)‖,   where p = (A Y^2 A^T)^{-1} A Y^2 c.

Furthermore, the vector x = y + d* belongs to P and

    c^T x = c^T y − β ‖Y(c − A^T p)‖ < c^T y.

Proof: General idea: first show that p is well defined; then show that d* is a feasible solution to the problem by verifying that d* satisfies the constraints; to prove the optimality of d*, compare c^T d and c^T d* for an arbitrary feasible solution d — here the Schwartz inequality is used.

An interpretation of the formula for p: if y is a nondegenerate basic feasible solution, then by some calculation p = (B^T)^{-1} c_B, which is the corresponding dual basic solution. So we call p the dual estimates even if y is not a basic feasible solution. r = c − A^T p becomes r = c − A^T (B^T)^{-1} c_B, the reduced cost in the simplex method. And

    r^T y = (c − A^T p)^T y = c^T y − p^T A y = c^T y − p^T b,

the difference in objective values between the primal solution y and the dual solution p, which is called the duality gap.
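To make Lemma 2 concrete, here is a minimal numerical sketch (my own illustration, not from the original notes): the update x ← y + d* applied repeatedly to the toy problem min x_2 subject to x_1 + x_2 + x_3 = 1, x ≥ 0; the problem data and the step size β = 0.5 are arbitrary choices.

```python
import numpy as np

def affine_scaling_step(A, b, c, y, beta=0.5):
    """One step of Lemma 2: dual estimates p, reduced costs r, move y + d*."""
    Y = np.diag(y)
    p = np.linalg.solve(A @ Y @ Y @ A.T, A @ Y @ Y @ c)   # p = (AY^2 A^T)^{-1} AY^2 c
    r = c - A.T @ p                                        # reduced costs
    d = -beta * (Y @ Y @ r) / np.linalg.norm(Y @ r)        # d* of Lemma 2
    return y + d, p, r

# Toy problem: minimize x2 subject to x1 + x2 + x3 = 1, x >= 0 (optimal cost 0).
A = np.array([[1.0, 1.0, 1.0]])
b = np.array([1.0])
c = np.array([0.0, 1.0, 0.0])
x = np.array([1/3, 1/3, 1/3])      # interior starting point

for _ in range(50):
    x, p, r = affine_scaling_step(A, b, c, x)

assert np.allclose(A @ x, b) and np.all(x > 0)   # stays interior-feasible
assert c @ x < 1e-3                              # cost decreases toward 0
```

Each iteration strictly decreases the cost by β‖Y(c − A^T p)‖, as the lemma states, while the iterate remains strictly positive.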
Lemma 3. Let y and p be a primal and a dual feasible solution, respectively, such that c^T y − b^T p < ε. Let y* and p* be optimal primal and dual solutions, respectively. Then

    c^T y* ≤ c^T y < c^T y* + ε,
    b^T p* − ε < b^T p ≤ b^T p*.

This is called ε-optimality.

Proof: Since y is a primal feasible solution and y* is an optimal primal solution, c^T y* ≤ c^T y. By weak duality, b^T p ≤ c^T y*. We have

    c^T y < b^T p + ε ≤ c^T y* + ε.

Similarly, we obtain

    b^T p* = c^T y* ≤ c^T y < b^T p + ε.

The affine scaling algorithm

1. (Initialization) Start with some feasible x^0 > 0; let k = 0.

2. (Computation of dual estimates and reduced costs) Given some feasible x^k > 0, let

    X_k = diag(x^k_1, ..., x^k_n),
    p^k = (A X_k^2 A^T)^{-1} A X_k^2 c,
    r^k = c − A^T p^k.

3. (Optimality check) Let e = (1, 1, ..., 1). If r^k ≥ 0 and e^T X_k r^k < ε, then stop; the current solution x^k is primal ε-optimal and p^k is dual ε-optimal.

4. (Unboundedness check) If −X_k^2 r^k ≥ 0, then stop; the optimal cost is −∞.

5. (Update of primal solution) Let

    x^{k+1} = x^k − β X_k^2 r^k / ‖X_k r^k‖,   where 0 < β < 1.

This version is called the short-step method. There are some long-step methods:

    x^{k+1} = x^k − β X_k^2 r^k / ‖X_k r^k‖_∞

or

    x^{k+1} = x^k − β X_k^2 r^k / γ(X_k r^k),

where ‖u‖_∞ = max_i |u_i| and γ(u) = max{u_i | u_i > 0}.

Convergence: Under certain assumptions, the affine scaling algorithm converges to the optimal primal and dual solutions, respectively.

Initialization: We need an interior feasible solution, which can be obtained by solving the augmented problem

    min  c^T x + M x_{n+1}
    s.t. Ax + (b − Ae) x_{n+1} = b, (x, x_{n+1}) ≥ 0,
where M is a large positive scalar. Notice that (x, x_{n+1}) = (e, 1) is a positive feasible solution, so the affine scaling algorithm can be applied. If M is very large, the augmented problem will provide an optimal solution to the original problem.

Computational performance: If the current point is near the boundary of the feasible set, the algorithm takes small steps; but if the current point lies deep inside the feasible set, the algorithm makes rapid progress.

2 The primal path following algorithm

The primal path following algorithm offers interesting geometric insights, and has been incorporated in large-scale implementations. The standard form of LP and its dual:

    min  c^T x                    max  p^T b
    s.t. Ax = b, x ≥ 0            s.t. p^T A ≤ c^T

The core difficulty in solving the LP problem is the presence of the inequality constraints x ≥ 0. So we can convert the LP problem to a problem with only equality constraints using a barrier function, which prevents any variable from reaching the boundary:

    B_μ(x) = c^T x − μ Σ_{j=1}^n log x_j,   where μ > 0.

The barrier problem is

    min  B_μ(x)
    s.t. Ax = b                                    (1)

Given a value of μ, the barrier problem has an optimal solution x(μ). As μ varies, the points x(μ) form the central path. When μ → 0, x(μ) → x*, the optimal solution of the initial LP problem. The reason is: when μ is very small, the logarithmic term is negligible. A barrier problem formed from the dual problem is

    max  p^T b + μ Σ_{j=1}^n log s_j
    s.t. p^T A + s^T = c^T

When μ = ∞, the optimal solution of problem (1) is called the analytic center of the feasible set, as illustrated in Figure 2.
Figure 2: The central path and the analytic center

Example 4. (Computation of the central path) Consider the problem

    min  x_2
    s.t. x_1 + x_2 + x_3 = 1, x_1, x_2, x_3 ≥ 0.

The barrier problem is

    min  x_2 − μ log x_1 − μ log x_2 − μ log x_3
    s.t. x_1 + x_2 + x_3 = 1.

Substituting x_3 = 1 − x_1 − x_2, we have

    min  x_2 − μ log x_1 − μ log x_2 − μ log(1 − x_1 − x_2).

By taking derivatives and setting them equal to zero, we find that the central path is given by

    x_1(μ) = (1 − x_2(μ)) / 2,
    x_2(μ) = (1 + 3μ − sqrt(1 + 2μ + 9μ^2)) / 2,
    x_3(μ) = (1 − x_2(μ)) / 2.

The analytic center can be found by letting μ → ∞, which gives (1/3, 1/3, 1/3). The central path is traced out as μ takes values from 0 to ∞, as illustrated in Figure 3.

Figure 3: The central path and the analytic center in the example
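The formulas of Example 4 can be verified numerically. The sketch below (my own check, not from the original notes) confirms that each x(μ) satisfies the first-order conditions of the barrier problem, and that x(μ) approaches the analytic center (1/3, 1/3, 1/3) as μ grows:

```python
import numpy as np

def central_path(mu):
    """Central path of Example 4: min x2 s.t. x1 + x2 + x3 = 1, x >= 0."""
    x2 = (1 + 3*mu - np.sqrt(1 + 2*mu + 9*mu**2)) / 2
    x1 = x3 = (1 - x2) / 2
    return x1, x2, x3

for mu in [0.01, 0.1, 1.0, 10.0]:
    x1, x2, x3 = central_path(mu)
    assert abs(x1 + x2 + x3 - 1) < 1e-12        # feasibility
    # stationarity of x2 - mu*log(x1) - mu*log(x2) - mu*log(1 - x1 - x2):
    assert abs(-mu/x1 + mu/x3) < 1e-9           # derivative w.r.t. x1
    assert abs(1 - mu/x2 + mu/x3) < 1e-9        # derivative w.r.t. x2

x1, x2, x3 = central_path(1e6)                  # large mu: the analytic center
assert np.allclose([x1, x2, x3], [1/3, 1/3, 1/3], atol=1e-5)
```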
The barrier problem is still difficult to solve, since the objective function is neither linear nor quadratic. Here, we approximate the barrier function around some given x by its Taylor series expansion:

    B_μ(x + d) ≈ B_μ(x) + Σ_{i=1}^n (∂B_μ(x)/∂x_i) d_i + (1/2) Σ_{i,j=1}^n (∂^2 B_μ(x)/∂x_i ∂x_j) d_i d_j
               = B_μ(x) + (c^T − μ e^T X^{-1}) d + (1/2) μ d^T X^{-2} d.

Then the approximating problem becomes:

    min  (c^T − μ e^T X^{-1}) d + (1/2) μ d^T X^{-2} d
    s.t. Ad = 0.

This problem can be solved in closed form using the Lagrange multiplier method: associate a vector p of Lagrange multipliers with the constraint Ad = 0 and form the Lagrangean function

    L(d, p) = (c^T − μ e^T X^{-1}) d + (1/2) μ d^T X^{-2} d − p^T A d.

By taking derivatives and setting them equal to zero, we get

    c − μ X^{-1} e + μ X^{-2} d − A^T p = 0,
    Ad = 0.                                        (2)

Solving this system of equations,

    d(μ) = (I − X^2 A^T (A X^2 A^T)^{-1} A)(X e − (1/μ) X^2 c),
    p(μ) = (A X^2 A^T)^{-1} A (X^2 c − μ X e).     (3)

d(μ) is called the Newton direction. Now x moves to x + d(μ), and the dual solution becomes (p(μ), c − A^T p(μ)).

Geometric interpretation: Fixing μ and carrying out several Newton steps, x converges to x(μ). We then reduce μ to μ̄ = αμ, where 0 < α < 1. If α is close to 1, then x(μ̄) is close to x(μ), so x is close to x(μ̄). By carrying out a Newton step, x moves even closer to x(μ̄). That is, the feasible solution x approximately follows the central path, so this method is called path following, as illustrated in Figure 4.

Figure 4: Geometric interpretation

The primal path following algorithm

1. (Initialization) Start with some primal and dual feasible x^0 > 0, s^0 > 0, p^0, and set k = 0.

2. (Optimality test) If (s^k)^T x^k < ε, stop; else go to Step 3.
3. Let

    X_k = diag(x^k_1, ..., x^k_n),   μ_{k+1} = α μ_k.

4. (Computation of directions) Solve the linear system

    μ_{k+1} X_k^{-2} d − A^T p = μ_{k+1} X_k^{-1} e − c,
    Ad = 0                                         (4)

for p and d.

5. (Update of solutions) Let

    x^{k+1} = x^k + d,
    p^{k+1} = p,
    s^{k+1} = c − A^T p.

6. Let k := k + 1 and go to Step 2.

3 The primal-dual path following algorithm

This method approximates the central path by taking Newton steps, and finds Newton directions in both the primal and the dual space. It has excellent performance in large scale applications. The barrier optimization problems are

    min  c^T x − μ Σ_{j=1}^n log x_j               (5)
    s.t. Ax = b

and

    max  p^T b + μ Σ_{j=1}^n log s_j               (6)
    s.t. p^T A + s^T = c^T

Similar to the complementary slackness conditions for a LP problem, the following conditions are necessary and sufficient for optimal solutions to the above problems (the KKT optimality conditions, to be covered in the following lectures):

    A x(μ) = b,            x(μ) ≥ 0,
    A^T p(μ) + s(μ) = c,   s(μ) ≥ 0,
    X(μ) S(μ) e = μ e.                             (7)

This system of equations is nonlinear, and difficult to solve directly. Here Newton's method is used to solve it iteratively.

Newton's method for finding roots of nonlinear equations: To find a z* such that F(z*) = 0, where F is a mapping from R^r into R^r, assume z^k is an approximation for z*. The following method is used to improve the approximation. Expand F(z) around z^k using a first order multivariable Taylor series:

    F(z^k + d) ≈ F(z^k) + J(z^k) d.
Here J(z^k) is the Jacobian matrix. We look for some d that satisfies

    F(z^k) + J(z^k) d = 0.                         (8)

Then set z^{k+1} = z^k + d; d is called a Newton direction. The method converges quickly to z* under certain conditions.

Application of Newton's method to LP: Use Newton's method to solve the system of nonlinear equations (7), where z = (x, p, s) and

    F(z) = [ Ax − b
             A^T p + s − c
             XSe − μe ].

Let (x^k, p^k, s^k) be the current primal and dual feasible solution, with x^k > 0 and s^k > 0. By applying Eq. (8), the Newton direction is determined by:

    [ A    0    0   ] [ d_x ]     [ A x^k − b          ]
    [ 0    A^T  I   ] [ d_p ] = − [ A^T p^k + s^k − c  ]
    [ S_k  0    X_k ] [ d_s ]     [ X_k S_k e − μ_k e  ]

Since A x^k = b and A^T p^k + s^k = c, this is equivalent to

    A d_x = 0,                                     (9)
    A^T d_p + d_s = 0,                             (10)
    S_k d_x + X_k d_s = μ_k e − X_k S_k e.         (11)

The solution to this linear system is given by

    d_x = D_k (I − P_k) v_k(μ_k),
    d_p = −(A D_k^2 A^T)^{-1} A D_k v_k(μ_k),
    d_s = D_k^{-1} P_k v_k(μ_k),

where

    D_k^2 = X_k S_k^{-1},
    P_k = D_k A^T (A D_k^2 A^T)^{-1} A D_k,
    v_k(μ_k) = X_k^{-1} D_k (μ_k e − X_k S_k e).

Step lengths: The new solution is updated by

    x^{k+1} = x^k + β_P^k d_x,
    p^{k+1} = p^k + β_D^k d_p,
    s^{k+1} = s^k + β_D^k d_s,

where β_P^k and β_D^k are step lengths for the primal and dual variables. To preserve nonnegativity of x^{k+1} and s^{k+1}, we take

    β_P^k = min{1, α min_{i: (d_x)_i < 0} (−x^k_i / (d_x)_i)},
    β_D^k = min{1, α min_{i: (d_s)_i < 0} (−s^k_i / (d_s)_i)},
where 0 < α < 1, to avoid approaching the boundary of the primal and dual feasible sets. The equality constraints of the primal and dual problems remain satisfied, because of Eqs. (9) and (10).

Updating the barrier parameter μ: One way to update μ that has had excellent computational performance is to set

    μ_k = ρ (x^k)^T s^k / n,

where ρ is typically set to 1, except that if the algorithm has not progressed, ρ is set to a value smaller than 1.

The primal-dual path following algorithm

1. (Initialization) Start with some feasible x^0 > 0, s^0 > 0, p^0, and set k = 0.

2. (Optimality test) If (s^k)^T x^k < ε, stop; else go to Step 3.

3. (Computation of Newton directions) Let ρ ∈ (0, 1] and

    μ_k = ρ (x^k)^T s^k / n,
    X_k = diag(x^k_1, ..., x^k_n),
    S_k = diag(s^k_1, ..., s^k_n).

Solve the linear system (9)-(11) for d_x, d_p and d_s.

4. (Find step lengths) Let

    β_P^k = min{1, α min_{i: (d_x)_i < 0} (−x^k_i / (d_x)_i)},
    β_D^k = min{1, α min_{i: (d_s)_i < 0} (−s^k_i / (d_s)_i)}.

5. (Solution update) Update the solution vectors according to

    x^{k+1} = x^k + β_P^k d_x,
    p^{k+1} = p^k + β_D^k d_p,
    s^{k+1} = s^k + β_D^k d_s.

Let k := k + 1 and go to Step 2.

Infeasible primal-dual path following methods: These do not require x^k, s^k and p^k to be feasible, i.e. we may have A x^k ≠ b and A^T p^k + s^k ≠ c. The Newton direction is derived in exactly the same manner and requires solving the following system of equations:

    [ A    0    0   ] [ d_x ]     [ A x^k − b          ]
    [ 0    A^T  I   ] [ d_p ] = − [ A^T p^k + s^k − c  ]
    [ S_k  0    X_k ] [ d_s ]     [ X_k S_k e − μ_k e  ]

4 References

1. Chapter X, Operations Research (Third Edition), Tsinghua University Press.
2. Chapter 9, Dimitris Bertsimas, John N. Tsitsiklis, Introduction to Linear Optimization, Athena Scientific.
More information2.3 Linear Programming
2.3 Linear Programming Linear Programming (LP) is the term used to define a wide range of optimization problems in which the objective function is linear in the unknown variables and the constraints are
More informationLP Duality: outline. Duality theory for Linear Programming. alternatives. optimization I Idea: polyhedra
LP Duality: outline I Motivation and definition of a dual LP I Weak duality I Separating hyperplane theorem and theorems of the alternatives I Strong duality and complementary slackness I Using duality
More informationLagrangian Duality Theory
Lagrangian Duality Theory Yinyu Ye Department of Management Science and Engineering Stanford University Stanford, CA 94305, U.S.A. http://www.stanford.edu/ yyye Chapter 14.1-4 1 Recall Primal and Dual
More information5 Handling Constraints
5 Handling Constraints Engineering design optimization problems are very rarely unconstrained. Moreover, the constraints that appear in these problems are typically nonlinear. This motivates our interest
More informationNumerical Optimization
Constrained Optimization - Algorithms Computer Science and Automation Indian Institute of Science Bangalore 560 012, India. NPTEL Course on Consider the problem: Barrier and Penalty Methods x X where X
More informationSupport Vector Machines: Maximum Margin Classifiers
Support Vector Machines: Maximum Margin Classifiers Machine Learning and Pattern Recognition: September 16, 2008 Piotr Mirowski Based on slides by Sumit Chopra and Fu-Jie Huang 1 Outline What is behind
More informationPart IB Optimisation
Part IB Optimisation Theorems Based on lectures by F. A. Fischer Notes taken by Dexter Chua Easter 2015 These notes are not endorsed by the lecturers, and I have modified them (often significantly) after
More informationLecture 3: Lagrangian duality and algorithms for the Lagrangian dual problem
Lecture 3: Lagrangian duality and algorithms for the Lagrangian dual problem Michael Patriksson 0-0 The Relaxation Theorem 1 Problem: find f := infimum f(x), x subject to x S, (1a) (1b) where f : R n R
More information15.081J/6.251J Introduction to Mathematical Programming. Lecture 21: Primal Barrier Interior Point Algorithm
508J/65J Itroductio to Mathematical Programmig Lecture : Primal Barrier Iterior Poit Algorithm Outlie Barrier Methods Slide The Cetral Path 3 Approximatig the Cetral Path 4 The Primal Barrier Algorithm
More informationA GLOBALLY CONVERGENT STABILIZED SQP METHOD: SUPERLINEAR CONVERGENCE
A GLOBALLY CONVERGENT STABILIZED SQP METHOD: SUPERLINEAR CONVERGENCE Philip E. Gill Vyacheslav Kungurtsev Daniel P. Robinson UCSD Center for Computational Mathematics Technical Report CCoM-14-1 June 30,
More informationNote 3: LP Duality. If the primal problem (P) in the canonical form is min Z = n (1) then the dual problem (D) in the canonical form is max W = m (2)
Note 3: LP Duality If the primal problem (P) in the canonical form is min Z = n j=1 c j x j s.t. nj=1 a ij x j b i i = 1, 2,..., m (1) x j 0 j = 1, 2,..., n, then the dual problem (D) in the canonical
More informationPrimal/Dual Decomposition Methods
Primal/Dual Decomposition Methods Daniel P. Palomar Hong Kong University of Science and Technology (HKUST) ELEC5470 - Convex Optimization Fall 2018-19, HKUST, Hong Kong Outline of Lecture Subgradients
More informationLinear Programming. Larry Blume Cornell University, IHS Vienna and SFI. Summer 2016
Linear Programming Larry Blume Cornell University, IHS Vienna and SFI Summer 2016 These notes derive basic results in finite-dimensional linear programming using tools of convex analysis. Most sources
More informationA Second-Order Path-Following Algorithm for Unconstrained Convex Optimization
A Second-Order Path-Following Algorithm for Unconstrained Convex Optimization Yinyu Ye Department is Management Science & Engineering and Institute of Computational & Mathematical Engineering Stanford
More informationLecture 5. x 1,x 2,x 3 0 (1)
Computational Intractability Revised 2011/6/6 Lecture 5 Professor: David Avis Scribe:Ma Jiangbo, Atsuki Nagao 1 Duality The purpose of this lecture is to introduce duality, which is an important concept
More informationLinear and Combinatorial Optimization
Linear and Combinatorial Optimization The dual of an LP-problem. Connections between primal and dual. Duality theorems and complementary slack. Philipp Birken (Ctr. for the Math. Sc.) Lecture 3: Duality
More informationChap 2. Optimality conditions
Chap 2. Optimality conditions Version: 29-09-2012 2.1 Optimality conditions in unconstrained optimization Recall the definitions of global, local minimizer. Geometry of minimization Consider for f C 1
More informationChapter 2: Linear Programming Basics. (Bertsimas & Tsitsiklis, Chapter 1)
Chapter 2: Linear Programming Basics (Bertsimas & Tsitsiklis, Chapter 1) 33 Example of a Linear Program Remarks. minimize 2x 1 x 2 + 4x 3 subject to x 1 + x 2 + x 4 2 3x 2 x 3 = 5 x 3 + x 4 3 x 1 0 x 3
More informationDuality revisited. Javier Peña Convex Optimization /36-725
Duality revisited Javier Peña Conve Optimization 10-725/36-725 1 Last time: barrier method Main idea: approimate the problem f() + I C () with the barrier problem f() + 1 t φ() tf() + φ() where t > 0 and
More informationLinear programming II
Linear programming II Review: LP problem 1/33 The standard form of LP problem is (primal problem): max z = cx s.t. Ax b, x 0 The corresponding dual problem is: min b T y s.t. A T y c T, y 0 Strong Duality
More informationA STABILIZED SQP METHOD: SUPERLINEAR CONVERGENCE
A STABILIZED SQP METHOD: SUPERLINEAR CONVERGENCE Philip E. Gill Vyacheslav Kungurtsev Daniel P. Robinson UCSD Center for Computational Mathematics Technical Report CCoM-14-1 June 30, 2014 Abstract Regularized
More information