Numerical Optimization


Linear Programming - Interior Point Methods
Computer Science and Automation, Indian Institute of Science, Bangalore 560 012, India
NPTEL Course on Numerical Optimization

Example 1: Computational Complexity of the Simplex Algorithm

min  -2^{n-1} x_1 - 2^{n-2} x_2 - ... - 2 x_{n-1} - x_n
s.t. x_1 ≤ 5
     4 x_1 + x_2 ≤ 25
     8 x_1 + 4 x_2 + x_3 ≤ 125
     ...
     2^n x_1 + 2^{n-1} x_2 + ... + 4 x_{n-1} + x_n ≤ 5^n
     x_j ≥ 0,  j = 1, ..., n

The simplex method, starting at x = 0, visits all 2^n extreme points before reaching the optimal solution.^1

^1 V. Klee and G.J. Minty, "How good is the simplex algorithm?", in O. Shisha, editor, Inequalities III, pp. 159-175, Academic Press, 1972.

For n = 3, the problem is

min  -4 x_1 - 2 x_2 - x_3
s.t. x_1 ≤ 5
     4 x_1 + x_2 ≤ 25
     8 x_1 + 4 x_2 + x_3 ≤ 125
     x_1, x_2, x_3 ≥ 0

(Here x_4, x_5, x_6 denote the slack variables of the three inequality constraints.)

Iteration   Basic variables    Objective value
1           x_4, x_5, x_6        0
2           x_1, x_5, x_6      -20
3           x_1, x_2, x_6      -30
4           x_4, x_2, x_6      -50
5           x_4, x_2, x_3      -75
6           x_1, x_2, x_3      -95
7           x_1, x_5, x_3     -105
8           x_4, x_5, x_3     -125

The simplex algorithm is therefore not a polynomial-time algorithm: the number of computational steps grows as an exponential function of the number of variables rather than as a polynomial function.
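As a quick numerical check, a short script along the following lines (a sketch using NumPy and scipy.optimize.linprog; the helper name klee_minty is chosen here for illustration) builds the Klee-Minty data for a given n and confirms that the optimum is x = (0, ..., 0, 5^n) with value -5^n, the vertex the simplex path above reaches only at its last step.

import numpy as np
from scipy.optimize import linprog

def klee_minty(n):
    # min -sum_j 2^(n-j) x_j  s.t.  sum_{j<i} 2^(i-j+1) x_j + x_i <= 5^i,  x >= 0
    c = -np.array([2.0 ** (n - j) for j in range(1, n + 1)])
    A = np.zeros((n, n))
    for i in range(1, n + 1):
        for j in range(1, i):
            A[i - 1, j - 1] = 2.0 ** (i - j + 1)
        A[i - 1, i - 1] = 1.0
    b = np.array([5.0 ** i for i in range(1, n + 1)])
    return c, A, b

c, A, b = klee_minty(3)
res = linprog(c, A_ub=A, b_ub=b, bounds=(0, None))
print(res.x, res.fun)   # approximately (0, 0, 125) and -125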

Interior Point Methods for Linear Programming

The points generated lie in the "interior" of the feasible region.
The methods are based on nonlinear programming techniques.
Some interior point methods: Affine Scaling, Karmarkar's Method.

We consider the linear program in standard form,

min  c^T x
s.t. Ax = b, x ≥ 0,

where A ∈ R^{m×n} and rank(A) = m.

Affine Scaling

Idea (1): Use the projected steepest descent direction at every iteration.

Let x^k be a feasible interior point at the current iteration k, i.e., Ax^k = b and x^k > 0. Let d denote a direction such that x^{k+1} = x^k + α_k d with α_k > 0. Since we also require Ax^{k+1} = b, the direction must satisfy Ad = 0.

Projected steepest descent direction: project the steepest descent direction, -c, onto the null space of A.

Let ĉ = -c = p_c + q, where
p_c ∈ Null(A), so A p_c = 0, and
q ∈ Row(A), so q = A^T λ.
Then Aĉ = AA^T λ, which gives λ = (AA^T)^{-1} A ĉ, and

p_c = ĉ - q = ĉ - A^T (AA^T)^{-1} A ĉ = (I - A^T (AA^T)^{-1} A) ĉ = -(I - A^T (AA^T)^{-1} A) c = -Pc,

where P = I - A^T (AA^T)^{-1} A denotes the projection matrix onto the null space of A.
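A minimal numerical check of this projection, assuming NumPy and a full-row-rank A (the particular A and c below are made up for illustration):

import numpy as np

A = np.array([[1.0, 1.0, 1.0, 0.0],
              [1.0, 0.0, 0.0, 1.0]])          # any full-row-rank A works
c = np.array([-3.0, -1.0, 0.0, 0.0])

P = np.eye(A.shape[1]) - A.T @ np.linalg.solve(A @ A.T, A)   # P = I - A^T (A A^T)^{-1} A
p_c = -P @ c                                  # projection of the steepest descent direction -c
print(np.allclose(A @ p_c, 0.0))              # True: p_c lies in the null space of A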

Idea (2): Position the current point close to the centre of the feasible region. For example, one possible choice of centre is the point 1 = (1, 1, ..., 1)^T.

Given a point x^k in the interior of the feasible region, define X_k = diag(x^k) and the transformation

y = T(x) = X_k^{-1} x.

Then y^k = X_k^{-1} x^k = 1, or equivalently X_k y^k = x^k.

Affine Scaling Algorithm (outline):

Start with any interior point x^0.
while (the stopping condition is not satisfied at the current point)
    Transform the current problem into an equivalent problem in the y space so that the current point is close to the centre of the feasible region.
    Use the projected steepest descent direction to take a step in the y space without crossing the boundary of the feasible set.
    Map the new point back to the corresponding point in the x space.
endwhile

Stopping Condition for the Affine Scaling Algorithm

Consider the following primal and dual problems:

Primal Problem (P):  min c^T x   s.t.  Ax = b, x ≥ 0
Dual Problem (D):    max b^T µ   s.t.  A^T µ ≤ c

For any primal feasible x and dual feasible µ,

c^T x ≥ b^T µ        (Weak Duality)

and at optimality,

c^T x - b^T µ = 0    (Strong Duality)

Idea: Use the duality gap, c^T x - b^T µ, to check optimality.
Q. How to get µ?

Consider the primal problem:

min  c^T x
s.t. Ax = b, x ≥ 0

Define the Lagrangian function

L(x, λ, µ) = c^T x + µ^T (b - Ax) - λ^T x.

Assumption: x is primal feasible and λ ≥ 0.

KKT conditions at optimality:

∇_x L(x, λ, µ) = 0   ⇒   A^T µ + λ = c
λ_i x_i = 0,  i = 1, ..., n

Defining X = diag(x), the complementarity conditions can be written as X(c - A^T µ) = 0.

Solve the following problem to get µ:

min_µ ‖Xc - XA^T µ‖^2   ⇒   µ = (A X^2 A^T)^{-1} A X^2 c

Thus, at a given point x^k,

Duality Gap = c^T x^k - b^T µ^k,

where µ^k = (A X_k^2 A^T)^{-1} A X_k^2 c and X_k = diag(x^k).
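In code, this dual estimate and the resulting duality gap can be computed as in the short sketch below (NumPy assumed; the helper name duality_gap is illustrative):

import numpy as np

def duality_gap(A, b, c, x):
    # mu = (A X^2 A^T)^{-1} A X^2 c,  with X = diag(x); return c^T x - b^T mu
    X2 = np.diag(x * x)
    mu = np.linalg.solve(A @ X2 @ A.T, A @ X2 @ c)
    return c @ x - b @ mu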

Step 1: Equivalent problem formulation to obtain a considerable improvement in the objective function.

Given x^k, define X_k = diag(x^k) and the transformation T by

y = T(x) = X_k^{-1} x,   so that   X_k y^k = x^k and y^k = 1.

Using this transformation, the problem

min  c^T x   s.t.  Ax = b, x ≥ 0

becomes

min  c^T X_k y   s.t.  A X_k y = b, y ≥ 0,

which can be written in standard form as

min  c̄^T y   s.t.  Ā y = b, y ≥ 0,

where c̄ = X_k c and Ā = A X_k.

Step 2: Find the projected steepest descent direction and step length at y^k for the problem

min  c̄^T y   s.t.  Ā y = b, y ≥ 0.

Given x^k, y^k = X_k^{-1} x^k = 1. Projecting the steepest descent direction -c̄ onto the null space of Ā gives

d^k = -(I - X_k A^T (A X_k^2 A^T)^{-1} A X_k) X_k c.

Let α_k (> 0) denote the step length, so that

y^{k+1} = y^k + α_k d^k = 1 + α_k d^k ≥ 0.

Let α_max = min_{j : d^k_j < 0} (-1 / d^k_j) and set α_k = 0.9 α_max.

Step 3: x^{k+1} = X_k y^{k+1}.

Affine Scaling Algorithm (to solve an LP in standard form)

(1) Input: A, b, c, x^0, ε
(2) Set k := 0
(3) X_k = diag(x^k)
(4) µ^k = (A X_k^2 A^T)^{-1} A X_k^2 c
(5) while (c^T x^k - b^T µ^k) > ε
    (a) d^k = -(I - X_k A^T (A X_k^2 A^T)^{-1} A X_k) X_k c
    (b) α_k = 0.9 min_{j : d^k_j < 0} (-1 / d^k_j)
    (c) x^{k+1} = X_k (1 + α_k d^k)
    (d) X_{k+1} = diag(x^{k+1})
    (e) µ^{k+1} = (A X_{k+1}^2 A^T)^{-1} A X_{k+1}^2 c
    (f) k := k + 1
endwhile
Output: x* = x^k
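One possible NumPy rendering of this loop is sketched below, under the stated assumptions that A has full row rank and x^0 is strictly feasible; the function name affine_scaling and the default tolerances are choices made here, not part of the notes. It uses the identity d^k = -X_k (c - A^T µ^k), which follows from the formula for µ^k above.

import numpy as np

def affine_scaling(A, b, c, x0, eps=1e-6, max_iter=100):
    # min c^T x  s.t.  Ax = b, x >= 0, starting from a strictly feasible x0 (Ax0 = b, x0 > 0)
    x = np.asarray(x0, dtype=float)
    mu = None
    for _ in range(max_iter):
        X = np.diag(x)                                 # X_k = diag(x^k)
        AX2AT = A @ X @ X @ A.T                        # A X_k^2 A^T
        mu = np.linalg.solve(AX2AT, A @ X @ X @ c)     # dual estimate mu^k
        if c @ x - b @ mu <= eps:                      # duality-gap stopping test
            break
        d = -X @ (c - A.T @ mu)                        # projected steepest descent direction d^k
        if not np.any(d < 0):
            raise ValueError("no component of d is negative; the LP may be unbounded")
        alpha = 0.9 * np.min(-1.0 / d[d < 0])          # 0.9 times the step to the boundary
        x = X @ (np.ones_like(x) + alpha * d)          # x^{k+1} = X_k (1 + alpha_k d^k)
    return x, mu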

Application of the Affine Scaling Algorithm to the problem

min  -3 x_1 - x_2
s.t. x_1 + x_2 ≤ 2
     x_1 ≤ 1
     x_1, x_2 ≥ 0

whose optimal solution is x* = (1, 1)^T:

Iteration   x^{kT}          ‖x^k - x*‖
0           (.6, .6)        .57
1           (.87, .81)      .23
2           (.88, 1.00)     .12
3           (.90, .99)      .1026
4           (.90, 1.00)     .1001

[Figure: the affine scaling iterates x^k plotted in the (x_1, x_2) plane.]
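To reproduce the flavour of this example, the LP can be put into standard form with slack variables x_3 and x_4 and passed to the affine_scaling sketch above; the interior starting point below is an assumption made for illustration, not the one used in the notes.

import numpy as np

# min -3 x1 - x2  s.t.  x1 + x2 + x3 = 2,  x1 + x4 = 1,  x >= 0
A = np.array([[1.0, 1.0, 1.0, 0.0],
              [1.0, 0.0, 0.0, 1.0]])
b = np.array([2.0, 1.0])
c = np.array([-3.0, -1.0, 0.0, 0.0])
x0 = np.array([0.5, 0.5, 1.0, 0.5])       # strictly feasible interior point (assumed)

x_opt, mu_opt = affine_scaling(A, b, c, x0, eps=1e-8)
print(x_opt[:2])                          # approaches (1, 1)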

Karmarkar's Method

Assumptions:
The problem is in homogeneous form:
    min  c^T x
    s.t. Ax = 0, 1^T x = 1, x ≥ 0
The optimum objective function value is 0.

Idea:
(1) Use a projective transformation to move an interior point to the centre of the feasible region.
(2) Move along the projected steepest descent direction.

Projective Transformation: Consider the transformation T defined by

y = T(x) = X_k^{-1} x / (1^T X_k^{-1} x),   x ≥ 0.

Remarks:
x^k is mapped to y^k = (1/n, ..., 1/n)^T, and 1^T y = 1.
Inverse transformation: x = (1^T X_k^{-1} x)(X_k y), so 1^T x = (1^T X_k^{-1} x) 1^T (X_k y).
For every feasible x, 1^T x = 1, which gives 1^T X_k^{-1} x = 1 / (1^T X_k y), and therefore

T^{-1}(y) = x = X_k y / (1^T X_k y).
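A small check of the transformation and its inverse (NumPy sketch; the points x^k and x below are arbitrary and chosen only for illustration):

import numpy as np

def T(x, xk):
    # y = X_k^{-1} x / (1^T X_k^{-1} x)
    z = x / xk
    return z / z.sum()

def T_inv(y, xk):
    # x = X_k y / (1^T X_k y)
    z = xk * y
    return z / z.sum()

xk = np.array([0.2, 0.3, 0.5])
print(T(xk, xk))                              # x^k maps to the centre (1/n, ..., 1/n)
x = np.array([0.1, 0.4, 0.5])                 # any feasible point with 1^T x = 1
print(np.allclose(T_inv(T(x, xk), xk), x))    # True: the round trip recovers x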

Using the transformation x = X_k y / (1^T X_k y), the problem

min  c^T x
s.t. Ax = 0, 1^T x = 1, x ≥ 0

becomes

min  c^T X_k y / (1^T X_k y)
s.t. A X_k y = 0, 1^T y = 1, y ≥ 0,

which is equivalent to

min  c^T X_k y
s.t. A X_k y = 0, 1^T y = 1, y ≥ 0.

The equivalence is based on the assumptions that the optimal objective function value is 0 and that 1^T X_k y > 0.

Consider the problem

min  c^T X_k y
s.t. A X_k y = 0, 1^T y = 1, y ≥ 0.

Step 1: Find the projected steepest descent direction in the y space, i.e., project X_k c onto the subspace {d : A X_k d = 0, 1^T d = 0}; the algorithm then moves in the direction -d^k.

Find d by solving

min  (1/2) ‖X_k c - d‖^2   s.t.  A X_k d = 0,

and then projecting the result onto the null space of 1^T.

Consider the problem

min  (1/2) ‖X_k c - d‖^2   s.t.  A X_k d = 0.

Its Lagrangian is

L(d, µ) = (1/2) ‖X_k c - d‖^2 + µ^T A X_k d.

Setting ∇_d L(d, µ) = 0:

-(X_k c - d) + X_k A^T µ = 0   ⇒   d = X_k c - X_k A^T µ.

Using the constraint,

0 = A X_k d = A X_k^2 c - A X_k^2 A^T µ   ⇒   A X_k^2 c = A X_k^2 A^T µ   ⇒   µ = (A X_k^2 A^T)^{-1} A X_k^2 c.

Projection of d onto the null space of 1^T:

d^k = (I - (1/n) 1 1^T)(X_k c - X_k A^T µ).
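The whole computation fits in a few lines of NumPy (a sketch; the helper name projected_direction is chosen here for later reuse):

import numpy as np

def projected_direction(A, c, xk):
    # d^k = (I - (1/n) 1 1^T)(X_k c - X_k A^T mu),  mu = (A X_k^2 A^T)^{-1} A X_k^2 c
    Xk = np.diag(xk)
    AX2AT = A @ Xk @ Xk @ A.T
    mu = np.linalg.solve(AX2AT, A @ Xk @ Xk @ c)
    d = Xk @ (c - A.T @ mu)
    return d - d.mean()                     # subtracting the mean projects onto null(1^T)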

Karmarkar's Projective Scaling Algorithm (for a homogeneous Linear Program)

(1) Input: homogeneous LP data A, c, ε
(2) Set k := 0, x^k = (1/n) 1
(3) X_k = diag(x^k)
(4) while c^T x^k > ε
    (a) Find the projected steepest descent direction d^k
    (b) y^{k+1} = (1/n) 1 - (δ / √(n(n-1))) · d^k / ‖d^k‖,  with δ = 1/3
    (c) x^{k+1} = T^{-1}(y^{k+1})
    (d) X_{k+1} = diag(x^{k+1})
    (e) k := k + 1
endwhile
Output: x* = x^k
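A compact sketch of this loop, reusing the projected_direction helper from the previous block; it assumes the canonical-form conditions that A·1 = 0 (so the centre of the simplex is feasible) and that the optimal value is 0, and the function name karmarkar is chosen here for illustration.

import numpy as np

def karmarkar(A, c, eps=1e-6, delta=1.0 / 3.0, max_iter=1000):
    # min c^T x  s.t.  Ax = 0, 1^T x = 1, x >= 0, assuming A @ 1 = 0 and optimal value 0
    n = c.size
    x = np.ones(n) / n                                   # start at the centre of the simplex
    for _ in range(max_iter):
        if c @ x <= eps:                                 # the optimal value is assumed to be 0
            break
        d = projected_direction(A, c, x)                 # helper from the previous sketch
        y = np.ones(n) / n - delta / np.sqrt(n * (n - 1)) * d / np.linalg.norm(d)
        x = (x * y) / np.sum(x * y)                      # x^{k+1} = T^{-1}(y^{k+1})
    return x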