
CS711008Z Algorithm Design and Analysis
Lecture 8: Linear programming: interior point method
Dongbo Bu
Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China

Outline
- A brief history of the interior point method
- Basic idea of the interior point method

A brief history of linear programming
- In 1949, G. B. Dantzig proposed the simplex algorithm;
- In 1971, Klee and Minty gave a counter-example showing that the simplex algorithm is not a polynomial-time algorithm;
- In 1975, L. V. Kantorovich and T. C. Koopmans received the Nobel prize for applying linear programming to resource allocation;
- In 1979, L. G. Khachiyan proposed a polynomial-time ellipsoid method;
- In 1984, N. Karmarkar proposed another polynomial-time method, the interior-point method;
- In 2001, D. Spielman and S. Teng proposed smoothed complexity analysis to explain the practical efficiency of the simplex algorithm.

In 1979, L. G. Khachiyan proposed a polynomial-time ellipsoid method for LP.
Figure: Leonid G. Khachiyan

In 1984, N. Karmarkar proposed a new polynomial-time algorithm for LP.

Basic idea of the interior point method

KKT conditions for LP
Let's consider a linear program in slack form and its dual problem, i.e.,

Primal problem:
    min  c^T x
    s.t. Ax = b
         x ≥ 0

Dual problem:
    max  b^T y
    s.t. A^T y ≤ c

KKT conditions:
- Primal feasibility: Ax = b, x ≥ 0;
- Dual feasibility: A^T y ≤ c;
- Complementary slackness: x_i = 0 or a_i^T y = c_i for each i = 1, ..., n, where a_i denotes the i-th column of A.

Rewriting the KKT conditions
Let's consider the same linear program in slack form and its dual problem, i.e.,

Primal problem:
    min  c^T x
    s.t. Ax = b
         x ≥ 0

Dual problem:
    max  b^T y
    s.t. A^T y ≤ c

Introducing the dual slack variables σ = c − A^T y, the KKT conditions become:
- Primal feasibility: Ax = b, x ≥ 0;
- Dual feasibility: A^T y + σ = c, σ ≥ 0;
- Complementary slackness: x_i σ_i = 0 for i = 1, ..., n.
These conditions consist of m + 2n equations over m + 2n variables, plus the non-negativity constraints.

Rewriting the KKT conditions further
Define the diagonal matrices (shown here for n = 3)

X = \begin{pmatrix} x_1 & 0 & 0 \\ 0 & x_2 & 0 \\ 0 & 0 & x_3 \end{pmatrix}, \qquad
\Sigma = \begin{pmatrix} \sigma_1 & 0 & 0 \\ 0 & \sigma_2 & 0 \\ 0 & 0 & \sigma_3 \end{pmatrix}

and rewrite the complementary slackness as

X \Sigma e = \begin{pmatrix} x_1 \sigma_1 & 0 & 0 \\ 0 & x_2 \sigma_2 & 0 \\ 0 & 0 & x_3 \sigma_3 \end{pmatrix}
\begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix}
= \begin{pmatrix} x_1 \sigma_1 \\ x_2 \sigma_2 \\ x_3 \sigma_3 \end{pmatrix} = 0

KKT conditions:
- Primal feasibility: Ax = b, x ≥ 0;
- Dual feasibility: A^T y + σ = c, σ ≥ 0;
- Complementary slackness: XΣe = 0.
We call (x, y, σ) an interior point if x > 0 and σ > 0.
Question: how can we find (x, y, σ) satisfying the KKT conditions?
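
As an illustration of what "satisfying the KKT conditions" means computationally (this sketch is mine, not from the lecture; the function name is an assumption), the following Python/NumPy function evaluates the three KKT residual blocks; an interior point method drives all of them to zero while keeping x > 0 and σ > 0.

```python
import numpy as np

def kkt_residual(A, b, c, x, y, sigma):
    """KKT residuals for the LP in slack form: primal feasibility Ax - b,
    dual feasibility A^T y + sigma - c, and complementary slackness X Sigma e."""
    r_primal = A @ x - b
    r_dual = A.T @ y + sigma - c
    r_comp = x * sigma          # componentwise x_i * sigma_i, i.e. X Sigma e
    return r_primal, r_dual, r_comp
```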

A simple interior-point method
Basic idea: assume we already know (x̄, ȳ, σ̄) that is both primal and dual feasible, i.e.,

    A x̄ = b, x̄ > 0
    A^T ȳ + σ̄ = c, σ̄ > 0,

and try to improve it to another point (x', y', σ') that makes complementary slackness hold.
Improvement strategy: starting from (x̄, ȳ, σ̄), we follow a direction (Δx, Δy, Δσ) such that (x̄ + Δx, ȳ + Δy, σ̄ + Δσ) is a better solution, i.e., it comes closer to satisfying complementary slackness. To find such a direction, we substitute (x̄ + Δx, ȳ + Δy, σ̄ + Δσ) into the KKT conditions:
- Primal feasibility: A(x̄ + Δx) = b, x̄ + Δx ≥ 0;
- Dual feasibility: A^T(ȳ + Δy) + (σ̄ + Δσ) = c, σ̄ + Δσ ≥ 0;
- Complementary slackness: (X̄ + ΔX)(Σ̄ + ΔΣ)e = 0.

Since we start from (x̄, ȳ, σ̄) such that

    A x̄ = b, x̄ > 0
    A^T ȳ + σ̄ = c, σ̄ > 0,

the above conditions reduce to
- A Δx = 0, x̄ + Δx ≥ 0;
- A^T Δy + Δσ = 0, σ̄ + Δσ ≥ 0;
- X̄ Δσ + Σ̄ Δx = −X̄ Σ̄ e − ΔX ΔΣ e.
Note that when Δx and Δσ are small, the final non-linear term ΔX ΔΣ e is small relative to X̄ Σ̄ e and can thus be dropped, generating the linear system
- A Δx = 0, x̄ + Δx ≥ 0;
- A^T Δy + Δσ = 0, σ̄ + Δσ ≥ 0;
- X̄ Δσ + Σ̄ Δx = −X̄ Σ̄ e.
The solution is

    Δσ = X̄^{-1}(−X̄ Σ̄ e − Σ̄ Δx) = −σ̄ − X̄^{-1} Σ̄ Δx,

where (Δx, Δy) solve

\begin{pmatrix} -\bar{X}^{-1}\bar{\Sigma} & A^T \\ A & 0 \end{pmatrix}
\begin{pmatrix} \Delta x \\ \Delta y \end{pmatrix}
= \begin{pmatrix} \bar{\sigma} \\ 0 \end{pmatrix}
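
A minimal NumPy sketch of this direction computation (my own illustration of the derivation above; the function name and interface are assumptions, not from the lecture):

```python
import numpy as np

def affine_direction(A, x, sigma):
    """Solve the linearized KKT system for (dx, dy, dsigma),
    with the nonlinear term dX dSigma e dropped as in the derivation above."""
    m, n = A.shape
    D = np.diag(sigma / x)                     # X^{-1} Sigma (diagonal matrix)
    K = np.block([[-D, A.T],                   # [[-X^{-1} Sigma, A^T], [A, 0]]
                  [A, np.zeros((m, m))]])
    rhs = np.concatenate([sigma, np.zeros(m)])
    sol = np.linalg.solve(K, rhs)
    dx, dy = sol[:n], sol[n:]
    dsigma = -sigma - (sigma / x) * dx         # dsigma = -sigma - X^{-1} Sigma dx
    return dx, dy, dsigma
```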

1: Set an initial solution x̄ such that A x̄ = b, x̄ > 0;
2: Set initial solutions ȳ, σ̄ such that A^T ȳ + σ̄ = c, σ̄ > 0;
3: while TRUE do
4:     Solve
           \begin{pmatrix} -\bar{X}^{-1}\bar{\Sigma} & A^T \\ A & 0 \end{pmatrix}
           \begin{pmatrix} \Delta x \\ \Delta y \end{pmatrix}
           = \begin{pmatrix} \bar{\sigma} \\ 0 \end{pmatrix}
5:     Set Δσ = X̄^{-1}(−X̄ Σ̄ e − Σ̄ Δx) = −σ̄ − X̄^{-1} Σ̄ Δx;
6:     Calculate a step length θ such that x̄ + θ Δx ≥ 0 and σ̄ + θ Δσ ≥ 0;
7:     Update x̄ = x̄ + θ Δx, ȳ = ȳ + θ Δy, σ̄ = σ̄ + θ Δσ;
8:     if x̄_i σ̄_i ≤ ε for all i = 1, ..., n then
9:         break;
10:    end if
11: end while
12: return (x̄, ȳ, σ̄);
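
Putting the pieces together, here is a hedged Python sketch of the whole loop, reusing the affine_direction helper from the previous sketch; the step-length rule (a fixed fraction of the largest feasible step) and the tolerance are illustrative choices, not prescribed by the lecture.

```python
import numpy as np

def affine_interior_point(A, b, c, x, y, sigma, eps=1e-8, frac=0.99):
    """Simple (affine) primal-dual interior point iteration for
    min c^T x s.t. Ax = b, x >= 0, started from strictly feasible (x, y, sigma)."""
    while np.max(x * sigma) > eps:             # complementary slackness test
        dx, dy, dsigma = affine_direction(A, x, sigma)
        theta = 1.0                            # largest step keeping x and sigma positive
        for v, dv in ((x, dx), (sigma, dsigma)):
            neg = dv < 0
            if np.any(neg):
                theta = min(theta, frac * np.min(-v[neg] / dv[neg]))
        x, y, sigma = x + theta * dx, y + theta * dy, sigma + theta * dsigma
    return x, y, sigma
```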

An example

    min  2x_1 + 15x_2
    s.t. 12x_1 + 24x_2 ≥ 120
         16x_1 + 16x_2 ≥ 120
         30x_1 + 12x_2 ≥ 120
         x_1 ≤ 15
         x_2 ≤ 15
         x_1 ≥ 0, x_2 ≥ 0
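
To feed this instance to the slack-form solver sketched above, one possible encoding (an illustration, not the lecture's) introduces a surplus or slack variable per inequality:

```python
import numpy as np

# Decision vector z = (x1, x2, s1, s2, s3, s4, s5) >= 0; problem: min c^T z s.t. A z = b.
A = np.array([
    [12.0, 24.0, -1.0,  0.0,  0.0, 0.0, 0.0],   # 12 x1 + 24 x2 - s1 = 120
    [16.0, 16.0,  0.0, -1.0,  0.0, 0.0, 0.0],   # 16 x1 + 16 x2 - s2 = 120
    [30.0, 12.0,  0.0,  0.0, -1.0, 0.0, 0.0],   # 30 x1 + 12 x2 - s3 = 120
    [ 1.0,  0.0,  0.0,  0.0,  0.0, 1.0, 0.0],   #       x1       + s4 = 15
    [ 0.0,  1.0,  0.0,  0.0,  0.0, 0.0, 1.0],   #       x2       + s5 = 15
])
b = np.array([120.0, 120.0, 120.0, 15.0, 15.0])
c = np.array([2.0, 15.0, 0.0, 0.0, 0.0, 0.0, 0.0])
```

The starting point x̄ = (10, 10) used below corresponds to the strictly feasible vector z = (10, 10, 240, 200, 300, 5, 5).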

An execution
Starting point 1: x̄ = (10, 10)
(Figure: the iterates plotted in the (x_1, x_2) plane, both axes from 0 to 15.)

Another execution
Starting point 2: x̄ = (14, 1)
(Figure: the iterates plotted in the (x_1, x_2) plane, both axes from 0 to 15.)

Centered interior-point method
To avoid this poor performance, we need a way to keep the iterates away from the boundary until they approach the optimum. An efficient way to achieve this is to relax the complementary slackness condition XΣe = 0 into

    XΣe = (1/t) e,  t > 0.

We start from a small t and gradually increase it as the algorithm proceeds. At each iteration, we execute the previous affine interior-point method.
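
In the Newton system, only the complementary slackness right-hand side changes: −X̄Σ̄e becomes (1/t)e − X̄Σ̄e. A hedged sketch of the resulting centered direction, under the same assumptions and naming as the earlier affine_direction sketch:

```python
import numpy as np

def centered_direction(A, x, sigma, t):
    """Newton direction for the relaxed condition X Sigma e = (1/t) e;
    identical to affine_direction except for the centering term (1/t) X^{-1} e."""
    m, n = A.shape
    D = np.diag(sigma / x)                       # X^{-1} Sigma
    center = (1.0 / t) / x                       # (1/t) X^{-1} e
    K = np.block([[-D, A.T],
                  [A, np.zeros((m, m))]])
    rhs = np.concatenate([sigma - center, np.zeros(m)])
    sol = np.linalg.solve(K, rhs)
    dx, dy = sol[:n], sol[n:]
    dsigma = center - sigma - (sigma / x) * dx   # dsigma = (1/t) X^{-1} e - sigma - X^{-1} Sigma dx
    return dx, dy, dsigma
```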

Running the central path algorithm
Starting point 1: x̄ = (10, 10)
(Figure: the iterates plotted in the (x_1, x_2) plane, both axes from 0 to 15.)

Running the central path algorithm
Starting point 2: x̄ = (14, 1)
(Figure: the iterates plotted in the (x_1, x_2) plane, both axes from 0 to 15.)

Another viewpoint
Actually, the relaxed complementary slackness XΣe = (1/t)e (t > 0) corresponds to the following barrier problem:

    min  c^T x − (1/t) Σ_{i=1}^n log x_i
    s.t. Ax = b
         x > 0
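
A short check of this correspondence (my own derivation, following the standard argument): the stationarity condition of the barrier problem, with a multiplier y for Ax = b, is exactly the relaxed complementary slackness.

```latex
% Lagrangian of the barrier problem: L(x, y) = c^T x - (1/t) \sum_i \log x_i - y^T (Ax - b).
% Stationarity in x:
\nabla_x L = c - \tfrac{1}{t} X^{-1} e - A^T y = 0 .
% With the dual slack \sigma := c - A^T y, this reads
\sigma = \tfrac{1}{t} X^{-1} e
\quad\Longleftrightarrow\quad
X \Sigma e = \tfrac{1}{t}\, e ,
% i.e. the relaxed complementary slackness, together with Ax = b, x > 0, \sigma > 0.
```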

Consider a general optimization problem
Suppose we are trying to solve a convex program with inequality constraints:

    min  f(x)
    s.t. Ax = b
         g(x) ≤ 0

where f(x) and g(x) are convex and twice differentiable.

Let's start from equality-constrained quadratic programming
Suppose we are trying to solve a QP with equality constraints:

    min  (1/2) x^T P x + Q^T x + r
    s.t. Ax = b

Applying the Lagrangian optimality conditions, we have Ax = b and P x + Q + A^T λ = 0. Thus the optimal point x can be found by solving

\begin{bmatrix} P & A^T \\ A & 0 \end{bmatrix}
\begin{bmatrix} x \\ \lambda \end{bmatrix}
= \begin{bmatrix} -Q \\ b \end{bmatrix}
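
A minimal NumPy sketch of this KKT solve (an illustration; the function name is mine):

```python
import numpy as np

def solve_equality_qp(P, Q, A, b):
    """Solve min 1/2 x^T P x + Q^T x + r s.t. Ax = b
    via the KKT system [[P, A^T], [A, 0]] [x; lam] = [-Q; b]."""
    n, m = P.shape[0], A.shape[0]
    K = np.block([[P, A.T],
                  [A, np.zeros((m, m))]])
    sol = np.linalg.solve(K, np.concatenate([-Q, b]))
    return sol[:n], sol[n:]          # optimal x and multiplier lambda
```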

Then how do we minimize f(x) with equality constraints? Newton's method
Suppose we are trying to minimize a convex function f(x) that is not a QP:

    min  f(x)
    s.t. Ax = b

Basic idea: let's try to improve from a feasible solution x. At x, we write the Taylor expansion and use the quadratic approximation

    f(x + Δx) ≈ f(x) + ∇f(x)^T Δx + (1/2) Δx^T ∇²f(x) Δx,

which gives the subproblem

    min  f(x) + ∇f(x)^T Δx + (1/2) Δx^T ∇²f(x) Δx
    s.t. A(x + Δx) = b

Thus the Newton step Δx can be found by solving

\begin{bmatrix} \nabla^2 f(x) & A^T \\ A & 0 \end{bmatrix}
\begin{bmatrix} \Delta x \\ \lambda \end{bmatrix}
= \begin{bmatrix} -\nabla f(x) \\ 0 \end{bmatrix}

(the second right-hand-side block is 0 because Ax = b already). Since this is just an approximation to f(x), we need to iterate and perform a line search.
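
A hedged sketch of this iteration (illustrative only; a fixed damped step stands in for a proper line search, and the gradient/Hessian callbacks are assumed interfaces):

```python
import numpy as np

def newton_equality(f_grad, f_hess, A, b, x, iters=20, alpha=0.25):
    """Newton's method for min f(x) s.t. Ax = b starting from a feasible x:
    solve the KKT system of the quadratic model, then take a damped step."""
    m = A.shape[0]
    for _ in range(iters):
        g, H = f_grad(x), f_hess(x)
        K = np.block([[H, A.T],
                      [A, np.zeros((m, m))]])
        rhs = np.concatenate([-g, np.zeros(m)])
        dx = np.linalg.solve(K, rhs)[:x.size]
        x = x + alpha * dx            # damped step; a line search would choose this factor
    return x
```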

Then how do we minimize f(x) with inequality constraints?
Suppose we are trying to solve a convex program with inequality constraints:

    min  f(x)
    s.t. Ax = b
         g(x) ≤ 0

where f(x) and g(x) are convex and twice differentiable.
Basic idea:
1. Transform it into a series of equality-constrained optimization problems;
2. Solve each equality-constrained optimization problem using Newton's method.

Log barrier function: transform into equality constraints

Log barrier function
Suppose we are trying to solve a convex program with inequality constraints:

    min  f(x)
    s.t. Ax = b
         g(x) ≤ 0

where f(x) and g(x) are convex and twice differentiable. This is equivalent to the following optimization problem:

    min  f(x) + I_−(g(x))
    s.t. Ax = b

where I_−(u) = 0 if u ≤ 0, and ∞ otherwise. But the indicator function is not differentiable.

Log barrier function: a smooth approximation to the indicator function
Suppose we are trying to solve a convex program with inequality constraints:

    min  f(x)
    s.t. Ax = b
         g(x) ≤ 0

where f(x) and g(x) are convex and twice differentiable. We approximate it by the following optimization problem:

    min  f(x) − (1/t) log(−g(x))
    s.t. Ax = b

Basic idea: −(1/t) log(−u) is a smooth approximation to the indicator function, and the approximation improves as t → ∞.

Log barrier function approximates the indicator function
−(1/t) log(−u) approximates the indicator function, and the approximation improves as t → ∞. For each setting of t, we obtain an approximation to the original optimization problem.

Interior method and central path
Solve a sequence of optimization problems:

    min  f(x) − (1/t) Σ_i log(−g_i(x))
    s.t. Ax = b

For any t > 0, we define x*(t) as the optimal solution. t is increased step by step; it should not be too large initially, since the subproblem would then be hard to solve with Newton's method.
Central path: {x*(t) : t > 0}.
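
As a concrete sketch (mine, not the lecture's), here is the outer loop specialized to the LP barrier problem min c^T x − (1/t) Σ_i log x_i s.t. Ax = b, reusing the newton_equality helper from the earlier sketch and assuming the damped Newton steps keep the iterates strictly positive; the starting t and the factor mu are illustrative choices.

```python
import numpy as np

def lp_central_path(c, A, b, x, t=1.0, mu=10.0, outer_iters=10):
    """Follow the central path of min c^T x - (1/t) sum_i log x_i s.t. Ax = b,
    increasing t by a factor mu after each approximate Newton solve."""
    for _ in range(outer_iters):
        grad = lambda z: c - (1.0 / t) / z              # gradient of the barrier objective
        hess = lambda z: np.diag((1.0 / t) / z**2)      # Hessian (diagonal)
        x = newton_equality(grad, hess, A, b, x)        # inner equality-constrained Newton
        t *= mu                                         # move further along the central path
    return x
```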

Interior point method for LP
Consider the following LP:

    min  c^T x
    s.t. Ax ≥ b
         x ≥ 0

Central path
For any t > 0, we define x*(t) as the optimal solution to

    min  f(x) − (1/t) Σ_i log(−g_i(x))
    s.t. Ax = b

x*(t) is not an optimal solution to the original problem, but the duality gap is bounded by m/t, where m is the number of inequality constraints.
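
A short justification of this bound (my own wording of the standard argument): the optimality condition of the barrier problem produces a dual feasible point whose duality gap against x*(t) is exactly m/t.

```latex
% Stationarity of the barrier problem at x^*(t), with multiplier \nu for Ax = b:
%   \nabla f(x^*(t)) + \sum_{i=1}^{m} \lambda_i \nabla g_i(x^*(t)) + A^T \nu = 0,
%   where \lambda_i := \frac{1}{-t\, g_i(x^*(t))} > 0.
% Hence x^*(t) minimizes the Lagrangian L(\cdot, \lambda, \nu), and the dual value is
g(\lambda, \nu) = f(x^*(t)) + \sum_{i=1}^{m} \lambda_i\, g_i(x^*(t))
               = f(x^*(t)) - \frac{m}{t} \;\le\; p^{\ast},
% so that f(x^*(t)) - p^{\ast} \le m/t .
```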

Interior point algorithm