Lecture 15: October 15


10-725: Optimization, Fall 2012
Lecturer: Barnabas Poczos    Scribes: Christian Kroer, Fanyi Xiao

Note: LaTeX template courtesy of UC Berkeley EECS dept.

Disclaimer: These notes have not been subjected to the usual scrutiny reserved for formal publications. They may be distributed outside this class only with the permission of the Instructor.

15.1 Outline

- Linear programs
- Simplex algorithm
- Running time of Simplex
- Cutting planes and ellipsoid methods for linear programming
- Cutting planes and ellipsoid method for unconstrained minimization

15.2 Linear Programs

The inequality form of an LP is

$$\min_x \; c^T x \quad \text{s.t.} \quad Ax \le b, \quad l \le x \le u.$$

The standard form of an LP is

$$\min_x \; c^T x \quad \text{s.t.} \quad Ax = b, \quad x \ge 0, \quad b \ge 0.$$

Any LP can be rewritten in standard form, as illustrated below.

15.3 Simplex Method

The Simplex algorithm is a combinatorial search algorithm for solving linear programs. Conceptually, it jumps from one vertex of the feasible polyhedron (an intersection of constraint half-spaces) to an adjacent one, improving the objective at each step, until convergence.
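Since the simplex method is typically stated for standard-form LPs, it is worth making the conversion claim of Section 15.2 concrete. As an illustration (a worked example added here, not part of the original notes): an inequality-form LP $\min_x c^T x$ s.t. $Ax \le b$ with $x$ free can be brought to standard form by splitting $x = x^+ - x^-$ with $x^+, x^- \ge 0$ and introducing slack variables $s \ge 0$:

$$\min_{x^+, x^-, s} \; c^T x^+ - c^T x^- \quad \text{s.t.} \quad A x^+ - A x^- + s = b, \quad x^+, x^-, s \ge 0.$$

If some entry of $b$ is negative, multiplying that row of $Ax = b$ by $-1$ restores $b \ge 0$.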

The running time of the simplex method was long believed to be polynomial in the size of the problem $(m, n)$, where $m$ is the number of constraints and $n$ is the number of variables. The worst case is to enumerate all possible configurations of basic variables, which takes

$$\binom{n}{m} = \frac{n!}{(n-m)!\,m!}$$

steps. Indeed, the Klee-Minty example demonstrates that the running time of simplex is not polynomial:

$$\begin{aligned} \max_{x_1, \dots, x_n} \quad & \sum_{j=1}^{n} 10^{n-j} x_j \\ \text{s.t.} \quad & 2 \sum_{j=1}^{i-1} 10^{i-j} x_j + x_i \le 100^{i-1}, \quad i = 1, 2, \dots, n \\ & x_j \ge 0. \end{aligned}$$

This example requires $2^n - 1$ pivot steps to reach the optimal solution (for a concrete walkthrough, see http://www.math.ubc.ca/~israel/m340/kleemin3.pdf). After simplex was shown not to be a polynomial-time algorithm, people started looking for alternatives. Khachiyan's ellipsoid method was the first proved to run in polynomial time for LPs; however, it is usually slower than simplex on real problems. Karmarkar's method was the first algorithm that works reasonably well in practice while having polynomial complexity in theory; it falls into the category of interior-point methods.

15.4 The ellipsoid method for linear programs

15.4.1 The feasibility problem

Given a polyhedral set

$$\Omega = \{y \in \mathbb{R}^m : y^T a_j \le c_j, \; j = 1, \dots, n\},$$

the feasibility problem is to find some $y \in \Omega$. It is possible to prove that solving the feasibility problem is equivalent to solving general LPs (see Section 15.5).

15.4.2 The ellipsoid method

We now describe how the ellipsoid method works. For this we make the following assumptions:

(A1) $\Omega$ can be covered with a finite ball of radius $R$: that is, $\Omega \subseteq \{y \in \mathbb{R}^m : \|y - y_0\| \le R\} = S(y_0, R)$, where $R$ and $y_0$ are both known to us.

(A2) There exists a ball of radius $r$ that fits inside $\Omega$: that is, there exist $r$ and $y$ such that $S(y, r) \subseteq \Omega$, where $r$ is known but $y$ is unknown.

We will iteratively replace $S(y_0, R)$ by smaller and smaller enclosing sets until the center lands inside $\Omega$. For this we will need ellipsoids.
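As a sanity check on the Klee-Minty example (a sketch added here, not part of the original notes; it assumes numpy and scipy are available), the following code builds the LP above for a small $n$ and solves it with an off-the-shelf solver. The known optimum is $x_n = 100^{n-1}$ with all other coordinates zero.

```python
import numpy as np
from scipy.optimize import linprog

n = 3
# Objective: maximize sum_j 10^(n-j) x_j  (linprog minimizes, so negate).
c = -np.array([10.0 ** (n - j) for j in range(1, n + 1)])
# Constraint i: 2 * sum_{j<i} 10^(i-j) x_j + x_i <= 100^(i-1).
A = np.zeros((n, n))
b = np.array([100.0 ** (i - 1) for i in range(1, n + 1)])
for i in range(1, n + 1):
    for j in range(1, i):
        A[i - 1, j - 1] = 2 * 10.0 ** (i - j)
    A[i - 1, i - 1] = 1.0

res = linprog(c, A_ub=A, b_ub=b, bounds=[(0, None)] * n, method="highs")
print(res.x)     # expected: [0, 0, 10000] for n = 3
print(-res.fun)  # expected optimal value: 100^(n-1) = 10000
```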

Definition 15.1 (Ellipsoid) An ellipsoid is a set

$$E = \{y \in \mathbb{R}^m : (y - z)^T Q (y - z) \le 1\},$$

where $z \in \mathbb{R}^m$ is the center and $Q \in \mathbb{R}^{m \times m}$ is positive definite.

Some properties of ellipsoids, writing $\lambda_1, \dots, \lambda_m$ for the eigenvalues of $Q$:

- The axes of the ellipsoid are the eigenvectors of $Q$.
- The lengths of the axes are $\lambda_1^{-1/2}, \dots, \lambda_m^{-1/2}$.
- The volume of the ellipsoid equals the volume of the unit sphere times $\det(Q^{-1/2})$:

$$\mathrm{VOL}(E) = \mathrm{VOL}(S(0, 1)) \det(Q^{-1/2}) = \mathrm{VOL}(S(0, 1)) \prod_i \lambda_i^{-1/2}.$$

As mentioned, the ellipsoid method works by constructing a series of ellipsoids; we let $E_k$ denote the $k$-th ellipsoid. The center of $E_k$ is $y_k$, and its parameter matrix is $Q = B_k^{-1}$, where $B_k \succ 0$. At each iteration $k$ we maintain $\Omega \subseteq E_k$. We then check whether $y_k \in \Omega$: if it is, we have found an element of $\Omega$ and we are done. If not, there is at least one constraint that $y_k$ violates; suppose it is the $j$-th constraint, so $a_j^T y_k > c_j$. It follows that

$$\Omega \subseteq E_k \cap \{y : a_j^T y \le a_j^T y_k\},$$

since by the definition of $\Omega$ every $y \in \Omega$ satisfies $a_j^T y \le c_j$, and the violated constraint gives $c_j < a_j^T y_k$. The set $\{y : a_j^T y \le a_j^T y_k\}$ is a halfspace through the center of the ellipsoid, so it cuts the ellipsoid in half. Let

$$\tfrac{1}{2} E_k = E_k \cap \{y : a_j^T y \le a_j^T y_k\}.$$

The next step is to fit a new ellipsoid around this smaller set. Formally, the successor ellipsoid $E_{k+1}$ is defined to be the minimal-volume ellipsoid containing $\frac{1}{2} E_k$. It is constructed as follows. First, define

$$\tau = \frac{1}{m + 1}, \qquad \delta = \frac{m^2}{m^2 - 1}, \qquad \sigma = 2\tau.$$

Then

$$y_{k+1} = y_k - \frac{\tau}{(a_j^T B_k a_j)^{1/2}} B_k a_j, \qquad B_{k+1} = \delta \left( B_k - \sigma \, \frac{B_k a_j a_j^T B_k}{a_j^T B_k a_j} \right).$$

We will not prove here that this is indeed the minimal-volume ellipsoid. Now we compare the volumes of the two ellipsoids. We know that

$$E_{k+1} = E(y_{k+1}, B_{k+1}^{-1}) \supseteq \tfrac{1}{2} E_k.$$
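The update formulas translate directly into code. Below is a minimal sketch (added for illustration; the function name and interface are mine, and it assumes numpy) of a single step: given the current center $y_k$, matrix $B_k$, and a violated constraint vector $a_j$, it returns $(y_{k+1}, B_{k+1})$.

```python
import numpy as np

def ellipsoid_update(y, B, a):
    """One ellipsoid-method step: given E_k = E(y, B^{-1}) and a violated
    constraint a^T y > c_j, return the minimum-volume ellipsoid containing
    the half-ellipsoid E_k intersected with {x : a^T x <= a^T y}."""
    m = y.shape[0]
    tau = 1.0 / (m + 1)
    delta = m**2 / (m**2 - 1.0)   # requires m >= 2
    sigma = 2.0 * tau
    Ba = B @ a
    aBa = a @ Ba                  # a^T B a > 0 since B is positive definite
    y_next = y - (tau / np.sqrt(aBa)) * Ba
    B_next = delta * (B - sigma * np.outer(Ba, Ba) / aBa)
    return y_next, B_next
```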

Theorem 15.2

$$\frac{\mathrm{VOL}(E_{k+1})}{\mathrm{VOL}(E_k)} = \left( \frac{m^2}{m^2 - 1} \right)^{(m-1)/2} \frac{m}{m+1} < \exp\left( -\frac{1}{2(m+1)} \right) < 1.$$

Again, we do not prove this.

Convergence. For the initial step,

$$\Omega \subseteq \{y \in \mathbb{R}^m : \|y - y_0\| \le R\},$$

where $y_0$ and $R$ are known. We start the ellipsoid method from the ball $S(y_0, R)$. Now consider the ratio

$$\frac{\mathrm{VOL}(E_{2m})}{\mathrm{VOL}(E_0)} = \frac{\mathrm{VOL}(E_{2m})}{\mathrm{VOL}(E_{2m-1})} \cdots \frac{\mathrm{VOL}(E_1)}{\mathrm{VOL}(E_0)}.$$

We can upper bound this using Theorem 15.2:

$$\frac{\mathrm{VOL}(E_{2m})}{\mathrm{VOL}(E_{2m-1})} \cdots \frac{\mathrm{VOL}(E_1)}{\mathrm{VOL}(E_0)} \le \exp\left( -\frac{2m}{2(m+1)} \right) < \frac{1}{2}$$

(the last inequality holds for $m \ge 3$). Intuitively, this means that if we run the ellipsoid method for $2m$ steps, the volume of the ellipsoid is at least halved. Thus, every $O(m)$ iterations the ellipsoid method reduces the volume of the ellipsoid to half of its current volume.

How many iterations do we need to get down to the scale of $S(y, r)$? We start from the ball $S(y_0, R)$, whose volume is $\mathrm{VOL}(E_0) = c R^m$ for a dimension-dependent constant $c$. After $k$ steps, we want an ellipsoid with volume $\mathrm{VOL}(E_k) \le c r^m$, i.e.

$$\frac{\mathrm{VOL}(E_k)}{\mathrm{VOL}(E_0)} \le \left( \frac{r}{R} \right)^m, \quad \text{while} \quad \frac{\mathrm{VOL}(E_k)}{\mathrm{VOL}(E_0)} \le \left( \frac{1}{2} \right)^{k/(2m)},$$

where the second bound holds because the volume halves every $2m$ steps. Requiring $(1/2)^{k/(2m)} \le (r/R)^m$ and rearranging,

$$\frac{k}{2m} \log 2 \ge m \log \frac{R}{r} \quad \Longleftrightarrow \quad k \ge 2 m^2 \log_2 \frac{R}{r},$$

so $k = O(m^2 \log \frac{R}{r})$ iterations suffice to reduce the volume to less than that of a ball of radius $r$. A single iteration of the ellipsoid method requires $O(m^2)$ operations (updating $y_k$ and $B_k$). Thus, the entire process requires $O(m^4 \log \frac{R}{r})$ operations to solve the feasibility problem.
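Combining the update step with this iteration bound gives the full feasibility procedure. The sketch below (illustrative, with my own interface; it reuses ellipsoid_update from the previous sketch, and the rows of A are the vectors $a_j^T$) returns a feasible point, or None once the iteration budget is exhausted, which under assumptions (A1)-(A2) means $\Omega$ must be empty.

```python
import numpy as np

def ellipsoid_feasibility(A, c, y0, R, r):
    """Find a point in Omega = {y : A @ y <= c}, assuming S(y0, R)
    covers Omega and Omega contains some ball of radius r."""
    m = y0.shape[0]
    y, B = y0.astype(float), R**2 * np.eye(m)          # E_0 = S(y0, R)
    max_iters = int(np.ceil(2 * m**2 * np.log2(R / r))) + 1
    for _ in range(max_iters):
        violated = np.nonzero(A @ y > c)[0]
        if violated.size == 0:
            return y                                   # y is in Omega
        y, B = ellipsoid_update(y, B, A[violated[0]])  # cut and re-fit
    return None                                        # Omega is empty
```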

15.5 Solving general LPs with the ellipsoid method

Consider a primal LP

$$\max_x \; c^T x \quad \text{s.t.} \quad Ax \le b, \quad x \ge 0$$

and its dual

$$\min_y \; y^T b \quad \text{s.t.} \quad y^T A \ge c^T, \quad y \ge 0.$$

From duality theory we know that for any primal-feasible $x$ and dual-feasible $y$ we have $c^T x \le b^T y$, and for optimal $x$ and $y$ we have $c^T x = b^T y$. Thus, we can write the feasibility program

$$c^T x - b^T y \ge 0, \qquad Ax \le b, \qquad A^T y \ge c, \qquad x, y \ge 0,$$

where any feasible $(x, y)$ must be a pair of primal and dual optimal solutions. Thus, we can bound the number of operations needed to solve a linear program by

$$O\left( (m + n)^4 \log \frac{R}{r} \right).$$
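To make the reduction concrete, the sketch below (illustrative only; the function name is mine) assembles the optimality conditions above into a single inequality system $Gz \le h$ in the stacked variable $z = (x, y)$, which can then be handed to the ellipsoid_feasibility routine from Section 15.4.

```python
import numpy as np

def primal_dual_system(A, b, c):
    """Stack the optimality conditions of max c^T x s.t. Ax <= b, x >= 0
    into one system G @ z <= h with z = (x, y) in R^{n+m}."""
    m, n = A.shape
    G = np.vstack([
        np.hstack([-c, b]),                   # b^T y - c^T x <= 0
        np.hstack([A, np.zeros((m, m))]),     # A x <= b
        np.hstack([np.zeros((n, n)), -A.T]),  # -A^T y <= -c, i.e. A^T y >= c
        -np.eye(n + m),                       # -z <= 0, i.e. x, y >= 0
    ])
    h = np.concatenate([[0.0], b, -c, np.zeros(n + m)])
    return G, h
```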

15.6 Cutting Plane and Ellipsoid Method for Unconstrained Convex Optimization

We maintain an ellipsoid containing the optimal point $x^*$:

$$x^* \in \varepsilon_k = \{x \in \mathbb{R}^n : (x - x_k)^T P_k^{-1} (x - x_k) \le 1\}.$$

If we know how to compute $\nabla f(x_k)$, a (sub)gradient of $f$ at $x_k$, we can use the inequality

$$\nabla f(x_k)^T (x^* - x_k) \le f(x^*) - f(x_k) \le 0.$$

With this, we can draw a hyperplane perpendicular to $\nabla f(x_k)$ that cuts the space into two halfspaces, with $x^*$ contained in one of them:

$$x^* \in H(x_k) = \{x \in \mathbb{R}^n : \nabla f(x_k)^T x \le \nabla f(x_k)^T x_k\}.$$

Therefore

$$x^* \in H(x_k) \cap \varepsilon_k = \{x \in \mathbb{R}^n : \nabla f(x_k)^T x \le \nabla f(x_k)^T x_k \text{ and } (x - x_k)^T P_k^{-1} (x - x_k) \le 1\}.$$

Thus, we can construct a new ellipsoid $\varepsilon_{k+1}$ of minimum volume containing the set $H(x_k) \cap \varepsilon_k$:

$$H(x_k) \cap \varepsilon_k \subseteq \varepsilon_{k+1}.$$

We iterate this procedure until the stopping criterion $f(x_k) - f(x^*) \le \epsilon$ is satisfied. The explicit construction of the new ellipsoid is

$$x_{k+1} = x_k - \frac{1}{n+1} P_k \tilde{g}_{k+1}, \qquad P_{k+1} = \frac{n^2}{n^2 - 1} \left( P_k - \frac{2}{n+1} P_k \tilde{g}_{k+1} \tilde{g}_{k+1}^T P_k \right),$$

where $\tilde{g}_{k+1} = g_{k+1} / \sqrt{g_{k+1}^T P_k g_{k+1}}$ is the normalized (sub)gradient $g_{k+1} = \nabla f(x_k)$.

The cutting-plane method can also be used for constrained problems, as long as it is easy to compute the intersection of the feasible region with the set $H(x_k) \cap \varepsilon_k$. In that case we take $\varepsilon_{k+1}$ to be the minimum-volume ellipsoid containing $H(x_k) \cap \varepsilon_k \cap C$, where $C$ is the feasible region of the problem.
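Finally, the updates above fit in a few lines of code. The following sketch (an illustration with my own interface; it assumes $f$ is convex, $n \ge 2$, and the initial ellipsoid given by $(x_0, P_0)$ contains the minimizer) also tracks the best iterate seen, since the ellipsoid method's iterates do not decrease $f$ monotonically.

```python
import numpy as np

def cutting_plane_ellipsoid(f, grad_f, x0, P0, num_iters):
    """Ellipsoid method for unconstrained convex minimization: maintain
    an ellipsoid {x : (x - x_k)^T P_k^{-1} (x - x_k) <= 1} containing x*."""
    n = x0.shape[0]
    x, P = x0.astype(float), P0.astype(float)
    best_x, best_f = x.copy(), f(x)
    for _ in range(num_iters):
        g = grad_f(x)                      # (sub)gradient at x_k
        g = g / np.sqrt(g @ P @ g)         # normalize: g~_{k+1}
        Pg = P @ g
        x = x - Pg / (n + 1)               # center update
        P = n**2 / (n**2 - 1.0) * (P - 2.0 / (n + 1) * np.outer(Pg, Pg))
        if f(x) < best_f:                  # keep the best iterate seen
            best_x, best_f = x.copy(), f(x)
    return best_x
```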