
Optimisation and Operations Research
Lecture 22: Linear Programming Revisited

Matthew Roughan <matthew.roughan@adelaide.edu.au>
http://www.maths.adelaide.edu.au/matthew.roughan/Lecture_notes/OORII/
School of Mathematical Sciences, University of Adelaide
July 12, 2017

Section 1 Linear Programming Revisited

Other Issues for LPs

Other approaches to solve LPs
- We looked at Simplex
- There are other approaches
Preprocessing
- Cleaning up the problem can make subsequent parts faster and more reliable
What about non-linear problems, ...?

Section 2 Interior Point Methods

Boundary value and Interior point methods

There are two classes of methods for solving LPs:
- Boundary value methods search around the vertices (boundary) of the feasible region, e.g., Simplex.
- Interior point methods find a feasible point inside the region and move towards the boundary, e.g., the Ellipsoid method and Karmarkar's method.

Simplex is exponential

In the practical we show that Simplex takes an exponential number of steps on the Klee-Minty example: on that worst-case instance, Simplex takes exponential time. Interior point methods were developed in the pursuit of methods with polynomial worst-case time complexity.
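
For concreteness, here is one common statement of the Klee-Minty cube, set up with NumPy and handed to SciPy's linprog. This is only a sketch: the practical may use a different (but equivalent) variant of the construction, and linprog's default HiGHS solver is not the textbook simplex method, so it dispatches this instance quickly; the exponential behaviour appears for a naive Dantzig-rule simplex that walks all 2^n vertices.

```python
import numpy as np
from scipy.optimize import linprog

def klee_minty(n):
    """One common Klee-Minty cube (an assumed variant, for illustration):
        max  sum_{j=1..n} 2^(n-j) x_j
        s.t. 2 * sum_{j<i} 2^(i-j) x_j + x_i <= 5^i,  i = 1..n,  x >= 0.
    A Dantzig-rule simplex started at the origin can visit all 2^n vertices."""
    c = np.array([2.0 ** (n - j) for j in range(1, n + 1)])
    A = np.zeros((n, n))
    b = np.array([5.0 ** i for i in range(1, n + 1)])
    for i in range(n):                    # 0-based row i is constraint i+1
        for j in range(i):
            A[i, j] = 2.0 ** (i - j + 1)  # coefficient 2 * 2^((i+1)-(j+1)) of x_{j+1}
        A[i, i] = 1.0
    return c, A, b

c, A, b = klee_minty(5)
res = linprog(-c, A_ub=A, b_ub=b, bounds=[(0, None)] * 5)  # linprog minimises, so negate c
print(res.x, -res.fun)  # optimum is x = (0, ..., 0, 5^n), objective 5^5 = 3125
```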

Boundary value and Interior point methods

Interior point methods were developed in the pursuit of methods with polynomial worst-case time complexity. Historically:

  Date  Method (Developer)            Worst case   Average case
  1947  Simplex (Dantzig)             exp          poly
  1979  Ellipsoid (Khachiyan)         poly         poly
  1984  Interior point (Karmarkar)    poly         poly

According to this table, you might expect the Ellipsoid Method to out-perform the Simplex Method. The ellipsoid method was a big theoretical breakthrough, but it turned out to be too slow for practical problems, despite its polynomial (in comparison to exponential) worst-case running time.

Boundary value and Interior point methods

Two points are noteworthy:
- Worst-case analysis and Big-O hide details:
  - exponential might not be that slow for the problem you want to solve
  - worst cases may be rare for real problems
- Interior point methods have moved forward:
  - it's not always obvious which will be best for your problem, but you should know they exist
  - they lead towards interesting algorithms for non-linear optimisation

Interior-point methods

- Follow a set of interior, feasible points
- Converge towards the optimum, but don't exactly reach it
- Often work by transforming the problem at each step
- Use ideas like gradient projections
  - these can generalise to non-linear problems
  - you'll see more of these if you do Optimisation III

Karmarkar's method

Example: max z = f(x, y) = x  s.t.  x + y = 2 and x, y ≥ 0.

Obviously the answer is z = 2, at x = 2, y = 0. The feasible region here is just the line segment AB from (0, 2) to (2, 0). Now suppose you pick any point C (other than A or B) as a starting point on that line segment. Note that the direction of steepest ascent of z is the gradient ∇f(x, y) = (1, 0) = d^T, say (from Maths 1B). So from C, to maximise the function z = f(x, y), we would normally head off in the direction d = (1, 0)^T. In this case, we would move off horizontally from C and out of the feasible region!

Karmarkar's method

If we move off from C in the direction d, we need to stop somewhere along that gradient direction d = (1, 0)^T and then project back onto the feasible region (our line segment), reaching a point D that is closer to our maximiser. CD is called a projected gradient, and it has allowed us to improve on our value of z. If we repeat the procedure over and over, with small enough steps, we should end up close to B. Karmarkar's Method essentially does something like this.
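
A tiny numerical sketch of that projected-gradient step, assuming a hypothetical starting point C = (0.5, 1.5) and step length 0.8 (both chosen purely for illustration):

```python
import numpy as np

a = np.array([1.0, 1.0])            # normal of the constraint x + y = 2
d = np.array([1.0, 0.0])            # steepest-ascent direction of z = x
d_proj = d - (d @ a) / (a @ a) * a  # project d onto the constraint: (0.5, -0.5)

C = np.array([0.5, 1.5])            # hypothetical interior point on the segment
D = C + 0.8 * d_proj                # step along the projected gradient CD
print(d_proj, D, D.sum())           # (0.5, -0.5), (0.9, 1.1), still on x + y = 2
```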

Karmarkar's Interior Point Method 1

For a problem in the form

  (P1)  min z = c^T x
        s.t.  Ax = 0
              e^T x = 1
              x ≥ 0

where x ∈ R^n, A is m × n with rank(A) = m, the optimal value is min z = 0, and (1/n, 1/n, ..., 1/n)^T is feasible.

We assume we will be satisfied with a feasible point having an optimal z-value in [0, ε), for some small ε > 0.

Karmarkar's Interior Point Method 2

Initialise:
  Set x^(0) = (1/n, 1/n, ..., 1/n)^T.
  Compute r = 1/√(n(n-1)) and set α = (n-1)/(3n).
  Compute c^T x^(0) and set k = 0.

While c^T x^(k) ≥ ε (for some given ε):
  Set D_k = diag(x^(k)_1, x^(k)_2, ..., x^(k)_n) and M = [ A D_k ; e^T ].
  Compute c^(k) = ( I - M^T (M M^T)^(-1) M ) (D_k c)  (the projected gradient in the transformed space).
  Compute y^(k+1) = (1/n, 1/n, ..., 1/n)^T - α r c^(k) / ||c^(k)||.
  Compute x^(k+1) = D_k y^(k+1) / (e^T D_k y^(k+1))  (vector in the original space).
  Compute c^T x^(k+1) and set k = k + 1.
End
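
The loop above translates almost line for line into code. Below is a minimal NumPy sketch, not production code: it assumes the problem is already in the form (P1), with optimal value 0 and x = e/n feasible, and it stops once c^T x < ε.

```python
import numpy as np

def karmarkar(A, c, eps=5e-4, max_iter=100):
    """Minimal sketch of Karmarkar's method for min c^T x s.t. Ax = 0, e^T x = 1, x >= 0,
    assuming the optimal value is 0 and x = e/n is feasible."""
    A, c = np.asarray(A, float), np.asarray(c, float)
    m, n = A.shape
    x = np.full(n, 1.0 / n)               # x^(0) = (1/n, ..., 1/n)^T
    r = 1.0 / np.sqrt(n * (n - 1))        # radius of the ball inscribed in the simplex
    alpha = (n - 1) / (3.0 * n)           # step-size parameter
    history = [x.copy()]
    for k in range(max_iter):
        if c @ x < eps:                   # stop once z = c^T x lies in [0, eps)
            break
        D = np.diag(x)                                    # D_k = diag(x^(k))
        M = np.vstack([A @ D, np.ones(n)])                # M = [A D_k; e^T]
        Dc = D @ c
        cp = Dc - M.T @ np.linalg.solve(M @ M.T, M @ Dc)  # projected gradient c^(k)
        y = np.full(n, 1.0 / n) - alpha * r * cp / np.linalg.norm(cp)
        x = (D @ y) / (np.ones(n) @ (D @ y))              # back to the original space
        history.append(x.copy())
    return x, history
```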

Remarks About Karmarkar

The direction c^(k) is the projection of the transformed gradient D_k c onto the feasible set of the transformed problem, so stepping from (1/n, ..., 1/n)^T in the direction -c^(k) to find y^(k+1) keeps us feasible in the transformed space while maximising the rate of decrease in the z-value. The αr term ensures y^(k+1) stays in the interior of the feasible region, i.e., away from the boundary of the transformed unit simplex. The inverse transformation back from y^(k+1) to x^(k+1) implies x^(k+1) will be feasible for (P1).

Karmarkar Example

Results to 4 dp for ε = 0.0005 = 5 × 10^-4 are given below.

  k    (x^k)^T                      (y^k)^T                      z
  0    (0.3333, 0.3333, 0.3333)     (0.3333, 0.3333, 0.3333)     0.3333
  1    (0.3975, 0.3333, 0.2692)     (0.3975, 0.3333, 0.2692)     0.2692
  2    (0.4574, 0.3333, 0.2093)     (0.3930, 0.3415, 0.2655)     0.2093
  3    (0.5090, 0.3333, 0.1576)     (0.3883, 0.3489, 0.2628)     0.1576
  4    (0.5507, 0.3333, 0.1160)     (0.3839, 0.3549, 0.2612)     0.1160
  5    (0.5827, 0.3333, 0.0840)     (0.3803, 0.3594, 0.2602)     0.0840
  ...
  19   (0.6661, 0.3333, 0.0006)     (0.3704, 0.3703, 0.2593)     0.0006
  20   (0.6662, 0.3333, 0.0004)     (0.3704, 0.3703, 0.2593)     0.0004

For ε = 0.00000005 = 5 × 10^-8, to 8 dp, the procedure stops at x^(46) = (0.66666663, 0.33333333, 0.00000004)^T with z = 0.00000004.
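
The problem data for this example is not shown on the slide, but the iterates are consistent with the inferred (assumed) instance min z = x3 subject to x1 - 2x2 + x3 = 0, x1 + x2 + x3 = 1, x ≥ 0, whose optimum is x = (2/3, 1/3, 0) with z = 0. Running the karmarkar sketch from the previous slide on that assumed data should reproduce the table:

```python
import numpy as np

# Assumed data, inferred from the iterates above (not stated on the slide).
A = np.array([[1.0, -2.0, 1.0]])   # x1 - 2 x2 + x3 = 0
c = np.array([0.0, 0.0, 1.0])      # z = x3

x, hist = karmarkar(A, c, eps=5e-4)
print(len(hist) - 1, x, c @ x)     # expect ~20 iterations, x ~ (0.6662, 0.3333, 0.0004)
```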

Section 3 Preprocessing for optimisation

Preprocessing

Typical tasks:
- put the problem into standard form
- remove redundant constraints or variables
- rescale coefficients

Preprocessing in Matlab's linprog before Simplex

- For each row k, take the constraint a_k^T x = b_k and compute upper and lower bounds on a_k^T x (if finite). If either bound equals b_k, the constraint is forcing: it fixes the values of the x_i corresponding to non-zero entries of a_k, so those variables can be eliminated from the problem.
- Check if any variables have equal upper and lower bounds. If so, check for feasibility, and then fix and remove the variables.
- Check if any inequality constraint involves just one variable. If so, check for feasibility, and change the linear constraint to a bound.
- Check if any equality constraint involves just one variable. If so, check for feasibility, and then fix and remove the variable.
- Check if the constraint matrix has any zero rows. If so, check for feasibility, and delete the rows.
- Check if the bounds and linear constraints are consistent.
- Check if any variables appear only as linear terms in the objective function and do not appear in any linear constraint. If so, check for feasibility and boundedness, and fix the variables at their appropriate bounds.

A sketch of two of these reductions is given below.
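
This is a minimal, purely illustrative sketch of two of the checks above (fixed variables and zero rows of an equality system); it is not MATLAB's actual presolve code, and the function name and interface are invented for the example.

```python
import numpy as np

def presolve(A_eq, b_eq, lb, ub, tol=1e-12):
    """Illustrative presolve: remove fixed variables and zero rows from A_eq x = b_eq."""
    A_eq, b_eq = np.array(A_eq, float), np.array(b_eq, float)
    lb, ub = np.array(lb, float), np.array(ub, float)

    # 1. Fixed variables (lb == ub): substitute the value and drop the column.
    fixed = np.isclose(lb, ub)
    b_eq = b_eq - A_eq[:, fixed] @ lb[fixed]
    A_eq = A_eq[:, ~fixed]

    # 2. Zero rows: 0 = b_i must hold, otherwise the problem is infeasible.
    zero_rows = np.all(np.abs(A_eq) < tol, axis=1)
    if np.any(np.abs(b_eq[zero_rows]) > tol):
        raise ValueError("infeasible: zero row with non-zero right-hand side")
    return A_eq[~zero_rows], b_eq[~zero_rows], lb[~fixed], ub[~fixed]
```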

Section 4 Where to next?

Classes of optimisations

- Linear vs non-linear
- Continuous vs discrete (integer)
- Unconstrained vs constrained
- Deterministic vs stochastic

Problem structure:
- Low- vs high-dimensional
- Sparse vs dense
- Single- vs multi-objective
- Static vs dynamic
- Distributed vs centralised

Taxonomy of optimisation problems: http://www.neos-guide.org/content/optimization-taxonomy

Non-linear programming

Combinations of objective and constraint properties (ignoring for the moment integer vs continuous variables) form a nested hierarchy:

  All problems ⊃ Simple domain ⊃ Convex ⊃ Quadratic ⊃ Linear

We saw how to tackle these by linear approximation in your project this year. See Optimisation III next year.

Stochastic programming

What happens if we are uncertain about some values?
- A simple approach is sensitivity analysis, to understand how the outputs vary with input errors, but that doesn't tell you what to do.
- Ultimately, optimisation is about making decisions: stochastic decision theory is about making optimal decisions under (known) uncertainty.
- There are many techniques:
  - robust optimisation: some approaches just involve (static) optimisation over a (stochastic) family of possibilities
  - Markov decision theory: a dynamic stochastic process, in which we try to optimise at each step
  - game theory: the unknown is another player, who is trying to oppose your optimisation, or pursue their own

See Stochastic Decision Theory III next year.

Further reading I