Nonlinear Programming (Hillier, Lieberman Chapter 13) CHEM-E7155 Production Planning and Control 19/4/2012

Lecture content
- Problem formulation and sample examples (ch 13.1)
- Theoretical background
  - Graphical illustration of nonlinear programming problems
  - Necessary optimality conditions
  - Sufficient optimality conditions (local vs. global maximum)
- Algorithms to solve the main types of nonlinear programming problems
  - One-dimensional unconstrained problems
  - Multidimensional unconstrained problems
  - Quadratic programming
  - Separable programming
  - Convex programming with linear constraints

Problem Formulation
The importance of linear programming is emphasized in the course, but practical optimization problems frequently involve nonlinear behavior that must be taken into account. The general form of the nonlinear programming problem is: find x = (x1, x2, ..., xn) to maximize f(x), subject to gi(x) ≤ bi for i = 1, 2, ..., m, and x ≥ 0.

Sample example 1: The product mix problem with price elasticity
Wyndor Glass Co. example. Source of nonlinearity: the amount of product that can be sold has an inverse relationship to the price charged.
[Figure: unit price and unit cost as functions of demand]

Sample example 1: The product mix problem with price elasticity
The profit to be maximized is P(x) = x p(x) - cx, where p(x) is the unit price if x units are sold, c is the production and distribution cost of one unit, and x is the amount produced and sold.
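
To see why price elasticity makes the objective nonlinear, here is a minimal Python sketch assuming a hypothetical linear demand curve p(x) = a - bx; the numbers are illustrative, not from the lecture. With this assumption the profit P(x) = x p(x) - cx is a concave quadratic with an interior maximum.

```python
# Illustrative sketch: a hypothetical linear demand p(x) = a - b*x makes the
# profit P(x) = x*p(x) - c*x a concave quadratic (all values are made up).
def profit(x, a=400.0, b=1.0, c=100.0):
    p = a - b * x          # unit price if x units are sold
    return x * p - c * x   # revenue minus production and distribution cost

# The maximizer of (a - c)*x - b*x**2 is x* = (a - c) / (2*b).
x_star = (400.0 - 100.0) / (2 * 1.0)
print(x_star, profit(x_star))  # 150.0 22500.0
```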

Sample example 2: Transportation problem with volume discounts
A transportation problem with multiple sources and multiple destinations, and given supply and demand capacities. Volume discounts are available for large shipments.
[Figure: marginal costs decrease stepwise with the amount shipped]

Sample example 2: Transportation problem with volume discounts
The total shipment costs are C = Σi Σj Cij(xij), where i is the source and j is the destination. The cost of shipping from a source to a destination is piecewise linear in the amount shipped: its slope (the marginal cost) is piecewise constant and drops at each discount threshold.
[Figure: total costs as a piecewise linear function of the amount shipped]
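
A minimal sketch of such a cost function; the breakpoints and marginal costs below are hypothetical, chosen only to illustrate the piecewise linear shape with decreasing slopes.

```python
# Hypothetical volume-discount shipping cost: piecewise linear with a
# marginal cost that drops at each breakpoint (all numbers are made up).
def shipping_cost(x, breakpoints=(100, 250), marginal_costs=(5.0, 4.0, 3.0)):
    cost, remaining, prev = 0.0, x, 0
    for bp, mc in zip(breakpoints, marginal_costs):
        seg = min(remaining, bp - prev)  # amount charged at this marginal cost
        cost += mc * seg
        remaining -= seg
        prev = bp
        if remaining <= 0:
            return cost
    return cost + marginal_costs[-1] * remaining  # last segment is unbounded

print(shipping_cost(50), shipping_cost(300))  # 250.0 1250.0
```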

Sample example 3: Tennessee Eastman process
The relations between the process measurements (controlled variables) and the manipulated variables are highly nonlinear.

Lecture content
- Problem formulation and sample examples
- Theoretical background (ch 13.2)
  - Graphical illustrations of nonlinear programming problems
  - Necessary optimality conditions
  - Sufficient optimality conditions (local vs. global maximum)
- Algorithms to solve the main types of nonlinear programming problems
  - One-dimensional unconstrained problems
  - Multidimensional unconstrained problems
  - Quadratic programming
  - Separable programming
  - Convex programming with linear constraints

Graphical illustration 1: a nonlinear constraint
Wyndor Glass example. Both the second and the third constraints are replaced by the single nonlinear constraint 9x1^2 + 5x2^2 ≤ 216.
[Figure: the feasible region in the (x1, x2) plane; the optimal solution (2, 6) lies on the curved constraint boundary]

Graphical illustration 2: a nonlinear objective
Objective function made nonlinear: Z = 126x1 - 9x1^2 + 182x2 - 13x2^2.
[Figure: level curves Z = 807, 857, 907; the optimal solution lies on the constraint boundary but is not a corner point]

Graphical illustration 3: a nonlinear objective
Objective function made nonlinear: Z = 54x1 - 9x1^2 + 78x2 - 13x2^2.
[Figure: level curves Z = 117, 162, 189; the optimal solution lies in the interior of the feasible region]

Graphical illustration of NP problems: summary
In contrast to linear programming, the optimal solution need not be a corner point. The tremendous simplification used by linear programming, limiting the search to corner points only, is therefore no longer available, and the simplex method cannot be used to solve nonlinear programming problems.

Lecture content
- Problem formulation and sample examples
- Theoretical background (ch 13.2)
  - Graphical illustration of nonlinear programming problems
  - Necessary optimality conditions
  - Sufficient optimality conditions (local vs. global maximum)
- Algorithms to solve the main types of nonlinear programming problems
  - One-dimensional unconstrained problems
  - Multidimensional unconstrained problems
  - Quadratic programming
  - Separable programming
  - Convex programming with linear constraints

Necessary optimality conditions: unconstrained case
The necessary optimality condition is df/dx = 0, i.e. the tangent line to the function graph is horizontal; for a maximum, in addition d^2f/dx^2 ≤ 0.
[Figure: a function f(x) with a horizontal tangent line at its maximum]

Necessary optimality conditions: an equality constraint
Geometric interpretation: at the optimum, the equality constraint curve g(x) = b and the objective level curve have the same tangent line, i.e. ∇f(x) = λ ∇g(x).
[Figure: objective level lines tangent to the equality constraint curve at the optimum]

Necessary optimality conditions: an equality constraint
Maximizing f(x) subject to the equality constraint g(x) = b is equivalent to the Lagrange equations:
∂f/∂xj - λ ∂g/∂xj = 0, for j = 1, 2, ..., n
g(x) = b
A system of n+1 equations with n+1 variables (x1, ..., xn and the multiplier λ).
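
As a quick illustration, the sketch below solves the Lagrange equations symbolically for a toy problem that is not from the lecture: maximize f(x1, x2) = x1 x2 subject to x1 + x2 = 4 (sympy is assumed to be available).

```python
# Toy illustration of the Lagrange equations: maximize f = x1*x2
# subject to g = x1 + x2 = 4. Three equations, three unknowns (x1, x2, lam).
import sympy as sp

x1, x2, lam = sp.symbols("x1 x2 lam")
f, g, b = x1 * x2, x1 + x2, 4

equations = [
    sp.diff(f, x1) - lam * sp.diff(g, x1),  # x2 - lam = 0
    sp.diff(f, x2) - lam * sp.diff(g, x2),  # x1 - lam = 0
    g - b,                                  # x1 + x2 - 4 = 0
]
print(sp.solve(equations, [x1, x2, lam]))   # {x1: 2, x2: 2, lam: 2}
```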

Necessary optimality conditions: an inequality constraint
For maximizing f(x) subject to g(x) ≤ b and x ≥ 0, the Karush-Kuhn-Tucker (KKT) conditions are:
∂f/∂xj - u ∂g/∂xj ≤ 0, for j = 1, ..., n
xj (∂f/∂xj - u ∂g/∂xj) = 0, for j = 1, ..., n
g(x) - b ≤ 0
u (g(x) - b) = 0
xj ≥ 0, u ≥ 0
The Lagrange multiplier u corresponds to the dual variable of linear programming.
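
The sketch below numerically checks the KKT conditions at a candidate point, using the linearly constrained example that appears later in this deck (maximize f = 5x1 - x1^2 + 8x2 - 2x2^2 subject to 3x1 + 2x2 ≤ 6); the multiplier value u = 1 is computed here and is not stated on the slides.

```python
# KKT check for: maximize f(x) s.t. a.x <= b, x >= 0, with a single
# linear constraint. Example data from the Frank-Wolfe slides below.
def grad_f(x):
    return (5 - 2 * x[0], 8 - 4 * x[1])

def check_kkt(x, u, a=(3, 2), b=6, tol=1e-9):
    for xj, gj, aj in zip(x, grad_f(x), a):
        assert gj - u * aj <= tol              # df/dxj - u*dg/dxj <= 0
        assert abs(xj * (gj - u * aj)) <= tol  # complementary slackness in xj
    slack = b - sum(aj * xj for aj, xj in zip(a, x))
    assert slack >= -tol                       # g(x) - b <= 0
    assert abs(u * slack) <= tol               # u*(g(x) - b) = 0
    assert all(xj >= -tol for xj in x) and u >= -tol
    return True

print(check_kkt((1.0, 1.5), u=1.0))  # True: (1, 3/2) with u = 1 satisfies KKT
```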

Necessary optimality conditions: Summary

Problem                                    Optimality conditions
One-dimensional, unconstrained             df/dx = 0, d^2f/dx^2 ≤ 0
Multi-dimensional, unconstrained           ∂f/∂xj = 0 for j = 1, ..., n; the Hessian matrix H is negative semidefinite
Multi-dimensional, equality constraint     Lagrange equations: ∂f/∂xj - λ ∂g/∂xj = 0 for j = 1, ..., n; g(x) = b
Multi-dimensional, inequality constraint   KKT conditions

Necessary optimality conditions: Summary
Role of the optimality conditions:
- transform a nonlinear programming task into a system of equations
- provide stopping rules for iterative optimization algorithms
- support sensitivity analysis (for constrained problems)

Lecture content
- Problem formulation and sample examples
- Theoretical background (ch 13.2)
  - Graphical illustration of nonlinear programming problems
  - Necessary optimality conditions
  - Sufficient optimality conditions (local vs. global maximum)
- Algorithms to solve the main types of nonlinear programming problems
  - One-dimensional unconstrained problems
  - Multidimensional unconstrained problems
  - Quadratic programming
  - Separable programming
  - Convex programming with linear constraints

Sufficient optimality conditions: unconstrained case
The necessary conditions cannot distinguish between a local and a global maximum. Sufficient conditions are needed to ensure that the solution obtained is a global optimum.
[Figure: a function f(x) with several local maxima and a single global maximum]

Sufficient optimality conditions: unconstrained case
A local optimum is guaranteed to be global if:
- the objective to be maximized is concave, or
- the objective to be minimized is convex
[Figure: a concave function f(x) whose only local maximum is the global maximum]

Sufficient optimality conditions: constrained case
Wyndor Glass example. The constraints have been replaced by 8x1 - x1^2 + 14x2 - x2^2 ≤ 49.
[Figure: the feasible region is not convex; a local maximum exists in addition to the global maximum]

Sufficient optimality conditions: constrained case
A local optimum is guaranteed to be global if:
- a concave objective is maximized (or a convex objective is minimized), and
- the feasible region is convex (which holds if every g(x) is a convex function)
A nonlinear programming problem satisfying these two conditions is called a convex programming problem. Convex programming is one of the key types of NP problems: if a local optimum is found, it is guaranteed to be global. In this lecture only convex programming problems are considered.

Lecture content
- Problem formulation and sample examples
- Theoretical background
  - Graphical illustration of nonlinear programming problems
  - Necessary optimality conditions
  - Sufficient optimality conditions (local vs. global maximum)
- Algorithms to solve the main types of nonlinear programming problems (ch 13.3)
  - One-dimensional unconstrained problems (ch 13.4)
  - Multidimensional unconstrained problems
  - Quadratic programming
  - Separable programming
  - Convex programming with linear constraints

Main types of nonlinear programming problems

Problem                 Conditions to ensure a global optimum             Algorithm
Unconstrained, 1-D      Concave/convex objective                          Interval splitting
Unconstrained           Concave/convex objective                          Gradient search
Quadratic programming   Concave/convex objective, linear constraints      Not studied
Separable programming   Separable concave objective, linear constraints   Reduced to a linear programming task
Linearly constrained    Concave/convex objective                          Frank-Wolfe

Unconstrained optimization: one-dimensional case
The task is to maximize a concave objective f(x). The necessary optimality condition f'(x) = 0 often cannot be solved analytically, so a numerical search procedure is needed.
Initialization of the one-dimensional search procedure: find a lower bound x_low and an upper bound x_up such that f'(x_low) > 0 and f'(x_up) < 0. The interval [x_low, x_up] then contains the maximum of the objective.

Unconstrained optimization: one-dimensional case
An iterative algorithm:
- compute the midpoint of the current interval, x' = (x_low + x_up)/2
- check the sign of f'(x')
- if f'(x') > 0, reset x_low = x'
- if f'(x') < 0, reset x_up = x'

Unconstrained optimization: one-dimensional case
Stopping rule: stop if x_up - x_low ≤ 2ε and return x' = (x_low + x_up)/2, which is then within ε of the maximizer.

Unconstrained optimization: one-dimensional case. An example
The function to be maximized is f(x) = 12x - 3x^4 - 2x^6. The derivative of the objective function is f'(x) = 12 - 12x^3 - 12x^5. The second derivative f''(x) = -36x^2 - 60x^4 ≤ 0, therefore the objective is concave and the one-dimensional search will find the global maximum.

Unconstrained optimization: one-dimensional case. An example
The initial interval is selected to be [0, 2]; f(x) = 12x - 3x^4 - 2x^6, f'(x) = 12 - 12x^3 - 12x^5.

Iteration   x_low      x_up      New x'      f(x')    f'(x')
0           0          2         1           7.0000   -12
1           0          1         0.5         5.7812   +10.12
2           0.5        1         0.75        7.6948   +4.09
3           0.75       1         0.875       7.8439   -2.19
4           0.75       0.875     0.8125      7.8672   +1.31
5           0.8125     0.875     0.84375     7.8829   -0.34
6           0.8125     0.84375   0.828125    7.8815   +0.51
7           0.828125   0.84375   0.8359375   7.8839   Stop
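
A minimal Python sketch of the interval-splitting search applied to this example reproduces the table above (ε = 0.01 is assumed here, matching the width of the final interval).

```python
# Interval-splitting (bisection) search for the maximum of the concave
# f(x) = 12x - 3x^4 - 2x^6 on the initial interval [0, 2].
def f(x):
    return 12 * x - 3 * x**4 - 2 * x**6

def df(x):
    return 12 - 12 * x**3 - 12 * x**5

def bisection_search(x_low, x_up, eps=0.01):
    while x_up - x_low > 2 * eps:
        x_mid = (x_low + x_up) / 2
        if df(x_mid) > 0:        # maximum lies to the right of the midpoint
            x_low = x_mid
        else:                    # maximum lies to the left of the midpoint
            x_up = x_mid
    return (x_low + x_up) / 2

x_opt = bisection_search(0, 2)
print(x_opt, round(f(x_opt), 4))  # 0.8359375 7.8839
```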

Lecture content
- Problem formulation and sample examples
- Theoretical background
  - Graphical illustration of nonlinear programming problems
  - Necessary optimality conditions
  - Sufficient optimality conditions (local vs. global maximum)
- Algorithms to solve the main types of nonlinear programming problems
  - One-dimensional unconstrained problems
  - Multidimensional unconstrained problems (ch 13.5)
  - Quadratic programming
  - Separable programming
  - Convex programming with linear constraints

Unconstrained optimization: the gradient search procedure
The task is to maximize a concave function f(x) of many arguments. An efficient procedure should keep moving in the direction of the gradient until it reaches the optimal solution. The gradient of a function at a point is a vector defining the direction of the fastest growth of the function at that point. The gradient elements are the respective partial derivatives:
∇f(x) = (∂f/∂x1, ∂f/∂x2, ..., ∂f/∂xn)

Unconstrained optimization: summary of the gradient search procedure
Initialization: select ε > 0 and any initial trial solution x'.
Iterations:
- compute the gradient of the objective at the current point, ∇f(x')
- express f(x' + t ∇f(x')) as a function of t
- use a one-dimensional search procedure to find t* ≥ 0 that maximizes f(x' + t ∇f(x'))
- reset the current trial solution: x' = x' + t* ∇f(x')
Stopping rule: |∂f(x')/∂xj| < ε for all j = 1, ..., n.

Unconstrained optimization: the gradient search procedure. An example
Maximize f(x) = 2x1x2 + 2x2 - x1^2 - 2x2^2, starting from x' = (0, 0). The gradient of the objective is:
∇f(x) = (2x2 - 2x1, 2x1 + 2 - 4x2)

Unconstrained optimization: the gradient search procedure. An example
First iteration:
- finding the gradient: ∇f(0, 0) = (0, 2)
- expressing f(x' + t ∇f(x')) as a function of t: f(0 + 0t, 0 + 2t) = f(0, 2t) = 4t - 8t^2
- finding the optimal t: t* = 1/4
- resetting the current trial solution: x' = (0, 0) + 1/4 (0, 2) = (0, 1/2)

Unconstrained optimization: the gradient search procedure. An example
f(x) = 2x1x2 + 2x2 - x1^2 - 2x2^2, ∇f(x) = (2x2 - 2x1, 2x1 + 2 - 4x2)

Iteration   x'         ∇f(x')   x' + t ∇f(x')   f(x' + t ∇f(x'))   t*    New x'
1           (0, 0)     (0, 2)   (0, 2t)         4t - 8t^2          1/4   (0, 1/2)
2           (0, 1/2)   (1, 0)   (t, 1/2)        t - t^2 + 1/2      1/2   (1/2, 1/2)

[Figure: the trial solutions (0, 0), (0, 1/2), (1/2, 1/2), (1/2, 3/4), (3/4, 3/4), (3/4, 7/8), ... zigzag toward the optimum x* = (1, 1)]
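
A minimal sketch of the gradient search on this example; the one-dimensional search in t is done numerically with scipy's bounded scalar minimizer instead of the analytic maximization used on the slides (the upper bound 10 on t is an arbitrary assumption).

```python
# Gradient search for f(x) = 2*x1*x2 + 2*x2 - x1**2 - 2*x2**2.
import numpy as np
from scipy.optimize import minimize_scalar

def f(x):
    return 2 * x[0] * x[1] + 2 * x[1] - x[0]**2 - 2 * x[1]**2

def grad_f(x):
    return np.array([2 * x[1] - 2 * x[0], 2 * x[0] + 2 - 4 * x[1]])

def gradient_search(x, eps=1e-4):
    while np.max(np.abs(grad_f(x))) >= eps:  # stop when |df/dxj| < eps for all j
        g = grad_f(x)
        # one-dimensional search for t >= 0 maximizing f(x + t*g)
        t = minimize_scalar(lambda t: -f(x + t * g),
                            bounds=(0, 10), method="bounded").x
        x = x + t * g                        # reset the trial solution
    return x

print(gradient_search(np.array([0.0, 0.0])))  # ≈ [1. 1.], i.e. x* = (1, 1)
```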

Lecture content
- Problem formulation and sample examples
- Theoretical background
  - Graphical illustration of nonlinear programming problems
  - Necessary optimality conditions
  - Sufficient optimality conditions (local vs. global maximum)
- Algorithms to solve the main types of nonlinear programming problems
  - One-dimensional unconstrained problems
  - Multidimensional unconstrained problems
  - Quadratic programming (ch 13.7)
  - Separable programming
  - Convex programming with linear constraints

Quadratic programming
Similar to linear programming, but the objective function includes squared terms xj^2 and cross-products xi xj in addition to the linear terms. The problem formulation: maximize f(x) = cx - (1/2) x^T Q x, subject to Ax ≤ b, x ≥ 0. The global maximum is found if the objective is concave (Q is a positive semidefinite matrix). The optimality conditions (KKT conditions) can be used to solve a quadratic programming problem.

Quadratic programming. An example
Maximize f(x1, x2) = 15x1 + 30x2 + 4x1x2 - 2x1^2 - 4x2^2, subject to x1 + 2x2 ≤ 30, x1, x2 ≥ 0.
In matrix form:
c = (15, 30), Q = [[4, -4], [-4, 8]], A = (1, 2), b = 30
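
As a numerical cross-check of this formulation (the lecture solves it via the KKT conditions), a minimal sketch using scipy's general-purpose SLSQP solver:

```python
# Quadratic program: maximize 15*x1 + 30*x2 + 4*x1*x2 - 2*x1**2 - 4*x2**2
# subject to x1 + 2*x2 <= 30, x1, x2 >= 0 (scipy minimizes, so we negate).
import numpy as np
from scipy.optimize import minimize

def neg_f(x):
    x1, x2 = x
    return -(15 * x1 + 30 * x2 + 4 * x1 * x2 - 2 * x1**2 - 4 * x2**2)

result = minimize(
    neg_f,
    x0=np.array([0.0, 0.0]),
    method="SLSQP",
    bounds=[(0, None), (0, None)],  # x1, x2 >= 0
    constraints=[{"type": "ineq", "fun": lambda x: 30 - x[0] - 2 * x[1]}],
)
print(result.x, -result.fun)  # ≈ [12. 9.] 270.0
```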

Lecture content
- Problem formulation and sample examples
- Theoretical background
  - Graphical illustration of nonlinear programming problems
  - Necessary optimality conditions
  - Sufficient optimality conditions (local vs. global maximum)
- Algorithms to solve the main types of nonlinear programming problems
  - One-dimensional unconstrained problems
  - Multidimensional unconstrained problems
  - Quadratic programming
  - Separable programming (ch 13.8)
  - Convex programming with linear constraints

Separable programming
A separable objective function means that the objective is a sum of functions of the individual variables: f(x) = f1(x1) + f2(x2) + ... + fn(xn), where fj(xj) is the contribution to profit of activity j. Examples are the product mix with price elasticity and transportation with volume discounts. Problem formulation: maximize a separable concave function f(x) subject to linear constraints.

Separable programming
The objective is concave if every fj(xj) is concave. In other words, the marginal profitability either stays the same or decreases as the production rate grows. Such concave profit curves occur quite frequently.
[Figure: a concave profit curve as a function of the production rate]

Separable programming
Approximate every term fj(xj) by a piecewise linear function. Introduce a separate variable xji for every segment i and transform the problem into an LP task:
xj = xj1 + xj2 + ... , fj(xj) ≈ sj1 xj1 + sj2 xj2 + ... , where sji is the slope of segment i
[Figure: piecewise linear approximation of the concave profit curve]

Separable programming to LP
The problem is reformulated in terms of the segment variables xji instead of xj. The objective in terms of the new variables is: maximize Σj Σi sji xji, where j is an activity number and i is a segment number.

Separable programming to LP
The lengths of the segments give the inequalities 0 ≤ xji ≤ uji, and the original inequalities are rewritten in terms of the xji. There is a special condition on the variables (a segment cannot be started until the previous segment is full): xji = 0 if xj,i-1 < uj,i-1. Because the slopes decrease, xj,i-1 automatically has a higher priority than xji and the later segments, so this condition is fulfilled automatically.

Separable programming. An example
Wyndor Glass example. At volumes higher than 3 extra costs occur, and the profitability decreases:
- for product 1: from 3 to 2
- for product 2: from 5 to 1

Separable programming. An example
New variables: x1 = x11 + x12 with x11 ≤ 3, and x2 = x21 + x22 with x21 ≤ 3.
The LP task is: maximize Z = 3x11 + 2x12 + 5x21 + x22, subject to
- constraints imposed on the new variables: x11 ≤ 3, x21 ≤ 3
- original constraints rewritten in the new variables:
  x1 ≤ 4 becomes x11 + x12 ≤ 4
  2x2 ≤ 12 becomes 2(x21 + x22) ≤ 12
  3x1 + 2x2 ≤ 18 becomes 3(x11 + x12) + 2(x21 + x22) ≤ 18
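
The resulting LP is small enough to solve directly; a minimal sketch with scipy's linprog (which minimizes, so the profit coefficients are negated):

```python
# Separable-programming LP for the modified Wyndor example.
# Variables: x11, x12, x21, x22.
from scipy.optimize import linprog

c = [-3, -2, -5, -1]                  # maximize 3*x11 + 2*x12 + 5*x21 + x22
A_ub = [
    [1, 1, 0, 0],                     # x11 + x12 <= 4
    [0, 0, 2, 2],                     # 2*(x21 + x22) <= 12
    [3, 3, 2, 2],                     # 3*(x11 + x12) + 2*(x21 + x22) <= 18
]
b_ub = [4, 12, 18]
bounds = [(0, 3), (0, None), (0, 3), (0, None)]  # segment lengths x11, x21 <= 3

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print(res.x, -res.fun)  # ≈ [3. 1. 3. 0.] 26.0, i.e. x1 = 4, x2 = 3, Z = 26
```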

Lecture content
- Problem formulation and sample examples
- Theoretical background
  - Graphical illustration of nonlinear programming problems
  - Necessary optimality conditions
  - Sufficient optimality conditions (local vs. global maximum)
- Algorithms to solve the main types of nonlinear programming problems
  - One-dimensional unconstrained problems
  - Multidimensional unconstrained problems
  - Quadratic programming
  - Separable programming
  - Convex programming with linear constraints (ch 13.9)

Convex programming
Only the linearly constrained case is considered in this lecture. Problem formulation: maximize a concave objective f(x), subject to linear constraints and xj ≥ 0 for all j = 1, ..., n. Because of the constraints it is not always possible to move in the direction defined by the objective gradient, so the gradient search method must be modified to take the constraints into account.

Convex programming. Frank-Wolfe method
The Frank-Wolfe algorithm can be applied to such problems. It combines a linear approximation of the objective with the one-dimensional search procedure: an LP problem is solved to define the search direction. Initialization: find a feasible trial solution x'.

Convex programming. Frank-Wolfe method. Iterating
- compute the gradient at the current trial solution: c = ∇f(x')
- find the optimal solution x_LP of the following LP: maximize cx subject to the linear constraints and xj ≥ 0, j = 1, ..., n
- use a one-dimensional search to find the optimal t* in [0, 1] maximizing f(x' + t (x_LP - x'))
- reset the trial solution: x' = x' + t* (x_LP - x')

Convex programming. Frank-Wolfe method. An example
Maximize f(x) = 5x1 - x1^2 + 8x2 - 2x2^2, subject to 3x1 + 2x2 ≤ 6, x1, x2 ≥ 0. The initial trial solution is chosen to be x' = (0, 0). The objective gradient is:
∇f(x) = (5 - 2x1, 8 - 4x2)

Convex programming. Frank-Wolfe method. The first iteration
- the gradient is ∇f(0, 0) = (5, 8)
- solving the LP: maximize 5x1 + 8x2 subject to 3x1 + 2x2 ≤ 6, x1, x2 ≥ 0; the optimal solution is x_LP = (0, 3)
- finding the optimal t: maximize f(0 + t(0 - 0), 0 + t(3 - 0)) = f(0, 3t) = 24t - 18t^2; the optimum is t* = 2/3
- resetting the trial solution: x' = x' + t* (x_LP - x') = (0, 0) + 2/3 (0, 3) = (0, 2)

Convex programming. Frank-Wolfe method. Further iterations
Continuing in the same way, the method converges to the optimal point x* = (1, 3/2).
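
A minimal sketch of the whole Frank-Wolfe loop on this example, using scipy's linprog for the direction-finding LP and a bounded scalar minimizer for the one-dimensional search (the fixed iteration count is an arbitrary choice; the method approaches the optimum only in the limit).

```python
# Frank-Wolfe for: maximize 5*x1 - x1**2 + 8*x2 - 2*x2**2
# subject to 3*x1 + 2*x2 <= 6, x1, x2 >= 0.
import numpy as np
from scipy.optimize import linprog, minimize_scalar

def f(x):
    return 5 * x[0] - x[0]**2 + 8 * x[1] - 2 * x[1]**2

def grad_f(x):
    return np.array([5 - 2 * x[0], 8 - 4 * x[1]])

x = np.array([0.0, 0.0])                 # feasible initial trial solution
for _ in range(100):
    c = grad_f(x)
    # direction-finding LP: maximize c.x over the feasible region
    lp = linprog(-c, A_ub=[[3, 2]], b_ub=[6], bounds=[(0, None), (0, None)])
    d = lp.x - x                         # direction toward the LP solution
    t = minimize_scalar(lambda t: -f(x + t * d),
                        bounds=(0, 1), method="bounded").x
    x = x + t * d                        # reset the trial solution
print(x)  # ≈ [1. 1.5], the optimal point x* = (1, 3/2)
```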

Questions
- What are the necessary optimality conditions for the different types of NP problems?
- What is the role of the optimality conditions?
- What is a convex programming problem? What is the key property of such problems?
- Which types of nonlinear programming problems are you familiar with? What optimization methods are used to solve them?
- What is the gradient of a function? Why is the gradient so important in multivariate optimization?