Nonlinear Programming (Hillier, Lieberman Chapter 13) CHEM-E7155 Production Planning and Control


1 Nonlinear Programming (Hillier, Lieberman Chapter 13) CHEM-E7155 Production Planning and Control 19/4/2012

2 Lecture content Problem formulation and sample examples (ch 13.1) Theoretical background Graphical illustration of nonlinear programming problems Necessary optimality conditions Sufficient optimality conditions (local vs. global maximum) Algorithms to solve the main types of nonlinear programming problems One dimensional unconstrained problems Multidimensional unconstrained problems Quadratic programming Separable programming Convex programming with linear constraints 2

3 Problem Formulation The importance of linear programming is emphasized in the course, but practical optimization problems frequently involve nonlinear behavior that must be taken into account. The general form of the nonlinear programming problem is: find x = (x1, x2, ..., xn) to maximize f(x), subject to gi(x) ≤ bi for i = 1, 2, ..., m, and x ≥ 0 3

4 Sample example 1: The product mix problem with price elasticity Wyndor Glass Co. example. Source of nonlinearity: the amount of product that can be sold has an inverse relationship to the price charged. (Figure: demand curve showing price and unit cost against amount demanded.) 4

5 Sample example 1: The product mix problem with price elasticity The profit to be maximized is P(x) = x p(x) − c x, where p(x) is the unit price if x units are sold, c is the production and distribution cost of one unit, and x is the amount produced. 5

6 Sample example 2: Transportation problem with volume discounts A transportation problem with multiple sources and multiple destinations, and given supply and demand capacities. Volume discounts are available for large shipments. (Figure: marginal cost as a decreasing step function of the amount shipped.) 6

7 Sample example 2: Transportation problem with volume discounts The total shipment costs are C = Σi Σj Cij(xij), where i is the source and j is the destination. The shipment cost from a source to a destination is piecewise linear in the amount shipped, i.e. it has a piecewise constant slope. (Figure: total cost as a piecewise linear function of the amount shipped.) 7
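The piecewise structure can be sketched in code. A minimal example with hypothetical breakpoints and marginal costs (not from the slides), showing how the marginal cost per unit drops as the cumulative amount shipped grows:

```python
# A minimal sketch of a volume-discount (piecewise linear, concave) shipping
# cost function. The breakpoints and marginal costs are hypothetical, chosen
# only to illustrate the "piecewise constant slope" described above.

def shipping_cost(amount, segments=((10, 5.0), (20, 4.0), (float("inf"), 3.0))):
    """Total cost of shipping `amount` units.

    `segments` is a sequence of (segment upper bound, marginal cost per unit);
    the marginal cost drops as the cumulative amount shipped grows.
    """
    cost, remaining, lower = 0.0, amount, 0.0
    for upper, marginal in segments:
        in_segment = min(remaining, upper - lower)
        if in_segment <= 0:
            break
        cost += in_segment * marginal   # this slice of the volume at this rate
        remaining -= in_segment
        lower = upper
    return cost

print(shipping_cost(5))   # 25.0: all 5 units at the highest marginal cost
print(shipping_cost(25))  # 105.0 = 10*5 + 10*4 + 5*3
```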

8 Sample example 3: Tennessee Eastman process The relations between the process measurements (controlled variables) and the manipulated variables are highly nonlinear. 8

9 Lecture content Problem formulation and sample examples Theoretical background (ch. 13.2) Graphical illustrations of nonlinear programming problems Necessary optimality conditions Sufficient optimality conditions (local vs. global maximum) Algorithms to solve the main types of nonlinear programming problems One dimensional unconstrained problems Multidimensional unconstrained problems Quadratic programming Separable programming Convex programming with linear constraints 9

10 Graphical illustration 1: a nonlinear constraint Wyndor Glass example in which both the second and the third constraints are replaced by the single nonlinear constraint 9x1² + 5x2² ≤ 216. The optimal solution (2, 6) lies on the nonlinear boundary of the feasible region. (Figure: feasible region in the (x1, x2) plane with the optimal solution at x = (2, 6).) 10

11 Graphical illustration 2: a nonlinear objective Objective function made nonlinear: Z = 126x1 − 9x1² + 182x2 − 13x2². The optimal solution (8/3, 5) with Z = 857 lies on the boundary of the feasible region, but it is not a corner point. (Figure: level curves Z = 907, 857, 807 over the feasible region.) 11

12 Graphical illustration 3: a nonlinear objective Objective function made nonlinear: Z = 54x1 − 9x1² + 78x2 − 13x2². The optimal solution (3, 3) lies inside the feasible region, not on its boundary. (Figure: level curves Z = 189 and Z = 162 around the interior optimum.) 12

13 Graphical illustration of NP problems: summary In contrast to linear programming, the optimal solution may not be a corner point. The tremendous simplification used in linear programming, limiting the search to corner points only, no longer applies; hence the simplex method is inapplicable to nonlinear programming problems. 13

14 Lecture content Problem formulation and sample examples Theoretical background (ch 13.2) Graphical illustration of nonlinear programming problems Necessary optimality conditions Sufficient optimality conditions (local vs. global maximum) Algorithms to solve the main types of nonlinear programming problems One dimensional unconstrained problems Multidimensional unconstrained problems Quadratic programming Separable programming Convex programming with linear constraints 14

15 Necessary optimality conditions: unconstrained case The necessary optimality condition is f'(x) = 0, i.e. the tangent line to the function graph is horizontal, together with f''(x) ≤ 0 for a maximum. (Figure: f(x) with a horizontal tangent at the maximum.) 15

16 Necessary optimality conditions: an equality constraint Geometric interpretation: at the optimum, both the constraint curve and the objective level curve have the same tangent line, i.e. ∇f(x) = λ ∇g(x). (Figure: objective level curves touching the equality constraint g(x) = b at the point of tangency.) 16

17 Necessary optimality conditions: an equality constraint The equality-constrained problem, maximize f(x) subject to g(x) = b, is equivalent to the Lagrange equations: ∂f/∂xj − λ ∂g/∂xj = 0 for j = 1, 2, ..., n, and g(x) = b. This is a system of n+1 equations with n+1 variables (x1, ..., xn and the Lagrange multiplier λ). 17
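Solving the Lagrange system numerically: a minimal sketch on a hypothetical problem (maximize f(x, y) = xy subject to x + y = 10; not from the slides). For this problem the Lagrange equations are linear in (x, y, λ), so they can be solved directly:

```python
import numpy as np

# Maximize f(x, y) = x*y subject to g(x, y) = x + y = 10 (hypothetical example).
# Lagrange equations: df/dx - lam*dg/dx = y - lam = 0
#                     df/dy - lam*dg/dy = x - lam = 0
#                     g(x, y)           = x + y   = 10
# Here the system of n+1 = 3 equations is linear in (x, y, lam).
A = np.array([[0.0, 1.0, -1.0],
              [1.0, 0.0, -1.0],
              [1.0, 1.0,  0.0]])
b = np.array([0.0, 0.0, 10.0])
x, y, lam = np.linalg.solve(A, b)
print(x, y, lam)  # 5.0 5.0 5.0 -> maximum at x = y = 5 with multiplier 5
```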

18 Necessary optimality conditions: an inequality constraint The Karush-Kuhn-Tucker (KKT) conditions for maximizing f(x) subject to gi(x) ≤ bi and x ≥ 0 are:
∂f/∂xj − Σi ui ∂gi/∂xj ≤ 0
xj (∂f/∂xj − Σi ui ∂gi/∂xj) = 0
gi(x) − bi ≤ 0
ui (gi(x) − bi) = 0
xj ≥ 0, ui ≥ 0
The Lagrange multiplier ui corresponds to the dual variable of linear programming. 18
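A minimal numeric check of the KKT conditions on a hypothetical one-variable problem (not from the slides): maximize f(x) = −(x − 2)² subject to x ≤ 1, x ≥ 0, whose constrained optimum is x* = 1 with multiplier u* = 2:

```python
# KKT check for a hypothetical 1-D problem:
# maximize f(x) = -(x - 2)**2  subject to  g(x) = x <= 1,  x >= 0.
# The constrained optimum is x* = 1 with multiplier u* = 2.

def df(x):   # f'(x)
    return -2.0 * (x - 2.0)

def g(x):    # constraint function, g(x) <= b with b = 1
    return x

x_star, u_star, b = 1.0, 2.0, 1.0

stationarity = df(x_star) - u_star * 1.0  # f'(x) - u*g'(x), with g'(x) = 1
print(stationarity)                       # 0.0: since x* > 0, equality must hold
print(u_star * (g(x_star) - b))           # 0.0: complementary slackness
print(x_star >= 0 and u_star >= 0)        # True: nonnegativity conditions
```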

19 Necessary optimality conditions: Summary
Problem | Optimality conditions
One-dimensional, unconstrained | f'(x) = 0, f''(x) ≤ 0
Multi-dimensional, unconstrained | ∂f/∂xj = 0 for j = 1, ..., n; the Hessian matrix H is negative semidefinite
Multi-dimensional, equality constraint | Lagrange equations: ∂f/∂xj − λ ∂g/∂xj = 0 for j = 1, ..., n; g(x) = b
Multi-dimensional, inequality constraints | KKT conditions 19

20 Necessary optimality conditions: Summary Role of the optimality conditions: transform a nonlinear programming task into a system of equations; provide stopping rules for iterative optimization algorithms; support sensitivity analysis (for constrained problems). 20

21 Lecture content Problem formulation and sample examples Theoretical background (ch 13.2) Graphical illustration of nonlinear programming problems Necessary optimality conditions Sufficient optimality conditions (local vs. global maximum) Algorithms to solve the main types of nonlinear programming problems One dimensional unconstrained problems Multidimensional unconstrained problems Quadratic programming Separable programming Convex programming with linear constraints 21

22 Sufficient optimality conditions: unconstrained case The necessary conditions cannot distinguish a local maximum from a global one. Sufficient conditions are needed to ensure that the solution obtained is the global optimum. (Figure: f(x) with several local maxima and one global maximum.) 22

23 Sufficient optimality conditions: unconstrained case A local optimum is guaranteed to be global if: the objective to be maximized is concave, or the objective to be minimized is convex. (Figure: a concave f(x) with a single global maximum.) 23
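For a twice-differentiable objective, concavity can be checked through the Hessian: f is concave exactly when its Hessian is negative semidefinite everywhere. A sketch for the quadratic objective f(x1, x2) = 2x1x2 + 2x2 − x1² − 2x2² used in the gradient search example later in the lecture, whose Hessian is constant:

```python
import numpy as np

# For f(x1, x2) = 2*x1*x2 + 2*x2 - x1**2 - 2*x2**2 the Hessian is constant:
#   d2f/dx1dx1 = -2, d2f/dx1dx2 = 2, d2f/dx2dx2 = -4.
H = np.array([[-2.0,  2.0],
              [ 2.0, -4.0]])

eigenvalues = np.linalg.eigvalsh(H)       # symmetric matrix -> real eigenvalues
print(eigenvalues)                        # both negative
print(bool(np.all(eigenvalues < 0)))      # True: H negative definite, f concave
```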

24 Sufficient optimality conditions: constrained case Wyndor Glass example with modified constraints: the feasible region is not convex, so besides the global maximum there is also a local maximum. (Figure: non-convex feasible region in the (x1, x2) plane with a local and a global maximum on its boundary.) 24

25 Sufficient optimality conditions: constrained case A local optimum is guaranteed to be global if: the objective being maximized/minimized is concave/convex, and the feasible region is convex (which holds if every g(x) is a convex function). A nonlinear programming problem satisfying these two conditions is called a convex programming problem. Convex programming is one of the key types of NP problems: if a local optimum is found, it is guaranteed to be global. In this lecture only convex programming problems are considered. 25

26 Lecture content Problem formulation and sample examples Theoretical background Graphical illustration of nonlinear programming problems Necessary optimality conditions Sufficient optimality conditions (local vs. global maximum) Algorithms to solve the main types of nonlinear programming problems (ch 13.3) One dimensional unconstrained problems (ch 13.4) Multidimensional unconstrained problems Quadratic programming Separable programming Convex programming with linear constraints 26

27 Main types of nonlinear programming problems
Problem | Conditions to ensure a global optimum | Algorithm
Unconstrained, 1-D | concave/convex objective | interval splitting
Unconstrained, multidimensional | concave/convex objective | gradient search
Quadratic programming | concave/convex objective | not studied (the KKT conditions can be used)
Separable programming | separable concave objective, linear constraints | reduced to a linear programming task
Linearly constrained | concave/convex objective | Frank-Wolfe 27

28 Unconstrained optimization: one dimensional The task is to maximize a concave objective f(x). The necessary optimality condition f'(x) = 0 may not be solvable analytically, therefore a numerical procedure is needed. One dimensional search procedure, initialization: at the beginning, bounds x_l and x_u are found such that f'(x_l) > 0 and f'(x_u) < 0. The interval [x_l, x_u] then contains the maximum of the objective. 28

29 Unconstrained optimization: one dimensional case An iterative algorithm: check the sign of f'(x') at the middle of the current interval, x' = (x_l + x_u)/2. Reset x_l = x' if f'(x') > 0, or reset x_u = x' if f'(x') < 0. 29

30 Unconstrained optimization: one dimensional case Stopping rule: stop if x_u − x_l ≤ 2ε. Return x' = (x_l + x_u)/2, which is then within ε of the maximum. 30

31 Unconstrained optimization: one dimensional case. An example The function to be maximized is f(x) = 12x − 3x⁴ − 2x⁶. The derivative of the objective function is f'(x) = 12 − 12x³ − 12x⁵. The second derivative f''(x) = −36x² − 60x⁴ ≤ 0, therefore the objective is concave and the one dimensional search will find the global maximum. 31

32 Unconstrained optimization: one dimensional case. An example The initial interval is selected to be [0, 2], with ε = 0.01.
Iteration | x_l | x_u | x' = (x_l + x_u)/2 | f'(x') | f(x')
1 | 0 | 2 | 1 | -12.00 | 7.0000
2 | 0 | 1 | 0.5 | 10.12 | 5.7812
3 | 0.5 | 1 | 0.75 | 4.09 | 7.6948
4 | 0.75 | 1 | 0.875 | -2.19 | 7.8439
5 | 0.75 | 0.875 | 0.8125 | 1.31 | 7.8672
6 | 0.8125 | 0.875 | 0.84375 | -0.34 | 7.8829
7 | 0.8125 | 0.84375 | 0.828125 | 0.51 | 7.8815
Stop: x_u − x_l = 0.015625 ≤ 2ε; return x' = (0.828125 + 0.84375)/2 = 0.8359375, f(x') ≈ 7.8839. 32
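The interval-splitting search for this example can be sketched as follows (the tolerance and loop structure are implementation choices):

```python
# Bisection (interval-splitting) search for the example above:
# maximize f(x) = 12x - 3x**4 - 2x**6, whose derivative is
# f'(x) = 12 - 12x**3 - 12x**5, starting from the interval [0, 2].

def df(x):
    return 12.0 - 12.0 * x**3 - 12.0 * x**5

def bisection_max(df, lo, hi, eps=1e-6):
    """Assumes df(lo) > 0 > df(hi), i.e. the maximum lies in [lo, hi]."""
    while hi - lo > 2.0 * eps:
        mid = (lo + hi) / 2.0
        if df(mid) > 0.0:
            lo = mid          # function still rising: maximum is to the right
        else:
            hi = mid          # function falling: maximum is to the left
    return (lo + hi) / 2.0

x_star = bisection_max(df, 0.0, 2.0)
print(round(x_star, 4))       # close to 0.8376, where f'(x) = 0
```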

33 Lecture content Problem formulation and sample examples Theoretical background Graphical illustration of nonlinear programming problems Necessary optimality conditions Sufficient optimality conditions (local vs. global maximum) Algorithms to solve the main types of nonlinear programming problems One dimensional unconstrained problems Multidimensional unconstrained problems (ch 13.5) Quadratic programming Separable programming Convex programming with linear constraints 33

34 Unconstrained optimization: the gradient search procedure The task is to maximize a concave function f(x) of many arguments. An efficient procedure should keep moving in the direction of the gradient until it reaches the optimal solution. The gradient of a function at a point is a vector defining the direction of the fastest growth of the function at this point. The gradient elements are the respective partial derivatives: ∇f(x) = (∂f/∂x1, ∂f/∂x2, ..., ∂f/∂xn). 34

35 Unconstrained optimization: summary of the gradient search procedure Initialization: select ε > 0 and any initial trial solution x'. Iterations: compute the gradient of the objective at that point, ∇f(x'); express f(x' + t ∇f(x')) as a function of t; use a one-dimensional search procedure to find t* > 0 that maximizes f(x' + t ∇f(x')); reset the current trial solution x' = x' + t* ∇f(x'). Stopping rule: |∂f(x')/∂xj| < ε for all j = 1, ..., n. 35

36 Unconstrained optimization: the gradient search procedure. An example Maximize f(x1, x2) = 2x1x2 + 2x2 − x1² − 2x2², starting from x' = (0, 0). The gradient of the objective is ∇f(x) = (2x2 − 2x1, 2x1 + 2 − 4x2). 36

37 Unconstrained optimization: the gradient search procedure. An example First iteration. Finding the gradient: ∇f(0, 0) = (0, 2). Expressing the objective as a function of t: f(0 + 0t, 0 + 2t) = f(0, 2t) = 4t − 8t². Finding the optimal t: t* = 1/4. Resetting the current trial solution: x' = (0, 0) + 1/4 · (0, 2) = (0, 1/2). 37

38 Unconstrained optimization: the gradient search procedure. An example
Iteration | x' | ∇f(x') | x' + t ∇f(x') | f(x' + t ∇f(x')) | t* | new x'
1 | (0, 0) | (0, 2) | (0, 2t) | 4t − 8t² | 1/4 | (0, 1/2)
2 | (0, 1/2) | (1, 0) | (t, 1/2) | t − t² + 1/2 | 1/2 | (1/2, 1/2)
The subsequent trial solutions (1/2, 3/4), (3/4, 3/4), (3/4, 7/8), ... converge to the optimal solution x* = (1, 1). (Figure: zigzag path of the iterates in the (x1, x2) plane.) 38
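The whole gradient search for this example can be sketched as follows; the one-dimensional step search is implemented here with a simple ternary search, which is an implementation choice rather than something prescribed by the slides:

```python
# Gradient search for the example above:
# maximize f(x1, x2) = 2*x1*x2 + 2*x2 - x1**2 - 2*x2**2, starting from (0, 0).

def f(x1, x2):
    return 2*x1*x2 + 2*x2 - x1**2 - 2*x2**2

def grad(x1, x2):
    return (2*x2 - 2*x1, 2*x1 + 2 - 4*x2)

def line_search(x, g, t_hi=1.0, iters=100):
    """Ternary search for t in [0, t_hi] maximizing f(x + t*g) (f is concave)."""
    lo, hi = 0.0, t_hi
    phi = lambda t: f(x[0] + t*g[0], x[1] + t*g[1])
    for _ in range(iters):
        m1, m2 = lo + (hi - lo)/3, hi - (hi - lo)/3
        if phi(m1) < phi(m2):
            lo = m1
        else:
            hi = m2
    return (lo + hi) / 2

x, eps = (0.0, 0.0), 1e-6
while max(abs(gj) for gj in grad(*x)) >= eps:   # stopping rule from the slides
    g = grad(*x)
    t = line_search(x, g)
    x = (x[0] + t*g[0], x[1] + t*g[1])

print(round(x[0], 4), round(x[1], 4))  # converges to the optimum (1, 1)
```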

39 Lecture content Problem formulation and sample examples Theoretical background Graphical illustration of nonlinear programming problems Necessary optimality conditions Sufficient optimality conditions (local vs. global maximum) Algorithms to solve the main types of nonlinear programming problems One dimensional unconstrained problems Multidimensional unconstrained problems Quadratic programming (ch 13.7) Separable programming Convex programming with linear constraints 39

40 Quadratic programming Similar to linear programming, but the objective function includes squared and cross-product terms in addition to the linear terms. The problem formulation: maximize f(x) = cx − ½ xᵀQx subject to Ax ≤ b, x ≥ 0. The global maximum is found if the objective is concave (Q is a positive semidefinite matrix). The optimality conditions (KKT conditions) can be used to solve a quadratic programming problem. 40

41 Quadratic programming. An example Maximize f(x1, x2) = 15x1 + 30x2 + 4x1x2 − 2x1² − 4x2² subject to x1 + 2x2 ≤ 30, x1, x2 ≥ 0. In matrix form: c = (15, 30), Q = [[4, −4], [−4, 8]], A = (1, 2), b = 30. 41
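With the constraint taken as active at the optimum (it is here, since the unconstrained maximizer (15, 11.25) violates x1 + 2x2 ≤ 30), the KKT conditions reduce to a linear system; the block layout below is the standard equality-constrained QP system, sketched with numpy:

```python
import numpy as np

# QP example above: maximize c@x - 0.5*x@Q@x  s.t.  A@x <= b, x >= 0.
c = np.array([15.0, 30.0])
Q = np.array([[4.0, -4.0],
              [-4.0, 8.0]])
A = np.array([[1.0, 2.0]])
b = np.array([30.0])

# Assuming the constraint is active (A@x = b) and x > 0, KKT stationarity
# c - Q@x - A.T*u = 0 together with A@x = b gives the linear system
#   [Q  A.T] [x]   [c]
#   [A   0 ] [u] = [b]
kkt = np.block([[Q, A.T],
                [A, np.zeros((1, 1))]])
sol = np.linalg.solve(kkt, np.concatenate([c, b]))
x, u = sol[:2], sol[2]
print(x, u)  # x = [12. 9.], multiplier u = 3.0 (u >= 0 confirms the KKT point)
```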

42 Lecture content Problem formulation and sample examples Theoretical background Graphical illustration of nonlinear programming problems Necessary optimality conditions Sufficient optimality conditions (local vs. global maximum) Algorithms to solve the main types of nonlinear programming problems One dimensional unconstrained problems Multidimensional unconstrained problems Quadratic programming Separable programming (ch 13.8) Convex programming with linear constraints 42

43 Separable programming A separable objective function means that the objective is a sum of functions of individual variables: f(x) = Σj fj(xj), where fj(xj) is the contribution to profit of activity j. Examples are product mix with price elasticity and transportation with volume discounts. Problem formulation: maximize a separable concave function f(x) subject to linear constraints. 43

44 Separable programming The objective is concave if every fj(xj) is concave; in other words, the marginal profitability either stays the same or decreases. Such concave curves occur quite frequently. (Figure: concave profit as a function of the production rate.) 44

45 Separable programming Approximate every term fj(xj) by a piecewise linear function, introduce a separate variable for every segment, and transform the problem into an LP task: fj(xj) ≈ sj1 xj1 + sj2 xj2 + sj3 xj3, where the sjk are the segment slopes. (Figure: piecewise linear approximation of the profit curve against the production rate.) 45

46 Separable programming to LP The problem is reformulated in terms of the segment variables xjk instead of xj. The objective in terms of the new variables: maximize Σj Σk sjk xjk, where j is the activity number and k is the segment number. 46

47 Separable programming to LP Constraints: the segment-length inequalities xjk ≤ ujk, plus the original inequalities rewritten with xj = Σk xjk. There is also a special condition on the variables (a segment cannot be started until the previous segment is full): xjk = 0 if xj,k−1 < uj,k−1. Because the slopes decrease, segment k−1 automatically has higher priority than segment k in a maximization, so this condition is fulfilled automatically. 47

48 Separable programming. An example Wyndor Glass example. At volumes higher than 3 extra costs occur, so the unit profitability decreases: for product 1 from 3 to 2, for product 2 from 5 to 1. 48

49 Separable programming. An example New variables: x1 = x11 + x12 with x11 ≤ 3, and x2 = x21 + x22 with x21 ≤ 3. The LP task is: maximize Z = 3x11 + 2x12 + 5x21 + x22. Constraints imposed on the variables: new segment constraints x11 ≤ 3 and x21 ≤ 3, and the original constraints x11 + x12 ≤ 4, 2(x21 + x22) ≤ 12, 3(x11 + x12) + 2(x21 + x22) ≤ 18, with all variables ≥ 0. 49
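A quick feasibility and objective check of this reformulation; the candidate optimum (x11, x12, x21, x22) = (3, 1, 3, 0) with Z = 26 is my own hand calculation for this LP, not a value from the slides:

```python
# Wyndor separable-programming LP sketched above. The candidate optimum
# (x11, x12, x21, x22) = (3, 1, 3, 0) with Z = 26 is a hand-derived value
# for this reformulation (an assumption to verify, not slide data).

def objective(x11, x12, x21, x22):
    return 3*x11 + 2*x12 + 5*x21 + 1*x22

def feasible(x11, x12, x21, x22):
    return (0 <= x11 <= 3 and 0 <= x21 <= 3            # segment lengths
            and x12 >= 0 and x22 >= 0
            and x11 + x12 <= 4                          # original: x1 <= 4
            and 2*(x21 + x22) <= 12                     # original: 2*x2 <= 12
            and 3*(x11 + x12) + 2*(x21 + x22) <= 18)    # original: 3*x1 + 2*x2 <= 18

candidate = (3, 1, 3, 0)
print(feasible(*candidate), objective(*candidate))  # True 26
```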

50 Lecture content Problem formulation and sample examples Theoretical background Graphical illustration of nonlinear programming problems Necessary optimality conditions Sufficient optimality conditions (local vs. global maximum) Algorithms to solve the main types of nonlinear programming problems One dimensional unconstrained problems Multidimensional unconstrained problems Quadratic programming Separable programming Convex programming with linear constraints (ch 13.9) 50

51 Convex programming Only the linearly constrained case is considered in the lecture. Problem formulation: maximize a concave objective f(x) subject to linear constraints Ax ≤ b, x ≥ 0. Because of the constraints it is not always possible to move in the direction defined by the objective gradient, so the gradient search method must be modified to take the constraints into account. 51

52 Convex programming. Frank-Wolfe method The Frank-Wolfe algorithm can be applied to such problems. It combines a linear approximation of the objective with the one-dimensional search procedure: an LP problem is solved to define the search direction. Initialization: find a feasible trial solution x'. 52

53 Convex programming. Frank-Wolfe method. Iterating Compute the gradient at the current trial solution, c = ∇f(x'). Find the optimal solution x_LP of the following LP: maximize Z = cx subject to Ax ≤ b, x ≥ 0. Use a one-dimensional search to find the optimal t ∈ [0, 1] maximizing f(x' + t(x_LP − x')). Reset the trial solution: x' = x' + t*(x_LP − x'). 53

54 Convex programming. Frank-Wolfe method. An example Maximize f(x) = 5x1 − x1² + 8x2 − 2x2² subject to 3x1 + 2x2 ≤ 6, x1, x2 ≥ 0. The initial trial solution is decided to be x' = (0, 0). The objective gradient is ∇f(x) = (5 − 2x1, 8 − 4x2). 54

55 Convex programming. Frank-Wolfe method. The first iteration The gradient is ∇f(0, 0) = (5, 8). Solving the LP: maximize Z = 5x1 + 8x2 subject to 3x1 + 2x2 ≤ 6, x1, x2 ≥ 0; the optimal solution is x_LP = (0, 3). Finding the optimal t: maximize f((0, 0) + t(0, 3)) = f(0, 3t) = 24t − 18t²; the optimal t = 2/3. Reset the trial solution: x' = (0, 0) + 2/3 · (0, 3) = (0, 2). 55

56 Convex programming. Frank-Wolfe method. Subsequent iterations Continuing in the same way, the method converges to the optimal point (1, 3/2). 56
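The Frank-Wolfe iteration for this example can be sketched as follows; since the LP subproblem maximizes a linear objective over a fixed polytope, its optimum lies at a vertex, and for this tiny feasible region the vertices can simply be enumerated (an implementation shortcut, not part of the method as stated):

```python
# Frank-Wolfe for the example above:
# maximize f(x) = 5*x1 - x1**2 + 8*x2 - 2*x2**2  s.t.  3*x1 + 2*x2 <= 6, x >= 0.

def f(x):
    return 5*x[0] - x[0]**2 + 8*x[1] - 2*x[1]**2

def grad(x):
    return (5 - 2*x[0], 8 - 4*x[1])

VERTICES = [(0.0, 0.0), (2.0, 0.0), (0.0, 3.0)]  # vertices of {3x1+2x2<=6, x>=0}

def solve_lp(c):
    """Maximize the linear objective c@x over the polytope by vertex enumeration."""
    return max(VERTICES, key=lambda v: c[0]*v[0] + c[1]*v[1])

def line_search(x, d, iters=100):
    """Ternary search for t in [0, 1] maximizing the concave f(x + t*d)."""
    lo, hi = 0.0, 1.0
    phi = lambda t: f((x[0] + t*d[0], x[1] + t*d[1]))
    for _ in range(iters):
        m1, m2 = lo + (hi - lo)/3, hi - (hi - lo)/3
        if phi(m1) < phi(m2):
            lo = m1
        else:
            hi = m2
    return (lo + hi) / 2

x = (0.0, 0.0)
for _ in range(2000):                       # Frank-Wolfe converges slowly (O(1/k))
    x_lp = solve_lp(grad(x))                # linear approximation -> LP direction
    d = (x_lp[0] - x[0], x_lp[1] - x[1])
    t = line_search(x, d)
    x = (x[0] + t*d[0], x[1] + t*d[1])

print(round(x[0], 2), round(x[1], 2))       # close to the optimum (1, 1.5)
```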

57 Questions What are the necessary optimality conditions for different types of NP problems? What is the role of the optimality conditions? What is a convex programming problem, and what is the key property of such problems? Which types of nonlinear programming problems are you familiar with, and what are the corresponding optimization methods? What is the gradient of a function, and why is the gradient so important in multivariate optimization? 57


More information

4M020 Design tools. Algorithms for numerical optimization. L.F.P. Etman. Department of Mechanical Engineering Eindhoven University of Technology

4M020 Design tools. Algorithms for numerical optimization. L.F.P. Etman. Department of Mechanical Engineering Eindhoven University of Technology 4M020 Design tools Algorithms for numerical optimization L.F.P. Etman Department of Mechanical Engineering Eindhoven University of Technology Wednesday September 3, 2008 1 / 32 Outline 1 Problem formulation:

More information

Written Examination

Written Examination Division of Scientific Computing Department of Information Technology Uppsala University Optimization Written Examination 202-2-20 Time: 4:00-9:00 Allowed Tools: Pocket Calculator, one A4 paper with notes

More information

Convex Optimization Lecture 6: KKT Conditions, and applications

Convex Optimization Lecture 6: KKT Conditions, and applications Convex Optimization Lecture 6: KKT Conditions, and applications Dr. Michel Baes, IFOR / ETH Zürich Quick recall of last week s lecture Various aspects of convexity: The set of minimizers is convex. Convex

More information

A Brief Review on Convex Optimization

A Brief Review on Convex Optimization A Brief Review on Convex Optimization 1 Convex set S R n is convex if x,y S, λ,µ 0, λ+µ = 1 λx+µy S geometrically: x,y S line segment through x,y S examples (one convex, two nonconvex sets): A Brief Review

More information

Introduction to Machine Learning Lecture 7. Mehryar Mohri Courant Institute and Google Research

Introduction to Machine Learning Lecture 7. Mehryar Mohri Courant Institute and Google Research Introduction to Machine Learning Lecture 7 Mehryar Mohri Courant Institute and Google Research mohri@cims.nyu.edu Convex Optimization Differentiation Definition: let f : X R N R be a differentiable function,

More information

CS-E4830 Kernel Methods in Machine Learning

CS-E4830 Kernel Methods in Machine Learning CS-E4830 Kernel Methods in Machine Learning Lecture 3: Convex optimization and duality Juho Rousu 27. September, 2017 Juho Rousu 27. September, 2017 1 / 45 Convex optimization Convex optimisation This

More information

Optimization. A first course on mathematics for economists

Optimization. A first course on mathematics for economists Optimization. A first course on mathematics for economists Xavier Martinez-Giralt Universitat Autònoma de Barcelona xavier.martinez.giralt@uab.eu II.3 Static optimization - Non-Linear programming OPT p.1/45

More information

Lecture: Duality.

Lecture: Duality. Lecture: Duality http://bicmr.pku.edu.cn/~wenzw/opt-2016-fall.html Acknowledgement: this slides is based on Prof. Lieven Vandenberghe s lecture notes Introduction 2/35 Lagrange dual problem weak and strong

More information

Lecture: Duality of LP, SOCP and SDP

Lecture: Duality of LP, SOCP and SDP 1/33 Lecture: Duality of LP, SOCP and SDP Zaiwen Wen Beijing International Center For Mathematical Research Peking University http://bicmr.pku.edu.cn/~wenzw/bigdata2017.html wenzw@pku.edu.cn Acknowledgement:

More information

Lagrange duality. The Lagrangian. We consider an optimization program of the form

Lagrange duality. The Lagrangian. We consider an optimization program of the form Lagrange duality Another way to arrive at the KKT conditions, and one which gives us some insight on solving constrained optimization problems, is through the Lagrange dual. The dual is a maximization

More information

Introduction to Machine Learning Prof. Sudeshna Sarkar Department of Computer Science and Engineering Indian Institute of Technology, Kharagpur

Introduction to Machine Learning Prof. Sudeshna Sarkar Department of Computer Science and Engineering Indian Institute of Technology, Kharagpur Introduction to Machine Learning Prof. Sudeshna Sarkar Department of Computer Science and Engineering Indian Institute of Technology, Kharagpur Module - 5 Lecture - 22 SVM: The Dual Formulation Good morning.

More information

Lagrange Relaxation and Duality

Lagrange Relaxation and Duality Lagrange Relaxation and Duality As we have already known, constrained optimization problems are harder to solve than unconstrained problems. By relaxation we can solve a more difficult problem by a simpler

More information

Extreme Abridgment of Boyd and Vandenberghe s Convex Optimization

Extreme Abridgment of Boyd and Vandenberghe s Convex Optimization Extreme Abridgment of Boyd and Vandenberghe s Convex Optimization Compiled by David Rosenberg Abstract Boyd and Vandenberghe s Convex Optimization book is very well-written and a pleasure to read. The

More information

CSCI : Optimization and Control of Networks. Review on Convex Optimization

CSCI : Optimization and Control of Networks. Review on Convex Optimization CSCI7000-016: Optimization and Control of Networks Review on Convex Optimization 1 Convex set S R n is convex if x,y S, λ,µ 0, λ+µ = 1 λx+µy S geometrically: x,y S line segment through x,y S examples (one

More information

Support Vector Machines

Support Vector Machines Support Vector Machines Le Song Machine Learning I CSE 6740, Fall 2013 Naïve Bayes classifier Still use Bayes decision rule for classification P y x = P x y P y P x But assume p x y = 1 is fully factorized

More information

Karush-Kuhn-Tucker Conditions. Lecturer: Ryan Tibshirani Convex Optimization /36-725

Karush-Kuhn-Tucker Conditions. Lecturer: Ryan Tibshirani Convex Optimization /36-725 Karush-Kuhn-Tucker Conditions Lecturer: Ryan Tibshirani Convex Optimization 10-725/36-725 1 Given a minimization problem Last time: duality min x subject to f(x) h i (x) 0, i = 1,... m l j (x) = 0, j =

More information

subject to (x 2)(x 4) u,

subject to (x 2)(x 4) u, Exercises Basic definitions 5.1 A simple example. Consider the optimization problem with variable x R. minimize x 2 + 1 subject to (x 2)(x 4) 0, (a) Analysis of primal problem. Give the feasible set, the

More information

4TE3/6TE3. Algorithms for. Continuous Optimization

4TE3/6TE3. Algorithms for. Continuous Optimization 4TE3/6TE3 Algorithms for Continuous Optimization (Algorithms for Constrained Nonlinear Optimization Problems) Tamás TERLAKY Computing and Software McMaster University Hamilton, November 2005 terlaky@mcmaster.ca

More information

Part 4: Active-set methods for linearly constrained optimization. Nick Gould (RAL)

Part 4: Active-set methods for linearly constrained optimization. Nick Gould (RAL) Part 4: Active-set methods for linearly constrained optimization Nick Gould RAL fx subject to Ax b Part C course on continuoue optimization LINEARLY CONSTRAINED MINIMIZATION fx subject to Ax { } b where

More information

Numerical Optimization Professor Horst Cerjak, Horst Bischof, Thomas Pock Mat Vis-Gra SS09

Numerical Optimization Professor Horst Cerjak, Horst Bischof, Thomas Pock Mat Vis-Gra SS09 Numerical Optimization 1 Working Horse in Computer Vision Variational Methods Shape Analysis Machine Learning Markov Random Fields Geometry Common denominator: optimization problems 2 Overview of Methods

More information

Linear Programming. H. R. Alvarez A., Ph. D. 1

Linear Programming. H. R. Alvarez A., Ph. D. 1 Linear Programming H. R. Alvarez A., Ph. D. 1 Introduction It is a mathematical technique that allows the selection of the best course of action defining a program of feasible actions. The objective of

More information

5 Handling Constraints

5 Handling Constraints 5 Handling Constraints Engineering design optimization problems are very rarely unconstrained. Moreover, the constraints that appear in these problems are typically nonlinear. This motivates our interest

More information

56:270 Final Exam - May

56:270  Final Exam - May @ @ 56:270 Linear Programming @ @ Final Exam - May 4, 1989 @ @ @ @ @ @ @ @ @ @ @ @ @ @ Select any 7 of the 9 problems below: (1.) ANALYSIS OF MPSX OUTPUT: Please refer to the attached materials on the

More information

Bilinear Programming: Applications in the Supply Chain Management

Bilinear Programming: Applications in the Supply Chain Management Bilinear Programming: Applications in the Supply Chain Management Artyom G. Nahapetyan Center for Applied Optimization Industrial and Systems Engineering Department University of Florida Gainesville, Florida

More information

Interior-Point Methods for Linear Optimization

Interior-Point Methods for Linear Optimization Interior-Point Methods for Linear Optimization Robert M. Freund and Jorge Vera March, 204 c 204 Robert M. Freund and Jorge Vera. All rights reserved. Linear Optimization with a Logarithmic Barrier Function

More information

Chapter 2: Linear Programming Basics. (Bertsimas & Tsitsiklis, Chapter 1)

Chapter 2: Linear Programming Basics. (Bertsimas & Tsitsiklis, Chapter 1) Chapter 2: Linear Programming Basics (Bertsimas & Tsitsiklis, Chapter 1) 33 Example of a Linear Program Remarks. minimize 2x 1 x 2 + 4x 3 subject to x 1 + x 2 + x 4 2 3x 2 x 3 = 5 x 3 + x 4 3 x 1 0 x 3

More information

Mathematical Economics. Lecture Notes (in extracts)

Mathematical Economics. Lecture Notes (in extracts) Prof. Dr. Frank Werner Faculty of Mathematics Institute of Mathematical Optimization (IMO) http://math.uni-magdeburg.de/ werner/math-ec-new.html Mathematical Economics Lecture Notes (in extracts) Winter

More information

In view of (31), the second of these is equal to the identity I on E m, while this, in view of (30), implies that the first can be written

In view of (31), the second of these is equal to the identity I on E m, while this, in view of (30), implies that the first can be written 11.8 Inequality Constraints 341 Because by assumption x is a regular point and L x is positive definite on M, it follows that this matrix is nonsingular (see Exercise 11). Thus, by the Implicit Function

More information

Introduction to linear programming using LEGO.

Introduction to linear programming using LEGO. Introduction to linear programming using LEGO. 1 The manufacturing problem. A manufacturer produces two pieces of furniture, tables and chairs. The production of the furniture requires the use of two different

More information

Introduction to Operations Research. Linear Programming

Introduction to Operations Research. Linear Programming Introduction to Operations Research Linear Programming Solving Optimization Problems Linear Problems Non-Linear Problems Combinatorial Problems Linear Problems Special form of mathematical programming

More information

Optimization Methods

Optimization Methods Optimization Methods Decision making Examples: determining which ingredients and in what quantities to add to a mixture being made so that it will meet specifications on its composition allocating available

More information

Midterm Exam - Solutions

Midterm Exam - Solutions EC 70 - Math for Economists Samson Alva Department of Economics, Boston College October 13, 011 Midterm Exam - Solutions 1 Quasilinear Preferences (a) There are a number of ways to define the Lagrangian

More information

Economics 101A (Lecture 3) Stefano DellaVigna

Economics 101A (Lecture 3) Stefano DellaVigna Economics 101A (Lecture 3) Stefano DellaVigna January 24, 2017 Outline 1. Implicit Function Theorem 2. Envelope Theorem 3. Convexity and concavity 4. Constrained Maximization 1 Implicit function theorem

More information

The Dual Simplex Algorithm

The Dual Simplex Algorithm p. 1 The Dual Simplex Algorithm Primal optimal (dual feasible) and primal feasible (dual optimal) bases The dual simplex tableau, dual optimality and the dual pivot rules Classical applications of linear

More information

EE/AA 578, Univ of Washington, Fall Duality

EE/AA 578, Univ of Washington, Fall Duality 7. Duality EE/AA 578, Univ of Washington, Fall 2016 Lagrange dual problem weak and strong duality geometric interpretation optimality conditions perturbation and sensitivity analysis examples generalized

More information

Optimization Tutorial 1. Basic Gradient Descent

Optimization Tutorial 1. Basic Gradient Descent E0 270 Machine Learning Jan 16, 2015 Optimization Tutorial 1 Basic Gradient Descent Lecture by Harikrishna Narasimhan Note: This tutorial shall assume background in elementary calculus and linear algebra.

More information

Linear programming I João Carlos Lourenço

Linear programming I João Carlos Lourenço Decision Support Models Linear programming I João Carlos Lourenço joao.lourenco@ist.utl.pt Academic year 2012/2013 Readings: Hillier, F.S., Lieberman, G.J., 2010. Introduction to Operations Research, 9th

More information

Introduction to Operations Research

Introduction to Operations Research Introduction to Operations Research Linear Programming Solving Optimization Problems Linear Problems Non-Linear Problems Combinatorial Problems Linear Problems Special form of mathematical programming

More information

Exam in TMA4180 Optimization Theory

Exam in TMA4180 Optimization Theory Norwegian University of Science and Technology Department of Mathematical Sciences Page 1 of 11 Contact during exam: Anne Kværnø: 966384 Exam in TMA418 Optimization Theory Wednesday May 9, 13 Tid: 9. 13.

More information

Computational Finance

Computational Finance Department of Mathematics at University of California, San Diego Computational Finance Optimization Techniques [Lecture 2] Michael Holst January 9, 2017 Contents 1 Optimization Techniques 3 1.1 Examples

More information

University of California, Davis Department of Agricultural and Resource Economics ARE 252 Lecture Notes 2 Quirino Paris

University of California, Davis Department of Agricultural and Resource Economics ARE 252 Lecture Notes 2 Quirino Paris University of California, Davis Department of Agricultural and Resource Economics ARE 5 Lecture Notes Quirino Paris Karush-Kuhn-Tucker conditions................................................. page Specification

More information

Transportation Algorithm with Volume Discount on Distribution Cost (A Case Study of the Nigerian Bottling Company Plc Owerri Plant)

Transportation Algorithm with Volume Discount on Distribution Cost (A Case Study of the Nigerian Bottling Company Plc Owerri Plant) American Journal of Applied Mathematics and Statistics, 04, Vol., No. 5, 38-33 Available online at http://pubs.sciepub.com/ajams//5/4 Science and Education Publishing DOI:0.69/ajams--5-4 Transportation

More information

Lecture 3. Optimization Problems and Iterative Algorithms

Lecture 3. Optimization Problems and Iterative Algorithms Lecture 3 Optimization Problems and Iterative Algorithms January 13, 2016 This material was jointly developed with Angelia Nedić at UIUC for IE 598ns Outline Special Functions: Linear, Quadratic, Convex

More information

Topic one: Production line profit maximization subject to a production rate constraint. c 2010 Chuan Shi Topic one: Line optimization : 22/79

Topic one: Production line profit maximization subject to a production rate constraint. c 2010 Chuan Shi Topic one: Line optimization : 22/79 Topic one: Production line profit maximization subject to a production rate constraint c 21 Chuan Shi Topic one: Line optimization : 22/79 Production line profit maximization The profit maximization problem

More information

Econ Slides from Lecture 14

Econ Slides from Lecture 14 Econ 205 Sobel Econ 205 - Slides from Lecture 14 Joel Sobel September 10, 2010 Theorem ( Lagrange Multipliers ) Theorem If x solves max f (x) subject to G(x) = 0 then there exists λ such that Df (x ) =

More information

Linear and Combinatorial Optimization

Linear and Combinatorial Optimization Linear and Combinatorial Optimization The dual of an LP-problem. Connections between primal and dual. Duality theorems and complementary slack. Philipp Birken (Ctr. for the Math. Sc.) Lecture 3: Duality

More information

Pattern Classification, and Quadratic Problems

Pattern Classification, and Quadratic Problems Pattern Classification, and Quadratic Problems (Robert M. Freund) March 3, 24 c 24 Massachusetts Institute of Technology. 1 1 Overview Pattern Classification, Linear Classifiers, and Quadratic Optimization

More information

Lecture 4 September 15

Lecture 4 September 15 IFT 6269: Probabilistic Graphical Models Fall 2017 Lecture 4 September 15 Lecturer: Simon Lacoste-Julien Scribe: Philippe Brouillard & Tristan Deleu 4.1 Maximum Likelihood principle Given a parametric

More information

Optimeringslära för F (SF1811) / Optimization (SF1841)

Optimeringslära för F (SF1811) / Optimization (SF1841) Optimeringslära för F (SF1811) / Optimization (SF1841) 1. Information about the course 2. Examples of optimization problems 3. Introduction to linear programming Introduction - Per Enqvist 1 Linear programming

More information