Solution Methods. Richard Lusby. Department of Management Engineering Technical University of Denmark


Lecture Overview
Unconstrained optimization: one variable
Unconstrained optimization: several variables
Quadratic programming
Separable programming
Sequential Unconstrained Minimization Technique (SUMT)

One Variable Unconstrained Optimization
Consider the simplest case: unconstrained optimization with just a single variable x, where the differentiable function f(x) to be maximized is concave.
The necessary and sufficient condition for a particular solution x = x* to be a global maximum is
    df/dx = 0 at x = x*
If this equation can be solved directly, you are done. What if it cannot be solved easily analytically? We can use search procedures to solve it numerically: find a sequence of trial solutions that leads towards the optimal solution.

Bisection Method: One Variable Unconstrained Optimization
Can always be applied when f(x) is concave.
It can also be used for certain other functions: if x* denotes the optimal solution, all that is needed is that
    df/dx > 0  if x < x*
    df/dx = 0  if x = x*
    df/dx < 0  if x > x*
These conditions automatically hold when f(x) is concave.
The sign of the gradient indicates the direction of improvement.

Example
[Figure: a concave function f(x); the maximizer x* is the point where df/dx = 0.]

Bisection Method: Overview
Given two values x̲ < x̄ with f'(x̲) > 0 and f'(x̄) < 0:
Find the midpoint x̂ = (x̲ + x̄)/2
Find the sign of the slope f'(x̂) at the midpoint
The next two values are
    x̄ = x̂  if f'(x̂) < 0
    x̲ = x̂  if f'(x̂) > 0
What is the stopping criterion? x̄ - x̲ < ε

Bisection Method: Example
The Problem: maximize f(x) = 12x - 3x^4 - 2x^6

Bisection Method: What does the function look like?
[Figure: plot of f(x) = 12x - 3x^4 - 2x^6 for 0 ≤ x ≤ 1.5; the curve rises to a maximum of roughly 7.9 near x ≈ 0.84.]

Bisection Method: Calculations

Iteration | f'(x̂)  | x̲        | x̄        | New x̂    | f(x̂)
0         |         | 0        | 2        | 1         | 7.0000
1         | -12.00  | 0        | 1        | 0.5       | 5.7812
2         | 10.12   | 0.5      | 1        | 0.75      | 7.6948
3         | 4.09    | 0.75     | 1        | 0.875     | 7.8439
4         | -2.19   | 0.75     | 0.875    | 0.8125    | 7.8672
5         | 1.31    | 0.8125   | 0.875    | 0.84375   | 7.8829
6         | -0.34   | 0.8125   | 0.84375  | 0.828125  | 7.8815
7         | 0.51    | 0.828125 | 0.84375  | 0.8359375 | 7.8839

(The f'(x̂) column is evaluated at the previous row's midpoint and determines which bound is replaced.)

x* ≈ 0.836, with 0.828125 < x* < 0.84375, and f(x*) ≈ 7.8839
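
A minimal Python sketch of this bisection run (my own code and variable names; the function, the starting bounds 0 and 2, and the use of the sign of f'(x) come from the slides, while the tolerance is my choice to match the table):

    # Bisection for maximize f(x) = 12x - 3x^4 - 2x^6, a concave function.
    def f(x):
        return 12 * x - 3 * x**4 - 2 * x**6

    def f_prime(x):
        return 12 - 12 * x**3 - 12 * x**5

    x_lo, x_hi = 0.0, 2.0          # f'(x_lo) > 0 and f'(x_hi) < 0
    eps = 0.01                     # error tolerance
    while x_hi - x_lo > 2 * eps:
        x_mid = (x_lo + x_hi) / 2
        if f_prime(x_mid) > 0:     # maximum lies to the right of the midpoint
            x_lo = x_mid
        else:                      # maximum lies at or to the left of the midpoint
            x_hi = x_mid

    x_star = (x_lo + x_hi) / 2
    print(x_star, f(x_star))       # about 0.836 and 7.8839, as in the table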

Bisection Method: Continued
Intuitive and straightforward procedure, but it converges slowly.
Each iteration decreases the difference between the bounds by one half.
Only information on the first derivative of f(x) is used; more information could be obtained by also looking at the second derivative f''(x).

Newton Method: Introduction
Basic idea: approximate f(x) within the neighbourhood of the current trial solution by a quadratic function.
This approximation is obtained by truncating the Taylor series after the second-derivative term:
    f(x_{i+1}) ≈ f(x_i) + f'(x_i)(x_{i+1} - x_i) + (f''(x_i)/2)(x_{i+1} - x_i)^2
Having set x_i at iteration i, this is just a quadratic function of x_{i+1}, which can be maximized by setting its derivative to zero.

Newton Method: Overview
Maximize the quadratic approximation
    f(x_{i+1}) ≈ f(x_i) + f'(x_i)(x_{i+1} - x_i) + (f''(x_i)/2)(x_{i+1} - x_i)^2
by setting its derivative with respect to x_{i+1} to zero:
    f'(x_{i+1}) ≈ f'(x_i) + f''(x_i)(x_{i+1} - x_i) = 0
This gives the update
    x_{i+1} = x_i - f'(x_i)/f''(x_i)
What is the stopping criterion? |x_{i+1} - x_i| < ε

Newton Method: Same Example
The Problem: maximize f(x) = 12x - 3x^4 - 2x^6
    f'(x) = 12 - 12x^3 - 12x^5
    f''(x) = -36x^2 - 60x^4
    x_{i+1} = x_i + (1 - x_i^3 - x_i^5) / (3x_i^2 + 5x_i^4)

Newton Method: The function again
[Figure: the same plot of f(x) = 12x - 3x^4 - 2x^6, with its maximum near x ≈ 0.84.]

Newton Method: Calculations

Iteration | x_i     | f(x_i) | f'(x_i) | f''(x_i) | x_{i+1}
1         | 1       | 7.0000 | -12.00  | -96.00   | 0.875
2         | 0.875   | 7.8439 | -2.1940 | -62.733  | 0.84003
3         | 0.84003 | 7.8838 | -0.1325 | -55.279  | 0.83763
4         | 0.83763 | 7.8839 | -0.0006 | -54.790  | 0.83762

x* ≈ 0.83763, f(x*) ≈ 7.8839
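
The same iterates can be reproduced with a minimal sketch (my own code; the update and stopping rule are the ones stated above, and x = 1 is the starting point used in the table):

    def f_prime(x):
        return 12 - 12 * x**3 - 12 * x**5

    def f_second(x):
        return -36 * x**2 - 60 * x**4

    x, eps = 1.0, 1e-4
    for i in range(1, 20):
        x_next = x - f_prime(x) / f_second(x)   # Newton update
        print(i, x, x_next)
        if abs(x_next - x) < eps:               # stopping criterion |x_{i+1} - x_i| < eps
            break
        x = x_next
    # prints the iterates 0.875, 0.84003, 0.83763, 0.83762 from the table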

Several Variables: Newton Still Works
Newton (multivariable): given the current iterate x', the next iterate x'' maximizes the quadratic approximation
    f(x') + ∇f(x')(x - x') + (1/2)(x - x')^T H(x')(x - x')
which gives
    x'' = x' - H(x')^{-1} ∇f(x')^T
Gradient search: the next iterate maximizes f along the gradient ray
    maximize g(t) = f(x' + t ∇f(x')^T) subject to t ≥ 0
    x'' = x' + t* ∇f(x')^T
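
As a small illustration of the multivariable Newton step (my own sketch, not from the slides), here it is applied to the gradient-search example on the next slides, f(x, y) = 2xy + 2y - x^2 - 2y^2. Because f is quadratic, a single Newton step from (0, 0) lands exactly on the maximizer (1, 1):

    import numpy as np

    def grad(p):                        # gradient of f(x, y) = 2xy + 2y - x^2 - 2y^2
        x, y = p
        return np.array([2*y - 2*x, 2*x + 2 - 4*y])

    H = np.array([[-2.0,  2.0],         # Hessian of f (constant, since f is quadratic)
                  [ 2.0, -4.0]])

    p = np.array([0.0, 0.0])                     # current iterate x'
    p_next = p - np.linalg.solve(H, grad(p))     # x'' = x' - H(x')^{-1} grad f(x')^T
    print(p_next)                                # [1. 1.]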

Example
The Problem: maximize f(x, y) = 2xy + 2y - x^2 - 2y^2

Example Continued
The vector of partial derivatives (the gradient) is given as
    ∇f(x) = (∂f/∂x_1, ∂f/∂x_2, ..., ∂f/∂x_n)
Here
    ∂f/∂x = 2y - 2x
    ∂f/∂y = 2x + 2 - 4y
Suppose we select the point (x, y) = (0, 0) as our initial point:
    ∇f(0, 0) = (0, 2)

Example Continued
Perform an iteration:
    x = 0 + t(0) = 0
    y = 0 + t(2) = 2t
Substituting these expressions into f we get
    f(x' + t ∇f(x')) = f(0, 2t) = 4t - 8t^2
Differentiate with respect to t:
    d/dt (4t - 8t^2) = 4 - 16t = 0
Therefore t* = 1/4, and
    x'' = (0, 0) + (1/4)(0, 2) = (0, 1/2)

Example: Perform a Second Iteration
The gradient at x' = (0, 1/2) is ∇f(0, 1/2) = (1, 0).
Determine the step length:
    x = (0, 1/2) + t(1, 0)
Substituting these expressions into f we get
    f(x' + t ∇f(x')) = f(t, 1/2) = t - t^2 + 1/2
Differentiate with respect to t:
    d/dt (t - t^2 + 1/2) = 1 - 2t = 0
Therefore t* = 1/2, and
    x'' = (0, 1/2) + (1/2)(1, 0) = (1/2, 1/2)

[Figure: the gradient search path in the (x, y) plane, zig-zagging through (0, 1/2), (1/2, 1/2), (1/2, 3/4), (3/4, 3/4), (3/4, 7/8), (7/8, 7/8), ... and converging to the maximizer (1, 1).]
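
The whole gradient search is easy to script; below is a minimal sketch (my own code) that reproduces the path in the figure. Since f is quadratic here, the exact line search along the gradient direction d has the closed form t* = -(d·d)/(d^T H d):

    import numpy as np

    def grad(p):
        x, y = p
        return np.array([2*y - 2*x, 2*x + 2 - 4*y])

    H = np.array([[-2.0, 2.0], [2.0, -4.0]])   # Hessian of f

    p = np.array([0.0, 0.0])
    for k in range(1, 11):
        d = grad(p)
        if np.linalg.norm(d) < 1e-6:           # stop when the gradient vanishes
            break
        t = -(d @ d) / (d @ H @ d)             # exact maximizer of g(t) = f(p + t d)
        p = p + t * d
        print(k, p)   # (0, 0.5), (0.5, 0.5), (0.5, 0.75), (0.75, 0.75), ... -> (1, 1)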

Quadratic Programming
    maximize   c^T x - (1/2) x^T Q x
    subject to Ax ≤ b    (λ)
               x ≥ 0     (µ)
Q is symmetric and positive semidefinite.
Lagrangian: L(x, λ, µ) = c^T x - (1/2) x^T Q x + λ^T (b - Ax) + µ^T x
Applying the KKT conditions yields
    Qx + A^T λ - µ = c
    Ax + v = b
    x, λ, µ, v ≥ 0
    x^T µ = 0, λ^T v = 0   (complementarity constraints)

Example
Problem:
    minimize   f(x)
    subject to g_1(x) ≤ 0
               g_2(x) ≤ 0
KKT Conditions:
    ∇f(x) + u_1 ∇g_1(x) + u_2 ∇g_2(x) = 0
    u_1 g_1(x) = 0, u_1 ≥ 0
    u_2 g_2(x) = 0, u_2 ≥ 0

QP Solution
KKT Conditions:
    Qx + A^T λ - µ = c
    Ax + v = b
    x, λ, µ, v ≥ 0
    x^T µ = 0, λ^T v = 0
Add artificial variables to the constraints with positive c_j.
Subtract artificial variables from the constraints with negative b_i.
The initial basic variables are the artificial variables, some of the µ, and some of the v.
Do phase 1 of the simplex method with a restricted entry rule; this ensures the complementarity constraints are always satisfied.

Example
The Problem:
    maximize   15x + 30y + 4xy - 2x^2 - 4y^2
    subject to x + 2y ≤ 30
               x, y ≥ 0

Example Continued
The parameters are as follows:
    c = (15, 30)^T,  A = (1  2),  x = (x, y)^T,  Q = [4  -4; -4  8],  b = 30
The nonlinear component of the objective function is
    -(1/2) x^T Q x = -2x^2 + 4xy - 4y^2

Example: New Linear Program
    Minimize   Z = z_1 + z_2
    subject to  4x - 4y +  λ - µ_1 + z_1 = 15
               -4x + 8y + 2λ - µ_2 + z_2 = 30
                x + 2y + v = 30
                x, y, λ, µ_1, µ_2, v, z_1, z_2 ≥ 0
Complementarity conditions: x µ_1 = 0, y µ_2 = 0, v λ = 0

Modified Simplex: Initial Tableau

BV  | Z  | x  | y  | λ  | µ_1 | µ_2 | v | z_1 | z_2 | rhs
Z   | -1 | 0  | -4 | -3 | 1   | 1   | 0 | 0   | 0   | -45
z_1 | 0  | 4  | -4 | 1  | -1  | 0   | 0 | 1   | 0   | 15
z_2 | 0  | -4 | 8  | 2  | 0   | -1  | 0 | 0   | 1   | 30
v   | 0  | 1  | 2  | 0  | 0   | 0   | 1 | 0   | 0   | 30

Modified Simplex: First Pivot (y enters, z_2 leaves)

BV  | Z  | x    | y | λ    | µ_1 | µ_2  | v | z_1 | z_2  | rhs
Z   | -1 | -2   | 0 | -2   | 1   | 1/2  | 0 | 0   | 1/2  | -30
z_1 | 0  | 2    | 0 | 2    | -1  | -1/2 | 0 | 1   | 1/2  | 30
y   | 0  | -1/2 | 1 | 1/4  | 0   | -1/8 | 0 | 0   | 1/8  | 15/4
v   | 0  | 2    | 0 | -1/2 | 0   | 1/4  | 1 | 0   | -1/4 | 45/2

Modified Simplex: Second Pivot (x enters, v leaves)

BV  | Z  | x | y | λ    | µ_1 | µ_2   | v   | z_1 | z_2  | rhs
Z   | -1 | 0 | 0 | -5/2 | 1   | 3/4   | 1   | 0   | 1/4  | -15/2
z_1 | 0  | 0 | 0 | 5/2  | -1  | -3/4  | -1  | 1   | 3/4  | 15/2
y   | 0  | 0 | 1 | 1/8  | 0   | -1/16 | 1/4 | 0   | 1/16 | 75/8
x   | 0  | 1 | 0 | -1/4 | 0   | 1/8   | 1/2 | 0   | -1/8 | 45/4

Modified Simplex: Final Tableau (λ enters, z_1 leaves)

BV | Z  | x | y | λ | µ_1   | µ_2   | v    | z_1   | z_2   | rhs
Z  | -1 | 0 | 0 | 0 | 0     | 0     | 0    | 1     | 1     | 0
λ  | 0  | 0 | 0 | 1 | -2/5  | -3/10 | -2/5 | 2/5   | 3/10  | 3
y  | 0  | 0 | 1 | 0 | 1/20  | -1/40 | 3/10 | -1/20 | 1/40  | 9
x  | 0  | 1 | 0 | 0 | -1/10 | 1/20  | 2/5  | 1/10  | -1/20 | 12

Both artificial variables have left the basis, so the phase-1 problem is solved. Reading off the solution: x = 12, y = 9, λ = 3 (with µ_1 = µ_2 = 0 and v = 0).
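
As a sanity check on the final tableau, here is a small sketch (my own code) that plugs the solution read off above, (x, y) = (12, 9) with λ = 3, µ = 0 and v = 0, back into the KKT system stated earlier:

    import numpy as np

    Q = np.array([[4.0, -4.0], [-4.0, 8.0]])
    c = np.array([15.0, 30.0])
    A = np.array([[1.0, 2.0]])
    b = np.array([30.0])

    x = np.array([12.0, 9.0])       # primal solution from the final tableau
    lam = np.array([3.0])           # dual variable on x + 2y <= 30
    mu = np.zeros(2)                # multipliers on x, y >= 0
    v = b - A @ x                   # slack of the constraint

    print(Q @ x + A.T @ lam - mu - c)   # stationarity: [0. 0.]
    print(v)                            # constraint slack: [0.]
    print(x @ mu, lam @ v)              # complementarity: 0.0 0.0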

Separable Programming
The Problem:
    maximize   Σ_j f_j(x_j)
    subject to Ax ≤ b
               x ≥ 0
Each f_j is approximated by a piecewise-linear function:
    f(y) = s_1 y_1 + s_2 y_2 + s_3 y_3
    y = y_1 + y_2 + y_3
    0 ≤ y_1 ≤ u_1,  0 ≤ y_2 ≤ u_2,  0 ≤ y_3 ≤ u_3

Separable Programming Continued
Special restrictions:
    y_2 = 0 whenever y_1 < u_1
    y_3 = 0 whenever y_2 < u_2
If each f_j is concave, these restrictions are automatically satisfied by the simplex method. Why?

Sequential Unconstrained Minimization Technique (SUMT)
The Problem:
    maximize   f(x)
    subject to g(x) ≤ b
               x ≥ 0
For a sequence of decreasing positive values r, solve
    maximize f(x) - r B(x)
B is a barrier function approaching ∞ as a feasible point approaches the boundary of the feasible region. For example,
    B(x) = Σ_i 1/(b_i - g_i(x)) + Σ_j 1/x_j

SUMT: Example
The Problem:
    maximize   xy
    subject to x^2 + y ≤ 3
               x, y ≥ 0

r     | x     | y
1     | 0.90  | 1.36
10^-2 | 0.987 | 1.95
10^-4 | 0.998 | 1.993

Class exercise: verify that the KKT conditions are satisfied at x = 1 and y = 2.
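
A minimal SUMT sketch for this example (my own code; it uses scipy's Nelder-Mead simply as a generic unconstrained optimizer for each subproblem, which is an implementation choice and not something prescribed by the slides). Each subproblem is warm-started from the previous solution, as is usual for SUMT:

    import numpy as np
    from scipy.optimize import minimize

    def penalized(p, r):
        x, y = p
        slack = 3 - x**2 - y
        if slack <= 0 or x <= 0 or y <= 0:    # outside the interior: reject the point
            return np.inf
        barrier = 1.0 / slack + 1.0 / x + 1.0 / y
        return -(x * y - r * barrier)         # negate so that minimizing maximizes f - rB

    p = np.array([0.5, 0.5])                  # interior starting point
    for r in [1.0, 1e-2, 1e-4]:
        p = minimize(penalized, p, args=(r,), method="Nelder-Mead").x
        print(r, p)                           # approaches (x, y) = (1, 2) as r -> 0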

Class Exercises
Separable programming:
    maximize   32x - x^4 + 4y - y^2
    subject to x^2 + y^2 ≤ 9
               x, y ≥ 0
Formulate this as an LP model, using x = 0, 1, 2, 3 and y = 0, 1, 2, 3 as breakpoints for the approximating piecewise-linear functions.
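
One way to set up the requested LP is sketched below (my own code, and it assumes the reading of the exercise given above). Each of x and y is split into three unit-length segment variables whose objective and constraint coefficients are the slopes of 32x - x^4, 4y - y^2 and t^2 between consecutive breakpoints; because the objective terms are concave and the constraint terms convex, the simplex method fills the segments in the right order automatically, as noted on the Separable Programming slide:

    import numpy as np
    from scipy.optimize import linprog

    breaks = [0, 1, 2, 3]
    f1 = lambda x: 32 * x - x**4        # objective term in x
    f2 = lambda y: 4 * y - y**2         # objective term in y
    g  = lambda t: t**2                 # constraint term in both x and y

    def slopes(fn):                     # slope of each unit-length segment
        return [fn(b1) - fn(b0) for b0, b1 in zip(breaks, breaks[1:])]

    obj = slopes(f1) + slopes(f2)       # objective coefficients for x_1..x_3, y_1..y_3
    con = slopes(g) + slopes(g)         # constraint coefficients (x^2 + y^2 <= 9)

    # maximize obj.z  subject to  con.z <= 9,  0 <= z_k <= 1 for each segment variable
    res = linprog(c=-np.array(obj), A_ub=[con], b_ub=[9], bounds=[(0, 1)] * 6)
    z = res.x
    print("x =", sum(z[:3]), " y =", sum(z[3:]), " objective =", -res.fun)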

Class Exercises Continued
Sequential Unconstrained Minimization Technique: if f is concave and g_i is convex for all i, show that the following function is concave:
    f(x) - r ( Σ_i 1/(b_i - g_i(x)) + Σ_j 1/x_j )