Optimization over a polyhedron
Chapter 5

Optimization over a polyhedron

A significant class of mathematical programming problems with a convex feasible region is the case of a polyhedron. Of course the optimality conditions described in Chapter 4 apply to this problem as well; however, linearity allows us to obtain more specific conditions. Assume without loss of generality that S = {x ∈ R^n : Ax ≥ b}, where A is an m × n real matrix and b ∈ R^m. The problem under consideration is

    min f(x)
    s.t. Ax ≥ b                                          (P-POL)

where f is continuously differentiable.

5.1 Feasible directions on a polyhedron

Consider the following definition:

Definition 5.2 (Binding (or active) constraints) Let S = {x ∈ R^n : Ax ≥ b} and x̄ ∈ S. If a_i^T x̄ = b_i we say that the i-th constraint is binding (or active) at x̄. For any x̄ ∈ S we denote by I(x̄) the index set of all the constraints active at x̄, namely:

    I(x̄) = {i ∈ {1, ..., m} : a_i^T x̄ = b_i}.

The definition of active constraint can be given for a polyhedron in any form. Given S = {x ∈ R^n : Ax = b}, we have that I(x̄) = {1, ..., m}.
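As a small computational companion to the definition, the active set I(x̄) can be computed directly from A, b and x̄. This is a minimal sketch: the polyhedron below, the tolerance `tol` and the 0-based indexing are choices of this example, not data from the text.

```python
# Active-set computation from Definition 5.2 for S = {x : A x >= b}.
def active_set(A, b, x, tol=1e-9):
    """Return the indices i with a_i^T x = b_i (up to tol)."""
    return [i for i, (row, bi) in enumerate(zip(A, b))
            if abs(sum(a * xj for a, xj in zip(row, x)) - bi) <= tol]

A = [[1, 0], [0, 1], [1, 1]]     # x1 >= 0, x2 >= 0, x1 + x2 >= 1
b = [0, 0, 1]
print(active_set(A, b, [0.0, 1.0]))   # [0, 2]: first and third rows are active
```

At x̄ = (0, 1) the constraints x1 ≥ 0 and x1 + x2 ≥ 1 hold with equality, so only those two indices are returned.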
Given a polyhedron in standard form S = {x ∈ R^n : Ax = b, x ≥ 0}, we have I(x̄) = {1, ..., m} ∪ {j ∈ {1, ..., n} : x̄_j = 0}.

Theorem 5.3 (Feasible directions) Let S = {x ∈ R^n : Ax ≥ b} and x̄ ∈ S with I(x̄) = {i ∈ {1, ..., m} : a_i^T x̄ = b_i}. A vector d is a feasible direction at x̄ if and only if

    a_i^T d ≥ 0   for all i ∈ I(x̄).                      (5.1)

Furthermore, given a feasible direction d, the point x̄ + t d is feasible for every t satisfying

    0 < t ≤ t_max = min_{j ∉ I(x̄) : a_j^T d < 0} (a_j^T x̄ − b_j) / |a_j^T d|,   (5.2)

where, in case {j ∉ I(x̄) : a_j^T d < 0} is empty, t_max = +∞.

Proof. Let x̄ be feasible, i.e. a_i^T x̄ ≥ b_i, i = 1, ..., m. Consider x = x̄ + td with t > 0. We look for conditions on d which guarantee that a_i^T (x̄ + td) = a_i^T x̄ + t a_i^T d ≥ b_i for t ∈ (0, t_max]. For any i ∈ I(x̄) we have a_i^T x̄ = b_i, hence a_i^T x̄ + t a_i^T d = b_i + t a_i^T d. So for all i ∈ I(x̄):

    a_i^T x̄ + t a_i^T d ≥ b_i for all t > 0 if and only if a_i^T d ≥ 0.

Consider now j ∉ I(x̄), so that a_j^T x̄ > b_j and hence a_j^T x̄ + t a_j^T d > b_j + t a_j^T d. We have two cases: either 1. a_j^T d ≥ 0 or 2. a_j^T d < 0. If a_j^T d ≥ 0, then a_j^T x̄ + t a_j^T d > b_j for all t > 0. If a_j^T d < 0, a sufficiently small stepsize t must be chosen such that a_j^T x̄ − t |a_j^T d| ≥ b_j. In either case we get no condition on d, but only on t. In particular, the value of t_max can be found by considering only the inactive constraints with a_j^T d < 0 and solving in t

    a_j^T (x̄ + td) = a_j^T x̄ + t a_j^T d = a_j^T x̄ − t |a_j^T d| ≥ b_j,   j ∉ I(x̄) with a_j^T d < 0,

which gives t ≤ (a_j^T x̄ − b_j) / |a_j^T d| for every such j. Hence we get (5.2). □

Let A_{I(x̄)} be the |I(x̄)| × n submatrix of A made up of the rows with index in I(x̄), A_{I(x̄)} = (a_i^T)_{i ∈ I(x̄)}.
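The stepsize bound (5.2) is straightforward to implement. A minimal sketch for S = {x : Ax ≥ b}; the example polyhedron and the tolerance are assumptions of this sketch.

```python
# Maximum feasible stepsize t_max from (5.2) for S = {x : A x >= b}.
def dot(u, v):
    return sum(a * c for a, c in zip(u, v))

def max_step(A, b, x, d, tol=1e-9):
    """min over inactive rows with a_j^T d < 0 of (a_j^T x - b_j) / |a_j^T d|."""
    t = float("inf")
    for row, bj in zip(A, b):
        ax, ad = dot(row, x), dot(row, d)
        if ax - bj > tol and ad < -tol:      # inactive row that d decreases
            t = min(t, (ax - bj) / (-ad))
    return t

A = [[1, 0], [0, 1]]                         # x1 >= 0, x2 >= 0
b = [0, 0]
print(max_step(A, b, [2.0, 1.0], [-1.0, -1.0]))   # 1.0 = min(2/1, 1/1)
```

With d = (−1, −1) the bound is attained on the constraint x2 ≥ 0, which becomes active at x̄ + t_max d = (1, 0).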
Let x̄ be such that A x̄ ≥ b; the feasible directions at x̄ are the solutions of the linear system of inequalities

    A_{I(x̄)} d ≥ 0,

where A_{I(x̄)} = (a_i^T)_{i ∈ I(x̄)} and I(x̄) = {i : a_i^T x̄ = b_i}.

We can give a characterization also for polyhedra in a different form. Consider first the case S_= = {x ∈ R^n : Ax = b} and let x̄ ∈ S_=. By definition a feasible direction d satisfies A(x̄ + td) = b for all sufficiently small t > 0. We get

    A x̄ + tAd = b + tAd = b   ⟺   Ad = 0 for all t.

We note that in this case both +d and −d are feasible directions.

Proposition 5.4 Given S_= = {x ∈ R^n : Ax = b} and x̄ ∈ S_=, a vector d ∈ R^n is a feasible direction at x̄ if and only if

    Ad = 0.                                              (5.3)

As an example consider

    x1 + x2 − x3 = 1
    x1 − x2 = 2                                          (5.4)

Then Ad = 0 is

    ( 1  1  −1 ; 1  −1  0 ) (d1, d2, d3)^T = 0,

which has infinitely many solutions, of the type d = (t, t, 2t)^T = t (1, 1, 2)^T with t ∈ R.

Theorem 5.5 (Feasible directions over a standard-form polyhedron) Given S = {x ∈ R^n : Ax = b, x ≥ 0} and x̄ ∈ S, let J(x̄) = {j ∈ {1, ..., n} : x̄_j = 0}. A vector d ∈ R^n is a feasible direction at x̄ if and only if

    Ad = 0,                                              (5.5)
    d_j ≥ 0 for every j ∈ J(x̄).
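Proposition 5.4 can be verified numerically on system (5.4): every member of the family d = t (1, 1, 2)^T given above satisfies Ad = 0. A minimal sketch:

```python
# Verification of Proposition 5.4 on system (5.4): each d = t*(1, 1, 2)
# satisfies A d = 0, hence is a feasible direction for {x : A x = b}.
def matvec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

A = [[1, 1, -1], [1, -1, 0]]    # rows of (5.4)
for t in (-2.0, 0.5, 3.0):
    d = [t, t, 2 * t]           # the family of solutions given in the text
    assert matvec(A, d) == [0.0, 0.0]
print("A d = 0 for every tested direction d = t*(1, 1, 2)")
```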
5.6 Optimality conditions over Ax = b

Consider a problem of the type

    min f(x)
    s.t. Ax = b                                          (P-EQ)

where A is an m × n matrix with rows a_j^T, and b ∈ R^m with components b_j.

Proposition 5.7 Let x* ∈ S be a local minimizer of problem (P-EQ) and assume that f is continuously differentiable over R^n. Then it holds:

    ∇f(x*)^T d ≥ 0   for all d ∈ R^n : Ad = 0.

(Note that, since with d also −d satisfies Ad = 0, this condition in fact implies ∇f(x*)^T d = 0 for all such d.)

We can enter more into the details of the condition above. Assume to this aim that the matrix A has rank equal to m. This assumption is purely simplifying: the result obtained is valid also in the general case, but the proof is more complex. Let A_1, A_2, ..., A_n be the columns of A, so that the constraints can be written as:

    A_1 x_1 + A_2 x_2 + ... + A_n x_n − b = 0.           (5.6)

Further we observe that ∇_x (Ax − b) = A^T.

We can extract from the matrix A m linearly independent columns and, after a reordering, we assume that they are the first m, A_1, A_2, ..., A_m. We define

    B = [A_1, A_2, ..., A_m],   N = [A_{m+1}, A_{m+2}, ..., A_n],   A = (B N),

    x_B = (x_1, ..., x_m)^T,   x_N = (x_{m+1}, ..., x_n)^T,
    d_B = (d_1, ..., d_m)^T,   d_N = (d_{m+1}, ..., d_n)^T,

where B is a square nonsingular m × m matrix. B is called the basis matrix and N the non-basis matrix; the components of x_B are called basic variables, whereas those of x_N are the non-basic variables. Accordingly, the vector d is partitioned into the basic direction d_B and the non-basic direction d_N. Constraint (5.6) becomes:

    B x_B + N x_N − b = 0,                               (5.7)

and (5.3) becomes:

    B d_B + N d_N = 0.                                   (5.8)

Since B is nonsingular, from (5.8) we get:

    d_B = −B^{-1} N d_N.                                 (5.9)
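Formula (5.9) can be checked on system (5.4), anticipating the worked example that follows; the basis choice B = [A1, A2] is the one used in the text.

```python
# Basic/non-basic split of a feasible direction, equation (5.9): for system
# (5.4) with basis B = [A1, A2], the basic direction is d_B = -B^{-1} N d_N.
def inv2(M):
    """Inverse of a 2x2 matrix."""
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

B = [[1, 1], [1, -1]]      # basic columns A1, A2 of (5.4)
Ncol = [-1, 0]             # non-basic column A3
d3 = 2.0                   # the non-basic direction d_N = d3 is free
Binv = inv2(B)
dB = [-(Binv[i][0] * Ncol[0] + Binv[i][1] * Ncol[1]) * d3 for i in range(2)]
print(dB)   # [1.0, 1.0], i.e. (1/2, 1/2)^T * d3 as in the text
```

With d3 = 2 the full direction is d = (1, 1, 2)^T, a member of the family t (1, 1, 2)^T found for (5.4).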
Hence, with reference to a basis B, any feasible direction can be expressed as

    d = ( −B^{-1} N d_N ; d_N ),                         (5.10)

where d_N ∈ R^{n−m} can take any value.

As an example consider system (5.4); we have

    B = ( 1  1 ; 1  −1 ),   N = ( −1 ; 0 ),
    x_B = (x1, x2)^T,   x_N = x3,   d_B = (d1, d2)^T,   d_N = d3.

Hence we get

    d_B = (d1, d2)^T = −B^{-1} N d3 = (1/2, 1/2)^T d3.

We also partition the gradient of f as follows:

    ∇f(x) = ( ∇_B f(x) ; ∇_N f(x) ),                     (5.11)

where we denote by ∇_B f(x) the vector of the ∂f/∂x_i with i ∈ B and by ∇_N f(x) the vector of the ∂f/∂x_i with i ∈ N. The necessary condition of Proposition 5.7 can be written as:

    ∇_B f(x)^T d_B + ∇_N f(x)^T d_N ≥ 0   for every d_B ∈ R^m, d_N ∈ R^{n−m} : B d_B + N d_N = 0.

Using (5.9) we get

    −∇_B f(x)^T B^{-1} N d_N + ∇_N f(x)^T d_N ≥ 0   for all d_N ∈ R^{n−m}.

Hence:

    ( −∇_B f(x)^T B^{-1} N + ∇_N f(x)^T ) d_N ≥ 0   for all d_N ∈ R^{n−m},   (5.12)

which gives

    ( ∇_N f(x) − N^T (B^{-1})^T ∇_B f(x) )^T d_N ≥ 0   for all d_N ∈ R^{n−m}.

Reasoning as in the unconstrained case (see Theorem 3.18), the preceding inequality holds for all d_N ∈ R^{n−m} only when

    ∇_N f(x) − N^T (B^{-1})^T ∇_B f(x) = 0;              (5.13)

indeed, otherwise, taking d_N = −[ ∇_N f(x) − N^T (B^{-1})^T ∇_B f(x) ], the inequality would not be satisfied. The left-hand side of (5.13) is the gradient of the objective function in the only variables x_N ∈ R^{n−m}. Indeed, using x_B = B^{-1} b − B^{-1} N x_N,
problem (P-EQ) can be written as

    min_{x_N ∈ R^{n−m}} f( B^{-1} b − B^{-1} N x_N , x_N ),

which is an unconstrained problem in the variables x_N of the composite function f(x_B(x_N), x_N).

Such a condition can be expressed using the definition of the Lagrangian function for problem (P-EQ), which is

    L(x, μ) = f(x) + Σ_{j=1}^{m} μ_j (a_j^T x − b_j),

or, using vector notation,

    L(x, μ) = f(x) + μ^T (Ax − b).

Finally we get the theorem.

Proposition 5.8 (First order optimality condition over Ax = b) Let x* ∈ S be a local minimizer of problem (P-EQ) and assume that f is continuously differentiable over R^n. Then multipliers μ_1, μ_2, ..., μ_m exist such that:

    ∇_x L(x*, μ) = ∇f(x*) + Σ_{j=1}^{m} μ_j a_j = 0,     (5.14)

or, using vector notation,

    ∇_x L(x*, μ) = ∇f(x*) + A^T μ = 0.                   (5.15)

Proof. Condition (5.15) can be written using the partition (B, N) as

    ( ∇_B f(x) ; ∇_N f(x) ) + ( B^T ; N^T ) μ = 0,

that is,

    ∇_B f(x) + B^T μ = 0,
    ∇_N f(x) + N^T μ = 0.

From the first equation we get

    μ = −(B^{-1})^T ∇_B f(x),                            (5.16)

and substituting back into the second equation we get (5.13). □

Given a minimization problem with only equality constraints Ax = b, the candidates to be local minimizers are the stationary points of the Lagrangian function:

    ∇_x L(x, μ) = ∇f(x) + A^T μ = 0,
    ∇_μ L(x, μ) = Ax − b = 0.
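The elimination of x_B and the multiplier formula (5.16) can be illustrated on system (5.4). The objective f(x) = 0.5·||x||^2 is a hypothetical choice of this sketch (the text attaches no objective to that example); everything else follows (5.9), (5.13), (5.15) and (5.16).

```python
# Reduced problem and multipliers for constraints (5.4), with the hypothetical
# objective f(x) = 0.5*||x||^2, for which grad f(x) = x.
def inv2(M):
    """Inverse of a 2x2 matrix."""
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

A = [[1, 1, -1], [1, -1, 0]]   # rows of (5.4); basis B = [A1, A2], N = [A3]
B = [[1, 1], [1, -1]]
Ncol = [-1, 0]
rhs = [1, 2]
Binv = inv2(B)
BinvN = [Binv[i][0] * Ncol[0] + Binv[i][1] * Ncol[1] for i in range(2)]

def x_of(s):
    """x(s) = (x_B(s), s) with x_B = B^{-1}(b - N x_N) and x_N = x3 = s."""
    u = [rhs[0] - Ncol[0] * s, rhs[1] - Ncol[1] * s]
    return [Binv[0][0] * u[0] + Binv[0][1] * u[1],
            Binv[1][0] * u[0] + Binv[1][1] * u[1], s]

def reduced_grad(s):
    """Left-hand side of (5.13): grad_N f - N^T (B^{-1})^T grad_B f at x(s)."""
    x1, x2, x3 = x_of(s)
    return x3 - (BinvN[0] * x1 + BinvN[1] * x2)

s_star = -1.0 / 3.0            # zero of the reduced gradient, found by hand
x1, x2, x3 = x_of(s_star)      # x* = (4/3, -2/3, -1/3)
mu = [-(Binv[0][i] * x1 + Binv[1][i] * x2) for i in range(2)]   # (5.16)

# Lagrangian stationarity (5.15): grad f(x*) + A^T mu = 0
residual = [x + A[0][j] * mu[0] + A[1][j] * mu[1]
            for j, x in enumerate((x1, x2, x3))]
print(abs(reduced_grad(s_star)) < 1e-12)        # True
print(max(abs(r) for r in residual) < 1e-12)    # True
```

The multipliers come out as μ = (−1/3, −1), and the same point x* is obtained by minimizing the reduced function of x_N alone, as the derivation above predicts.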
5.9 Optimality conditions over a polyhedron

Using the definition of feasible direction, we can apply Theorem 3.7 to get

Theorem 5.10 (First order necessary condition over a polyhedron) If x* is a local minimizer of problem (P-POL), then we have

    ∇f(x*)^T d ≥ 0   for every direction d ∈ R^n : a_i^T d ≥ 0, i ∈ I(x*) = {i : a_i^T x* = b_i}.

This condition can be stated as the non-existence of a solution of a linear system. Indeed we can write:

If x* is a local minimizer of problem (P-POL), then there exists NO solution d ∈ R^n of the system

    A_{I(x*)} d ≥ 0,
    ∇f(x*)^T d < 0.                                      (5.17)

Example 5.11 Consider the problem

    min (x1 + 10)^2 + (x2 − 12)^2
    s.t. 2x1 + x2 ≤ 12
         x1 + x2 ≤ 12
         x1 ≥ 0, x2 ≥ 0

and let x̄ = (0, 12)^T be a minimizer (see Figure 5.1). The gradient of the objective function is

    ∇f(x) = ( 2(x1 + 10), 2(x2 − 12) )^T,

so that ∇f(x̄) = (20, 0)^T. The constraints active at x̄ are the first two and x1 ≥ 0, so system (5.17) is written as

    2d1 + d2 ≤ 0
    d1 + d2 ≤ 0
    d1 ≥ 0
    20 d1 < 0                                            (5.18)

which does not have a solution, since d1 ≥ 0 and 20 d1 < 0 are incompatible.

Under a convexity assumption on the objective function we get the following (since the feasible region is convex too):
Figure 5.1: Polyhedron of Example 5.11.

Theorem 5.12 (Necessary and sufficient condition for convex problems) Let f(x) be a convex function over R^n. A point x* ∈ S is a global solution of problem (P-POL) if and only if

    ∇f(x*)^T d ≥ 0   for all d ∈ R^n : a_i^T d ≥ 0, i ∈ I(x*) = {i : a_i^T x* = b_i},

which can be restated as follows: let f(x) be a convex function over R^n; a point x* ∈ S is a global solution of problem (P-POL) if and only if there exists NO solution d ∈ R^n of the system

    A_{I(x*)} d ≥ 0,
    ∇f(x*)^T d < 0.                                      (5.19)

Example 5.13 Consider again Example 5.11. The objective function is strictly convex; hence the point (0, 12)^T, which satisfies the condition, is a global minimizer.
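Both checks on Example 5.11 can be carried out numerically. The cone generators and the objective constants used below are assumptions of this sketch, worked out by hand from the gradient ∇f(x̄) = (20, 0)^T and the feasible-direction system (5.18).

```python
# 1) Condition of Theorem 5.10 at x_bar = (0, 12): the feasible-direction cone
# {2*d1 + d2 <= 0, d1 + d2 <= 0, d1 >= 0} of system (5.18) is generated by
# (0, -1) and (1, -2), so it suffices to test grad^T d >= 0 on the generators.
grad = (20.0, 0.0)
gens = [(0.0, -1.0), (1.0, -2.0)]
in_cone = all(2 * d1 + d2 <= 0 and d1 + d2 <= 0 and d1 >= 0 for d1, d2 in gens)
no_descent = all(grad[0] * d1 + grad[1] * d2 >= 0 for d1, d2 in gens)
print(in_cone and no_descent)   # True

# 2) Grid sanity check of global optimality (Example 5.13); the objective
# constants are a reconstruction consistent with the gradient in the text.
def f(x1, x2):
    return (x1 + 10.0) ** 2 + (x2 - 12.0) ** 2

def feasible(x1, x2):
    return 2 * x1 + x2 <= 12 and x1 + x2 <= 12 and x1 >= 0 and x2 >= 0

fbar = f(0.0, 12.0)
grid = [(i * 0.5, j * 0.5) for i in range(25) for j in range(25)]
print(all(f(x1, x2) >= fbar for x1, x2 in grid if feasible(x1, x2)))   # True
```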
Figure 5.2: Polyhedron of the example.

We can extend the conditions above to the case of a polyhedron described by inequalities and equalities, using the characterization of feasible directions given in Section 5.1. Consider the problem

    min f(x)
    s.t. a_i^T x ≥ b_i,   i = 1, ..., m
         a_j^T x = b_j,   j = 1, ..., p                  (5.20)

Let x* ∈ S be a local minimizer of problem (5.20); then there exists no solution d ∈ R^n of the linear system

    a_i^T d ≥ 0,   i ∈ I(x*) ⊆ {1, ..., m},
    a_j^T d = 0,   j = 1, ..., p,
    ∇f(x*)^T d < 0.

An important case is the problem

    min f(x)
    s.t. Ax = b
         x ≥ 0                                           (P-POL-ST)
In this case, using Theorem 5.5, we get:

Let x* ∈ S be a local minimizer of problem (P-POL-ST); then there exists no solution d ∈ R^n of the linear system

    Ad = 0,
    d_{J(x*)} ≥ 0,
    ∇f(x*)^T d < 0,                                      (5.21)

with J(x*) = {j ∈ {1, ..., n} : x*_j = 0}.

Of course all the necessary conditions above become also sufficient if the objective function f is convex.

Consider now Linear Programming problems

    min c^T x
    s.t. Ax ≥ b.                                         (PL)

Using Theorem 5.12, we get the following condition:

A point x* ∈ S is a global minimizer of problem (PL) if and only if there exists no solution d ∈ R^n of the linear system

    A_{I(x*)} d ≥ 0,
    c^T d < 0.                                           (5.22)

Analogously, if we have a problem of the type

    min c^T x
    s.t. Ax = b
         x ≥ 0,                                          (PL-ST)

we get:

A point x* ∈ S is a global minimizer of problem (PL-ST) if and only if there exists no solution d ∈ R^n of the linear system

    Ad = 0,
    d_{J(x*)} ≥ 0,
    c^T d < 0,                                           (5.23)

with J(x*) = {j ∈ {1, ..., n} : x*_j = 0}.

The non-existence conditions for solutions of the linear systems formulated above can be reformulated as existence conditions for an alternative system, as discussed in Chapter 6.
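Condition (5.22) can be tested at a vertex of a small LP. The LP below is an invented example of this sketch: min −x1 − x2 subject to x1 ≥ 0, x2 ≥ 0 and −x1 − 2x2 ≥ −4 (i.e. x1 + 2x2 ≤ 4); the cone generators at the vertex (4, 0) were worked out by hand.

```python
# Vertex optimality check for  min c^T x  s.t.  A x >= b  via condition (5.22):
# at x_bar = (4, 0) the active rows are (0, 1) and (-1, -2); the cone
# {A_I d >= 0} is generated by (-1, 0) and (-2, 1), so testing c^T d >= 0 on
# the generators verifies that no feasible descent direction exists.
c = (-1.0, -1.0)
active_rows = [(0.0, 1.0), (-1.0, -2.0)]
generators = [(-1.0, 0.0), (-2.0, 1.0)]

in_cone = all(a1 * d1 + a2 * d2 >= 0
              for (a1, a2) in active_rows for (d1, d2) in generators)
no_descent = all(c[0] * d1 + c[1] * d2 >= 0 for (d1, d2) in generators)
print(in_cone and no_descent)   # True: (4, 0) satisfies (5.22)
```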
More informationMath 273a: Optimization The Simplex method
Math 273a: Optimization The Simplex method Instructor: Wotao Yin Department of Mathematics, UCLA Fall 2015 material taken from the textbook Chong-Zak, 4th Ed. Overview: idea and approach If a standard-form
More informationDetermination of Feasible Directions by Successive Quadratic Programming and Zoutendijk Algorithms: A Comparative Study
International Journal of Mathematics And Its Applications Vol.2 No.4 (2014), pp.47-56. ISSN: 2347-1557(online) Determination of Feasible Directions by Successive Quadratic Programming and Zoutendijk Algorithms:
More informationConstraint qualifications for nonlinear programming
Constraint qualifications for nonlinear programming Consider the standard nonlinear program min f (x) s.t. g i (x) 0 i = 1,..., m, h j (x) = 0 1 = 1,..., p, (NLP) with continuously differentiable functions
More informationCopositive Plus Matrices
Copositive Plus Matrices Willemieke van Vliet Master Thesis in Applied Mathematics October 2011 Copositive Plus Matrices Summary In this report we discuss the set of copositive plus matrices and their
More informationThe convergence of stationary iterations with indefinite splitting
The convergence of stationary iterations with indefinite splitting Michael C. Ferris Joint work with: Tom Rutherford and Andy Wathen University of Wisconsin, Madison 6th International Conference on Complementarity
More information3 The Simplex Method. 3.1 Basic Solutions
3 The Simplex Method 3.1 Basic Solutions In the LP of Example 2.3, the optimal solution happened to lie at an extreme point of the feasible set. This was not a coincidence. Consider an LP in general form,
More informationLP Duality: outline. Duality theory for Linear Programming. alternatives. optimization I Idea: polyhedra
LP Duality: outline I Motivation and definition of a dual LP I Weak duality I Separating hyperplane theorem and theorems of the alternatives I Strong duality and complementary slackness I Using duality
More informationProduct of P-polynomial association schemes
Product of P-polynomial association schemes Ziqing Xiang Shanghai Jiao Tong University Nov 19, 2013 1 / 20 P-polynomial association scheme Definition 1 P-polynomial association scheme A = (X, {A i } 0
More informationA CHARACTERIZATION OF STRICT LOCAL MINIMIZERS OF ORDER ONE FOR STATIC MINMAX PROBLEMS IN THE PARAMETRIC CONSTRAINT CASE
Journal of Applied Analysis Vol. 6, No. 1 (2000), pp. 139 148 A CHARACTERIZATION OF STRICT LOCAL MINIMIZERS OF ORDER ONE FOR STATIC MINMAX PROBLEMS IN THE PARAMETRIC CONSTRAINT CASE A. W. A. TAHA Received
More information3. Linear Programming and Polyhedral Combinatorics
Massachusetts Institute of Technology 18.433: Combinatorial Optimization Michel X. Goemans February 28th, 2013 3. Linear Programming and Polyhedral Combinatorics Summary of what was seen in the introductory
More information1 Review Session. 1.1 Lecture 2
1 Review Session Note: The following lists give an overview of the material that was covered in the lectures and sections. Your TF will go through these lists. If anything is unclear or you have questions
More information3E4: Modelling Choice. Introduction to nonlinear programming. Announcements
3E4: Modelling Choice Lecture 7 Introduction to nonlinear programming 1 Announcements Solutions to Lecture 4-6 Homework will be available from http://www.eng.cam.ac.uk/~dr241/3e4 Looking ahead to Lecture
More informationLecture 3. Optimization Problems and Iterative Algorithms
Lecture 3 Optimization Problems and Iterative Algorithms January 13, 2016 This material was jointly developed with Angelia Nedić at UIUC for IE 598ns Outline Special Functions: Linear, Quadratic, Convex
More informationChapter 2: Linear Programming Basics. (Bertsimas & Tsitsiklis, Chapter 1)
Chapter 2: Linear Programming Basics (Bertsimas & Tsitsiklis, Chapter 1) 33 Example of a Linear Program Remarks. minimize 2x 1 x 2 + 4x 3 subject to x 1 + x 2 + x 4 2 3x 2 x 3 = 5 x 3 + x 4 3 x 1 0 x 3
More informationA proof of a partition conjecture of Bateman and Erdős
proof of a partition conjecture of Bateman and Erdős Jason P. Bell Department of Mathematics University of California, San Diego La Jolla C, 92093-0112. US jbell@math.ucsd.edu 1 Proposed Running Head:
More informationLecture 15 Newton Method and Self-Concordance. October 23, 2008
Newton Method and Self-Concordance October 23, 2008 Outline Lecture 15 Self-concordance Notion Self-concordant Functions Operations Preserving Self-concordance Properties of Self-concordant Functions Implications
More informationMathematics 530. Practice Problems. n + 1 }
Department of Mathematical Sciences University of Delaware Prof. T. Angell October 19, 2015 Mathematics 530 Practice Problems 1. Recall that an indifference relation on a partially ordered set is defined
More information