Part IB Optimisation


Part IB Optimisation
Theorems

Based on lectures by F. A. Fischer
Notes taken by Dexter Chua
Easter 2015

These notes are not endorsed by the lecturers, and I have modified them (often significantly) after lectures. They are nowhere near accurate representations of what was actually lectured, and in particular, all errors are almost surely mine.

Lagrangian methods
General formulation of constrained problems; the Lagrangian sufficiency theorem. Interpretation of Lagrange multipliers as shadow prices. Examples. [2]

Linear programming in the nondegenerate case
Convexity of feasible region; sufficiency of extreme points. Standardization of problems, slack variables, equivalence of extreme points and basic solutions. The primal simplex algorithm, artificial variables, the two-phase method. Practical use of the algorithm; the tableau. Examples. The dual linear problem, duality theorem in a standardized case, complementary slackness, dual variables and their interpretation as shadow prices. Relationship of the primal simplex algorithm to dual problem. Two person zero-sum games. [6]

Network problems
The Ford-Fulkerson algorithm and the max-flow min-cut theorems in the rational case. Network flows with costs, the transportation algorithm, relationship of dual variables with nodes. Examples. Conditions for optimality in more general networks; *the simplex-on-a-graph algorithm*. [3]

Practice and applications
*Efficiency of algorithms*. The formulation of simple practical and combinatorial problems as linear programming or network problems. [1]

IB Optimisation (Theorems)

Contents

1 Introduction and preliminaries
  1.1 Constrained optimization
  1.2 Review of unconstrained optimization
2 The method of Lagrange multipliers
  2.1 Complementary Slackness
  2.2 Shadow prices
  2.3 Lagrange duality
  2.4 Supporting hyperplanes and convexity
3 Solutions of linear programs
  3.1 Linear programs
  3.2 Basic solutions
  3.3 Extreme points and optimal solutions
  3.4 Linear programming duality
  3.5 Simplex method
    3.5.1 The simplex tableau
    3.5.2 Using the Tableau
  3.6 The two-phase simplex method
4 Non-cooperative games
  4.1 Games and Solutions
  4.2 The minimax theorem
5 Network problems
  5.1 Definitions
  5.2 Minimum-cost flow problem
  5.3 The transportation problem
  5.4 The maximum flow problem

1 Introduction and preliminaries

1.1 Constrained optimization

1.2 Review of unconstrained optimization

Lemma. Let $f$ be twice differentiable. Then $f$ is convex on a convex set $S$ if the Hessian matrix
$$(Hf(x))_{ij} = \frac{\partial^2 f}{\partial x_i \partial x_j}$$
is positive semidefinite for all $x \in S$, i.e. $\xi^T (Hf(x)) \xi \ge 0$ for all $\xi \in \mathbb{R}^n$.

Theorem. Let $X \subseteq \mathbb{R}^n$ be convex, and let $f : \mathbb{R}^n \to \mathbb{R}$ be twice differentiable on $X$. If $x^* \in X$ satisfies $\nabla f(x^*) = 0$ and $Hf(x)$ is positive semidefinite for all $x \in X$, then $x^*$ minimizes $f$ on $X$.
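The lemma can be checked numerically on a small example. This is a hypothetical illustration (the function $f(x, y) = x^2 + xy + y^2$ is not from the notes), using the standard fact that a symmetric matrix is positive semidefinite iff all its eigenvalues are non-negative:

```python
import numpy as np

# Hypothetical example: f(x, y) = x^2 + x*y + y^2 is convex on R^2,
# since its (constant) Hessian is positive semidefinite.
H = np.array([[2.0, 1.0],   # Hf = [[f_xx, f_xy],
              [1.0, 2.0]])  #       [f_yx, f_yy]]

# A symmetric matrix is positive semidefinite iff all eigenvalues are >= 0.
eigenvalues = np.linalg.eigvalsh(H)
print(eigenvalues)                     # [1. 3.]
print(bool(np.all(eigenvalues >= 0)))  # True
```

For a non-constant Hessian one would repeat this check over the region of interest, or argue directly as in the lemma.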

2 The method of Lagrange multipliers

Theorem (Lagrangian sufficiency). Let $x^* \in X$ and $\lambda^* \in \mathbb{R}^m$ be such that
$$L(x^*, \lambda^*) = \inf_{x \in X} L(x, \lambda^*)$$
and $h(x^*) = b$. Then $x^*$ is optimal for (P).

In words, if $x^*$ minimizes $L$ for a fixed $\lambda^*$, and $x^*$ satisfies the constraints, then $x^*$ minimizes $f$.

2.1 Complementary Slackness

2.2 Shadow prices

Theorem. Consider the problem
$$\text{minimize } f(x) \text{ subject to } h(x) = b.$$
Here we assume all functions are continuously differentiable. Suppose that for each $b \in \mathbb{R}^n$, $\phi(b)$ is the optimal value of $f$, and $\lambda^*$ is the corresponding Lagrange multiplier. Then
$$\frac{\partial \phi}{\partial b_i} = \lambda_i^*.$$

2.3 Lagrange duality

Theorem (Weak duality). If $x \in X(b)$ (i.e. $x$ satisfies both the functional and regional constraints) and $\lambda \in Y$, then $g(\lambda) \le f(x)$. In particular,
$$\sup_{\lambda \in Y} g(\lambda) \le \inf_{x \in X(b)} f(x).$$

2.4 Supporting hyperplanes and convexity

Theorem. (P) satisfies strong duality iff $\phi(c) = \inf_{x \in X(c)} f(x)$ has a supporting hyperplane at $b$.

Theorem (Supporting hyperplane theorem). Suppose that $\phi : \mathbb{R}^m \to \mathbb{R}$ is convex and $b \in \mathbb{R}^m$ lies in the interior of the set of points where $\phi$ is finite. Then there exists a supporting hyperplane to $\phi$ at $b$.

Theorem. Let
$$\phi(b) = \inf_{x \in X} \{f(x) : h(x) \le b\}.$$
If $X$, $f$, $h$ are convex, then so is $\phi$ (assuming feasibility and boundedness).

Theorem. If a linear program is feasible and bounded, then it satisfies strong duality.
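The Lagrangian sufficiency theorem can be seen in action on a small made-up problem (this example and its numbers are not from the notes; it assumes SciPy is available for the cross-check):

```python
import numpy as np
from scipy.optimize import minimize  # assumes SciPy is available

# Hypothetical illustration of Lagrangian sufficiency:
#   minimize f(x) = x1^2 + x2^2  subject to  h(x) = x1 + x2 = b,  with b = 2.
b = 2.0
f = lambda x: x[0] ** 2 + x[1] ** 2
h = lambda x: x[0] + x[1]

# For fixed lam, L(x, lam) = f(x) - lam * (h(x) - b) is minimized where its
# gradient vanishes: 2 x_i - lam = 0, i.e. x = (lam/2, lam/2).  Choosing lam
# so that this minimizer is feasible gives lam = 2 and x* = (1, 1).
lam = 2.0
x_star = np.array([lam / 2, lam / 2])
assert abs(h(x_star) - b) < 1e-9   # x* satisfies the constraint
print(f(x_star))                   # 2.0 -- optimal by the sufficiency theorem

# Shadow-price check: here phi(b) = b^2 / 2, so d(phi)/db = b = 2 = lam.

# Cross-check against a numerical constrained solver.
res = minimize(f, x0=[0.0, 0.0],
               constraints={"type": "eq", "fun": lambda x: h(x) - b})
```

The solver should return the same optimal value, approximately 2, at $x \approx (1, 1)$.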

3 Solutions of linear programs

3.1 Linear programs

3.2 Basic solutions

Theorem. A vector $x$ is a basic feasible solution of $Ax = b$ if and only if it is an extreme point of the set $X(b) = \{x : Ax = b, x \ge 0\}$.

3.3 Extreme points and optimal solutions

Theorem. If (P) is feasible and bounded, then there exists an optimal solution that is a basic feasible solution.

3.4 Linear programming duality

Theorem. The dual of the dual of a linear program is the primal.

Theorem. Let $x$ and $\lambda$ be feasible for the primal and the dual of the linear program in general form. Then $x$ and $\lambda$ are optimal if and only if they satisfy complementary slackness, i.e. if
$$(c^T - \lambda^T A)x = 0 \quad \text{and} \quad \lambda^T(Ax - b) = 0.$$

3.5 Simplex method

3.5.1 The simplex tableau

3.5.2 Using the Tableau

3.6 The two-phase simplex method
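Strong duality and complementary slackness can be verified on a small made-up LP (the data below is purely illustrative, and SciPy's `linprog` is assumed to be available):

```python
from scipy.optimize import linprog  # assumes SciPy is available

# Hypothetical primal in general form:  min c^T x  s.t.  Ax >= b, x >= 0.
c = [3.0, 2.0]
A = [[1.0, 1.0],
     [2.0, 1.0]]
b = [4.0, 5.0]

# linprog expects A_ub @ x <= b_ub, so negate the >= rows.
primal = linprog(c, A_ub=[[-a for a in row] for row in A],
                 b_ub=[-v for v in b])

# Dual:  max b^T lam  s.t.  A^T lam <= c, lam >= 0  (minimize -b^T lam).
dual = linprog([-v for v in b],
               A_ub=[[A[i][j] for i in range(2)] for j in range(2)],  # A^T
               b_ub=c)

x, lam = primal.x, dual.x
print(primal.fun, -dual.fun)  # equal optimal values: strong duality

# Complementary slackness: (c^T - lam^T A) x = 0 and lam^T (A x - b) = 0.
s1 = sum((c[j] - sum(A[i][j] * lam[i] for i in range(2))) * x[j]
         for j in range(2))
s2 = sum(lam[i] * (sum(A[i][j] * x[j] for j in range(2)) - b[i])
         for i in range(2))
print(abs(s1) < 1e-8, abs(s2) < 1e-8)  # True True
```

Here the optimum is $x = (1, 3)$ with value 9, matched by the dual optimum $\lambda = (1, 1)$, and both slackness products vanish.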

4 Non-cooperative games

4.1 Games and Solutions

Theorem (Nash, 1951). Every bimatrix game has an equilibrium.

4.2 The minimax theorem

Theorem (von Neumann, 1928). If $P \in \mathbb{R}^{m \times n}$, then
$$\max_{x \in X} \min_{y \in Y} p(x, y) = \min_{y \in Y} \max_{x \in X} p(x, y).$$
Note that this is equivalent to
$$\max_{x \in X} \min_{y \in Y} p(x, y) = -\max_{y \in Y} \min_{x \in X} \big({-p(x, y)}\big).$$
The left hand side is the worst payoff the row player can get if he employs the minimax strategy. The right hand side is the worst payoff the column player can get if he uses his minimax strategy. The theorem then says that if both players employ the minimax strategy, then this is an equilibrium.

Theorem. $(x, y) \in X \times Y$ is an equilibrium of the matrix game with payoff matrix $P$ if and only if
$$\min_{y' \in Y} p(x, y') = \max_{x' \in X} \min_{y' \in Y} p(x', y')$$
$$\max_{x' \in X} p(x', y) = \min_{y' \in Y} \max_{x' \in X} p(x', y'),$$
i.e. $x$ and $y$ are optimizers for the max min and min max problems respectively.
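The minimax theorem is equivalent to the solvability of a pair of dual LPs, so it can be checked numerically. This is a sketch, not part of the notes: the payoff matrix ("matching pennies") is a made-up example, and SciPy is assumed to be available.

```python
import numpy as np
from scipy.optimize import linprog  # assumes SciPy is available

# Hypothetical payoff matrix ("matching pennies") for the row player.
P = np.array([[1.0, -1.0],
              [-1.0, 1.0]])

def row_value(P):
    """Compute max_x min_y x^T P y by LP: maximize v s.t. (P^T x)_j >= v
    for every column j, with x in the probability simplex.
    Decision variables are (x_1, ..., x_m, v)."""
    m, n = P.shape
    c = np.zeros(m + 1)
    c[-1] = -1.0                                   # minimize -v
    A_ub = np.hstack([-P.T, np.ones((n, 1))])      # v - (P^T x)_j <= 0
    b_ub = np.zeros(n)
    A_eq = np.array([[1.0] * m + [0.0]])           # sum x_i = 1
    b_eq = [1.0]
    bounds = [(0, None)] * m + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[:m], res.x[-1]

x, max_min = row_value(P)       # row player's guaranteed payoff
y, v = row_value(-P.T)          # column player's game has payoffs -P^T
min_max = -v                    # recovers min_y max_x p(x, y)
print(x, max_min, min_max)      # x = (0.5, 0.5); both values are 0
```

For matching pennies the equilibrium is the uniform mixed strategy for both players, and the computed max min and min max values agree, as the theorem asserts.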

5 Network problems

5.1 Definitions

5.2 Minimum-cost flow problem

5.3 The transportation problem

Theorem. Every minimum-cost flow problem with finite capacities or non-negative costs has an equivalent transportation problem.

5.4 The maximum flow problem

Theorem (Max-flow min-cut theorem). Let $\delta$ be the optimal value. Then
$$\delta = \min\{C(S) : S \subseteq V,\ 1 \in S,\ n \in V \setminus S\}.$$
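The max-flow min-cut theorem can be checked on a toy network. The capacities below are made up for illustration; the Ford-Fulkerson routine is a minimal sketch using BFS augmenting paths, and the min cut is found by brute force over all cuts separating source from sink.

```python
from itertools import combinations

# Hypothetical capacities; nodes 1..4, source 1, sink 4.
capacity = {(1, 2): 3, (1, 3): 2, (2, 3): 1, (2, 4): 2, (3, 4): 3}
nodes = {1, 2, 3, 4}
source, sink = 1, 4

def max_flow(capacity, s, t):
    """Ford-Fulkerson with BFS augmenting paths (Edmonds-Karp sketch)."""
    residual = dict(capacity)
    for (u, v) in list(capacity):       # add reverse edges for the residual
        residual.setdefault((v, u), 0)
    flow = 0
    while True:
        parent = {s: None}              # BFS for an augmenting path
        queue = [s]
        while queue and t not in parent:
            u = queue.pop(0)
            for (a, v), cap in residual.items():
                if a == u and cap > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:
            return flow                 # no augmenting path: flow is maximal
        path, v = [], t                 # reconstruct the path s -> t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        delta = min(residual[e] for e in path)   # bottleneck capacity
        for (u, v) in path:                      # augment along the path
            residual[(u, v)] -= delta
            residual[(v, u)] += delta
        flow += delta

def min_cut(capacity, nodes, s, t):
    """Brute force: minimize C(S) over all S with s in S, t not in S."""
    others = nodes - {s, t}
    best = float("inf")
    for r in range(len(others) + 1):
        for extra in combinations(others, r):
            S = {s} | set(extra)
            best = min(best, sum(c for (u, v), c in capacity.items()
                                 if u in S and v not in S))
    return best

print(max_flow(capacity, source, sink), min_cut(capacity, nodes, source, sink))
```

On this network both quantities equal 5, with a minimum cut given e.g. by $S = \{1\}$.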