DRAFT Formulation and Analysis of Linear Programs

Benjamin Van Roy and Kahn Mason

© Benjamin Van Roy and Kahn Mason
September 26, 2005

Contents

1 Introduction
  1.1 Linear Algebra
  1.2 Linear Programs
  1.3 Duality
  1.4 Two-Player Zero-Sum Games
  1.5 Network Flows
  1.6 Markov Decision Problems
  1.7 Linear Programming Algorithms

2 Linear Algebra
  2.1 Matrices
    2.1.1 Vectors
    2.1.2 Addition
    2.1.3 Transposition
    2.1.4 Multiplication
    2.1.5 Linear Systems of Equations
    2.1.6 Partitioning of Matrices
  2.2 Vector Spaces
    2.2.1 Bases and Dimension
    2.2.2 Orthogonality
    2.2.3 Orthogonal Subspaces
    2.2.4 Vector Spaces Associated with Matrices
  2.3 Linear Systems of Equations
    2.3.1 Solution Sets
    2.3.2 Matrix Inversion
  2.4 Contingent Claims
    2.4.1 Structured Products and Market Completeness
    2.4.2 Pricing and Arbitrage
  2.5 Exercises

3 Linear Programs
  3.1 Graphical Examples
    3.1.1 Producing Cars and Trucks
    3.1.2 Feeding an Army
    3.1.3 Some Observations
  3.2 Feasible Regions and Basic Feasible Solutions
    3.2.1 Convexity
    3.2.2 Vertices and Basic Solutions
    3.2.3 Bounded Polyhedra
  3.3 Optimality of Basic Feasible Solutions
    3.3.1 Bounded Feasible Regions
    3.3.2 The General Case
    3.3.3 Searching through Basic Feasible Solutions
  3.4 Greater-Than and Equality Constraints
  3.5 Production
    3.5.1 Single-Stage Production
    3.5.2 Multi-Stage Production
    3.5.3 Market Stratification and Price Discrimination
  3.6 Contingent Claims
    3.6.1 Structured Products in an Incomplete Market
    3.6.2 Finding Arbitrage Opportunities
  3.7 Pattern Classification
    3.7.1 Linear Separation of Data
    3.7.2 Minimizing Violations
  3.8 Notes
  3.9 Exercises

4 Duality
  4.1 A Graphical Example
    4.1.1 Sensitivity Analysis
    4.1.2 Shadow Prices and Valuation of the Firm
    4.1.3 The Dual Linear Program
  4.2 Duality Theory
    4.2.1 Weak Duality
    4.2.2 Strong Duality
    4.2.3 First Order Necessary Conditions
    4.2.4 Complementary Slackness
  4.3 Duals of General Linear Programs
  4.4 Two-Player Zero-Sum Games
  4.5 Allocation of a Labor Force
    4.5.1 Labor Minimization
    4.5.2 Productivity Implies Flexibility
    4.5.3 Proof of the Substitution Theorem
  4.6 Exercises

5 Network Flows
  5.1 Networks
  5.2 Min-Cost-Flow Problems
    5.2.1 Shortest Path Problems
    5.2.2 Transportation Problems
    5.2.3 Assignment Problems
  5.3 Max-Flow Problems
    5.3.1 Min-Cut Problems
    5.3.2 Matching Problems
  5.4 Exercises

Chapter 1
Introduction

Optimization is the process of selecting the best among a set of alternatives. An optimization problem is characterized by a set of feasible solutions and an objective function, which assigns a measure of utility to each feasible solution. A simple approach to optimization involves listing the feasible solutions, applying the objective function to each, and choosing one that attains the optimum, which could be the maximum or the minimum, depending on what we are looking for. However, this approach does not work for most interesting problems. The reason is that there are usually too many feasible solutions, as we illustrate with the following example.

Example 1.0.1. (Task Assignment) Suppose we are managing a group of 20 employees and have 20 tasks that we would like to complete. Each employee is capable of completing only a subset of the tasks, and each employee can be assigned only one task. We need to decide which task to assign to each employee. The set of feasible solutions here is the set of 20! possible assignments of the 20 tasks to the 20 employees. Suppose our objective is to maximize the number of tasks accomplished. One way of doing this is to enumerate the 20! possible assignments, calculate the number of tasks that would be completed in each case, and choose an assignment that maximizes this quantity. However, 20! is a huge number: it's greater than a billion billion. Even if we used a computer that could assess 1 billion assignments per second, iterating through all of them would take more than 75 years!

Fortunately, solving an optimization problem does not always require iterating through and assessing every feasible solution. This book is about linear programs, a class of optimization problems that can be solved very quickly by numerical algorithms. In a linear program, solutions are encoded in terms of real-valued decision variables. The objective function is linear in the decision variables, and the feasible set is constrained by linear inequalities. Many important practical problems can be formulated as linear programs. For example, the task-assignment problem of Example 1.0.1 can be formulated as a linear program and solved in a small fraction of a second by a linear programming algorithm running on just about any modern personal computer.

This book represents a first course on linear programs, intended to serve engineers, economists, managers, and other professionals who think about design, resource allocation, strategy, logistics, or other decisions. The aim is to develop two intellectual assets. The first is an ability to formulate a wide variety of practical problems as linear programs. The second is a geometric intuition for linear programs. The latter helps in interpreting solutions to linear programs and often also leads to useful insights about problems formulated as linear programs. Many other books focus on linear programming algorithms. We discuss such algorithms in our final chapter, but they do not represent the focus of this book. The following sections offer previews of what is to come in subsequent chapters.

1.1 Linear Algebra

Linear algebra is about systems of linear equations. It forms a foundation for linear programming, which deals with both linear equations and inequalities. Chapter 2 presents a few concepts from linear algebra that are essential to developments in the remainder of the book. The chapter certainly does not provide comprehensive coverage of important topics in linear algebra; rather, the emphasis is on two geometric interpretations of systems of linear equations.

The first associates each linear equation in n unknowns with an (n-1)-dimensional plane consisting of the points that satisfy the equation. The set of solutions to a system of m linear equations in n unknowns is then the intersection of m planes.
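As a tiny concrete instance of this first interpretation, consider a hypothetical system of two equations in two unknowns. Each equation defines a line (a 1-dimensional "plane" in the plane), and solving the system means finding where the lines intersect. The numbers below are invented purely for illustration:

```python
# Hypothetical system:  x + 2y = 5  and  3x + 4y = 11.
# Each equation picks out a line; the solution set of the system
# is the intersection of the two lines.

# Solve by elimination: subtract 3 * (first equation) from the second.
#   (3x + 4y) - 3*(x + 2y) = 11 - 3*5  =>  -2y = -4  =>  y = 2
y = (11 - 3 * 5) / (4 - 3 * 2)
x = 5 - 2 * y

print((x, y))  # (1.0, 2.0)

# The intersection point lies on both lines, i.e., satisfies both equations.
assert x + 2 * y == 5
assert 3 * x + 4 * y == 11
```

Chapter 2 develops this equation-by-equation view alongside the second, column-oriented view described next.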
A second perspective views solving a system of linear equations in terms of finding a linear combination of n vectors that generates another desired vector. Though these geometric interpretations of linear systems of equations are simple and intuitive, they are remarkably useful for developing insights about real problems. In Chapter 2, we will illustrate the utility of these concepts through examples involving contingent claims analysis and production. In contingent claims analysis, the few ideas we discuss about linear algebra lead to an understanding of asset replication, market completeness, and state prices. The production example sets the stage for our study of linear programming.

1.2 Linear Programs

In Chapter 3, we introduce linear programs. In a linear program, a solution is defined by n real numbers called decision variables. The set of feasible solutions is a subset of n-dimensional space characterized by linear inequalities. Each linear inequality is satisfied by the points in n-dimensional space that lie on one side of an (n-1)-dimensional plane, forming a set called a half-space. Given m inequalities, the set of feasible solutions is the intersection of m half-spaces. Such a set is called a polyhedron. A linear program involves optimization (maximization or minimization) of a linear function of the decision variables over the polyhedron of feasible solutions. Let us bring this to life with an example.

Example 1.2.1. (Petroleum Production) Crude petroleum extracted from a well contains a complex mixture of component hydrocarbons, each with a different boiling point. A refinery separates these component hydrocarbons using a distillation column. The resulting components are then used to manufacture consumer products such as low, medium, and high octane gasoline, diesel fuel, aviation fuel, and heating oil.

Suppose we are managing a company that manufactures n petroleum products and have to decide on the number of liters x_j, j = 1, ..., n, of each product to manufacture next month. Since the amount we produce cannot be negative, we constrain the decision variables by linear inequalities x_j ≥ 0, j = 1, ..., n. Additional constraints arise from the fact that we have limited resources. We have m types of resources in the form of component hydrocarbons. Let b_i, i = 1, ..., m, denote the quantity in liters of the ith resource available to us next month. Our process for manufacturing the jth product consumes a_ij liters of the ith resource per liter of product. Hence, production of quantities x_1, ..., x_n requires ∑_{j=1}^n a_ij x_j liters of resource i.
Because we only have b_i liters of resource i available, our solution must satisfy

    ∑_{j=1}^n a_ij x_j ≤ b_i.

We define as our objective maximization of next month's profit. Given that the jth product garners c_j dollars per liter, production of quantities x_1, ..., x_n would generate ∑_{j=1}^n c_j x_j dollars in profit. Assembling the constraints and the objective, we have the following linear program:

    maximize    ∑_{j=1}^n c_j x_j
    subject to  ∑_{j=1}^n a_ij x_j ≤ b_i,  i = 1, ..., m
                x_j ≥ 0,  j = 1, ..., n.

The above example illustrates the process of formulating a linear program. Linear inequalities characterize the polyhedron of feasible solutions, and a linear function defines our objective. Resource constraints limit the objective.

1.3 Duality

Associated with every linear program is another called the dual. In order to distinguish a linear program from its dual, the former is called the primal. As an example of a dual linear program, we describe the one associated with the petroleum production problem, for which the primal was formulated in Example 1.2.1.

Example 1.3.1. (Dual of Petroleum Production) The dual linear program comes from a thought experiment about prices at which we would be willing to sell all our resources. Consider a set of resource prices y_1, ..., y_m, expressed in dollars per liter. To sell a resource, we would clearly require that its price be nonnegative: y_i ≥ 0. Furthermore, to sell a bundle of resources that could be used to manufacture a liter of product j, we would require that the proceeds ∑_{i=1}^m y_i a_ij at least equal the revenue c_j that we would receive for the finished product. Aggregating constraints, we have a characterization of the set of prices at which we would be willing to sell our entire stock of resources rather than manufacture products:

    ∑_{i=1}^m y_i a_ij ≥ c_j,  j = 1, ..., n
    y_i ≥ 0,  i = 1, ..., m.

The dual linear program finds our minimum net proceeds among such situations:

    minimize    ∑_{i=1}^m y_i b_i
    subject to  ∑_{i=1}^m y_i a_ij ≥ c_j,  j = 1, ..., n
                y_i ≥ 0,  i = 1, ..., m.

Duality theory studies the relationship between primal and dual linear programs. A central result, the duality theorem, states that the optimal objective values of the primal and the dual are equal, if both are finite. In our petroleum production problem, this means that, in the worst case among situations that would induce us to sell all our resources rather than manufacture, our proceeds would equal the revenue we could earn from manufacturing.

In Chapter 4, we develop duality theory. This theory extends geometric concepts of linear algebra, and like linear algebra, duality leads to useful insights about many problems. For example, duality is central to understanding situations where adversaries compete, as we now discuss.

1.4 Two-Player Zero-Sum Games

A two-player zero-sum, or competitive, game considers a situation in which two agents are dividing a good. Examples include two companies competing over a market of constant size, or games of chess, football, or rock-paper-scissors. In many situations the game can be described as follows. Each agent picks an action. If agent 1 picks action i and agent 2 picks action j, then agent 1 receives r_ij units of benefit and agent 2 receives -r_ij units of benefit. The game is said to be zero-sum because the benefits received by the two players always sum to zero, since one player receives r_ij and the other -r_ij. Agent 1 wants to maximize r_ij, while agent 2 wants to maximize -r_ij (or, equivalently, minimize r_ij).

What makes games like this tricky is that agent 1 must take into account the likely actions of agent 2, who must in turn take into account the likely actions of agent 1. For example, consider a game of rock-paper-scissors, and suppose that agent 1 always chooses rock. Agent 2 would quickly learn this and choose a strategy that always beats agent 1: namely, pick paper. Agent 1 will then take this into account and decide to always pick scissors, and so on. The circularity that appears in our discussion of rock-paper-scissors is associated with the use of deterministic strategies, that is, strategies that select a single action.
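The exploitability of deterministic strategies is easy to see numerically. The sketch below assumes the standard rock-paper-scissors payoffs (win = 1, loss = -1, tie = 0) and computes agent 2's best response to each deterministic strategy of agent 1:

```python
# r[i][j]: payoff to agent 1 when agent 1 plays i and agent 2 plays j.
# Actions: 0 = rock, 1 = paper, 2 = scissors (standard payoffs assumed).
r = [[0, -1, 1],
     [1, 0, -1],
     [-1, 1, 0]]
actions = ["rock", "paper", "scissors"]

# If agent 1 always plays action i, agent 2 best-responds by picking the
# column minimizing agent 1's payoff (i.e., maximizing agent 2's payoff -r).
for i in range(3):
    j = min(range(3), key=lambda j: r[i][j])
    print(actions[i], "-> countered by", actions[j], "; payoff to agent 1:", r[i][j])
```

In every row the best response drives agent 1's payoff down to -1, which is exactly the circularity described above.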
If the opposing player can learn an agent's strategy, he can design a strategy that will beat it. One way around this problem is to consider randomized strategies. For example, agent 1 could choose randomly between rock, paper, and scissors each time. This would prevent agent 2 from predicting the action of agent 1 and, therefore, from designing an effective counter-strategy. With randomized strategies, the payoff is still r_ij to agent 1 and -r_ij to agent 2, but now i and j are randomly (and independently) chosen. Agent 1 now tries to maximize the expected value of r_ij, while agent 2 tries to maximize the expected value of -r_ij.

Example 1.4.1. (Drug Running) A South American drug lord is trying to get as many of his shipments across the border as possible. He has a fleet of boats available to him, and each time he sends a boat, he can choose one of three ports at which to unload: Miami, Los Angeles, or San Francisco. The U.S. Coast Guard is trying to intercept as many of the drug shipments as possible but only has sufficient resources to cover one port at a time. Moreover, the chance of intercepting a drug shipment differs from port to port: a boat arriving at a port closer to South America will have more fuel with which to evade capture than one arriving further away. The probabilities of interception are given by the following table.

    Port            Probability of interception
    Miami           1/3
    Los Angeles     1/2
    San Francisco   3/4

The drug lord considers sending each boat to Miami, but the Coast Guard, realizing this, would always choose to cover Miami, and only 2/3 of his boats would get through. A better strategy would be to pick a port at random, each with probability 1/3. Then the Coast Guard should cover San Francisco, since this would maximize the expected number of shipments captured. In this scenario, 3/4 of the shipments would get through, which is better than 2/3. But is this the best strategy?

If we allow for randomized strategies, the problem of finding a strategy that maximizes an agent's expected payoff against the best possible counter-strategy can be formulated as a linear program. Interestingly, the formulation from the perspective of the first agent is the dual of the formulation from the perspective of the second agent. We will see that duality guarantees that there is a pair of strategies for agents 1 and 2 that puts them in equilibrium. In particular, when these strategies are employed, agent 1 cannot improve her expected payoff by learning the strategy of agent 2, and vice versa. This is also treated in Chapter 4 as a special case of duality.
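The arithmetic in Example 1.4.1 can be checked in a few lines. The sketch below evaluates a mixed strategy for the drug lord, assuming the Coast Guard learns the strategy and covers the port that maximizes expected interceptions:

```python
# Interception probabilities at each port, from the table above.
p_intercept = {"Miami": 1/3, "Los Angeles": 1/2, "San Francisco": 3/4}

def fraction_through(strategy):
    """Fraction of shipments getting through when the Coast Guard
    best-responds by covering the port with the most expected captures."""
    captured = max(q * p_intercept[port] for port, q in strategy.items())
    return 1 - captured

# Always unloading at Miami: the Coast Guard covers Miami.
always_miami = {"Miami": 1, "Los Angeles": 0, "San Francisco": 0}
print(fraction_through(always_miami))  # 2/3 of the boats get through

# Uniform randomization: the Coast Guard covers San Francisco.
uniform = {port: 1/3 for port in p_intercept}
print(fraction_through(uniform))       # 3/4 get through
```

Whether 3/4 is the best the drug lord can achieve is precisely the question the linear programming formulation answers.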
1.5 Network Flows

Network flow problems are a class of optimization problems that arise in many application domains, including the analysis of communication, transportation, and logistic networks. Let us provide an example.

Example 1.5.1. (Transportation Problem) Suppose we are selling a single product and have inventories of s_1, ..., s_m in stock at m different warehouse locations. We plan to ship our entire inventory to n customers, in quantities of d_1, ..., d_n. Shipping from location i to customer j costs c_ij dollars per unit. We need to decide how to route shipments.

Our decision variables x_ij represent the number of units to be shipped from location i to customer j. The set of feasible solutions is characterized by three sets of constraints. First, the amount shipped from each location to each customer is nonnegative: x_ij ≥ 0. Second, all inventory at location i is shipped: ∑_{j=1}^n x_ij = s_i. Finally, the number of units sold to customer j is shipped to him: ∑_{i=1}^m x_ij = d_j. Note that the last two constraints imply that the quantity shipped equals the quantity received: ∑_{i=1}^m s_i = ∑_{j=1}^n d_j. With an objective of minimizing total shipping costs, we arrive at the following linear program:

    minimize    ∑_{i=1}^m ∑_{j=1}^n c_ij x_ij
    subject to  ∑_{j=1}^n x_ij = s_i,  i = 1, ..., m
                ∑_{i=1}^m x_ij = d_j,  j = 1, ..., n
                x_ij ≥ 0,  i = 1, ..., m; j = 1, ..., n.

The set of feasible solutions to the transportation problem allows for shipment of fractional quantities. This seems problematic if units of the product are indivisible. We don't want to send a customer two halves of a car from two different warehouses! Surprisingly, we never run into this problem, because shipping costs in the linear program are guaranteed to be minimized by integral shipments whenever the inventories s_1, ..., s_m and customer sales d_1, ..., d_n are integers. The situation generalizes to network flow problems, of which the transportation problem is a special case: when certain problem parameters are integers, the associated linear program is optimized by an integral solution. This enables the use of linear programming, a formulation involving continuous decision variables, to solve a variety of optimization problems with discrete decision variables. An example is the task-assignment problem of Example 1.0.1.
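The integrality claim can be sanity-checked on a tiny, invented instance of the transportation problem, small enough to enumerate every feasible shipment plan:

```python
# Hypothetical instance: 2 warehouses, 2 customers.
s = [3, 2]            # inventories at the warehouses
d = [2, 3]            # quantities sold to the customers (total matches supply)
c = [[4, 6],          # c[i][j]: cost per unit shipped from warehouse i
     [5, 3]]          #          to customer j

# With row sums fixed to s and column sums fixed to d, choosing x[0][0]
# determines the whole 2x2 shipment matrix, so we can simply enumerate.
best = None
for x00 in range(min(s[0], d[0]) + 1):
    x = [[x00, s[0] - x00],
         [d[0] - x00, s[1] - (d[0] - x00)]]
    if min(x[0][1], x[1][0], x[1][1]) < 0:
        continue  # infeasible: some shipment would be negative
    cost = sum(c[i][j] * x[i][j] for i in range(2) for j in range(2))
    if best is None or cost < best[0]:
        best = (cost, x)

print(best)  # (20, [[2, 1], [0, 2]]): the cheapest plan ships whole units
```

Here the integer data s, d happen to force integer plans; the remarkable fact stated above is that even among fractional plans, none beats an integral one.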
Though the problem is inherently discrete (either we assign a task to an employee or we don't), it can be solved efficiently via linear programming. In Chapter 5, we discuss a variety of network flow problems and the integrality of optimal solutions.

1.6 Markov Decision Problems

Markov decision problems involve systems that evolve in time. In each time period, the system can be in one of a number of states, and based on the state, a decision is made that influences the future evolution of the system. The state of the system in the next time period depends only upon the current state and decision. Markov decision problems arise in a variety of applications, from epidemiology to queueing to the design of interactive robots.

Example 1.6.1. (Machine Replacement) Suppose we wish to minimize the average cost of running a machine over time. At each time period, the machine can be in one of states 1, ..., n, where state 1 represents a perfect condition and higher-valued states represent greater degrees of deterioration. The cost of running a machine in state i for one time period is c(i). The cost is increasing in i: it costs more to run a deteriorated machine. As a machine is used, it deteriorates at a random rate. In each time step, if the state i of the machine is less than n, there is a probability p that the state increases to i + 1. Once in state n, the machine remains in this highly deteriorated state. At each time period, based on the state of the machine, we can decide to replace it. Replacing the machine incurs a cost of C and leaves us with a machine in state 1 in the next time period. We wish to design a decision strategy that minimizes our time-averaged cost.

We discuss in Chapter ?? how Markov decision problems like the machine replacement problem can be solved using linear programming.

1.7 Linear Programming Algorithms

As we have discussed, many important problems can be formulated as linear programs. Another piece of good news is that there are algorithms that can solve linear programs very quickly, even with thousands and sometimes millions of variables and constraints. The history of linear programming algorithms is a long, winding road marked with fabulous ideas. Amazingly, over the course of their development over the past five decades, linear programming algorithms have become about a thousand times faster, and state-of-the-art computers have become about a thousand times faster; together, this makes a factor of about one million.
A linear program that required one year to solve when the first linear programming algorithms were developed now takes about half a minute! In Chapter ??, we discuss some of this history and present some of the main algorithmic ideas in linear programming.
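The "half a minute" figure follows directly from the stated speedups:

```python
# A thousandfold algorithmic speedup combined with a thousandfold
# hardware speedup gives a factor of about one million.
speedup = 1000 * 1000

one_year_in_seconds = 365 * 24 * 60 * 60
print(one_year_in_seconds / speedup)  # 31.536 seconds: about half a minute
```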