Operations Research Lecture 1: Linear Programming Introduction


Notes taken by Kaiquan Xu @ Business School, Nanjing University
25 Feb 2016

1 Some Real Problems

Some problems we may meet in practice or in academia:

1.1 Production Planning

A manufacturer plans to produce two types of products, I and II. The materials (A and B) and the equipment time required to produce one unit of each product are listed in Table 1. The profits for Products I and II are $2 and $3 per unit. The question is: how should production be planned so that the manufacturer obtains the maximum profit?

Table 1: Example: Production Planning

              Product I   Product II   Available Resources
equipment         1           2                 8
material A        4           0                16
material B        0           4                12

Let x_1 and x_2 be the numbers of units of Products I and II to be produced. Under the resource constraints, the variables must satisfy

    x_1 + 2x_2 <= 8
    4x_1 <= 16
    4x_2 <= 12

The profit z can be represented as z = 2x_1 + 3x_2. The problem can therefore be described by the following mathematical model:

    max  z = 2x_1 + 3x_2
    s.t. x_1 + 2x_2 <= 8
         4x_1 <= 16
         4x_2 <= 12
         x_1, x_2 >= 0

1.2 Load Balancing

For n processors with existing workloads, distribute an additional workload so that the most lightly loaded processor ends up with as heavy a load as possible. Define

    p_i = current load of processor i, i = 1, 2, ..., n,
    L = additional total load to be distributed,
    x_i = fraction of the additional load L assigned to processor i, with x_i >= 0 and x_1 + ... + x_n = 1,

    tau = minimum of the final loads after the workload L is distributed.

We can formulate this problem as follows:

    max  tau
    s.t. tau e <= p + L x
         e^T x = 1
         x >= 0

where e = (1, 1, ..., 1)^T.

1.3 Resource Allocation

Produce m types of products using n resources. Each unit of product i yields c_i dollars in revenue, whereas each unit of resource j costs d_j dollars. One unit of product i requires A_ij units of resource j to manufacture, and at most b_j units of resource j are available. How should the resources be allocated to production so as to maximize profit? Let

    y_i = the number of units of product i produced,
    x_j = the number of units of resource j consumed.

We have

    max  z = c^T y - d^T x
    s.t. x = A^T y
         x <= b
         x, y >= 0

where the jth equation of x = A^T y is x_j = A_1j y_1 + A_2j y_2 + ... + A_mj y_m.

1.4 Approximation & Fitting

In economics, finance, marketing, and other fields, we need to analyze the factors influencing certain metrics, or to make predictions; an example is GDP prediction (Figure 1). These are approximation and fitting problems.

Figure 1: GDP Prediction
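The load-balancing model of Section 1.2 is itself a small LP in the variables (x, tau). A minimal sketch, assuming scipy.optimize.linprog is available, on a hypothetical three-processor instance (linprog minimizes, so we negate the objective to maximize tau):

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical instance: current loads p and additional total load L.
p = np.array([2.0, 4.0, 6.0])
L = 6.0
n = len(p)

# Decision variables: (x_1, ..., x_n, tau); minimize -tau to maximize tau.
c = np.zeros(n + 1)
c[-1] = -1.0

# tau <= p_i + L * x_i, rewritten as tau - L * x_i <= p_i.
A_ub = np.hstack([-L * np.eye(n), np.ones((n, 1))])
b_ub = p

# e^T x = 1 (tau has coefficient 0 in this constraint).
A_eq = np.append(np.ones(n), 0.0).reshape(1, -1)
b_eq = [1.0]

# x >= 0, tau free.
bounds = [(0, None)] * n + [(None, None)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print(-res.fun)   # optimal tau
print(res.x[:n])  # optimal distribution x
```

For these numbers the optimum "water-fills" the two lighter processors up to a common level: the optimal tau is 6 and the heaviest processor receives none of the new load.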

Given m data points (a_i, b_i), i = 1, ..., m, where a_i ∈ R^n and b_i ∈ R, build a model that predicts the value of b from the vector a. A linear model b = a^T x is a popular choice. We should choose a model that explains the available data as well as possible, i.e., one that results in small residuals (Figure 2). One possibility is to minimize the largest residual, that is, to minimize

    max_i |b_i - a_i^T x|

Figure 2: Approximation & Fitting

The following is an equivalent linear programming formulation, the decision variables being z and x:

    min  z
    s.t. b_i - a_i^T x <= z,    i = 1, ..., m,
         -b_i + a_i^T x <= z,   i = 1, ..., m.

An alternative formulation is to adopt the cost criterion

    sum_{i=1}^m |b_i - a_i^T x|

The corresponding formulation is

    min  z_1 + ... + z_m
    s.t. b_i - a_i^T x <= z_i,    i = 1, ..., m,
         -b_i + a_i^T x <= z_i,   i = 1, ..., m.

In practice, the quadratic cost criterion sum_{i=1}^m (b_i - a_i^T x)^2 is often adopted (the "least squares" fit), which will be discussed in later chapters.

1.5 Pattern Classification

Credit risk management is a typical pattern classification problem. A sample of credit card application data is shown in Figure 3.

Figure 3: Credit Card Application

In classification problems we are given two sets of points in n-dimensional space R^n, and we must find a hyperplane that separates the two sets as accurately as possible. Let us see how linear programming can be used to find the separating hyperplane.

The hyperplane is defined by a vector ω ∈ R^n and a scalar τ. Ideally, each point t in the first set satisfies ω^T t >= τ, and each point in the second set satisfies ω^T t <= τ. To guard against a trivial answer (e.g., ω = 0 and τ = 0, for which the conditions hold trivially), we enforce the stronger conditions ω^T t >= τ + 1 for points in the first set and ω^T t <= τ - 1 for points in the second set.

Figure 4: Pattern Classification

Let M be the m × n matrix whose ith row contains the n components of the ith point in the first set; similarly, construct the k × n matrix B from the points in the second set. The violations of the condition ω^T t >= τ + 1 over the first set are measured by a vector y, defined by y >= -(Mω - τe) + e (with y >= 0 and e = (1, 1, ..., 1)^T ∈ R^m). Similarly, for the points in the second set the violations are measured by z, defined by z >= (Bω - τe) + e (z >= 0, e ∈ R^k). The average violation is e^T y/m on the first set and e^T z/k on the second. The classification problem is formulated as follows:

    min_{ω,τ,y,z}  e^T y/m + e^T z/k
    s.t.  y >= -(Mω - τe) + e
          z >= (Bω - τe) + e
          y, z >= 0

Figure 5: Classifier with Linear Programming

2 Linear Programming (LP)

2.1 General Form

Given a cost vector c = (c_1, ..., c_n)^T, we seek to minimize (or maximize) a linear cost function c^T x = sum_{i=1}^n c_i x_i over all n-dimensional vectors x = (x_1, ..., x_n)^T, subject to a set of linear equality and inequality constraints:

    min  c_1 x_1 + c_2 x_2 + ... + c_n x_n
    s.t. a_i1 x_1 + a_i2 x_2 + ... + a_in x_n >= b_i,   i = 1, ..., m_1,
         a_i1 x_1 + a_i2 x_2 + ... + a_in x_n <= b_i,   i = 1, ..., m_2,
         a_i1 x_1 + a_i2 x_2 + ... + a_in x_n  = b_i,   i = 1, ..., m_3,
         x_1, x_2, ..., x_n >= 0 (or <= 0)
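As a concrete instance, the production-planning LP of Section 1.1 fits this general form (one group of <= constraints plus nonnegativity). A minimal sketch, assuming scipy.optimize.linprog is available (linprog minimizes, so the maximization objective is negated):

```python
from scipy.optimize import linprog

# Production planning (Section 1.1): max 2*x1 + 3*x2
# s.t. x1 + 2*x2 <= 8, 4*x1 <= 16, 4*x2 <= 12, x1, x2 >= 0.
# linprog minimizes, so pass the negated objective vector.
c = [-2.0, -3.0]
A_ub = [[1.0, 2.0],
        [4.0, 0.0],
        [0.0, 4.0]]
b_ub = [8.0, 16.0, 12.0]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x)     # optimal production plan
print(-res.fun)  # maximum profit
```

The optimum is x = (4, 2) with profit z = 14: produce four units of Product I and two units of Product II.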

In compact notation:

    min  c^T x
    s.t. a_i^T x >= b_i,   i ∈ M_1,
         a_i^T x <= b_i,   i ∈ M_2,
         a_i^T x = b_i,    i ∈ M_3,
         x_j >= 0,         j ∈ N_1,
         x_j <= 0,         j ∈ N_2.

The variables x_1, ..., x_n are called decision variables, and a vector x satisfying all of the constraints is called a feasible solution. The function c^T x is called the objective function, and a feasible solution x* that minimizes the objective function (that is, c^T x* <= c^T x for all feasible x) is called an optimal solution. If for every real number K we can find a feasible solution whose cost is less than K, we say that the optimal cost is -∞ (the problem is unbounded below).

2.2 Reduction to standard form

The standard form of a linear programming (LP) problem is

    min  c_1 x_1 + c_2 x_2 + ... + c_n x_n
    s.t. a_11 x_1 + a_12 x_2 + ... + a_1n x_n = b_1
         a_21 x_1 + a_22 x_2 + ... + a_2n x_n = b_2
         ...
         a_m1 x_1 + a_m2 x_2 + ... + a_mn x_n = b_m
         x_1, x_2, ..., x_n >= 0

A general linear programming problem can be transformed into an equivalent problem in standard form:

a) Elimination of free variables: given an unrestricted variable x_j, replace it by x_j^+ - x_j^-, where x_j^+ >= 0 and x_j^- >= 0.

b) Elimination of inequality constraints: given an inequality constraint of the form sum_{j=1}^n a_ij x_j <= b_i (or >=), introduce a new variable s_i and convert the constraint to

    sum_{j=1}^n a_ij x_j + s_i = b_i (or - s_i),  s_i >= 0;

here s_i is called a slack variable.

Example 1. The linear programming problem

    min  2x_1 + 4x_2
    s.t. x_1 + x_2 >= 3
         3x_1 + 2x_2 = 14
         x_1 >= 0

can be converted into the standard form

    min  2x_1 + 4x_2^+ - 4x_2^-
    s.t. x_1 + x_2^+ - x_2^- - x_3 = 3
         3x_1 + 2x_2^+ - 2x_2^- = 14
         x_1, x_2^+, x_2^-, x_3 >= 0
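As a quick numerical check of Example 1 (a sketch assuming scipy.optimize.linprog is available), solving the original problem and its standard form should give the same optimal cost:

```python
from scipy.optimize import linprog

# Original problem: min 2*x1 + 4*x2
# s.t. x1 + x2 >= 3, 3*x1 + 2*x2 = 14, x1 >= 0, x2 free.
res_orig = linprog(
    c=[2.0, 4.0],
    A_ub=[[-1.0, -1.0]], b_ub=[-3.0],  # x1 + x2 >= 3  <=>  -x1 - x2 <= -3
    A_eq=[[3.0, 2.0]], b_eq=[14.0],
    bounds=[(0, None), (None, None)],  # x2 is a free variable
)

# Standard form: min 2*x1 + 4*x2p - 4*x2m
# s.t. x1 + x2p - x2m - x3 = 3, 3*x1 + 2*x2p - 2*x2m = 14, all vars >= 0.
res_std = linprog(
    c=[2.0, 4.0, -4.0, 0.0],
    A_eq=[[1.0, 1.0, -1.0, -1.0],
          [3.0, 2.0, -2.0, 0.0]],
    b_eq=[3.0, 14.0],
    bounds=[(0, None)] * 4,
)

print(res_orig.fun, res_std.fun)  # equal optimal costs
```

Both solves return the same optimal cost (-4, attained at x_1 = 8, x_2 = -5), illustrating that the transformation preserves the optimum.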

3 References

1. Dimitris Bertsimas and John N. Tsitsiklis. Introduction to Linear Optimization. Athena Scientific, 1997.
2. Michael C. Ferris, Olvi L. Mangasarian, and Stephen J. Wright. Linear Programming with MATLAB. SIAM, 2007.