Module 9. Lecture 6. Duality in Assignment Problems

In this lecture we attempt to answer a few more of the important questions posed in the earlier lecture for the assignment problem (AP), and see how some of them can be explained through the concept of duality. Curiously, when we studied LP problems, and later TP problems, we did mention their dual problems and used them effectively to generate optimal solutions (when they exist) for the original primal problems; but in the entire solution procedure for the AP we did not talk about duality (at least not explicitly), which may sound a bit strange. Does the dual of the AP have any role to play in arriving at an optimal solution (optimal assignment) of the primal AP? In fact, we will show that the three-step procedure we follow in the Hungarian method, after drawing the horizontal and vertical lines to cover the zeros, to generate a new AP matrix with more zeros (recall all this from the previous lecture) is equivalent to finding dual variables for the dual of the given AP. Let us explain this with more clarity.

Consider the transportation equivalent of the AP:

(P)  Min  Σ_{i=1}^n Σ_{j=1}^n c_ij x_ij
     subject to  Σ_{j=1}^n x_ij = 1,   i = 1,...,n
                 Σ_{i=1}^n x_ij = 1,   j = 1,...,n
                 x_ij ≥ 0,   i, j = 1,...,n

The dual of this LP problem is

(D)  Max  Σ_{i=1}^n u_i + Σ_{j=1}^n v_j
     subject to  u_i + v_j ≤ c_ij,   i, j = 1,...,n

Here u_i and v_j are the dual variables, and they are unrestricted in sign. Let (x_ij) be a feasible assignment for (P) and (u_i, v_j) a feasible solution of (D). For these to be optimal for their respective problems, the two objective values must be equal:

  Σ_{i=1}^n u_i + Σ_{j=1}^n v_j = Σ_{i=1}^n Σ_{j=1}^n c_ij x_ij.

Using the constraints of (P), the left-hand side equals Σ_i u_i (Σ_j x_ij) + Σ_j v_j (Σ_i x_ij) = Σ_i Σ_j (u_i + v_j) x_ij, so the condition can be rewritten as

  Σ_{i=1}^n Σ_{j=1}^n (c_ij − u_i − v_j) x_ij = 0.

But x_ij ≥ 0 and, by dual feasibility, c̄_ij = c_ij − u_i − v_j ≥ 0 for all (i, j); therefore each term of the sum must vanish, i.e.,

  (c_ij − u_i − v_j) x_ij = 0 for all (i, j).   (1)
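To make condition (1) concrete, here is a minimal sketch (our own illustrative code, not part of the lecture) that checks dual feasibility and complementary slackness for a candidate primal-dual pair; the data used is the 3×3 example solved later in this lecture, with the final dual values produced by the Hungarian method.

```python
# Illustrative sketch: checking dual feasibility and the complementary
# slackness condition (1) for an assignment problem. Function names are
# our own, not the lecture's notation.

def dual_feasible(C, u, v):
    """Check u_i + v_j <= c_ij for all (i, j)."""
    n = len(C)
    return all(u[i] + v[j] <= C[i][j] for i in range(n) for j in range(n))

def complementary_slackness(C, x, u, v):
    """Check (c_ij - u_i - v_j) * x_ij == 0 for all (i, j) -- condition (1)."""
    n = len(C)
    return all((C[i][j] - u[i] - v[j]) * x[i][j] == 0
               for i in range(n) for j in range(n))

# The 3x3 example used later in this lecture.
C = [[20, 27, 30], [10, 18, 16], [14, 16, 12]]
x = [[0, 1, 0], [1, 0, 0], [0, 0, 1]]   # Person 1 -> Job 2, 2 -> Job 1, 3 -> Job 3
u = [23, 13, 12]                        # final dual values after the update step
v = [-3, 4, 0]

assert dual_feasible(C, u, v)
assert complementary_slackness(C, x, u, v)
# Strong duality: both objectives equal 27 + 10 + 12 = 49.
assert sum(u) + sum(v) == 49
```

The pair passing both checks certifies optimality of the assignment without enumerating alternatives.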

Observe that condition (1) is the usual complementary slackness optimality condition. The Hungarian method, discussed in the previous lecture, essentially finds feasible solutions (u_i, v_j) and (x_ij) such that condition (1) is satisfied. Now recall the steps of the Hungarian method.

Step 1: Find the smallest element in each row of the AP cost matrix C and subtract it from all elements of the corresponding row to obtain a matrix C^(1) (say). This step amounts to choosing the dual variables u_i as

  u_i = min_{1≤j≤n} c_ij,   i = 1,...,n,

so that c^(1)_ij = c_ij − u_i for all (i, j).

Step 2: Find the smallest element in each column of C^(1) and subtract it from all elements of the corresponding column to get a new matrix, say C^(2). This step is equivalent to choosing the dual variables v_j as

  v_j = min_{1≤i≤n} c^(1)_ij = min_{1≤i≤n} (c_ij − u_i),   j = 1,...,n.

Then c^(2)_ij = c^(1)_ij − v_j = c_ij − u_i − v_j for all (i, j). Since v_j = min_i (c_ij − u_i), we have v_j ≤ c_ij − u_i, implying c_ij − u_i − v_j ≥ 0 for all (i, j). Thus, at the end of Step 2 of the Hungarian method we have a feasible solution (u_i, v_j) of the dual problem (D).

Now, when we find that the minimum number of horizontal and vertical lines needed to cover the zeros of C^(2) is strictly less than n (the order of C^(2)), we do not yet have an optimal assignment, and we subsequently perform three further steps. These three steps essentially update the dual feasible solution (u_i, v_j) to a new feasible solution, say (u^(1)_i, v^(1)_j), of (D). If we continue the entire procedure of the Hungarian method, say, k times, then we eventually obtain an updated feasible solution (u^(k)_i, v^(k)_j) of (D); and if for such a (u^(k)_i, v^(k)_j) condition (1) holds (in other words, complementary slackness holds), then we stop the procedure and the last

updated solutions give the primal optimal solution.

Let us see how these (u_i, v_j) are updated through the three steps (which you are urged to recall). Suppose at present we have the matrix C^(2) and the dual feasible solution (u_i, v_j); the updated dual feasible solution (u^(1)_i, v^(1)_j) is explained through an example. The update rule is

  u^(1)_i = s   if the i-th row is uncovered (and 0 otherwise),
  v^(1)_j = −s  if the j-th column is covered (and 0 otherwise),

where s is the value of the smallest uncovered element of C^(2). If we carefully examine these steps, they result in a matrix, say C^(3), where c^(3)_ij = c^(2)_ij − u^(1)_i − v^(1)_j ≥ 0.

For example, suppose the cost matrix of an assignment problem is given as

       20 27 30
  C =  10 18 16
       14 16 12

Then

           0  7 10                  0  3 10
  C^(1) =  0  8  6    and  C^(2) =  0  4  6
           2  4  0                  2  0  0

Note that here u_1 = 20, u_2 = 10, u_3 = 12 and v_1 = 0, v_2 = 4, v_3 = 0. Suppose we make assignments in C^(2). Identify (by placing a tick mark) the rows which have no assignment; here it is row 2.
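Before continuing with the covering lines, note that Steps 1 and 2 on this example can be sketched in a few lines (our own illustrative code; the helper name reduce_rows_cols is an assumption, not the lecture's notation), confirming that the reductions produce a dual feasible pair (u, v):

```python
# Sketch of Steps 1 and 2 of the Hungarian method, read as a choice of
# dual variables for problem (D).

def reduce_rows_cols(C):
    n = len(C)
    u = [min(row) for row in C]                              # Step 1: u_i = min_j c_ij
    C1 = [[C[i][j] - u[i] for j in range(n)] for i in range(n)]
    v = [min(C1[i][j] for i in range(n)) for j in range(n)]  # Step 2: v_j = min_i c1_ij
    C2 = [[C1[i][j] - v[j] for j in range(n)] for i in range(n)]
    return u, v, C2

C = [[20, 27, 30], [10, 18, 16], [14, 16, 12]]
u, v, C2 = reduce_rows_cols(C)

assert u == [20, 10, 12] and v == [0, 4, 0]
assert C2 == [[0, 3, 10], [0, 4, 6], [2, 0, 0]]
# (u, v) is dual feasible: every reduced cost c_ij - u_i - v_j is >= 0.
assert all(entry >= 0 for row in C2 for entry in row)
```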

Identify (by placing a tick mark) the columns having unassigned zeros in the marked rows. Identify all rows having assigned zeros in the marked columns. Draw lines through the unmarked rows and the marked columns; with these we can cover all the zeros of C^(2). The number of lines drawn is 2 < 3 (the order of C^(2)). So we choose the smallest uncovered element, which here is 3, and set s = 3. Now update the dual variables as follows: u^(1)_1 = 3, u^(1)_2 = 3, u^(1)_3 = 0,

since the first and second rows are uncovered above; and also v^(1)_1 = −3, v^(1)_2 = 0, v^(1)_3 = 0, since only the first column is covered. Next, evaluate the 3×3 matrix C^(3) with entries c^(3)_ij = c^(2)_ij − u^(1)_i − v^(1)_j, i, j = 1, 2, 3:

           0  0  7
  C^(3) =  0  1  3
           5  0  0

Observe that this is exactly the matrix we would have obtained by applying the three steps described for the Hungarian method in Lecture 5. Now, if we take the new matrix C^(3) and attempt to make an assignment, we reach optimality, with an optimal solution described by

                0  1  0
  X = (x_ij) =  1  0  0
                0  0  1

Note that c^(3)_ij x_ij = 0 for all (i, j), and hence X satisfies the complementary slackness condition (1). Thus X is an optimal assignment for the AP. Consequently, Person 1 will do Job 2, Person 2 is assigned Job 1, and Person 3 is assigned Job 3.

Through the above discussion we realize that duality theory, and in fact the updating of the dual variable values, is built in (though implicitly) to the Hungarian method. The updating, involving three steps to create a new matrix with more zeros, is eventually

invoking the fundamental principles of duality in the context of the AP.

Though we skip the convergence analysis of the Hungarian method here, we take it on face value that the method does converge in a finite number of steps.

Unbalanced AP

So far we have developed a scheme to solve balanced assignment problems, in which the number of jobs equals the number of machines; in other words, the cost matrix of the AP is square. But, as in the case of the TP, we can speak of unbalanced APs and ask how to handle them. There are two possible situations: either the number of jobs > the number of machines, or the other way round.

(i) Suppose the number of jobs > the number of machines. In this case, if C is the original cost matrix, then the number of columns > the number of rows, so we extend C to a square matrix by adding additional rows. Now, how do we fill the cost values in these additional rows? It depends. If we allow certain jobs to remain idle (undone) while still maintaining a one-to-one relation between jobs and machines, then all the new cost values are taken as zeros. If instead we allow some machines to handle more than one job, then each new cost is taken as the minimum cost among all costs in the rows of the machines allowed to do that job. Let us explain this through a small example.

Consider a cost-minimization AP with the 3×5 cost matrix

       1  2  1  2  3
  c =  3  1  2  1  3
       1  2  3  3  2

If we assume that out of the 5 jobs only 3 will get completed, one on each machine, and the remaining two remain undone, then we extend the 3×5 matrix c to a 5×5 matrix, say c_1, by appending two rows of zeros.
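As a quick cross-check (an illustrative sketch of our own, not part of the lecture), the padded problem is small enough to solve by brute force; the jobs assigned to the dummy rows are exactly the ones left undone:

```python
from itertools import permutations

# Original 3x5 cost matrix (machines x jobs) from the example above.
c = [[1, 2, 1, 2, 3],
     [3, 1, 2, 1, 3],
     [1, 2, 3, 3, 2]]

# Pad with zero rows until square: a job landing in a dummy row
# remains undone at zero cost.
n = len(c[0])
c1 = c + [[0] * n for _ in range(n - len(c))]

# Brute force over all one-to-one assignments (row i -> column p[i]).
best_cost, best_perm = min(
    (sum(c1[i][p[i]] for i in range(n)), p) for p in permutations(range(n))
)

assert best_cost == 3                    # three jobs done, each at cost 1
undone = {best_perm[3], best_perm[4]}    # jobs taken by the dummy rows
assert 4 in undone                       # job 5 is always left undone here
```

There are alternative optima (job 2 or job 4 may be the other undone job), but job 5 never fits at the optimal cost.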

        1  2  1  2  3
        3  1  2  1  3
  c_1 = 1  2  3  3  2
        0  0  0  0  0
        0  0  0  0  0

Thereafter we simply apply the Hungarian method to find an optimal assignment. In the optimal assignment we shall have exactly one encircled zero in each row and in each column. Now we look at the optimal assignments in the two dummy rows that we created (i.e., the fourth and fifth rows). Suppose we find that in the fourth row the encircled zero is, say, in the fifth column (just a hypothetical assumption for the purpose of understanding, and not necessarily the actual solution); this indicates that Job 5 remains undone. Similarly, if the optimal assignment in the fifth row is, say, in column 2, then Job 2 remains undone. In a nutshell, the optimal assignments in the additional (dummy) rows indicate which jobs remain undone (idle).

Now think of a scenario in which we want all five jobs to be completed by the initial 3 machines. Then we have to allow machines to do more than one job. Suppose we allow only Machine 1 and Machine 2 to do more than one task, but restrict Machine 3 to only one task. Then we need to extend the cost matrix c to a 5×5 matrix c_1 by creating two additional rows with entries

  c_4j = min {c_1j, c_2j},   j = 1, 2,..., 5,
  c_5j = c_4j,   j = 1,..., 5,

that is,

        1  2  1  2  3
        3  1  2  1  3
  c_1 = 1  2  3  3  2
        1  1  1  1  3
        1  1  1  1  3

These costs are decided by taking the minimum of the corresponding job costs of Machine 1 and Machine 2, and the fifth-row costs repeat the fourth because both Machine 1 and Machine 2 are allowed to do more than one job, so each can do two or perhaps even three. But since each machine has to do at least one job, no machine can do more than 3 jobs here. We solve the above balanced AP, and the optimal assignments in the dummy rows clearly

indicate which machine will do which additional job.

Let us do this for the above example. Performing the steps of the Hungarian method (we do not explain them here, as by now they are self-explanatory), we obtain

           0  1  0  1  2                0  1  0  1  1
           2  0  1  0  2                2  0  1  0  1
  c^(1) =  0  1  2  2  1   and c^(2) =  0  1  2  2  0
           0  0  0  0  2                0  0  0  0  1
           0  0  0  0  2                0  0  0  0  1

This problem has alternative optimal assignments, since many tie situations were broken arbitrarily. One such assignment encircles the zeros in cells (1,1), (2,2), (3,5), (4,3) and (5,4), and it is an optimal assignment because the number of assigned zeros equals the size of the cost matrix (namely 5). To read off the final job allocations, observe the assignment cells above.

Here the allocations of J_1, J_2, J_5 are very obvious: J_1 goes to M_1, J_2 to M_2 and J_5 to M_3. Now, for J_3 we find that the encircled zero is in the fourth row. The cost c_43 = 1 = min {c_13, c_23} = c_13; thus Job 3 goes to Machine 1 (M_1). On the other hand, the J_4 allocation is in the fifth row, and c_54 = 1 = min {c_14, c_24} = c_24; so J_4 goes to M_2. In this way we can trace the allocated cell's cost value back to the originally given machine cell costs to figure out which machine will finally do the surplus jobs.

More complicated situations can be posed for an AP with more jobs than machines: a machine may do at most r jobs, each machine must do at least one job, k jobs may be left undone, or several such combinations. The fundamental idea remains the same: create a balanced AP by extending the matrix to a square one, and then fill the new cells with cost values determined by the other specific requirements described in the situation.

The same discussion extends to the situation where the number of jobs ≤ the number of machines. Here no job will be left undone; however, certain machines have to go idle. Which machines these are can easily be determined by extending the original cost matrix c = [c_ij] to a square matrix by creating dummy job columns, assigning a cost of zero to all these newly created columns, and then simply applying the Hungarian method. Any optimal allocations in the dummy job columns tell us which machines will remain unutilized. We encourage the reader to take a small example (say 5 machines and 3 jobs) and see how the above procedure works.

Remarks: (1) One can also handle situations which demand that a specific job cannot be done on a specific machine in a balanced AP. This is simple to handle. Suppose we have a 5×5 balanced AP (if it is not balanced, we can first make it balanced). Now we put a restriction

that Job 2 (J_2) cannot be done by Machine 3 (M_3); what we do is assign, in cell (3,2), a cost c_32 = M, where M >> 0 (i.e., M is sufficiently large), so that this cell can never appear in an optimal assignment.

(2) Another type of AP problem that we can easily work out is a maximization AP. We generally convert it into a minimization AP by taking the transformation

  c'_ij = c* − c_ij,   where c* = max_{i,j} c_ij.

That is, we find the maximum c_ij value in the entire matrix and subtract each cost value from it to get a matrix C'. Note that Max AP(C) is the same problem as Min AP(C'), since the two objectives differ only by the constant n·c*. We then apply the Hungarian method to the new matrix C'; the optimal assignment X = (x_ij) is unchanged.
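Remark (2) can be checked on a small made-up matrix (a hypothetical example of our own, not from the lecture): maximizing over C and minimizing over the transformed C' select the same assignment.

```python
from itertools import permutations

# Hypothetical 3x3 profit matrix (illustrative only).
C = [[4, 1, 3], [2, 0, 5], [3, 2, 2]]
n = len(C)

cstar = max(max(row) for row in C)                        # c* = max_ij c_ij = 5
Cp = [[cstar - C[i][j] for j in range(n)] for i in range(n)]

best_max = max(permutations(range(n)),
               key=lambda p: sum(C[i][p[i]] for i in range(n)))
best_min = min(permutations(range(n)),
               key=lambda p: sum(Cp[i][p[i]] for i in range(n)))

# The two objectives differ only by the constant n * c*, so the
# optimal assignments coincide.
assert best_max == best_min == (0, 2, 1)
assert sum(C[i][best_max[i]] for i in range(n)) == 11     # 4 + 5 + 2
```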