Mathematics for Decision Making: An Introduction. Lecture 8


Matthias Köppe, UC Davis, Mathematics. January 29, 2009.

Shortest Paths and Feasible Potentials

Feasible Potentials

Suppose for all $v \in V$ there exists some (directed) path $P_v$ from $r$ to $v$ of cost $y_v$. Suppose there is one arc $(v,w) \in A$ with $y_v + c_{v,w} < y_w$. Then we know that there is a path $P'_w = (P_v, (v,w))$ that is cheaper than $P_w$ (descent step).

In particular: if for all $v \in V$ we have that $y_v$ is the cost of a minimum-cost path $P_v$ from $r$ to $v$, then

$$y_v + c_{v,w} \ge y_w \quad \text{for all } (v,w) \in A. \tag{1}$$

We call any vector $y = (y_v)_{v \in V} \in (\mathbb{R} \cup \{+\infty\})^V$ a potential; we call it a feasible potential if $y_r = 0$ and (1) holds.

Lemma (Feasible potentials provide lower bounds). Let $y$ be a feasible potential and $P_v$ be a path from $r$ to $v$. Then $c(P_v) \ge y_v$. In particular, if $c(P_v) = y_v$, then $P_v$ is a minimum-cost path (optimality criterion).
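Condition (1) is easy to test directly on a concrete network. Below is a minimal sketch (not from the lecture) of such a check in Python; the representation of the network as a list of arc pairs with a cost dictionary, and the function name, are assumptions made for illustration.

```python
from math import inf

def is_feasible_potential(arcs, costs, y, r):
    """Check y[r] == 0 and y[v] + c[v,w] >= y[w] for every arc (v, w).

    Python's inf arithmetic matches the (R union {+inf}) convention:
    an infinite y[v] never violates (1), while a finite y[v] pointing
    at an infinite y[w] does.
    """
    if y[r] != 0:
        return False
    return all(y[v] + costs[v, w] >= y[w] for (v, w) in arcs)
```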

The Algorithm of Ford [1956]

We have discovered two ingredients of a descent algorithm:

1. A descent step that moves from one solution to a better solution.
2. An optimality criterion that tells us when to stop.

We need one more thing:

3. An initial solution: We can start from a $y$ with $y_r = 0$ and $y_v = +\infty$ for all $v \ne r$ (note this is not a feasible potential). We start with a predecessor vector $p$ with $p(r) = 0$ and $p(v) = -1$ for $v \ne r$ (to indicate we don't know any path yet).

Ford's Algorithm
Input: A digraph with arc costs, starting node $r$
Output: Shortest paths from $r$ to all other nodes

Initialize $y$ and $p$;
While $y$ is not a feasible potential:
    Find an incorrect arc $(v,w)$, i.e., one with $y_v + c_{v,w} < y_w$, and correct it by setting $y_w := y_v + c_{v,w}$, updating the predecessor information $p(w) := v$;
Reconstruct shortest paths from $p$.

Ford's Algorithm is a prototype of a label-correcting algorithm. Note that, as is, it is underspecified (not strictly an algorithm) because we do not say which incorrect arc should be taken.
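The following Python sketch makes the loop concrete. It is one possible specialization, not the lecture's definitive pseudocode: it resolves the underspecification by always correcting the first violating arc in arc-list order, uses None in place of the lecture's $p(v) = -1$, and takes an optional iteration cap so it can be run safely on inputs with negative-cost circuits. All names and the input representation are assumptions.

```python
from math import inf

def ford(nodes, arcs, costs, r, max_iter=None):
    """Ford's label-correcting method (sketch)."""
    y = {v: inf for v in nodes}   # tentative path costs; +inf = no path known
    p = {v: None for v in nodes}  # predecessor vector; None = unknown
    y[r] = 0
    iterations = 0
    while True:
        # Find an incorrect arc, i.e. one violating (1).
        bad = next(((v, w) for (v, w) in arcs
                    if y[v] + costs[v, w] < y[w]), None)
        if bad is None:
            return y, p            # y is now a feasible potential
        v, w = bad
        y[w] = y[v] + costs[v, w]  # correct the arc (descent step)
        p[w] = v                   # update predecessor information
        iterations += 1
        if max_iter is not None and iterations >= max_iter:
            return y, p            # safety cap reached (see next slide)
```

A shortest path to $v$ is then reconstructed by following $p$ backwards from $v$ until $r$ is reached.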

Ford's Algorithm and Negative-Cost Directed Circuits

On the second example of the homework, we noticed: sometimes, Ford's Algorithm does not terminate! Strictly speaking, because of this, it is not an algorithm, i.e., a finite procedure!

This is caused by a negative-cost directed circuit in the problem. Along such a circuit, the algorithm will update the potentials $y_v$, so they become smaller and smaller, without bound.

We remark that in this example, for all nodes $v$ on the directed cycle, there does not exist a least-cost directed path from $r$ to $v$! We can go through the directed cycle an arbitrary number of times, to get directed paths that have arbitrarily low (negative) cost.

By changing the input specification and just excluding the case of negative-cost directed circuits, we can make Ford's method always finite, i.e., turn it into an algorithm. We emphasize that this needs to be proved, which we do next. We also prove that it is correct, i.e., when it finishes, it has computed the desired result.
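Using the ford sketch from above, the non-termination is easy to observe on an invented three-node instance with the negative-cost circuit $b \to c \to b$ (cost $-2 + 1 = -1$ per round trip); only the safety cap stops the loop:

```python
nodes = ["r", "b", "c"]
costs = {("r", "b"): 1, ("b", "c"): -2, ("c", "b"): 1}

# Without max_iter this would loop forever, driving y["b"], y["c"]
# below the cost of any directed path.
y, p = ford(nodes, list(costs), costs, "r", max_iter=1000)
print(y)
```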

Finiteness and Correctness of Ford's Method, I

We need to prove theorems that establish properties of the contents of our data structures (here: $y$ and $p$) during the execution of the algorithm.

Proof technique

Most algorithms have loops (here: a while loop). Then proofs of the theorems will be inductive proofs, where we show that

1. a certain property holds before we enter the loop;
2. if the property holds at the beginning of one loop iteration, it also holds at the end of this loop iteration.

This implies that the property also holds when the loop is left.

Lemma (Loop invariants in Ford's Algorithm). If $(G = (V,A), c)$ has no negative-cost directed circuit, then, at the beginning and end of any iteration of the while loop in Ford's Algorithm, we have for each $v \in V$:

1. If $y_v < \infty$, then it is the cost of a simple directed path (i.e., a directed path in which no vertex is visited twice) from $r$ to $v$.
2. If $p(v) \ne -1$, then the predecessor vector $p$ recursively defines a simple directed path from $r$ to $v$ of cost at most $y_v$.

Finiteness and Correctness of Ford's Method, II

Theorem (Finiteness and Correctness). If $(G = (V,A), c)$ has no negative-cost directed circuit, then Ford's Algorithm terminates after a finite number of iterations. At termination, for each $v \in V$, $p$ defines a least-cost directed path from $r$ to $v$ of cost $y_v$.

Proof.

Finiteness: By the Loop Invariants Lemma, every finite $y_v$ is the cost of a simple directed path. There are only finitely many simple directed paths, thus only finitely many possible values for each $y_v$. In each correction step, one $y_v$ is lowered. Thus there are only finitely many steps.

Correctness: By the Loop Invariants Lemma, for each $v \in V$, the predecessor vector $p$ defines a simple directed path $P_v$ from $r$ to $v$ of cost at most $y_v$. But the loop has finished, so $y$ is a feasible potential, so by the Lower Bound Lemma, any path from $r$ to $v$ has cost at least $y_v$. So $P_v$ is optimal.

More About Feasible Potentials

Theorem (Characterization of networks with feasible potentials). $(G, c)$ has a feasible potential if and only if it has no negative-cost directed circuit. (This holds without assuming there always exists a directed path from a source!)

Proof.

If $G$ has a feasible potential $y$, then it does not have a negative-cost directed circuit: Let $v_0, v_1, \ldots, v_{k-1}, v_k = v_0$ be a directed circuit $P$. As in the proof of the Loop Invariants Lemma, adding up the inequalities $c_{v_{i-1}, v_i} \ge y_{v_i} - y_{v_{i-1}}$ along the circuit yields $c(P) \ge 0$, since the right-hand sides telescope to $0$.

Now suppose that $G$ does not have a negative-cost directed circuit: Add an artificial source vertex $r$ and zero-cost arcs $(r, v)$ for all $v \in V$, making a new graph $G'$. This graph $G'$ satisfies our assumption that there always exists a directed path from the source $r$ to any vertex. $G'$ also does not have a negative-cost directed circuit, because no arc enters $r$, so the new arcs lie on no circuit. Apply Ford's Algorithm to $G'$. By the Finiteness and Correctness Theorem, $G'$ has a feasible potential $y'$. Restricting $y'$ to the vertices of $G$ gives a feasible potential $y$ for $G$.
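The construction in the second direction is directly executable with the ford sketch from above. A minimal sketch (the fresh-object trick for the artificial source is an implementation choice, not part of the lecture):

```python
def feasible_potential(nodes, costs):
    """Compute a feasible potential of (G, c), assuming G has no
    negative-cost directed circuit, via an artificial source."""
    r = object()                         # fresh artificial source vertex
    costs2 = dict(costs)
    for v in nodes:
        costs2[r, v] = 0                 # zero-cost arcs (r, v)
    y, _ = ford(list(nodes) + [r], list(costs2), costs2, r)
    return {v: y[v] for v in nodes}      # restrict to original vertices
```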

Efficiency of Algorithms

Now that we know that it terminates (in the case of no negative-cost directed cycles), we can ask how fast (efficient) it is!

We could implement it on a computer and systematically test how long it takes for graphs of various sizes, and, for a given size, how the running time depends on the actual data. (You did this test with a TSP formulation and SCIP as a black box.)

However, in the case of shortest paths and Ford's Algorithm, we know the algorithm precisely (it's a transparent box), so we can do better: we will make mathematical statements about the running time, and prove them.

Because different computers have different speeds, we need a mathematical abstraction of running time, or even a mathematical abstraction of a machine. Theoretical Computer Science has a wealth of such abstractions. We start with a very coarse abstraction that is sufficient for analyzing Ford's Algorithm: we just count the iterations of our while loop.

Theorem (Efficiency of Ford's Algorithm). If $c$ is integer-valued and $G = (V,A)$ has no negative-cost directed circuit, then Ford's Algorithm terminates after at most $Cn^2$ steps, where $n = |V|$ and $C = m(c) = 1 + 2\bar{c}$ with $\bar{c} = \max\{|c_{a,b}| : (a,b) \in A\}$.

This is a typical statement of an efficiency result: we establish an upper bound for (some abstraction of) the running time, with a simple formula. Note we do not predict the precise running time; the algorithm could be much faster than that.

Proof. The first correction step that leads to a finite potential value $y_v$ of a vertex gives a $y_v$ that is at most $\bar{c}(n-1)$, because it is the cost of a simple directed path (with at most $n-1$ arcs!) from the root. In every later correction step, $y_v$ is reduced by at least $1$, because $c$ is integral. In the end, the potential value $y_v$ is the cost of a least-cost path; this may be negative, but we certainly have $y_v \ge -\bar{c}(n-1)$. Thus there are at most $1 + 2\bar{c}(n-1) \le Cn$ correction steps per vertex, and hence at most $Cn^2$ correction steps in total.
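For concreteness (numbers invented, not from the lecture): for a network with $n = 5$ vertices and integer costs of absolute value at most $\bar{c} = 10$, we get $C = 1 + 2 \cdot 10 = 21$, so the theorem guarantees termination within $Cn^2 = 21 \cdot 25 = 525$ correction steps.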

Efficiency of Ford's Algorithm, II

The theorem establishes that Ford's Algorithm is a pseudo-polynomial algorithm, i.e., its running time is bounded above by a polynomial expression (such as $n^k$) in the dimensions and the absolute values of its input data.

Because the bound is monotone in $C$ and $n$, it is convenient to interpret this bound as an upper bound on the running time of the worst case that can happen among all problems $(G, c)$ with $|V| \le n$ and $m(c) \le C$.

In a homework exercise, you show that there is a one-parameter family $\{(G_k = (V_k, A_k), c_k) : k \in \mathbb{N}\}$ of networks with $n_k = |V_k| = 2k + 4$ vertices and $C_k = 2^k$, such that Ford's Algorithm (with a specific, clever, evil way of choosing which incorrect arc should be corrected) takes more than $2^k$ steps. This shows that the upper bound is not too far off from the worst case.

Better efficiency classes: We are not happy with pseudo-polynomial algorithms, because for the same graph $G$, the running time might grow quickly if we just use large numbers.

Better are polynomial algorithms, where the running time is allowed to grow polynomially in the dimensions (such as $n$), but only polynomially in the number of digits needed to write down the data (such as the $c_{a,b}$).

Even better are strongly polynomial algorithms, where the worst-case running time (number of steps) is allowed to depend only on the dimensions, not on the data.

Homework 1

1. Currency arbitrages

1. Use Internet resources to learn about the notion of currency arbitrage.
2. In a nutshell: given is a matrix $R$ of currency exchange rates, i.e., for each pair $(v, w)$ of currencies, an exchange rate $r_{v,w}$, the number of units of currency $w$ that can be bought with one unit of currency $v$. Further given is a currency unit $a$ (for instance, USD); we start with 1,000 units of currency $a$ and buy and sell currencies using the given exchange rates, until we get currency $a$ back. If we make a profit, this is called an arbitrage opportunity.
3. Find a mathematical formulation for the problem of determining whether there exists a currency arbitrage, and of finding the maximum possible arbitrage. If your formulation is nonlinear, find a way to make it linear by a change of variable. Create a ZIMPL model; the exchange rate data should be read from a separate data file.
4. From Internet resources or the Wall Street Journal, obtain current exchange rates between USD, EUR, GBP, JPY, and CHF (at your choice, additional currencies). By solving your ZIMPL model with SCIP, determine whether there exists an arbitrage opportunity starting from your 1,000 US dollars, and, if so, determine the maximum possible arbitrage.
5. If you discovered an arbitrage opportunity with these data, explain whether you would be able to make a real profit from it.
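As a hint connecting this exercise to the lecture (one possible change of variable, not necessarily the intended formulation): with arc costs $c_{v,w} = -\log r_{v,w}$, a circuit of currencies whose exchange rates multiply to more than $1$ becomes exactly a negative-cost directed circuit. A tiny sketch with invented rates:

```python
from math import log

# Invented illustrative rates, not real market data.
rates = {("USD", "EUR"): 0.78, ("EUR", "GBP"): 0.90, ("GBP", "USD"): 1.45}
costs = {(v, w): -log(r) for (v, w), r in rates.items()}

circuit = [("USD", "EUR"), ("EUR", "GBP"), ("GBP", "USD")]
total = sum(costs[a] for a in circuit)
print(total < 0)  # True: 0.78 * 0.90 * 1.45 > 1, an arbitrage opportunity
```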

Homework 2, 3

2. Modeling with graphs. Back in the GPS Navigation System problem, show how to model complicated street intersections, such as a standard north/south east/west intersection where left turns are not allowed for cars coming from north or south.

3. A problem reduction. Packaged as a black box, you receive from me an algorithm that solves the following problem:

Least-cost simple directed path problem. Given a directed graph $G = (V, A)$, arc costs $c_{v,w} \in \mathbb{R}$, which are allowed to be negative, and two vertices $r, s \in V$, find a least-cost simple directed path from $r$ to $s$ in $G$.

Show that you can solve an arbitrary symmetric traveling salesman problem by making use of this algorithm.