Dantzig's pivoting rule for shortest paths, deterministic MDPs, and minimum cost to time ratio cycles


Thomas Dueholm Hansen¹   Haim Kaplan²   Uri Zwick²

¹ Department of Management Science and Engineering, Stanford University, USA.
² School of Computer Science, Tel Aviv University, Israel.

May 2014

The simplex algorithm, Dantzig (1947)

max c^T x   s.t.   Ax = b,   x ≥ 0

Linear programming: Optimize a linear objective function subject to linear constraints. Vertices (or corners) of the feasible region are basic feasible solutions. The simplex algorithm: Move from vertex to vertex along edges while improving the objective. This operation is called pivoting.
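To make the standard form concrete, here is a minimal sketch (not from the talk; the numbers are made up) that hands a tiny LP of this shape to an off-the-shelf solver. `scipy.optimize.linprog` minimizes by default, so the objective vector is negated.

```python
import numpy as np
from scipy.optimize import linprog

# Maximize c^T x subject to Ax = b, x >= 0  (illustrative data, not from the talk).
c = np.array([3.0, 2.0, 0.0])        # objective coefficients
A = np.array([[1.0, 1.0, 1.0]])      # one equality constraint: x1 + x2 + x3 = 4
b = np.array([4.0])

# linprog minimizes, so pass -c; variables are >= 0 by default.
res = linprog(-c, A_eq=A, b_eq=b, method="highs")
print("optimal vertex:", res.x, "objective value:", -res.fun)
```

The returned optimum is a basic feasible solution, i.e. one of the vertices the simplex algorithm walks between.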

Pivoting rules

Several improving pivots may be available for a given basic feasible solution. The edge to move along is then chosen by a pivoting rule.

Dantzig's pivoting rule: Repeatedly perform the improving pivot with the most negative reduced cost.

Dantzig's pivoting rule

Klee and Minty (1972): Dantzig's pivoting rule may require exponentially many steps (the Klee-Minty cube¹).

Although Dantzig's rule is exponential in the worst case, it is often efficient in practice. In this work we study Dantzig's rule when used to solve:
- Single source shortest paths
- Discounted deterministic Markov decision processes

¹ Picture from Gärtner, Henk and Ziegler (1998)

Example: Single target shortest paths

[Figure: a directed graph with a target vertex t; every non-target vertex v has supply b_v = 1, and all flow is absorbed at t.]

minimize   Σ_{(u,v)∈E} c_{u,v} x_{u,v}
s.t.       Σ_{w:(v,w)∈E} x_{v,w} − Σ_{u:(u,v)∈E} x_{u,v} = b_v    for all v ∈ V
           x_{u,v} ≥ 0                                            for all (u,v) ∈ E

Single target shortest paths

The constraints ensure flow conservation: the total flow leaving a vertex equals its supply of 1 plus the total flow entering it (illustrated on the highlighted vertex in the figure).

For a basic feasible solution, exactly one edge leaving every vertex has non-zero flow. There is a one-to-one correspondence between basic feasible solutions and shortest paths trees (or policies).

A pivot directs the flow along a different edge. An edge is an improving pivot (or improving switch) w.r.t. a policy iff it shortens the paths to the target.
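A small sketch of the correspondence (my own illustration, with a made-up policy): given a policy that fixes one outgoing edge per non-target vertex, the associated basic feasible solution puts on each tree edge the number of vertices whose path to t uses it.

```python
# A policy maps each non-target vertex to the vertex it points to (illustrative example).
policy = {1: 2, 2: 't', 3: 2, 4: 3, 5: 4, 6: 't'}

def tree_flows(policy, target='t'):
    """Flow of the basic feasible solution induced by a policy:
    every vertex ships one unit of flow to the target along its tree path."""
    flow = {}
    for v in policy:
        u = v
        while u != target:                 # follow the unique tree path from v to t
            edge = (u, policy[u])
            flow[edge] = flow.get(edge, 0) + 1
            u = policy[u]
    return flow

print(tree_flows(policy))
# Exactly one outgoing edge per vertex carries flow, and at every vertex
# the flow out equals 1 plus the flow in, as on the slide.
```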

Reduced costs

[Figure: an example shortest paths tree with the value val_π(v) of each vertex and the reduced cost of one non-tree edge shown.]

For every policy π (shortest paths tree), let val_π(v) be the length of the path from v to t in π:

(u, v) ∈ π :   val_π(u) = c_{u,v} + val_π(v)

The reduced cost of an edge (u, v) w.r.t. π is:

c^π_{u,v} := c_{u,v} + val_π(v) − val_π(u)

(u, v) is an improving switch w.r.t. π iff c^π_{u,v} < 0.
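Putting the last few slides together, a compact sketch (my own illustration, not code from the paper) of the simplex / policy-iteration view of single-target shortest paths with Dantzig's rule: compute the values of the current policy, compute reduced costs, and repeatedly perform the switch with the most negative reduced cost.

```python
def policy_values(policy, cost, target='t'):
    """val_pi(v) = length of the tree path from v to t under policy pi."""
    val = {target: 0.0}
    def value(v):
        if v not in val:
            val[v] = cost[(v, policy[v])] + value(policy[v])
        return val[v]
    for v in policy:
        value(v)
    return val

def dantzig_sssp(policy, cost, target='t'):
    """Repeatedly perform the improving switch with the most negative reduced cost."""
    while True:
        val = policy_values(policy, cost, target)
        # reduced cost of every edge w.r.t. the current policy
        reduced = {(u, v): c + val[v] - val[u] for (u, v), c in cost.items()}
        (u, v), r = min(reduced.items(), key=lambda item: item[1])
        if r >= 0:                       # no improving switch: the policy is optimal
            return policy, val
        policy[u] = v                    # Dantzig's pivot

# Illustrative instance (edge costs are made up).
cost = {(1, 2): 5, (1, 3): 1, (2, 't'): 1, (3, 2): 1, (3, 't'): 4}
policy = {1: 2, 2: 't', 3: 't'}          # initial shortest paths tree (any tree works)
print(dantzig_sssp(policy, cost))
```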

Deterministic Markov decision processes (DMDPs)

No target. Instead, we generate an infinite path and want to minimize the observed costs.

[Figure: a walk through an example graph. Observed costs: 5, 4, …, 8, 7, 8, 7, 8, 7, … — the walk eventually keeps repeating a cycle.]

Deterministic Markov decision processes (DMDPs)

The sum of the costs may diverge to +∞ or −∞. Instead we minimize the discounted sum of costs, using some discount factor γ < 1. We can also use varying discounts, in which case every edge (u, v) has its own discount factor γ_{u,v}.

Observed costs:    c_0, c_1, c_2, c_3, c_4, ...

Discounted sum:    c_0 + γ c_1 + γ² c_2 + γ³ c_3 + ... = Σ_{k≥0} γ^k c_k

Varying discounts: c_0 + γ_0 c_1 + γ_0 γ_1 c_2 + γ_0 γ_1 γ_2 c_3 + ... = Σ_{k≥0} (Π_{j=0}^{k−1} γ_j) c_k
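For a fixed policy the discounted values satisfy val(u) = c_{u,π(u)} + γ_{u,π(u)} · val(π(u)), one linear equation per state. A minimal sketch (illustrative data, uniform discount 0.9) that solves this system directly:

```python
import numpy as np

def dmdp_policy_values(policy, cost, gamma):
    """Solve val(u) = cost[u, policy[u]] + gamma[u, policy[u]] * val(policy[u])."""
    states = sorted(policy)                    # assumes states are sortable labels
    idx = {s: i for i, s in enumerate(states)}
    A = np.eye(len(states))
    b = np.zeros(len(states))
    for u in states:
        v = policy[u]
        A[idx[u], idx[v]] -= gamma[(u, v)]     # val(u) - gamma * val(v) = cost
        b[idx[u]] = cost[(u, v)]
    return dict(zip(states, np.linalg.solve(A, b)))

# Two states chasing each other around a cycle, uniform discount 0.9 (made-up numbers).
policy = {0: 1, 1: 0}
cost = {(0, 1): 8.0, (1, 0): 7.0}
gamma = {(0, 1): 0.9, (1, 0): 0.9}
print(dmdp_policy_values(policy, cost, gamma))  # val(0) = (8 + 0.9*7) / (1 - 0.81)
```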

Related work and results

The simplex algorithm with Dantzig's rule is a natural algorithm for solving the single source shortest paths (SSSP) problem; however, its complexity is not well understood.

Orlin (1985): O(mn² log n) pivots for SSSP with n vertices and m edges. The same bound can be obtained with the analysis of Post and Ye (2013).

We show: an O(mn log n) upper bound, and an Ω(n²) lower bound that holds even for graphs with m = Θ(n).

Every iteration uses O(m) time, so these bounds cannot compete with the O(mn) Bellman-Ford algorithm. However, Dantzig's rule is a much more general algorithm.

Related work and results

Bounds for the number of pivots performed by the simplex algorithm with Dantzig's rule when applied to deterministic Markov decision processes (MDPs) with n vertices (states) and m edges (actions):

Post and Ye (2013): O(m²n³ log² n) for uniform discounts; O(m³n⁵ log² n) for varying discounts.

We show: O(m²n² log² n) for uniform discounts; O(m³n⁴ log² n) for varying discounts, assuming that all discounts are at least 1 − 1/Ω(n²).

Scherrer (2013) generalized the result of Post and Ye (2013) by identifying the properties needed for the proof to work.

Related work and results

We also show that deterministic MDPs with varying discounts (tending to 1) can model the minimum cost to time ratio cycle problem. The O(m³n⁴ log² n) strongly polynomial bound for Dantzig's rule therefore also applies to this setting. The only other known strongly polynomial algorithm runs in time Õ(n³) and uses Megiddo's parametric search technique (1983).

Minimum cost-to-time ratio cycles

[Figure: a directed graph in which every edge (u, v) carries a cost c_{u,v} and a time t_{u,v}.]

Find the cycle C that minimizes the cost-to-time ratio, (Σ_{(u,v)∈C} c_{u,v}) / (Σ_{(u,v)∈C} t_{u,v}).

When t_{u,v} = 1 for all edges (u, v) we are looking for the minimum mean cost cycle. This problem is, for instance, solved as a subroutine in the min-cost flow algorithm of Goldberg and Tarjan (1989).
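For contrast with the simplex-based approach of the talk, a sketch of the classical parametric idea (Lawler-style binary search, not the method of this paper): a cycle with ratio below λ exists iff the graph has a negative cycle under edge weights c − λ·t, which Bellman-Ford detects. The instance and tolerance below are made up.

```python
def has_negative_cycle(n, edges):
    """Bellman-Ford negative-cycle detection; edges are (u, v, weight), vertices 0..n-1."""
    dist = [0.0] * n                     # as if started from a virtual source to all vertices
    for _ in range(n):
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    return any(dist[u] + w < dist[v] for u, v, w in edges)

def min_ratio_cycle_value(n, arcs, lo=0.0, hi=100.0, eps=1e-6):
    """Binary search for min over cycles of (sum of costs) / (sum of times).
    arcs are (u, v, cost, time) with time > 0; [lo, hi] must bracket the optimum."""
    while hi - lo > eps:
        lam = (lo + hi) / 2
        if has_negative_cycle(n, [(u, v, c - lam * t) for u, v, c, t in arcs]):
            hi = lam                     # some cycle has ratio < lam
        else:
            lo = lam
    return (lo + hi) / 2

# Illustrative instance: the cycle 0 -> 1 -> 0 has ratio (2 + 4) / (1 + 1) = 3.
arcs = [(0, 1, 2.0, 1.0), (1, 0, 4.0, 1.0), (1, 2, 1.0, 2.0), (2, 0, 9.0, 1.0)]
print(min_ratio_cycle_value(3, arcs))
```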

Important observation

Dantzig's rule is oblivious to a potential transformation: Let p_v be the potential of vertex v, and define new costs by

(u, v) ∈ E :   c_{u,v} := c_{u,v} + p_v − p_u

The length of any path v_0, v_1, v_2, ..., v_k is changed by

(p_{v_1} − p_{v_0}) + (p_{v_2} − p_{v_1}) + ... + (p_{v_k} − p_{v_{k−1}}) = p_{v_k} − p_{v_0}

The reduced costs remain the same after the transformation: the reduced cost c^π_{u,v} := c_{u,v} + val_π(v) − val_π(u) is the difference in length between two paths both starting at u and ending at t. The lengths of the two paths are changed by the same amount, and hence the difference remains the same.
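A quick numeric sanity check of this observation (my own illustration, reusing the value computation from the earlier sketch): transform the costs with an arbitrary potential p and verify that every reduced cost is unchanged.

```python
def policy_values(policy, cost, target='t'):
    val = {target: 0.0}
    def value(v):
        if v not in val:
            val[v] = cost[(v, policy[v])] + value(policy[v])
        return val[v]
    for v in policy:
        value(v)
    return val

def reduced_costs(policy, cost, target='t'):
    val = policy_values(policy, cost, target)
    return {(u, v): c + val[v] - val[u] for (u, v), c in cost.items()}

# Illustrative instance and an arbitrary potential p.
cost = {(1, 2): 5, (1, 3): 1, (2, 't'): 1, (3, 2): 1, (3, 't'): 4}
policy = {1: 2, 2: 't', 3: 't'}
p = {1: 10.0, 2: -3.0, 3: 7.0, 't': 0.0}
transformed = {(u, v): c + p[v] - p[u] for (u, v), c in cost.items()}

r1, r2 = reduced_costs(policy, cost), reduced_costs(policy, transformed)
print(all(abs(r1[e] - r2[e]) < 1e-9 for e in cost))   # True: reduced costs are unchanged
```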

Simplifying assumptions

For the analysis, we may transform the costs using the values of any policy π as potentials:

(u, v) ∈ E :   c_{u,v} := c_{u,v} + val_π(v) − val_π(u)

The transformed costs are exactly the reduced costs of π.

Assumption 1: Every edge (u, v) ∈ π has reduced cost 0 w.r.t. π. Hence, every vertex has an outgoing zero-cost edge, and these edges form a tree leading to the target t.

Assumption 2: If we use the final policy π generated by Dantzig's rule, then all its values are 0. Since the values decrease with every iteration, we may assume that all values are non-negative.

Post and Ye (2013)

Lemma (Post and Ye (2013)): Every O(n² log n) iterations an edge is eliminated, in the sense that it does not appear in any later policy.

Theorem (Orlin (1985), Post and Ye (2013)): Dantzig's rule terminates after at most O(mn² log n) iterations for single source shortest paths.

The eliminated edge is the edge with the most positive (transformed) cost.

Elimination criterion: Since all values are non-negative, an edge (u, v) in the current policy satisfies

(u, v) ∈ π_j :   val_{π_j}(u) = c_{u,v} + val_{π_j}(v) ≥ c_{u,v},

so once the value of u drops below c_{u,v}, the edge (u, v) cannot appear in any later policy.

Convergence

Lemma: Under Assumptions 1 and 2, suppose π_{i+1} is obtained from π_i by performing the improving switch with the most negative reduced cost. Then:

Σ_{v∈V} val_{π_{i+1}}(v) ≤ (1 − 1/n²) · Σ_{v∈V} val_{π_i}(v)

A corresponding lemma was shown by Orlin (1985) and by Post and Ye (2013).

Post and Ye (2013) use the lemma to bound the number of iterations until a single edge is eliminated. We create a tradeoff: either multiple edges are eliminated, or the convergence is faster.
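A hedged sketch of why a bound of this form holds (my reconstruction of the standard argument under Assumptions 1 and 2, not the paper's exact proof):

```latex
\paragraph{Sketch.}
Let $\delta := -\min_{(u,v)\in E} \bar c^{\,\pi_i}_{u,v} > 0$ be the magnitude of the most
negative reduced cost, and use the transformed costs of Assumptions~1 and~2
(edges of the final tree have cost $0$; all costs and values are nonnegative).

\begin{enumerate}
  \item \emph{Gain of one pivot.} Switching the edge $(u,w)$ with reduced cost $-\delta$
        sets $\mathrm{val}_{\pi_{i+1}}(u) = \mathrm{val}_{\pi_i}(u) - \delta$ and increases
        no other value, hence
        $\sum_v \mathrm{val}_{\pi_{i+1}}(v) \le \sum_v \mathrm{val}_{\pi_i}(v) - \delta$.
  \item \emph{The total value is small.} Along the zero-cost final path from $v$ to $t$,
        every edge $(x,y)$ has reduced cost
        $\mathrm{val}_{\pi_i}(y) - \mathrm{val}_{\pi_i}(x) \ge -\delta$; telescoping over
        at most $n-1$ edges gives $\mathrm{val}_{\pi_i}(v) \le (n-1)\delta$, so
        $\sum_v \mathrm{val}_{\pi_i}(v) \le n^2\delta$.
\end{enumerate}

Combining the two bounds yields
$\sum_v \mathrm{val}_{\pi_{i+1}}(v) \le \bigl(1 - \tfrac{1}{n^2}\bigr)\sum_v \mathrm{val}_{\pi_i}(v)$.
If the vertices fall into $k$ groups of nearly equal value, the path in step~2 loses value
essentially only when it crosses between groups, which is the intuition behind the factor
$1 - \tfrac{1}{kn}$ of the stronger lemma below.
```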

The benefit of few edges with large cost

[Figure: a policy tree rooted at t in which most edges have (transformed) cost 0 and only a few edges have a large cost c.]

A policy π is a tree rooted at the target t. If the cost of an edge (u, v) ∈ π is almost zero then val_π(u) ≈ val_π(v).

Stronger lemma

Lemma: Under Assumptions 1 and 2, suppose π_{i+1} is obtained from π_i by performing the improving switch with the most negative reduced cost. Assume that the vertices can be partitioned into k sets such that all vertices in the same set have almost the same value. Then:

Σ_{v∈V} val_{π_{i+1}}(v) ≤ (1 − 1/(kn)) · Σ_{v∈V} val_{π_i}(v)

We use this lemma to show that every O(kn log n) iterations, k edges are eliminated. Thus, the total number of iterations is at most O(mn log n).

Note: The number of large-cost edges in the current policy varies. The analysis is restarted when this number doubles.

Open problems

- Close the gap between the O(mn log n) upper bound and the Ω(n²) lower bound on the number of pivots performed by Dantzig's rule for single source shortest paths.
- Improve the O(m²n² log² n) and O(m³n⁴ log² n) bounds for Dantzig's rule for deterministic MDPs with uniform and varying discounts, respectively.
- Prove a strongly polynomial bound for Howard's algorithm for deterministic MDPs. This algorithm simultaneously performs the improving switch with the most negative reduced cost at every vertex. Hansen and Zwick (2010) conjectured that the number of iterations is at most m.
- Can the minimum cost to time ratio cycle problem be solved in time O(mn), improving the Õ(n³) algorithm of Megiddo (1983)?

The end

Thank you for listening!
