

Non-Negative Edge Costs: the optimal choice is u ∈ U with d(u) minimal. We use a priority queue (definition on the next slides) PQ to maintain the set of pairs { (u, d(u)) ; u ∈ U } and obtain Dijkstra's Algorithm:

define a priority queue PQ for the nodes of G;
d(s) = 0; d(v) = ∞ for v ≠ s;
PQ.insert(s, 0);
while (!PQ.is_empty()) {
    select u ∈ PQ with d(u) minimal and remove it;      // delete_min
    forall edges e = (u,v) {
        if ( D = d(u) + c(e) < d(v) ) {
            if ( d(v) == ∞ ) { PQ.insert(v, D); }
            else             { PQ.decrease_p(v, D); }
            d(v) = D;
        }
    }
}

Time = O(n + m) + init + n · (is_empty + delete_min + insert) + m · decrease_p

MPI Informatik, Kurt Mehlhorn

Priority Queues: The LEDA Definition of the ADT

An instance Q of the parameterized data type p_queue<P, I> is a collection of priority queue nodes. An (abstract) pointer to a priority queue node is called a priority queue item (type pq_item). Every node contains a priority from a linearly ordered type P (function compare(const P&, const P&)) and an information from an arbitrary type I. We use <p, i> to denote a pq_item with priority p and information i.

p_queue<P, I> Q;    creates an instance Q of type p_queue<P, I> and initializes it with the empty priority queue.

P Q.prio(pq_item it)    returns the priority of item it. Precondition: it is an item in Q.

I Q.inf(pq_item it)    returns the information of item it. Precondition: it is an item in Q.

pq_item Q.insert(P x, I i)    adds a new item <x, i> to Q and returns it.

pq_item Q.find_min()    returns an item with minimal priority (nil if Q is empty).

P Q.del_min()    removes the item it = Q.find_min() from Q and returns the priority of it. Precondition: Q is not empty.

void Q.decrease_p(pq_item it, P x)    makes x the new priority of item it. Precondition: it is an item in Q and x is not larger than prio(it).

bool Q.empty()    returns true if Q is empty, false otherwise.

The Use of Priority Queues in Dijkstra's Algorithm

p_queue<int,node> PQ;
node_array<pq_item> I(G);
node v; edge e;
forall_nodes(v,G) pred[v] = nil;
dist[s] = 0;
I[s] = PQ.insert(0,s);
while (!PQ.empty()) {
    pq_item it = PQ.find_min();
    node u = PQ.inf(it);
    int du = dist[u];
    forall_out_edges(e,u) {
        v = G.target(e);
        int c = du + cost[e];
        if ( v != s && ( pred[v] == nil || c < dist[v] ) ) {
            if (pred[v] == nil) I[v] = PQ.insert(c,v);
            else                PQ.decrease_p(I[v],c);
            dist[v] = c; pred[v] = e;
        }
    }
    PQ.del_item(it);
}

Priority Queues: The List Implementation

Store the pairs in the queue in a linear list (list< pair<P, I> >); a pq_item is a pointer to a list element. The operations on priority queues are realized as follows:

init: create an empty list, time O(1).
is_empty: check whether the list is empty, time O(1).
insert(p, i): add a list element (p, i) and return a pointer to it, time O(1).
find_min(): search through the list and return a pointer to the element with the smallest priority, time O(size).
decrease_p(it, p): decrease the priority in the list element pointed to by it, time O(1).
del_item(it): delete the list element pointed to by it, time O(1).

Theorem 1. With the list implementation of priority queues, Dijkstra's algorithm runs in time O(m + n²). (good for almost complete graphs, m ≈ n²)

Priority Queues: The Heap Implementation

Theorem 2. With the heap implementation of priority queues, Dijkstra's algorithm runs in time O(m log n + n log n).

Priority Queues: Radix Heaps

Theorem 3. With the radix heap implementation of priority queues, Dijkstra's algorithm runs in time O(m + n log C). This assumes that edge costs are integers in the range [0 .. C].

Priority Queues: Fibonacci Heaps

Theorem 4. With the Fibonacci heap implementation of priority queues, Dijkstra's algorithm runs in time O(m + n log n).

Further Priority Queue Implementations; Amortized Time Bounds: General Results

Shortest Paths, Arbitrary Edge Costs

Theorem 5 (Bellman-Ford algorithm). The Bellman-Ford algorithm solves the shortest path problem in time O(nm).

We use the generic algorithm and relax the edges in round-robin fashion:

d(s) = 0; d(v) = ∞ for v ≠ s;
do ( n times ) {
    forall edges e = (u,v) ∈ E in some arbitrary order {
        if ( d(u) + c(e) < d(v) ) d(v) = d(u) + c(e);
    }
    stop, if no distance label changed;
}
set d(x) to −∞ for all x that can be reached from the target node of a red edge;

If µ(v) > −∞ and there is a shortest path from s to v consisting of k edges, then d(v) = µ(v) after the k-th iteration of the outer loop. Observe k < n. The running time is O(nm). If there is no negative cycle, it is O((1 + k_max) · m); if there is a negative cycle, it is Θ(nm).

Correctness in the presence of negative cycles: see next slide.

Correctness of Bellman-Ford

If µ(v) > −∞ and there is a shortest path from s to v consisting of k edges, then d(v) = µ(v) after the k-th iteration of the outer loop. Observe k < n. Thus d(v) = µ(v) after termination of the do-loop if µ(v) ∈ ℝ ∪ {+∞}.

Now assume µ(v) = −∞. Then there is a path from s to v containing a negative cycle. At least one edge of this path is red at all times and hence d(v) is set to −∞.

Conversely, if d(v) is set to −∞, there is an edge (x, y) which is red after termination of the do-loop and such that v is reachable from y. The red edge allows us to decrease d(y) further and hence d(y) > µ(y) when the do-loop terminates. Thus µ(y) = −∞ by the first paragraph. Since v is reachable from y, we also have µ(v) = −∞.

Shortest Paths, Arbitrary Edge Costs

Alternative implementation of Bellman-Ford: We use the revised generic algorithm and avoid relaxing edges that are known to be black.

d(s) = 0; d(v) = ∞ for v ≠ s; U = {s}; U_next = ∅;
do ( n times ) {
    forall u ∈ U and all edges e = (u,v) ∈ E {
        if (d(u) + c(e) < d(v)) { d(v) = d(u) + c(e); add v to U_next; }
    }
    if ( U_next == ∅ ) { break; } else { U = U_next; U_next = ∅; }
}
set d(x) to −∞ for all x that can be reached from the target node of a red edge;

The worst-case performance is the same, but the practical performance is much better, e.g., for n = 10000, m = 40000, a random graph with random positive edge weights:

algorithm above: 0.07 seconds
preceding algorithm: 0.25 seconds

Perform your own experiments with shortest_path_time.c.

Early Detection of Negative Cycles and Refined U

We assume µ(v) < +∞ for simplicity (see the book for the general case). We maintain a rooted tree T with root s (initially T = {s}):

for every node v ∈ T: the tree path from s to v has cost d(v);
for every node v ∈ T: µ(v) ≤ d(v).

U is organized as a queue; only leaves of T are in U. When an edge e = (u,v) is relaxed and d(v) is decreased to d(u) + c(e):

v ∉ T: make v a child of u in T and add v to U.
v ∈ T and v is not an ancestor of u: make v a child of u and add v to U. Remove all descendants of v from T and U.
v ∈ T and v is an ancestor of u: we have discovered a negative cycle, and all nodes reachable from v have distance −∞.

Correctness: it suffices to keep those nodes u in U which have an outgoing red edge and satisfy d(u) = µ(u).

Random Graphs with Non-Negative Edge Weights

We generate m random edges (here m = 8n) and assign a random integer weight in [0 .. C] for some C (here C = 1000). BF Basic = basic version of Bellman-Ford; Bellman-Ford = early detection of negative cycles and refined treatment of U.

n        BF Basic   Bellman-Ford   Dijkstra   Checking
10000    0.3        0.57           0.22       0.31
20000    0.69       1.36           0.57       0.69
40000    1.98       3.59           1.47       1.69

The running time seems to grow only slightly superlinearly. k_max is small for random graphs with random edge weights. Challenge: prove it or do a serious experimental study. For complete graphs it is known that k_max = O(log n) (Cooper/Frieze/Mehlhorn/Priebe: Random Structures and Algorithms, 2000, 33-46). In his PhD thesis, Priebe extends the result to random graphs with Ω(n log n) edges.

Graphs with Negative Edges but no Negative Cycles

We generate a (random) graph, assign a (random) non-negative (!!!) weight w(e) to every edge, assign a (random) potential pot(v) to every node, and set the cost of e = (u,v) according to: c(e) = pot(u) + w(e) − pot(v).

Lemma 1. This rule does not generate negative cycles.
Proof: The cost of a cycle with respect to c is the same as with respect to w.

n        BF Basic   Bellman-Ford   Dijkstra   Checking
10000    0.3        0.63           n/a        0.3
20000    0.81       1.63           n/a        0.7
40000    2.02       3.72           n/a        1.68

Running times are as on the preceding slide. Dijkstra is not applicable.

Random Graphs with Random Edge Weights

m = 8n, random edge weights in [−100 .. 100]. Such graphs are likely to have negative cycles.

n       BF Basic   Bellman-Ford   Dijkstra   Checking
2000    20.2       0.08           n/a        0.03
4000    73.15      0.17           n/a        0.08
8000    462.5      0.54           n/a        0.18

BF Basic has running time Θ(nm), i.e., doubling n approximately quadruples the running time (since m = 8n). The refined version discovers negative cycles early and is much faster.

A Worst Case Input for Bellman-Ford

We want many nodes to be dequeued Θ(n) times. Bellman-Ford does essentially breadth-first search, i.e., it discovers paths in the order of their cardinality. So if paths of larger cardinality are shorter, nodes will be queued and dequeued many times (give edges length −1).

Our graph consists of two parts, joined at a node v:

The left part is acyclic, has L = 2^l nodes and < 2L edges, all edges have cost −1, and there is a path of cardinality r from s to v for every r, 1 ≤ r < L.
The right part is a complete graph on K nodes; all K² edges have cost zero.

v is queued L − 1 times. Whenever v is dequeued, all nodes in the right part will be queued and in the next round dequeued. The running time is Θ(LK²). Choose K ≈ √m and L ≈ n. Then we have O(n) nodes and O(n + m) edges.

Worst Case Example: Recursive Construction of the Left Part

(figure: the recursive construction of G_{l+1} from G_l)

G_l has L = 2^l nodes and m_l = m_{l−1} + L/2 + 2 ≤ 2^{l+1} edges.

Claim: For any r, 1 ≤ r < L, there is exactly one path from the first to the last node consisting of r edges.

Proof:
r = 1: jump from s to the last node of G_{l−1}.
r = 1 + r′, where 1 ≤ r′ < L/2: jump from s to the first node of G_{l−1} and use induction.
r = L/2 + r′, where 1 ≤ r′ < L/2: walk slowly from s to the first node of G_{l−1} and use induction.

Worst Case Example: Experiments

We have made the edges in the first part non-negative: (i, j) has cost j − i − 1. A path of cardinality k from node 0 to node L − 1 then has cost L − 1 − k, and hence longer paths are favored. m = 8n.

n       BF Basic   Bellman-Ford   Dijkstra   Checking
4000    7.52       0.42           0.01001    0.04999
8000    30.66      1.17           0.04004    0.07996
16000   131.5      3.24           0.07001    0.19

BF Basic has running time Θ(nm), i.e., doubling n approximately quadruples the running time (since m = 8n). The refined version has running time Θ(LK) = Θ(n√m), since the nodes in the complete graph are removed from the queue without scanning their outgoing edges; the running time is multiplied by about 2√2 ≈ 2.8 when n is doubled. Dijkstra is much, much faster.

All Pairs Shortest Paths, Potential Functions

Naive approach: solve n single-source problems, time O(n²m).
Refined approach: one general single-source problem plus n non-negative single-source problems, time O(nm + n(m + n log n)) = O(nm + n² log n).

A node potential assigns a number pot(v) to each vertex v. For an edge e = (v, w), define the reduced cost as: c̄(e) = pot(v) + c(e) − pot(w).

Consider a path p = [e_0, ..., e_{k−1}] and let e_i = (v_i, v_{i+1}). Then

c̄(p) = Σ_{0 ≤ i < k} c̄(e_i) = Σ_{0 ≤ i < k} (pot(v_i) + c(e_i) − pot(v_{i+1}))
      = pot(v_0) + Σ_{0 ≤ i < k} c(e_i) − pot(v_k) = pot(v_0) + c(p) − pot(v_k).

Consequence: if p and q are two paths from v to w, then c̄(p) ≤ c̄(q) iff c(p) ≤ c(q). Shortest paths with respect to c̄ are the same as with respect to c.

All Pairs Shortest Paths: Algorithm

Assumptions: G has no negative cycles. (MPSS have recently generalized)

A particularly nice potential function: let pot(v) = µ(s, v) and assume that all nodes can be reached from s. Then µ(s, v) is finite for all v and hence the reduced edge costs are well defined. Reduced costs are non-negative: for e = (v, w), we have µ(s, v) + c(e) ≥ µ(s, w) and hence c̄(e) = µ(s, v) + c(e) − µ(s, w) ≥ 0.

Algorithm:

Add a new node s and edges (s, v) of length zero for all v (no new cycles!!), and compute µ(s, v) for all v with Bellman-Ford: time O(nm).
Set pot(v) = µ(s, v) and compute the reduced costs: time O(m).
For every node x, solve the single-source problem with source x and reduced edge costs (Dijkstra): time O(n(m + n log n)).
Translate the distances back to the original cost function: µ(v, w) = µ̄(v, w) + pot(w) − pot(v): time O(m).