Data Structures and Algorithms. Xiaoqing Zheng, zhengxq@fudan.edu.cn



Representations of undirected graph. Adjacency-list representation of graph G = (V, E): Adj[1] = 2, 5; Adj[2] = 1, 3, 4; Adj[3] = 2, 4; Adj[4] = 2, 5, 3; Adj[5] = 4, 1.

Representations of undirected graph. Adjacency-matrix representation of graph G = (V, E):

     1 2 3 4 5
  1  0 1 0 0 1
  2  1 0 1 1 0
  3  0 1 0 1 0
  4  0 1 1 0 1
  5  1 0 0 1 0

Representations of directed graph. Adjacency-list representation of graph G = (V, E): Adj[1] = 2, 4; Adj[2] = 5; Adj[3] = 6, 5; Adj[4] = 2; Adj[5] = 4; Adj[6] = 6.

Representations of directed graph. Adjacency-matrix representation of graph G = (V, E):

     1 2 3 4 5 6
  1  0 1 0 1 0 0
  2  0 0 0 0 1 0
  3  0 0 0 0 1 1
  4  0 1 0 0 0 0
  5  0 0 0 1 0 0
  6  0 0 0 0 0 1

Graphs. Definition. A directed graph (digraph) G = (V, E) is an ordered pair consisting of a set V of vertices (singular: vertex) and a set E ⊆ V × V of edges. In an undirected graph G = (V, E), the edge set E consists of unordered pairs of vertices. In either case, we have |E| = O(|V|²). Moreover, if G is connected, then |E| ≥ |V| − 1.

Representations of graph. Adjacency-list representation: an adjacency list of a vertex v ∈ V is the list Adj[v] of vertices adjacent to v. For undirected graphs, |Adj[v]| = degree(v); for directed graphs, |Adj[v]| = out-degree(v). Adjacency-matrix representation: the adjacency matrix of a graph G = (V, E), where V = {1, 2, …, n}, is the matrix A[1..n, 1..n] given by A[i, j] = 1 if (i, j) ∈ E, and A[i, j] = 0 if (i, j) ∉ E.
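As a concrete sketch, both representations can be built for the five-vertex undirected example above (the edge set is an assumption reconstructed from the adjacency-matrix slide):

```python
# Edges of the five-vertex undirected example graph (assumed from the slides).
edges = [(1, 2), (1, 5), (2, 3), (2, 4), (3, 4), (4, 5)]
n = 5

# Adjacency-list representation: Theta(V + E) space.
adj = {v: [] for v in range(1, n + 1)}
for u, v in edges:
    adj[u].append(v)
    adj[v].append(u)   # undirected: each edge appears in both lists

# Adjacency-matrix representation: Theta(V^2) space.
A = [[0] * (n + 1) for _ in range(n + 1)]
for u, v in edges:
    A[u][v] = A[v][u] = 1
```

Note that |Adj[v]| recovers degree(v); for instance vertex 2 has the three neighbors 1, 3, 4.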

Breadth-first search. Given a graph G = (V, E) and a distinguished source vertex s, breadth-first search systematically explores the edges of G to "discover" every vertex that is reachable from s. [Example graph on vertices r, s, t, u, v, w, x, y, with d[s] = 0.]

Breadth-first search example: starting from s (d[s] = 0), the gray vertices are exactly those currently in the queue Q. The queue evolves as ⟨s⟩, ⟨w, r⟩, ⟨r, t, x⟩, ⟨t, x, v⟩, ⟨x, v, u⟩, ⟨v, u, y⟩, ⟨u, y⟩, ⟨y⟩, ⟨⟩, assigning d[w] = d[r] = 1, d[t] = d[x] = d[v] = 2, and d[u] = d[y] = 3. (Why is x not enqueued again when t is processed? Because x is already gray, i.e. discovered.)

Breadth-first search algorithm
BFS(G, s)
1. for each vertex u ∈ V[G] − {s}
2.    do color[u] ← WHITE
3.       d[u] ← ∞
4.       π[u] ← NIL
5. color[s] ← GRAY
6. d[s] ← 0
7. π[s] ← NIL
8. Q ← ∅
9. ENQUEUE(Q, s)

Breadth-first search algorithm
BFS(G, s)
10. while Q ≠ ∅                        (O(V) iterations)
11.    do u ← DEQUEUE(Q)
12.       for each v ∈ Adj[u]          (O(E) edge examinations in total)
13.          do if color[v] = WHITE
14.             then color[v] ← GRAY
15.                  d[v] ← d[u] + 1
16.                  π[v] ← u
17.                  ENQUEUE(Q, v)
18.       color[u] ← BLACK
Running time is O(V + E).
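The procedure translates directly into Python. A minimal runnable sketch (the adjacency lists below are an assumption reconstructing the eight-vertex r..y example graph):

```python
from collections import deque

def bfs(adj, s):
    """Breadth-first search from s: returns distances d and predecessors pi."""
    d, pi = {s: 0}, {s: None}
    q = deque([s])                 # gray vertices live in the queue
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in d:         # WHITE (undiscovered) vertex
                d[v] = d[u] + 1
                pi[v] = u
                q.append(v)
    return d, pi                   # runs in O(V + E)

# Assumed edges for the r..y example graph.
adj = {
    'r': ['s', 'v'], 's': ['r', 'w'], 't': ['u', 'w', 'x'],
    'u': ['t', 'x', 'y'], 'v': ['r'], 'w': ['s', 't', 'x'],
    'x': ['t', 'u', 'w', 'y'], 'y': ['u', 'x'],
}
d, pi = bfs(adj, 's')
```

Keeping discovery in a dictionary replaces the explicit WHITE/GRAY/BLACK coloring: a vertex is white until it appears in `d`.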

Shortest paths
PRINT-PATH(G, s, v)
1. if v = s
2.    then print s
3.    else if π[v] = NIL
4.         then print "no path from" s "to" v "exists."
5.         else PRINT-PATH(G, s, π[v])
6.              print v

Predecessor subgraph of BFS [figure: the BFS tree rooted at s]. Predecessor subgraph of BFS is G_π = (V_π, E_π), where V_π = { v ∈ V : π[v] ≠ NIL } ∪ {s} and E_π = { (π[v], v) : v ∈ V_π − {s} }.

Depth-first search. Given a graph G = (V, E), depth-first search searches deeper in the graph whenever possible: edges are explored out of the most recently discovered vertex v that still has unexplored edges leaving it. [Example digraph on vertices u, v, w, x, y, z.]

Depth-first search example: starting from u, the stack S grows u, v, y, x as each vertex is discovered (discovery times 1, 2, 3, 4), then unwinds as each vertex finishes (x: 4/5, y: 3/6, v: 2/7, u: 1/8); the search then restarts at w (9/12) and discovers z (10/11). Along the way, (x, v) is labeled a back edge (B), (u, x) a forward edge (F), (w, y) a cross edge (C), and the self-loop (z, z) a back edge (B).

Depth-first search algorithm
DFS(G)
1. for each vertex u ∈ V[G]          (O(V) times)
2.    do color[u] ← WHITE
3.       π[u] ← NIL
4. time ← 0
5. for each vertex u ∈ V[G]
6.    do if color[u] = WHITE
7.       then DFS-VISIT(u)

Depth-first search algorithm
DFS-VISIT(u)
1. color[u] ← GRAY
2. time ← time + 1
3. d[u] ← time
4. for each vertex v ∈ Adj[u]        (O(E) edge examinations in total)
5.    do if color[v] = WHITE
6.       then π[v] ← u
7.            DFS-VISIT(v)
8. color[u] ← BLACK
9. time ← time + 1
10. f[u] ← time
Running time is O(V + E).
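The same procedure as runnable Python, recording discovery/finishing times (the u..z digraph is an assumption reconstructing the example slides):

```python
def dfs(adj):
    """DFS over the whole graph, recording discovery time d, finishing
    time f, and predecessor pi for every vertex."""
    color = {u: 'WHITE' for u in adj}
    d, f = {}, {}
    pi = {u: None for u in adj}
    time = [0]                       # mutable counter shared by all visits

    def visit(u):
        color[u] = 'GRAY'
        time[0] += 1
        d[u] = time[0]
        for v in adj[u]:
            if color[v] == 'WHITE':
                pi[v] = u
                visit(v)
        color[u] = 'BLACK'
        time[0] += 1
        f[u] = time[0]

    for u in adj:                    # restart from every undiscovered vertex
        if color[u] == 'WHITE':
            visit(u)
    return d, f, pi

# Assumed edges for the six-vertex example digraph.
adj = {'u': ['v', 'x'], 'v': ['y'], 'w': ['y', 'z'],
       'x': ['v'], 'y': ['x'], 'z': ['z']}
d, f, pi = dfs(adj)
```

On this graph the run reproduces the timestamps of the example: u 1/8, v 2/7, y 3/6, x 4/5, w 9/12, z 10/11.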

Predecessor subgraph of DFS [figure: the u..z example with tree, back (B), forward (F), and cross (C) edges]. Predecessor subgraph of DFS is G_π = (V, E_π), where E_π = { (π[v], v) : v ∈ V and π[v] ≠ NIL }.

Predecessor subgraph of DFS. Each edge (u, v) can be classified by the color of the vertex v that is reached when the edge is first explored: WHITE indicates a tree edge; GRAY indicates a back edge; BLACK indicates a forward edge (if d[u] < d[v]) or a cross edge (if d[u] > d[v]).

Precedence among events Underwear Socks Watch Pants Shoes Shirt Belt Tie Jacket

Precedence among events [figure: the getting-dressed dag with DFS discovery/finishing times: Underwear 11/16, Socks 17/18, Watch 9/10, Pants 12/15, Shoes 13/14, Shirt 1/8, Belt 6/7, Tie 2/5, Jacket 3/4]. Call the depth-first search algorithm.

Precedence among events: sort by finishing time (decreasing): Socks 18, Underwear 16, Pants 15, Shoes 14, Watch 10, Shirt 8, Belt 7, Tie 5, Jacket 4.

Curriculum Java or C++ Data Structure and Algorithm Object Oriented Programming Software Engineering Web Application Database Computer Systems Internship Thesis Computer Architecture Calculus ALL COURSES Project Management Intelligent Systems Computer Network Probability and Statistics Discrete Mathematics

Topological sort TOPOLOGICAL-SORT(G) 1. call DFS(G) to compute finishing times f[v] for each vertex v. 2. as each vertex is finished, insert it onto the front of a linked list. 3. return the linked list of vertices.

Topological sort. A topological sort of a directed acyclic graph or "dag" G = (V, E) is a linear ordering of all its vertices such that if G contains an edge (u, v), then u appears before v in the ordering. A topological sort of a graph can thus be viewed as a linear ordering of its vertices. Topological sorting is different from the usual kind of "sorting" studied before. If the graph is not acyclic, then no linear ordering is possible.
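The DFS-based procedure above can be sketched in Python; the getting-dressed edges used below are an assumption for illustration:

```python
def topological_sort(adj):
    """Return a topological ordering: the reverse of DFS finishing order."""
    visited, order = set(), []

    def visit(u):
        visited.add(u)
        for v in adj.get(u, ()):
            if v not in visited:
                visit(v)
        order.append(u)            # u finished: all its descendants are done

    for u in adj:
        if u not in visited:
            visit(u)
    return order[::-1]             # front of the "linked list" comes first

# Assumed precedence edges for the dressing example.
adj = {
    'underwear': ['pants', 'shoes'], 'pants': ['shoes', 'belt'],
    'socks': ['shoes'], 'shirt': ['belt', 'tie'], 'belt': ['jacket'],
    'tie': ['jacket'], 'watch': [], 'shoes': [], 'jacket': [],
}
order = topological_sort(adj)
```

Appending each vertex as it finishes and reversing at the end is equivalent to inserting each finished vertex at the front of a linked list.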

Disjoint sets (computing connected components of the graph on vertices a, b, …, j): start with the singleton sets {a}, {b}, …, {j}; processing the edges (b, d), (e, g), (a, c), (h, i), (a, b), (e, f), (b, c) in turn unions the sets containing each edge's endpoints, ending with the collection {a, b, c, d}, {e, f, g}, {h, i}, {j}. (The edge (b, c) causes no union, since b and c are already in the same set.)

Disjoint set operations. MAKE-SET(x) creates a new set whose only member is x. UNION(x, y) unites the dynamic sets that contain x and y, say S_x and S_y, into a new set that is the union of these two sets; the two sets are assumed to be disjoint prior to the operation. FIND-SET(x) returns a pointer to the representative of the unique set containing x.

Linked list representation of disjoint sets: the sets are the lists a → b → c → d, h → i, e → f → g, and j. UNION(a, e) appends one list to the other, giving a → b → c → d → e → f → g.

Analysis of linked list representation. The linked-list implementation of the UNION operation requires an average of Θ(n) time per call, because we may be appending a longer list onto a shorter one. Weighted-union heuristic: suppose that each list also includes the length of the list, and that we always append the smaller list onto the longer. Then a sequence of m MAKE-SET, UNION, and FIND-SET operations, n of which are MAKE-SET operations, takes O(m + n lg n) time.

Disjoint set forests: each set is a rooted tree whose root is the representative; UNION(e, h) links the root of one tree under the root of the other. Union by rank: make the root with smaller rank point to the root with larger rank. Path compression: FIND-SET(b) makes every node on the find path point directly to the root.

Disjoint set forests
MAKE-SET(x)
1. p[x] ← x
2. rank[x] ← 0
FIND-SET(x)
1. if x ≠ p[x]
2.    then p[x] ← FIND-SET(p[x])
3. return p[x]
UNION(x, y)
1. LINK(FIND-SET(x), FIND-SET(y))
LINK(x, y)
1. if rank[x] > rank[y]
2.    then p[y] ← x
3.    else p[x] ← y
4.         if rank[x] = rank[y]
5.            then rank[y] ← rank[y] + 1
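The forest with both heuristics fits in a few lines of Python; the connected-components run below reuses the edges from the earlier slides:

```python
class DisjointSets:
    """Disjoint-set forest with union by rank and path compression."""

    def __init__(self):
        self.p = {}
        self.rank = {}

    def make_set(self, x):
        self.p[x] = x
        self.rank[x] = 0

    def find_set(self, x):
        if self.p[x] != x:
            self.p[x] = self.find_set(self.p[x])   # path compression
        return self.p[x]

    def union(self, x, y):
        x, y = self.find_set(x), self.find_set(y)
        if x == y:
            return                                  # already in the same set
        if self.rank[x] > self.rank[y]:             # union by rank (LINK)
            self.p[y] = x
        else:
            self.p[x] = y
            if self.rank[x] == self.rank[y]:
                self.rank[y] += 1

# Connected-components example from the earlier slides.
ds = DisjointSets()
for x in 'abcdefghij':
    ds.make_set(x)
for u, v in [('b', 'd'), ('e', 'g'), ('a', 'c'), ('h', 'i'),
             ('a', 'b'), ('e', 'f'), ('b', 'c')]:
    ds.union(u, v)
```

Two vertices are in the same component exactly when FIND-SET returns the same representative for both.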

Union by rank and path compression. When we use both union by rank and path compression, the worst-case running time is O(m α(n)), where α(n) is a very slowly growing function: α(n) = 0 for 0 ≤ n ≤ 2; 1 for n = 3; 2 for 4 ≤ n ≤ 7; 3 for 8 ≤ n ≤ 2047; 4 for 2048 ≤ n ≤ A₄(1), where A₄(1) ≫ 10^80. Here A_k(j) = j + 1 if k = 0, and A_k(j) = A_{k−1}^{(j+1)}(j) if k ≥ 1.

Minimum spanning tree [figure: the nine-vertex example graph a, b, …, i with edge weights 1, 2, 2, 4, 4, 6, 7, 7, 8, 8, 9, 10, 11, 14]. The highlighted spanning tree has total weight = 1 + 2 + 2 + 4 + 4 + 7 + 8 + 9 = 37.

Minimum spanning tree. Input: a connected, undirected graph G = (V, E) with weight function w: E → ℝ. Output: a spanning tree T (a tree that connects all vertices) of minimum weight: w(T) = Σ_{(u,v)∈T} w(u, v).

Destroy cycles: starting from the full nine-vertex example graph, repeatedly pick a cycle and delete its heaviest edge; when no cycle remains, the surviving edges form a spanning tree of total weight 1 + 2 + 2 + 4 + 4 + 7 + 8 + 9 = 37. Avoid cycles: equivalently, build the tree up by considering edges in order of increasing weight and adding an edge only if it does not create a cycle; the result has the same total weight 37.

Kruskal's algorithm
MST-KRUSKAL(G, w)
1. A ← ∅
2. for each vertex v ∈ V[G]
3.    do MAKE-SET(v)
4. sort the edges of E into nondecreasing order by weight w
5. for each edge (u, v) ∈ E, taken in nondecreasing order by weight
6.    do if FIND-SET(u) ≠ FIND-SET(v)
7.       then A ← A ∪ {(u, v)}
8.            UNION(u, v)
9. return A
Running time is O(E lg E).
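A runnable sketch of MST-KRUSKAL with a minimal inline union-find; the edge weights below are an assumption reconstructing the nine-vertex example graph (its MST weight is 37):

```python
def kruskal(vertices, weighted_edges):
    """Kruskal's MST. weighted_edges: list of (w, u, v). Returns (mst, total)."""
    parent = {v: v for v in vertices}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    mst, total = [], 0
    for w, u, v in sorted(weighted_edges):  # nondecreasing weight order
        ru, rv = find(u), find(v)
        if ru != rv:                        # different trees: edge is safe
            parent[ru] = rv
            mst.append((u, v, w))
            total += w
    return mst, total

# Assumed weights for the nine-vertex example graph.
edges = [(4, 'a', 'b'), (8, 'a', 'h'), (8, 'b', 'c'), (11, 'b', 'h'),
         (7, 'c', 'd'), (4, 'c', 'f'), (2, 'c', 'i'), (9, 'd', 'e'),
         (14, 'd', 'f'), (10, 'e', 'f'), (2, 'f', 'g'), (1, 'g', 'h'),
         (6, 'g', 'i'), (7, 'h', 'i')]
mst, total = kruskal('abcdefghi', edges)
```

With 9 vertices the tree has 8 edges, and on these weights the total matches the 37 computed on the earlier slides.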

Optimal substructure Minimum spanning tree T of G = (V, E).

Optimal substructure Minimum spanning tree T of G = (V, E). u Remove any edge (u, v) T. v

Optimal substructure. Minimum spanning tree T of G = (V, E). Remove any edge (u, v) ∈ T. Then T is partitioned into two subtrees T₁ and T₂. Theorem. The subtree T₁ is an MST of G₁ = (V₁, E₁), the subgraph of G induced by the vertices of T₁: V₁ = vertices of T₁, E₁ = { (x, y) ∈ E : x, y ∈ V₁ }. Similarly for T₂.

Proof of optimal substructure. Proof. Cut and paste: w(T) = w(u, v) + w(T₁) + w(T₂). If T₁′ were a lower-weight spanning tree than T₁ for G₁, then T′ = {(u, v)} ∪ T₁′ ∪ T₂ would be a lower-weight spanning tree than T for G. Do we also have overlapping subproblems? Yes! Great, then dynamic programming may work! But minimum spanning tree exhibits another powerful property which leads to an even more efficient algorithm.

Hallmark for "greedy" algorithms. Greedy-choice property: a locally optimal choice is globally optimal. Theorem. Let T be the minimum spanning tree of G = (V, E), and let A ⊆ V. Suppose that (u, v) ∈ E is the least-weight edge connecting A to V − A. Then (u, v) ∈ T.

Proof of theorem. Proof. Suppose (u, v) ∉ T. Cut and paste. Here (u, v) is the least-weight edge connecting A to V − A. Consider the unique simple path from u to v in T. Swap (u, v) with the first edge on this path that connects a vertex in A to a vertex in V − A. A lighter-weight spanning tree than T results, contradicting the minimality of T.

Example of Prim's algorithm: starting from a root vertex on the nine-vertex example graph, the tree A grows one vertex per step; at each step the least-weight edge crossing from A to V − A is added, until all nine vertices are in the tree and the total weight is 37 as before.

Prim's algorithm. IDEA: maintain V − A as a priority queue Q. Key each vertex in Q with the weight of the least-weight edge connecting it to a vertex in A.
MST-PRIM(G, w, r)
1. for each u ∈ V[G]
2.    do key[u] ← ∞
3.       π[u] ← NIL
4. key[r] ← 0
5. Q ← V[G]
6. while Q ≠ ∅
7.    do u ← EXTRACT-MIN(Q)
8.       for each v ∈ Adj[u]
9.          do if v ∈ Q and w(u, v) < key[v]
10.            then π[v] ← u
11.                 key[v] ← w(u, v)
At the end, the minimum spanning tree A for G is A = { (v, π[v]) : v ∈ V − {r} }.
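A sketch of MST-PRIM using Python's binary heap; since `heapq` has no DECREASE-KEY, stale entries are simply re-pushed and skipped (a common lazy-deletion substitute). The graph weights are the same assumed nine-vertex example:

```python
import heapq

def prim(adj, r):
    """Prim's MST with a binary heap; adj[u] = [(v, w), ...]."""
    key = {u: float('inf') for u in adj}
    pi = {u: None for u in adj}
    key[r] = 0
    in_tree = set()                       # the set A
    pq = [(0, r)]
    while pq:
        k, u = heapq.heappop(pq)          # EXTRACT-MIN
        if u in in_tree:
            continue                      # stale heap entry: skip
        in_tree.add(u)
        for v, w in adj[u]:
            if v not in in_tree and w < key[v]:
                key[v] = w                # implicit DECREASE-KEY via re-push
                pi[v] = u
                heapq.heappush(pq, (w, v))
    return pi, sum(key.values())          # key[r] = 0, so this is w(T)

# Assumed weights for the nine-vertex example graph.
edges = [('a', 'b', 4), ('a', 'h', 8), ('b', 'c', 8), ('b', 'h', 11),
         ('c', 'd', 7), ('c', 'f', 4), ('c', 'i', 2), ('d', 'e', 9),
         ('d', 'f', 14), ('e', 'f', 10), ('f', 'g', 2), ('g', 'h', 1),
         ('g', 'i', 6), ('h', 'i', 7)]
adj = {v: [] for v in 'abcdefghi'}
for u, v, w in edges:
    adj[u].append((v, w))
    adj[v].append((u, w))

pi, total = prim(adj, 'a')
```

On these weights Prim reaches the same MST total, 37, as Kruskal, since the MST weight of a graph is unique.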

Analysis of Prim algorithm: lines 1-5 of MST-PRIM take Θ(V) time in total. The while loop runs |V| times, and the inner for loop runs degree(u) times per iteration, giving Θ(E) implicit DECREASE-KEYs overall. Time = Θ(V)·T_EXTRACT-MIN + Θ(E)·T_DECREASE-KEY.

Analysis of Prim algorithm. Time = Θ(V)·T_EXTRACT-MIN + Θ(E)·T_DECREASE-KEY. With Q as an array: EXTRACT-MIN O(V), DECREASE-KEY O(1), total O(V²). With Q as a binary heap: EXTRACT-MIN O(lg V), DECREASE-KEY O(lg V), total O(E lg V). Running time of Prim's algorithm is O(E lg V).

Pudong Shanghai

Shortest paths [figure: digraph on vertices s, t, x, y, z with edge weights w(s, t) = 10, w(s, y) = 5, w(t, x) = 1, w(t, y) = 2, w(y, t) = 3, w(y, x) = 9, w(y, z) = 2, w(x, z) = 4, w(z, x) = 6, w(z, s) = 7].

Shortest paths problem. Consider a directed graph G = (V, E) with weight function w: E → ℝ mapping edges to real-valued weights. The weight of path p = ⟨v₀, v₁, …, v_k⟩ is defined to be w(p) = Σ_{i=1..k} w(v_{i−1}, v_i). We define the shortest-path weight from u to v by δ(u, v) = min{ w(p) : p is a path from u to v } if there is a path from u to v, and δ(u, v) = ∞ otherwise.

Optimal substructure Theorem. A subpath of a shortest path is a shortest path. Proof. Cut and paste: u v x y

Single-source shortest paths. Problem: from a given source vertex s ∈ V, find the shortest-path weights δ(s, v) for all v ∈ V. If all edge weights w(u, v) are nonnegative, all shortest-path weights must exist. IDEA: greedy. 1. Maintain a set S of vertices whose shortest-path distances from s are known. 2. At each step add to S the vertex v ∈ V − S whose distance estimate from s is minimal. 3. Update the distance estimates of vertices adjacent to v.

Example of Dijkstra's algorithm: from source s (d[s] = 0), the algorithm first relaxes d[t] = 10 and d[y] = 5, extracts y and relaxes d[t] = 8, d[x] = 14, d[z] = 7, extracts z (d[x] = 13), then t (d[x] = 9), and finally x. The final shortest-path weights are d[s] = 0, d[y] = 5, d[z] = 7, d[t] = 8, d[x] = 9.

Dijkstra's algorithm
DIJKSTRA(G, w, s)
1. for each vertex v ∈ V[G]
2.    do d[v] ← ∞
3.       π[v] ← NIL
4. d[s] ← 0
5. S ← ∅
6. Q ← V[G]
7. while Q ≠ ∅
8.    do u ← EXTRACT-MIN(Q)
9.       S ← S ∪ {u}
10.      for each vertex v ∈ Adj[u]
11.         do if d[v] > d[u] + w(u, v)      ▹ relaxation step; implicit DECREASE-KEY
12.            then d[v] ← d[u] + w(u, v)
13.                 π[v] ← u
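A runnable sketch with `heapq`, again using lazy deletion instead of an explicit DECREASE-KEY; the edge weights are an assumption reconstructing the five-vertex example digraph:

```python
import heapq

def dijkstra(adj, s):
    """Dijkstra's algorithm; adj[u] = [(v, w), ...], weights nonnegative."""
    d = {u: float('inf') for u in adj}
    d[s] = 0
    done = set()                     # the set S of finished vertices
    pq = [(0, s)]
    while pq:
        du, u = heapq.heappop(pq)    # EXTRACT-MIN
        if u in done:
            continue                 # stale entry
        done.add(u)
        for v, w in adj[u]:
            if d[u] + w < d[v]:      # relaxation step
                d[v] = d[u] + w
                heapq.heappush(pq, (d[v], v))
    return d

# Assumed weights for the five-vertex example digraph.
adj = {
    's': [('t', 10), ('y', 5)],
    't': [('x', 1), ('y', 2)],
    'x': [('z', 4)],
    'y': [('t', 3), ('x', 9), ('z', 2)],
    'z': [('x', 6), ('s', 7)],
}
d = dijkstra(adj, 's')
```

The run visits vertices in the order s, y, z, t, x, matching the worked example.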

Correctness of Dijkstra's algorithm. Lemma. Initializing d[s] ← 0 and d[v] ← ∞ for all v ∈ V − {s} establishes d[v] ≥ δ(s, v) for all v ∈ V, and this invariant is maintained over any sequence of relaxation steps. Proof. Suppose not. Let v be the first vertex for which d[v] < δ(s, v), and let u be the vertex that caused d[v] to change: d[v] = d[u] + w(u, v). Then d[v] < δ(s, v) (supposition) ≤ δ(s, u) + δ(u, v) (triangle inequality) ≤ δ(s, u) + w(u, v) (shortest path ≤ specific path) ≤ d[u] + w(u, v) (v is the first violation). Contradiction.

Correctness of Dijkstra's algorithm. Lemma. Let u be v's predecessor on a shortest path from s to v. Then, if d[u] = δ(s, u) and edge (u, v) is relaxed, we have d[v] = δ(s, v) after the relaxation. Proof. Observe that δ(s, v) = δ(s, u) + w(u, v). Suppose that d[v] > δ(s, v) before the relaxation. (Otherwise, we're done.) Then the test d[v] > d[u] + w(u, v) succeeds, because d[v] > δ(s, v) = δ(s, u) + w(u, v) = d[u] + w(u, v), and the algorithm sets d[v] = d[u] + w(u, v) = δ(s, v).

Correctness of Dijkstra's algorithm. Theorem. Dijkstra's algorithm terminates with d[v] = δ(s, v) for all v ∈ V. Proof. It suffices to show that d[v] = δ(s, v) for every v ∈ V when v is added to S. Suppose u is the first vertex added to S for which d[u] > δ(s, u). Let y be the first vertex in V − S along a shortest path from s to u, and let x be its predecessor (just before adding u).

Correctness of Dijkstra's algorithm. Since u is the first vertex violating the claimed invariant, we have d[x] = δ(s, x). When x was added to S, the edge (x, y) was relaxed, which implies that d[y] = δ(s, y) ≤ δ(s, u) < d[u]. But d[u] ≤ d[y] by our choice of u. Contradiction.

Analysis of Dijkstra's algorithm
DIJKSTRA(G, w, s)
1. for each vertex v ∈ V[G]
2.    do d[v] ← ∞
3.       π[v] ← NIL
4. d[s] ← 0
5. S ← ∅
6. Q ← V[G]
7. while Q ≠ ∅                         (|V| times)
8.    do u ← EXTRACT-MIN(Q)
9.       S ← S ∪ {u}
10.      for each vertex v ∈ Adj[u]    (degree(u) times)
11.         do if d[v] > d[u] + w(u, v)
12.            then d[v] ← d[u] + w(u, v)
13.                 π[v] ← u
Time = Θ(V)·T_EXTRACT-MIN + Θ(E)·T_DECREASE-KEY

Analysis of Dijkstra's algorithm. Time = Θ(V)·T_EXTRACT-MIN + Θ(E)·T_DECREASE-KEY. With Q as an array: EXTRACT-MIN O(V), DECREASE-KEY O(1), total O(V²). With Q as a binary heap: EXTRACT-MIN O(lg V), DECREASE-KEY O(lg V), total O(E lg V). Running time of Dijkstra's algorithm is O(E lg V).

Unweighted graphs. Suppose that w(u, v) = 1 for all (u, v) ∈ E. Can Dijkstra's algorithm be improved? Breadth-first search: use a simple FIFO queue instead of a priority queue.
1. while Q ≠ ∅
2.    do u ← DEQUEUE(Q)
3.       for each vertex v ∈ Adj[u]
4.          do if d[v] = ∞
5.             then d[v] ← d[u] + 1
6.                  ENQUEUE(Q, v)
Running time is O(V + E).

Example of breadth-first search: treating each edge of the five-vertex graph as weight 1, BFS from s gives d[s] = 0, d[t] = d[y] = 1, d[x] = d[z] = 2, with the queue evolving as ⟨s⟩, ⟨t, y⟩, ⟨y, x⟩, ⟨x, z⟩, ⟨z⟩, ⟨⟩ (gray vertices are those in the queue).

Shortest paths in weighted dag. What is a dag? A dag is a directed acyclic graph.
DAG-SHORTEST-PATHS(G, w, s)
1. topologically sort the vertices of G
2. for each vertex v ∈ V[G]
3.    do d[v] ← ∞
4.       π[v] ← NIL
5. d[s] ← 0
6. for each vertex u, taken in topologically sorted order
7.    do for each vertex v ∈ Adj[u]
8.       do if d[v] > d[u] + w(u, v)
9.          then d[v] ← d[u] + w(u, v)
10.             π[v] ← u
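A sketch in Python; the six-vertex dag and its weights are an assumption reconstructing the example that follows (note that negative weights are fine here, since there are no cycles at all):

```python
def dag_shortest_paths(order, adj, s):
    """Single-source shortest paths in a dag.
    `order` must be a topological ordering of the vertices."""
    inf = float('inf')
    d = {u: inf for u in order}
    d[s] = 0
    for u in order:                  # topologically sorted order
        if d[u] == inf:
            continue                 # u not reachable from s
        for v, w in adj.get(u, ()):
            if d[u] + w < d[v]:      # relaxation
                d[v] = d[u] + w
    return d

# Assumed weights for the six-vertex example dag r, s, t, x, y, z.
adj = {
    'r': [('s', 5), ('t', 3)],
    's': [('t', 2), ('x', 6)],
    't': [('x', 7), ('y', 4), ('z', 2)],
    'x': [('y', -1), ('z', 1)],
    'y': [('z', -2)],
}
d = dag_shortest_paths(['r', 's', 't', 'x', 'y', 'z'], adj, 's')
```

One linear pass in topological order suffices because every edge on a shortest path is relaxed after its tail's distance is final, giving O(V + E) total.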

Example of dag shortest paths: with the vertices in topologically sorted order r, s, t, x, y, z and source s, relaxing each vertex's outgoing edges in order gives d[r] = ∞, d[s] = 0, then d[t] = 2, d[x] = 6, d[y] = 5 (via x, weight −1), and d[z] = 3 (via y, weight −2).

Positive-weight cycle (total weight > 0): a shortest path cannot contain a positive-weight cycle, since removing the cycle yields a lighter path.

0-weight cycle: a 0-weight cycle can be removed from a shortest path without changing its weight, so shortest paths can be assumed cycle-free.

Negative-weight cycle (total weight < 0): if a graph contains a negative-weight cycle reachable from the source, then some shortest paths do not exist.

Algorithm for negative-weight cycles. Bellman-Ford algorithm: finds all shortest-path lengths from a source s ∈ V to all v ∈ V, or determines that a negative-weight cycle exists.

Example of Bellman-Ford algorithm: on the five-vertex digraph with source s, after initialization (d[s] = 0, all other estimates ∞) each pass relaxes every edge once, in a fixed order. Pass 1 sets d[t] = 6 and d[y] = 7 and propagates further along the chosen edge order; over the following passes the estimates keep shrinking, and by the end of pass 4 they have converged to d[s] = 0, d[t] = 2, d[x] = 4, d[y] = 7, d[z] = −2. No further pass changes them.

Bellman-Ford algorithm
BELLMAN-FORD(G, w, s)
1. for each vertex v ∈ V[G]
2.    do d[v] ← ∞
3.       π[v] ← NIL
4. d[s] ← 0
5. for i ← 1 to |V[G]| − 1
6.    do for each edge (u, v) ∈ E[G]
7.       do if d[v] > d[u] + w(u, v)
8.          then d[v] ← d[u] + w(u, v)
9.               π[v] ← u
10. for each edge (u, v) ∈ E[G]
11.    do if d[v] > d[u] + w(u, v)
12.       then return FALSE
13. return TRUE
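A runnable sketch; the edge weights are an assumption reconstructing the five-vertex example above:

```python
def bellman_ford(vertices, edges, s):
    """BELLMAN-FORD: edges is a list of (u, v, w) triples.
    Returns (d, True), or (d, False) if a negative-weight cycle is reachable."""
    d = {v: float('inf') for v in vertices}
    d[s] = 0
    for _ in range(len(vertices) - 1):    # |V| - 1 passes over all edges
        for u, v, w in edges:
            if d[u] + w < d[v]:
                d[v] = d[u] + w
    for u, v, w in edges:                 # extra pass: any further relaxation
        if d[u] + w < d[v]:               # means a reachable negative cycle
            return d, False
    return d, True

# Assumed weights for the five-vertex example digraph.
edges = [('s', 't', 6), ('s', 'y', 7), ('t', 'x', 5), ('t', 'y', 8),
         ('t', 'z', -4), ('x', 't', -2), ('y', 'x', -3), ('y', 'z', 9),
         ('z', 'x', 7), ('z', 's', 2)]
d, no_negative_cycle = bellman_ford('stxyz', edges, 's')
```

Whatever order the edge list uses, |V| − 1 full passes are guaranteed to be enough when no negative-weight cycle is reachable, which is exactly what the correctness argument below establishes.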

Correctness of Bellman-Ford algorithm. Theorem. If G = (V, E) contains no negative-weight cycles, then after the Bellman-Ford algorithm executes, d[v] = δ(s, v) for all v ∈ V. Proof. Let v ∈ V be any vertex, and consider a shortest path p from s to v with the minimum number of edges: p = ⟨v₀, v₁, …, v_k⟩, where v₀ = s and v_k = v. Since p is a shortest path, we have δ(s, v_i) = δ(s, v_{i−1}) + w(v_{i−1}, v_i).

Correctness of Bellman-Ford algorithm. Initially, d[v₀] = 0 = δ(s, v₀). After 1 pass through E, we have d[v₁] = δ(s, v₁). After 2 passes through E, we have d[v₂] = δ(s, v₂). … After k passes through E, we have d[v_k] = δ(s, v_k). Since G contains no negative-weight cycles, p is simple, and the longest simple path has |V| − 1 edges. If a value d[v] fails to converge after |V| − 1 passes, there exists a negative-weight cycle in G reachable from s.

Correctness of Bellman-Ford algorithm. Conversely, suppose that graph G contains a negative-weight cycle that is reachable from the source s; let this cycle be c = ⟨v₀, v₁, …, v_k⟩, where v₀ = v_k. Then

    Σ_{i=1..k} w(v_{i−1}, v_i) < 0.

If the Bellman-Ford algorithm returned TRUE, then d[v_i] ≤ d[v_{i−1}] + w(v_{i−1}, v_i) for i = 1, 2, …, k, and summing around the cycle gives

    Σ_{i=1..k} d[v_i] ≤ Σ_{i=1..k} d[v_{i−1}] + Σ_{i=1..k} w(v_{i−1}, v_i).

Since v₀ = v_k, we have Σ_{i=1..k} d[v_i] = Σ_{i=1..k} d[v_{i−1}], and thus 0 ≤ Σ_{i=1..k} w(v_{i−1}, v_i), which contradicts Σ_{i=1..k} w(v_{i−1}, v_i) < 0. Hence the algorithm returns FALSE.

Example of negative-weight cycle: on a graph containing a negative-weight cycle reachable from s, the distance estimates keep changing from pass to pass; after |V| − 1 passes, the final check pass still finds an edge (u, v) with d[v] > d[u] + w(u, v), so Bellman-Ford returns FALSE.

Single-source shortest-paths algorithms

Algorithm             | Handles                                     | Time
----------------------|---------------------------------------------|-----------
Breadth-first search  | unweighted graphs                           | O(V + E)
DAG shortest paths    | any weights (no cycles)                     | O(V + E)
Dijkstra              | nonnegative weights                         | O(E lg V)
Bellman-Ford          | negative weights, detects negative cycles   | O(VE)

All-pairs shortest paths

Input: Digraph G = (V, E), where V = {1, 2, ..., n}, with edge-weight function w: E → ℝ.
Output: n × n matrix of shortest-path lengths δ(i, j) for all i, j ∈ V.

IDEA: Run Bellman-Ford once from each vertex. Time = O(V²E). On a dense graph (n² edges) this is Θ(n⁴) time in the worst case.

Dynamic programming

Consider the n × n adjacency matrix A = (a_ij) of the digraph, and define

d_ij^(m) = weight of a shortest path from i to j that uses at most m edges.

Claim: We have

d_ij^(0) = 0 if i = j, ∞ if i ≠ j,

and for m = 1, 2, ..., n − 1,

d_ij^(m) = min_k {d_ik^(m−1) + a_kj}.

Proof of claim

d_ij^(m) = min_k {d_ik^(m−1) + a_kj}.

A shortest path from i to j that uses at most m edges consists of a shortest path from i to some vertex k using at most m − 1 edges, followed by the edge (k, j). Taking the minimum over all candidate k's is just relaxation:

for k ← 1 to n
    do if d_ij > d_ik + a_kj
        then d_ij ← d_ik + a_kj

Note: No negative-weight cycles implies δ(i, j) = d_ij^(n−1) = d_ij^(n) = d_ij^(n+1) = ...
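One pass of this recurrence can be sketched in Python as a (min, +) matrix step (the function name `extend` is ours; matrices are lists of lists with ∞ for missing edges):

```python
INF = float("inf")

def extend(D, A):
    """One step of the recurrence: C[i][j] = min_k (D[i][k] + A[k][j]).

    D holds shortest-path weights using at most m-1 edges; the result
    uses at most m edges. A is the weight (adjacency) matrix.
    """
    n = len(D)
    C = [[INF] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):
                # Relaxation over every intermediate candidate k.
                if D[i][k] + A[k][j] < C[i][j]:
                    C[i][j] = D[i][k] + A[k][j]
    return C
```

Applying `extend` n − 1 times to A yields the matrix of δ(i, j) values, matching the Θ(n⁴) analysis on the next slides.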

All-pairs shortest paths example

[Graph: vertices 1–5 with weighted edges 1→2 (3), 1→3 (8), 1→5 (−4), 2→4 (1), 2→5 (7), 3→2 (4), 4→1 (2), 4→3 (−5), 5→4 (6).]

D^(1):
  0   3   8   ∞  −4
  ∞   0   ∞   1   7
  ∞   4   0   ∞   ∞
  2   ∞  −5   0   ∞
  ∞   ∞   ∞   6   0

Π^(1):
 NIL   1    1   NIL   1
 NIL  NIL  NIL   2    2
 NIL   3   NIL  NIL  NIL
  4   NIL   4   NIL  NIL
 NIL  NIL  NIL   5   NIL

All-pairs shortest paths example

d_ij^(m) = min_k {d_ik^(m−1) + a_kj}.

For example,

d_14^(2) = min_k {d_1k^(1) + a_k4}
         = min{(0 + ∞), (3 + 1), (8 + ∞), (∞ + 0), (−4 + 6)}
         = 2.

All-pairs shortest paths example

Paths with at most 2 edges:

D^(2):
  0   3   8   2  −4
  3   0  −4   1   7
  ∞   4   0   5  11
  2  −1  −5   0  −2
  8   ∞   1   6   0

Π^(2):
 NIL   1    1    5    1
  4   NIL   4    2    2
 NIL   3   NIL   2    2
  4    3    4   NIL   1
  4   NIL   4    5   NIL

All-pairs shortest paths example

Paths with at most 3 edges:

D^(3):
  0   3  −3   2  −4
  3   0  −4   1  −1
  7   4   0   5  11
  2  −1  −5   0  −2
  8   5   1   6   0

Π^(3):
 NIL   1    4    5    1
  4   NIL   4    2    1
  4    3   NIL   2    2
  4    3    4   NIL   1
  4    3    4    5   NIL

All-pairs shortest paths example

Paths with at most 4 edges:

D^(4):
  0   1  −3   2  −4
  3   0  −4   1  −1
  7   4   0   5   3
  2  −1  −5   0  −2
  8   5   1   6   0

Π^(4):
 NIL   3    4    5    1
  4   NIL   4    2    1
  4    3   NIL   2    1
  4    3    4   NIL   1
  4    3    4    5   NIL

All-pairs shortest paths example

Recovering a path from the predecessor matrix Π^(4):

Path 1 ⇝ 2: 1 → 5 → 4 → 3 → 2
d_12^(4) = (−4) + 6 + (−5) + 4 = 1

All-pairs shortest paths example

Recovering a path from the predecessor matrix Π^(4):

Path 3 ⇝ 5: 3 → 2 → 4 → 1 → 5
d_35^(4) = 4 + 1 + 2 + (−4) = 3

Matrix multiplication

All-pairs shortest paths for graph G = (V, E). Dynamic programming: compute D^(|V|−1) from

d_ij^(m) = min_k {d_ik^(m−1) + a_kj}.

Observe that if we make the substitutions D^(m−1) → A, D^(0) → B, D^(m) → C, min → +, + → ·, the recurrence becomes ordinary matrix multiplication: compute C = A · B, where C, A, and B are n × n matrices:

c_ij = Σ_{k=1..n} a_ik · b_kj.

Matrix multiplication

Consequently, we can compute

D^(1) = D^(0) ⊗ D^(0)
D^(2) = D^(1) ⊗ D^(0)
...
D^(n−1) = D^(n−2) ⊗ D^(0),

yielding D^(n−1) = (δ(i, j)). Each (min, +) "product" ⊗ takes Θ(n³) time, so Time = Θ(n · n³) = Θ(n⁴). No better than running Bellman-Ford once from each vertex.

Improved matrix multiplication

Repeated squaring: D^(2k) = D^(k) ⊗ D^(k).
Compute D^(2), D^(4), ..., D^(2^⌈lg(n−1)⌉) — only O(lg n) squarings.
Note: D^(n−1) = D^(n) = D^(n+1) = ...
Time = Θ(n³ lg n).
To detect negative-weight cycles, check the diagonal for negative values in O(n) additional time.
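The repeated-squaring scheme can be sketched in Python (helper names `min_plus` and `apsp` are ours; ∞ marks missing edges):

```python
INF = float("inf")

def min_plus(A, B):
    """'Multiply' two n x n matrices in the (min, +) semiring."""
    n = len(A)
    C = [[INF] * n for _ in range(n)]
    for i in range(n):
        for k in range(n):
            if A[i][k] == INF:
                continue  # no i ~> k path of the allowed length
            for j in range(n):
                if A[i][k] + B[k][j] < C[i][j]:
                    C[i][j] = A[i][k] + B[k][j]
    return C

def apsp(W):
    """All-pairs shortest paths in O(n^3 lg n): square D = W until
    paths of at least n - 1 edges are allowed."""
    n = len(W)
    D, m = W, 1
    while m < n - 1:
        D = min_plus(D, D)  # doubles the number of edges allowed
        m *= 2
    return D
```

Squaring overshooting past n − 1 edges is harmless because D^(n−1) = D^(n) = D^(n+1) = ... when there is no negative-weight cycle.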

Floyd-Warshall algorithm

Also dynamic programming, but faster! Define

d_ij^(k) = weight of a shortest path from i to j with intermediate vertices belonging to the set {1, 2, ..., k}.

Thus, δ(i, j) = d_ij^(n). Also, d_ij^(0) = w_ij.

Floyd-Warshall algorithm

A shortest path p from i to j with intermediate vertices in {1, 2, ..., k} either avoids vertex k entirely, or passes through k exactly once, splitting into two subpaths whose intermediate vertices all lie in {1, 2, ..., k − 1}. Hence

d_ij^(k) = w_ij                                          if k = 0,
d_ij^(k) = min(d_ij^(k−1), d_ik^(k−1) + d_kj^(k−1))      if k ≥ 1.

Example of Floyd-Warshall algorithm

Initialization (k = 0):

D^(0):
  0   3   8   ∞  −4
  ∞   0   ∞   1   7
  ∞   4   0   ∞   ∞
  2   ∞  −5   0   ∞
  ∞   ∞   ∞   6   0

Π^(0):
 NIL   1    1   NIL   1
 NIL  NIL  NIL   2    2
 NIL   3   NIL  NIL  NIL
  4   NIL   4   NIL  NIL
 NIL  NIL  NIL   5   NIL

Example of Floyd-Warshall algorithm

Node 1 (k = 1):

D^(1):
  0   3   8   ∞  −4
  ∞   0   ∞   1   7
  ∞   4   0   ∞   ∞
  2   5  −5   0  −2
  ∞   ∞   ∞   6   0

Π^(1):
 NIL   1    1   NIL   1
 NIL  NIL  NIL   2    2
 NIL   3   NIL  NIL  NIL
  4    1    4   NIL   1
 NIL  NIL  NIL   5   NIL

Example of Floyd-Warshall algorithm

Node 2 (k = 2):

D^(2):
  0   3   8   4  −4
  ∞   0   ∞   1   7
  ∞   4   0   5  11
  2   5  −5   0  −2
  ∞   ∞   ∞   6   0

Π^(2):
 NIL   1    1    2    1
 NIL  NIL  NIL   2    2
 NIL   3   NIL   2    2
  4    1    4   NIL   1
 NIL  NIL  NIL   5   NIL

Example of Floyd-Warshall algorithm

Node 3 (k = 3):

D^(3):
  0   3   8   4  −4
  ∞   0   ∞   1   7
  ∞   4   0   5  11
  2  −1  −5   0  −2
  ∞   ∞   ∞   6   0

Π^(3):
 NIL   1    1    2    1
 NIL  NIL  NIL   2    2
 NIL   3   NIL   2    2
  4    3    4   NIL   1
 NIL  NIL  NIL   5   NIL

Example of Floyd-Warshall algorithm

Node 4 (k = 4):

D^(4):
  0   3  −1   4  −4
  3   0  −4   1  −1
  7   4   0   5   3
  2  −1  −5   0  −2
  8   5   1   6   0

Π^(4):
 NIL   1    4    2    1
  4   NIL   4    2    1
  4    3   NIL   2    1
  4    3    4   NIL   1
  4    3    4    5   NIL

Path 1 ⇝ 3: 1 → 2 → 4 → 3
d_13^(4) = 3 + 1 + (−5) = −1

Example of Floyd-Warshall algorithm

Node 5 (k = 5):

D^(5):
  0   1  −3   2  −4
  3   0  −4   1  −1
  7   4   0   5   3
  2  −1  −5   0  −2
  8   5   1   6   0

Π^(5):
 NIL   3    4    5    1
  4   NIL   4    2    1
  4    3   NIL   2    1
  4    3    4   NIL   1
  4    3    4    5   NIL

Floyd-Warshall algorithm

FLOYD-WARSHALL(W)
1.  n ← rows[W]
2.  D^(0) ← W
3.  for k ← 1 to n
4.      do for i ← 1 to n
5.          do for j ← 1 to n
6.              do d_ij^(k) ← min(d_ij^(k−1), d_ik^(k−1) + d_kj^(k−1))
7.  return D^(n)

Running time is Θ(n³).
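The pseudocode translates almost line-for-line into Python; this sketch also maintains the predecessor matrix used in the example above (names are ours, ∞ marks missing edges):

```python
def floyd_warshall(W):
    """Returns (D, P): shortest-path weights and predecessor matrix.

    W[i][j] is the edge weight, 0 on the diagonal, inf for missing edges.
    Vertices are 0-indexed.
    """
    INF = float("inf")
    n = len(W)
    D = [row[:] for row in W]  # D^(0) = W
    # P[i][j]: predecessor of j on the current best i ~> j path.
    P = [[i if i != j and W[i][j] != INF else None for j in range(n)]
         for i in range(n)]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                # Is it shorter to route i ~> k ~> j?
                if D[i][k] + D[k][j] < D[i][j]:
                    D[i][j] = D[i][k] + D[k][j]
                    P[i][j] = P[k][j]
    return D, P
```

Only in-place relaxation is needed: updating D in place instead of keeping D^(k−1) and D^(k) separately does not change the result, since using an already-improved entry can only expose shortest paths earlier.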

Graph reweighting

Is it possible to change all negative-weight edges to nonnegative ones? Then we could use Dijkstra's algorithm to compute the shortest paths between all pairs of vertices.

Graph reweighting properties:
- For all pairs of vertices u, v ∈ V, a path p is a shortest path from u to v using weight function w if and only if p is also a shortest path from u to v using weight function ŵ after reweighting.
- For all edges (u, v), the new weight ŵ(u, v) is nonnegative.

Graph reweighting

Theorem. Given a function h: V → ℝ, reweight each edge (u, v) ∈ E by ŵ(u, v) = w(u, v) + h(u) − h(v). Then, for any two vertices, all paths between them are reweighted by the same amount.

Proof. Let p = v_0 → v_1 → ... → v_k be a path in G. We have

ŵ(p) = Σ_{i=1..k} ŵ(v_{i−1}, v_i)
     = Σ_{i=1..k} (w(v_{i−1}, v_i) + h(v_{i−1}) − h(v_i))
     = Σ_{i=1..k} w(v_{i−1}, v_i) + h(v_0) − h(v_k)      (the h terms telescope)
     = w(p) + h(v_0) − h(v_k).

Same amount for every path from v_0 to v_k!
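The telescoping identity is easy to check numerically; this small script uses made-up edge weights and potentials (all names and values here are hypothetical, chosen only to illustrate the theorem):

```python
import random

random.seed(1)
path = ["v0", "v1", "v2", "v3"]
# Arbitrary (possibly negative) edge weights along the path ...
w = {(path[i], path[i + 1]): random.randint(-5, 5) for i in range(3)}
# ... and an arbitrary potential function h on the vertices.
h = {v: random.randint(-10, 10) for v in path}

# Reweight every edge: w_hat(u, v) = w(u, v) + h(u) - h(v).
w_hat = {(u, v): wt + h[u] - h[v] for (u, v), wt in w.items()}

w_p = sum(w.values())          # w(p)
w_hat_p = sum(w_hat.values())  # w_hat(p)

# The theorem: w_hat(p) = w(p) + h(v0) - h(vk), regardless of w and h.
assert w_hat_p == w_p + h["v0"] - h["v3"]
```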

Graph reweighting

[Figures: the running 5-vertex example, reweighted in three steps.
1. Add a new vertex v_0 with a 0-weight edge to every other vertex, and run the Bellman-Ford algorithm from v_0.
2. Set h(v_k) = δ(v_0, v_k) for every vertex v_k.
3. Reweight every edge by ŵ(v_i, v_j) = w(v_i, v_j) + h(v_i) − h(v_j); all edge weights become nonnegative, and Dijkstra's algorithm is run from each v_k.]

Graph reweighting

Why is ŵ(u, v) nonnegative? Take h(v) = δ(s, v) and h(u) = δ(s, u). By the triangle inequality,

δ(s, v) ≤ δ(s, u) + w(u, v),

so

ŵ(u, v) = w(u, v) + h(u) − h(v) = w(u, v) + δ(s, u) − δ(s, v) ≥ 0.

Johnson's algorithm

JOHNSON(G)
1.  Compute G′, where V[G′] = V[G] ∪ {s}, E[G′] = E[G] ∪ {(s, v): v ∈ V[G]}, and w(s, v) = 0 for all v ∈ V[G]
2.  if BELLMAN-FORD(G′, w, s) = FALSE
3.      then print "the input graph contains a negative-weight cycle"
4.      else for each vertex v ∈ V[G′]
5.              do set h(v) to the value of δ(s, v)
6.           for each edge (u, v) ∈ E[G′]
7.              do ŵ(u, v) ← w(u, v) + h(u) − h(v)
8.           for each vertex u ∈ V[G]
9.              do run DIJKSTRA(G, ŵ, u) to compute δ̂(u, v) for all v ∈ V[G]

Johnson's algorithm

10.              for each vertex v ∈ V[G]
11.                 do d_uv ← δ̂(u, v) + h(v) − h(u)
12.          return D

Analysis of Johnson's algorithm

1. Run Bellman-Ford to solve the difference constraints h(v) − h(u) ≤ w(u, v), or determine that a negative-weight cycle exists. Time = O(VE).
2. Run Dijkstra's algorithm for each vertex u ∈ V[G]. Time = O(VE lg V).
3. For each pair (u, v), compute δ(u, v). Time = O(V²).

Total time = O(VE lg V). Johnson's algorithm is particularly suitable for sparse graphs.
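The whole pipeline — Bellman-Ford from an added source, reweighting, Dijkstra from each vertex, then undoing the reweighting — can be sketched in Python (a sketch under the slides' assumptions; function and variable names are ours, and Dijkstra uses a binary heap for the O(VE lg V) bound):

```python
import heapq

def johnson(vertices, edges):
    """All-pairs shortest paths; edges is a list of (u, v, w) triples.

    Returns D with D[u][v] = delta(u, v), or None if the graph
    contains a negative-weight cycle.
    """
    INF = float("inf")
    # Step 1: add a fresh source s' with 0-weight edges to every vertex,
    # then run Bellman-Ford from s' to obtain the potentials h.
    s = object()  # guaranteed distinct from everything in `vertices`
    aug = list(edges) + [(s, v, 0) for v in vertices]
    h = {v: INF for v in vertices}
    h[s] = 0
    for _ in range(len(vertices)):  # |V'| - 1 passes over E'
        for u, v, w in aug:
            if h[u] + w < h[v]:
                h[v] = h[u] + w
    for u, v, w in aug:
        if h[u] + w < h[v]:
            return None  # negative-weight cycle detected
    # Step 2: reweight; w_hat(u, v) = w(u, v) + h(u) - h(v) >= 0.
    adj = {v: [] for v in vertices}
    for u, v, w in edges:
        adj[u].append((v, w + h[u] - h[v]))
    # Step 3: Dijkstra from each vertex on the reweighted graph,
    # then undo the reweighting per pair.
    D = {}
    for src in vertices:
        dist = {v: INF for v in vertices}
        dist[src] = 0
        pq = [(0, src)]
        while pq:
            d, u = heapq.heappop(pq)
            if d > dist[u]:
                continue  # stale heap entry
            for v, w in adj[u]:
                if d + w < dist[v]:
                    dist[v] = d + w
                    heapq.heappush(pq, (dist[v], v))
        # delta(src, v) = delta_hat(src, v) - h(src) + h(v)
        D[src] = {v: (dist[v] - h[src] + h[v] if dist[v] < INF else INF)
                  for v in vertices}
    return D
```

This sketch assumes vertex labels are mutually comparable (e.g. all ints or all strings), since Python's heap compares tuples on ties.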

All-pairs shortest-paths algorithms

Algorithm                                    | Time        | Data structure
---------------------------------------------|-------------|--------------------------------------
Brute force (Bellman-Ford from each vertex)  | O(V²E)      | Adjacency list or adjacency matrix
Dynamic programming                          | O(V⁴)       | Adjacency matrix only
Improved dynamic programming                 | O(V³ lg V)  | Adjacency matrix only
Floyd-Warshall algorithm                     | O(V³)       | Adjacency matrix only
Johnson's algorithm                          | O(VE lg V)  | Adjacency list or adjacency matrix

Flow networks

[Figure: a flow network drawn over a map of Chinese cities — Hangzhou, Ningbo, Shanghai, Nanjing, Lianyungang, and Hong Kong — with a source s, a sink t, and capacities (such as 12 and 14) on the edges.]

Flow networks

Definition. A flow network is a directed graph G = (V, E) with two distinguished vertices: a source s and a sink t. Each edge (u, v) ∈ E has a nonnegative capacity c(u, v). If (u, v) ∉ E, then c(u, v) = 0.

Flow networks

Definition. A positive flow on G is a function f: V × V → ℝ satisfying the following:
- Capacity constraint: For all u, v ∈ V, we require 0 ≤ f(u, v) ≤ c(u, v).
- Skew symmetry: For all u, v ∈ V, we require f(u, v) = −f(v, u).
- Flow conservation: For all u ∈ V − {s, t}, we require Σ_{v∈V} f(u, v) = 0.
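These properties can be checked mechanically. The sketch below (names are ours) stores only nonnegative per-edge flow values, so skew symmetry is implicit and conservation becomes the equivalent "flow in = flow out" at every internal vertex:

```python
def is_flow(f, c, vertices, s, t):
    """Check that f is a valid flow for capacities c.

    f and c are dicts keyed by (u, v) pairs; a missing key means 0.
    """
    val = lambda m, u, v: m.get((u, v), 0)
    # Capacity constraint: 0 <= f(u, v) <= c(u, v) for every pair.
    for u in vertices:
        for v in vertices:
            if not (0 <= val(f, u, v) <= val(c, u, v)):
                return False
    # Flow conservation at every vertex other than s and t.
    for u in vertices:
        if u in (s, t):
            continue
        flow_in = sum(val(f, v, u) for v in vertices)
        flow_out = sum(val(f, u, v) for v in vertices)
        if flow_in != flow_out:
            return False
    return True
```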

A flow on a network

[Figure: the example network labeled with flow/capacity pairs, e.g. 12/12 on (u, v), 1/4, 7/7, 11/14.]

Flow conservation at x: flow into x is 8 + 4 = 12; flow out of x is 1 + 11 = 12.
The value of this flow is 11 + 8 = 19.

Maximum-flow problem

Maximum-flow problem: Given a flow network G, find a flow of maximum value on G.

[Figure: the same network carrying an optimal flow.]

The maximum flow value is 23.

Cuts

Definition. A cut (S, T) of a flow network G = (V, E) is a partition of V such that s ∈ S and t ∈ T. If f is a flow on G, then the flow across the cut is f(S, T).

The maximum flow in a network is bounded by the capacity of the minimum cut of the network. Why?

Cuts of flow networks

[Figure: a cut through the example network.]

The net flow across this cut is

f(u, v) + f(x, v) + f(x, y) = 12 + (−4) + 11 = 19,

and its capacity is

c(u, v) + c(x, y) = 12 + 14 = 26.

Flow cancellation

Without loss of generality, positive flow goes either from u to v, or from v to u, but not both. For example, 3/5 from u to v together with 2/3 from v to u is equivalent to 1/5 from u to v and 0/3 from v to u: the net flow from u to v is 1 in both cases. The capacity constraint and flow conservation are preserved by this transformation.

Residual network

Definition. Let f be a flow on G = (V, E). The residual network G_f = (V, E_f) is the graph with strictly positive residual capacities

c_f(u, v) = c(u, v) − f(u, v) > 0.

Edges in E_f admit more flow. For example, if G has edge u → v with 3/5 and edge v → u with 0/1, then G_f has c_f(u, v) = 5 − 3 = 2 and, by skew symmetry, c_f(v, u) = 1 − (−3) = 4.

Augmenting paths

Definition. Any path from s to t in G_f is an augmenting path in G with respect to f. The flow value can be increased along an augmenting path p by

c_f(p) = min_{(u,v)∈p} {c_f(u, v)}.

Example of maximum-flow problem

[Figures: successive residual networks for the example, with each augmenting path highlighted.]

Augmentation 1: c_f(p) = min(16, 12, 9, 14, 4) = 4. Total flow value = 4.
Augmentation 2: c_f(p) = min(12, 10, 10, 7, 20) = 7. Total = 4 + 7 = 11.
Augmentation 3: c_f(p) = min(13, 11, 8, 13) = 8. Total = 4 + 7 + 8 = 19.
Augmentation 4: c_f(p) = min(5, 4, 5) = 4. Total = 4 + 7 + 8 + 4 = 23.

No augmenting path remains, so the maximum flow has value 23 (final flow includes 12/12 on (u, v), 1/4, 7/7, and 11/14).

Ford-Fulkerson algorithm

FORD-FULKERSON(G, s, t)
1.  for each edge (u, v) ∈ E[G]
2.      do f[u, v] ← 0
3.         f[v, u] ← 0
4.  while there exists a path p from s to t in the residual network G_f
5.      do c_f(p) ← min{c_f(u, v): (u, v) is in p}
6.         for each edge (u, v) in p
7.             do f[u, v] ← f[u, v] + c_f(p)
8.                f[v, u] ← −f[u, v]

Analysis of Ford-Fulkerson algorithm

Finding a path in a residual network takes O(V + E) time if we use either depth-first search or breadth-first search. Let f* denote the maximum flow found by the algorithm; with integer capacities each augmentation increases the flow value by at least 1, so the running time of the Ford-Fulkerson algorithm is O(E |f*|).

Ford-Fulkerson max-flow algorithm

[Figures: the classic bad case — two large-capacity paths s → u → t and s → v → t joined by a unit-capacity edge between u and v. A poor choice of augmenting paths alternately pushes one unit across the (u, v) edge and then cancels it, so each augmentation increases the flow value by only 1. On this instance the running time is 10,000 augmentations.]

Edmonds-Karp algorithm

Edmonds and Karp noticed that many implementations of Ford-Fulkerson augment along a breadth-first augmenting path: a shortest path in G_f from s to t, where each edge has weight 1. These implementations always run relatively fast. Since a breadth-first augmenting path can be found in O(V + E) time, their analysis, which provided the first polynomial-time bound on maximum flow, focuses on bounding the number of flow augmentations. The Edmonds-Karp algorithm's running time is O(VE²).
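The Edmonds-Karp variant can be sketched in Python (names and the capacity-dict representation are ours; reverse residual edges start at 0):

```python
from collections import deque

def edmonds_karp(c, s, t):
    """Max flow value via BFS augmenting paths; c: dict {(u, v): capacity}."""
    cf = dict(c)  # residual capacities
    adj = {}
    for (u, v) in c:
        adj.setdefault(u, []).append(v)
        adj.setdefault(v, []).append(u)  # reverse edge for cancellation
        cf.setdefault((v, u), 0)
    flow = 0
    while True:
        # BFS in the residual network for a shortest s-t path.
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v in adj.get(u, []):
                if v not in parent and cf[(u, v)] > 0:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow  # no augmenting path left: flow is maximum
        # Bottleneck residual capacity c_f(p) along the path.
        b, v = float("inf"), t
        while parent[v] is not None:
            b = min(b, cf[(parent[v], v)])
            v = parent[v]
        # Augment: push b units forward, allow b units of cancellation back.
        v = t
        while parent[v] is not None:
            cf[(parent[v], v)] -= b
            cf[(v, parent[v])] += b
            v = parent[v]
        flow += b
```

Because BFS always finds a shortest augmenting path, the number of augmentations is O(VE) regardless of the capacities, avoiding the pathological alternation shown on the previous slides.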

Any questions?

Xiaoqing Zheng
Fudan University