Analysis of Algorithms: Single Source Shortest Path. Andres Mendez-Vazquez. November 9. Notes.


Analysis of Algorithms
Single Source Shortest Path
Andres Mendez-Vazquez

Outline
1 Introduction
   Introduction and Similar Problems
   General Results
   Optimal Substructure Properties
   Predecessor Graph
   The Relaxation Concept
2 The Bellman-Ford Algorithm
   Properties of Relaxation
   Bellman-Ford Algorithm
   Predecessor Subgraph for Bellman
   Shortest Path for Bellman
   Example
   Bellman-Ford Finds the Shortest Path
   Correctness of Bellman-Ford
3 Directed Acyclic Graphs (DAG)
   Relaxing Edges
   Example
4 Dijkstra's Algorithm
   Dijkstra's Algorithm: A Greedy Method
   Example
   Correctness of Dijkstra's Algorithm
   Complexity of Dijkstra's Algorithm
5 Exercises

Introduction

Problem description
Given a single source vertex s in a weighted, directed graph, we want to compute a shortest path to each possible destination (similar to BFS). The algorithm will compute a shortest-path tree (again, similar to BFS).

Similar Problems

Single-destination shortest-paths problem
Find a shortest path to a given destination vertex t from each vertex. By reversing the direction of each edge in the graph, we can reduce this problem to a single-source problem.
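The reduction above can be sketched in Python. This is a minimal sketch; the adjacency-list format {u: [(v, w), ...]} is an assumption of this example, not the slides' representation:

```python
def reverse_graph(adj):
    """Return the reverse of a directed graph given as an
    adjacency list {u: [(v, w), ...]} with edge weights w."""
    rev = {u: [] for u in adj}
    for u, edges in adj.items():
        for v, w in edges:
            rev.setdefault(v, []).append((u, w))
    return rev

# Solving single-source shortest paths from t on the reversed graph
# yields single-destination shortest paths to t in the original graph.
```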

Similar Problems

Single-pair shortest-path problem
Find a shortest path from u to v for given vertices u and v. If we solve the single-source problem with source vertex u, we also solve this problem.

All-pairs shortest-paths problem
Find a shortest path from u to v for every pair of vertices u and v.

Optimal Substructure Property

Lemma .1
Given a weighted, directed graph G = (V, E), let p = ⟨v_1, v_2, ..., v_k⟩ be a shortest path from v_1 to v_k. Then p_ij = ⟨v_i, v_{i+1}, ..., v_j⟩ is a shortest path from v_i to v_j, for any 1 ≤ i ≤ j ≤ k.

So we have the optimal substructure property:
   Bellman-Ford's algorithm uses dynamic programming.
   Dijkstra's algorithm uses the greedy approach.

In addition, let δ(u, v) = weight of a shortest path from u to v.

Corollary
Let p be a shortest path from s to v that decomposes as s ⇝ u → v, i.e., p = p_1 ∪ {(u, v)} where p_1 goes from s to u. Then δ(s, v) = δ(s, u) + w(u, v).

The Lower Bound Between Nodes

Lemma .10
Let s ∈ V. For all edges (u, v) ∈ E, we have δ(s, v) ≤ δ(s, u) + w(u, v).

We now have the basic concepts, but we still need to define an important one: the predecessor graph. It will facilitate the proof of several results.

Predecessor Graph

Representing shortest paths
For this we use the predecessor subgraph. It is defined slightly differently from the one in Breadth-First Search.

Definition of a Predecessor Subgraph
The predecessor subgraph is G_π = (V_π, E_π), where
   V_π = {v ∈ V : v.π ≠ NIL} ∪ {s}
   E_π = {(v.π, v) : v ∈ V_π − {s}}

Properties
The predecessor subgraph G_π forms a tree rooted at s (this is proved later). The edges in E_π are called tree edges.

The Relaxation Concept
We are going to use certain functions in all the algorithms.

Initialize
Here the basic variables of the nodes in a graph are initialized:
   v.d = the shortest-path estimate, an upper bound on the distance from the source s.
   v.π = the predecessor node found during the search for a shortest path.

Changing v.d
This is done by the Relax procedure.

Initialize and Relaxation
The algorithms keep track of v.d and v.π. They are initialized as follows:

Initialize(G, s)
1  for each v ∈ V[G]
2      v.d = ∞
3      v.π = NIL
4  s.d = 0

These values are changed when an edge (u, v) is relaxed:

Relax(u, v, w)
1  if v.d > u.d + w(u, v)
2      v.d = u.d + w(u, v)
3      v.π = u

How are these functions used?
1. To build a predecessor graph G_π.
2. To integrate the shortest paths into that predecessor graph, using the field d.
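The two procedures above can be sketched in Python. This is a minimal sketch; representing d and π as dictionaries is an assumption of this example:

```python
import math

def initialize(vertices, s):
    """Set every shortest-path estimate to infinity and every
    predecessor to None; the source s gets estimate 0."""
    d = {v: math.inf for v in vertices}
    pi = {v: None for v in vertices}
    d[s] = 0
    return d, pi

def relax(u, v, w, d, pi):
    """If going through u with edge weight w improves v's
    estimate, update v.d and record u as v's predecessor."""
    if d[v] > d[u] + w:
        d[v] = d[u] + w
        pi[v] = u
```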

The Bellman-Ford Algorithm
Bellman-Ford allows negative-weight edges, and it will detect negative-weight cycles reachable from the source.

Bellman-Ford(G, s, w)
1  Initialize(G, s)
2  for i = 1 to |V[G]| − 1
3      for each (u, v) ∈ E[G]
4          Relax(u, v, w)
5  for each (u, v) ∈ E[G]
6      if v.d > u.d + w(u, v)
7          return false
8  return true

Time Complexity: O(VE)

Properties of Relaxation
   v.d, if not ∞, is the length of some path from s to v.
   v.d either stays the same or decreases with time.
   Therefore, if v.d = δ(s, v) at any time, this holds thereafter.
   Note that v.d ≥ δ(s, v) always (upper-bound property).
   After i iterations of relaxing all edges (u, v), if the shortest path to v has i edges, then v.d = δ(s, v).
   If there is no path from s to v, then v.d = δ(s, v) = ∞ is an invariant.
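The pseudocode above can be sketched in Python. This is a minimal sketch; the edge-list representation and the (d, pi, ok) return triple are assumptions of this example:

```python
import math

def bellman_ford(vertices, edges, s):
    """edges is a list of (u, v, w) triples. Returns (d, pi, ok),
    where ok is False iff a negative-weight cycle is reachable from s."""
    d = {v: math.inf for v in vertices}
    pi = {v: None for v in vertices}
    d[s] = 0
    for _ in range(len(vertices) - 1):   # |V| - 1 passes over all edges
        for u, v, w in edges:
            if d[u] + w < d[v]:          # Relax(u, v, w)
                d[v] = d[u] + w
                pi[v] = u
    for u, v, w in edges:                # detection pass
        if d[u] + w < d[v]:              # still relaxable: negative cycle
            return d, pi, False
    return d, pi, True
```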

Properties of Relaxation

Lemma .10 (Triangle inequality)
Let G = (V, E) be a weighted, directed graph with weight function w : E → R and source vertex s. Then, for all edges (u, v) ∈ E, we have

    δ(s, v) ≤ δ(s, u) + w(u, v).

Proof
Suppose that p is a shortest path from source s to vertex v. Then p has no more weight than any other path from s to v. In particular, p has no more weight than the path that follows a shortest path from s to u and then takes edge (u, v).

Lemma .11 (Upper-bound property)
Let G = (V, E) be a weighted, directed graph with weight function w : E → R. Consider any algorithm in which v.d and v.π are first initialized by calling Initialize(G, s) (s is the source) and are only changed by calling Relax. Then v.d ≥ δ(s, v) for all v ∈ V[G], and this invariant is maintained over any sequence of relaxation steps on the edges of G. Moreover, once v.d = δ(s, v), it never changes.

Proof of Lemma .11

Loop invariant
The proof is by induction over the number of relaxation steps, with the loop invariant: v.d ≥ δ(s, v) for all v ∈ V.

Basis
v.d ≥ δ(s, v) is true after initialization, since v.d = ∞ makes v.d ≥ δ(s, v) for all v ∈ V − {s}, and for s we have s.d = 0 ≥ δ(s, s).

Inductive step
Consider the relaxation of an edge (u, v). By the inductive hypothesis, x.d ≥ δ(s, x) for all x ∈ V prior to the relaxation. If the call Relax(u, v, w) changes v.d, then

    v.d = u.d + w(u, v)
        ≥ δ(s, u) + w(u, v)    (by the inductive hypothesis)
        ≥ δ(s, v)              (by the triangle inequality)

Thus, the invariant is maintained.

Properties of Relaxation

Proof of Lemma .11, cont...
To prove that the value v.d never changes once v.d = δ(s, v), note the following: once v.d = δ(s, v), it cannot decrease because v.d ≥ δ(s, v), and Relax never increases d.

Corollary .1 (No-path property)
If there is no path from s to v, then v.d = δ(s, v) = ∞ is an invariant.

Proof
By the upper-bound property, we always have ∞ = δ(s, v) ≤ v.d. Hence v.d = ∞.

More Lemmas Lemma.1 Let G = (V, E) be a weighted, directed graph with weight function w : E R. Then, immediately after relaxing edge (u, v) by calling Relax(u, v, w) we have v.d u.d + w(u, v). 9 / 108 Proof First If, just prior to relaxing edge (u, v), Case 1: if we have that v.d > u.d + w (u, v) Then, v.d = u.d + w (u, v) after relaxation. Now, Case If v.d u.d + w (u, v) just before relaxation, then: neither u.d nor v.d changes Thus, afterwards v.d u.d + w (u, v) 0 / 108

More Lemmas Lemma.1 (Convergence property) Proof: Let p be a shortest path from s to v, where p = p 1 s u v = p 1 {(u, v)}. If u.d = δ(s, u) holds at any time prior to calling Relax(u, v, w), then v.d = δ(s, v) holds at all times after the call. By the upper-bound property, if u.d = δ (s, u) at some moment before relaxing edge (u, v), holding afterwards. 1 / 108 Proof Thus, after relaxing (u, v) v.d u.d + w(u, v) by lemma.1 = δ (s, u) + w (u, v) = δ (s, v) by corollary of lemma.1 Now By lemma.11, v.d δ(s, v), so v.d = δ(s, v). / 108

Predecessor Subgraph for Bellman

Lemma .1
Assume a given graph G has no negative-weight cycles reachable from s. Then, after the initialization, the predecessor subgraph G_π is always a tree with root s, and any sequence of relaxation steps on edges of G maintains this property as an invariant.

Proof
It is necessary to prove two things in order to get a tree:
1. G_π is acyclic.
2. There exists a unique path from the source s to each vertex in V_π.

Proof that G_π is acyclic
Suppose there exists a cycle c = ⟨v_0, v_1, ..., v_k⟩, where v_0 = v_k. We have v_i.π = v_{i−1} for i = 1, 2, ..., k. Assume the relaxation of (v_{k−1}, v_k) created the cycle. We will show that the cycle has negative weight. We claim first that the cycle must be reachable from s (why?).

Proof
Each vertex on the cycle has a non-NIL predecessor, and so each vertex on the cycle was assigned a finite shortest-path estimate when it was assigned its non-NIL value. By the upper-bound property, each vertex on the cycle has a finite shortest-path weight, making the cycle reachable from s.

Before the call to Relax(v_{k−1}, v_k, w)
v_i.d was last updated by

    v_i.d = v_{i−1}.d + w(v_{i−1}, v_i)    for i = 1, ..., k − 1

(because Relax updates π exactly when it sets this equality). This implies

    v_i.d ≥ v_{i−1}.d + w(v_{i−1}, v_i)    for i = 1, ..., k − 1

(just before the last relaxation, since v_{i−1}.d may only have decreased since then).

Proof
Because v_k.π is changed by the call to Relax, immediately before that call we have v_k.d > v_{k−1}.d + w(v_{k−1}, v_k). Summing around the cycle:

    Σ_{i=1}^{k} v_i.d > Σ_{i=1}^{k} (v_{i−1}.d + w(v_{i−1}, v_i))
                      = Σ_{i=1}^{k} v_{i−1}.d + Σ_{i=1}^{k} w(v_{i−1}, v_i)

We have finally that, because Σ_{i=1}^{k} v_i.d = Σ_{i=1}^{k} v_{i−1}.d (each vertex of the cycle appears exactly once in each summation), Σ_{i=1}^{k} w(v_{i−1}, v_i) < 0, i.e., a negative-weight cycle!

Some comments
   v_i.d ≥ v_{i−1}.d + w(v_{i−1}, v_i) for i = 1, ..., k − 1 because when Relax(v_{i−1}, v_i, w) was called there was an equality, and v_{i−1}.d may have gotten smaller by further calls to Relax.
   v_k.d > v_{k−1}.d + w(v_{k−1}, v_k) before the last call to Relax because that last call changed v_k.d.

Proof of the existence of a unique path from the source s
Let G_π be the predecessor subgraph. For any v in V_π, the graph G_π contains at least one path from s to v. Assume now that there are two distinct paths from s to v. Then some vertex z on them would have two distinct predecessors x ≠ y, with z.π = x and z.π = y at the same time. Impossible! Contradiction! Therefore there is only one path, and G_π is a tree.

Lemma .1
Under the same conditions as before, suppose the algorithm calls Initialize and then repeatedly calls Relax until v.d = δ(s, v) for all v in V. Then G_π is a shortest-path tree rooted at s.

Proof
For all v in V, there is a unique simple path p from s to v in G_π (by the previous lemma). We want to prove that it is a shortest path from s to v in G.

Proof
Let p = ⟨v_0, v_1, ..., v_k⟩, where v_0 = s and v_k = v. Thus we have v_i.d = δ(s, v_i), and, reasoning as before,

    v_i.d ≥ v_{i−1}.d + w(v_{i−1}, v_i),

which implies

    w(v_{i−1}, v_i) ≤ δ(s, v_i) − δ(s, v_{i−1}).

Then we sum over all weights:

    w(p) = Σ_{i=1}^{k} w(v_{i−1}, v_i)
         ≤ Σ_{i=1}^{k} (δ(s, v_i) − δ(s, v_{i−1}))
         = δ(s, v_k) − δ(s, v_0)     (telescoping sum)
         = δ(s, v_k)

Since δ(s, v_k) ≤ w(p) as well, equality holds and p is a shortest path.

Again the Bellman-Ford Algorithm
Bellman-Ford allows negative-weight edges and will detect negative-weight cycles reachable from the source.

Bellman-Ford(G, s, w)
1  Initialize(G, s)
2  for i = 1 to |V[G]| − 1                  // the decision part of the dynamic
3      for each (u, v) ∈ E[G]               // programming for u.d and u.π
4          Relax(u, v, w)
5  for each (u, v) ∈ E[G]
6      if v.d > u.d + w(u, v)
7          return false
8  return true

Observation
If Bellman-Ford has not converged after |V(G)| − 1 iterations, then there cannot be a shortest-path tree, so there must be a negative-weight cycle.

Example
[Figure: the example graph; red arrows represent v.π.]

Example
[Figure: whenever v.d = ∞ and u.d = ∞, the relaxation makes no change.]

Example
[Figure: estimates keep being updated in the second iteration.]

Example
[Figure: during this iteration we notice that e can be updated to a better value.]

Example
[Figure: updates continue in the third iteration; d and g are also updated.]

Example
[Figure: updates continue in the fourth iteration.]

Example
[Figure: f is updated during this iteration.]

v.d = δ(s, v) upon termination

Lemma .2
Assuming no negative-weight cycles are reachable from s, v.d = δ(s, v) holds upon termination for all vertices v reachable from s.

Proof
Consider a shortest path p = ⟨v_0, v_1, ..., v_k⟩, where v_0 = s and v_k = v. Shortest paths are simple, so p has at most |V| − 1 edges, and thus k ≤ |V| − 1.

Claim: v_i.d = δ(s, v_i) holds after the i-th pass over the edges.
Each of the |V| − 1 iterations of the outer for loop relaxes all edges in E. The proof follows by induction on i: among the edges relaxed in the i-th iteration, for i = 1, 2, ..., k, is (v_{i−1}, v_i). By Lemma .11, once v_i.d = δ(s, v_i) holds, it continues to hold.

Finding a path between s and v

Corollary .
Let G = (V, E) be a weighted, directed graph with source vertex s and weight function w : E → R, and assume that G contains no negative-weight cycles reachable from s. Then, for each vertex v ∈ V, there is a path from s to v if and only if Bellman-Ford terminates with v.d < ∞ when it is run on G.

Proof
Left to you...

Correctness of Bellman-Ford
Claim: the algorithm returns the correct value. (This is part of Theorem .; the other parts of the theorem follow easily from earlier results.)

Case 1: there is no reachable negative-weight cycle. Upon termination we have, for every vertex v: v.d = δ(s, v) by Lemma .2 (last slide) if v is reachable, or v.d = δ(s, v) = ∞ otherwise.

Correctness of Bellman-Ford
Then, for every edge (u, v), we have that

    v.d = δ(s, v) ≤ δ(s, u) + w(u, v) ≤ u.d + w(u, v).

Remember the final check:
    for each (u, v) ∈ E[G]
        if v.d > u.d + w(u, v)
            return false

Thus the test never triggers, so the algorithm returns true.

Correctness of Bellman-Ford
Case 2: there exists a reachable negative-weight cycle c = ⟨v_0, v_1, ..., v_k⟩, where v_0 = v_k. Then we have

    Σ_{i=1}^{k} w(v_{i−1}, v_i) < 0.

Suppose the algorithm returns true. Then v_i.d ≤ v_{i−1}.d + w(v_{i−1}, v_i) for i = 1, ..., k, because Relax did not change any v_i.d.

Correctness of Bellman-Ford
Thus

    Σ_{i=1}^{k} v_i.d ≤ Σ_{i=1}^{k} (v_{i−1}.d + w(v_{i−1}, v_i))
                      = Σ_{i=1}^{k} v_{i−1}.d + Σ_{i=1}^{k} w(v_{i−1}, v_i)

Since v_0 = v_k, each vertex in c appears exactly once in each of the summations, thus

    Σ_{i=1}^{k} v_i.d = Σ_{i=1}^{k} v_{i−1}.d.

Correctness of Bellman-Ford
By Corollary ., v_i.d is finite for i = 1, 2, ..., k, thus

    0 ≤ Σ_{i=1}^{k} w(v_{i−1}, v_i).

This contradicts the assumption that c has negative weight. Thus the algorithm returns false.

Another Example

Something notable
By relaxing the edges of a weighted DAG G = (V, E) according to a topological sort of its vertices, we can compute shortest paths from a single source in Θ(V + E) time.

Why?
Shortest paths are always well defined in a DAG: even if there are negative-weight edges, no negative-weight cycles can exist.

Single-Source Shortest Paths in Directed Acyclic Graphs
In a DAG, we can do the following (complexity Θ(V + E)):

DAG-SHORTEST-PATHS(G, w, s)
1  topologically sort the vertices of G
2  Initialize(G, s)
3  for each u ∈ V[G], taken in topologically sorted order
4      for each v ∈ Adj[u]
5          Relax(u, v, w)
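The procedure above can be sketched in Python. This is a minimal sketch under stated assumptions: the adjacency-list format {u: [(v, w), ...]} and the use of Kahn's algorithm for the topological sort are choices of this example, not prescribed by the slides:

```python
import math
from collections import deque

def dag_shortest_paths(adj, s):
    """adj: {u: [(v, w), ...]} describing a DAG. Returns (d, pi).
    Relaxes every edge once, in topological order of its tail."""
    # Topological sort via Kahn's algorithm (in-degree counting).
    indeg = {u: 0 for u in adj}
    for u in adj:
        for v, _ in adj[u]:
            indeg[v] += 1
    order, queue = [], deque(u for u in adj if indeg[u] == 0)
    while queue:
        u = queue.popleft()
        order.append(u)
        for v, _ in adj[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    # Initialize, then relax edges in topological order.
    d = {u: math.inf for u in adj}
    pi = {u: None for u in adj}
    d[s] = 0
    for u in order:
        for v, w in adj[u]:
            if d[u] + w < d[v]:          # Relax(u, v, w)
                d[v] = d[u] + w
                pi[v] = u
    return d, pi
```

Negative edge weights are fine here, as the text notes, since a DAG cannot contain a negative-weight cycle.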

It is based on the following theorem

Theorem .
If a weighted, directed graph G = (V, E) has source vertex s and no cycles, then at the termination of the DAG-SHORTEST-PATHS procedure, v.d = δ(s, v) for all vertices v ∈ V, and the predecessor subgraph G_π is a shortest-path tree.

Proof
Left to you...

Complexity
1. The topological sort takes Θ(V + E).
2. Initialize takes Θ(V).
3. The outer for loop makes one iteration per vertex, and the inner for loop relaxes each edge exactly once (thanks to the topological order), each relaxation taking Θ(1).

Therefore the total running time is Θ(V + E).

Example
[Figure: the graph after initialization; b is the source.]

Example
[Figure: a is the first vertex in the topological sort, but no update is done.]

Example
[Figure: b is the next one.]

Example
[Figure: c is the next one.]

Example
[Figure: d is the next one.]

Example
[Figure: e is the next one.]

Example
[Figure: finally, f is the last one.]

Dijkstra's Algorithm
It is a greedy-based method. Ideas? Yes: we need to keep track of the greedy choice!

Dijkstra's Algorithm
Assume no negative-weight edges.
1. Dijkstra's algorithm maintains a set S of vertices whose shortest path from s has already been determined.
2. It repeatedly selects the u ∈ V − S with the minimum shortest-path estimate (the greedy choice).
3. It stores V − S in a priority queue Q.

DIJKSTRA(G, w, s)
1  Initialize(G, s)
2  S = ∅
3  Q = V[G]
4  while Q ≠ ∅
5      u = Extract-Min(Q)
6      S = S ∪ {u}
7      for each vertex v ∈ Adj[u]
8          Relax(u, v, w)
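The procedure above can be sketched in Python. This is a minimal sketch; using a binary heap with lazy deletion of stale entries (instead of a queue supporting decrease-key) is an implementation choice of this example:

```python
import heapq
import math

def dijkstra(adj, s):
    """adj: {u: [(v, w), ...]} with non-negative weights.
    Returns (d, pi). Stale heap entries are skipped rather than
    deleted, which emulates Extract-Min with decrease-key."""
    d = {u: math.inf for u in adj}
    pi = {u: None for u in adj}
    d[s] = 0
    heap = [(0, s)]
    done = set()                        # the set S of finished vertices
    while heap:
        du, u = heapq.heappop(heap)     # greedy choice: minimum estimate
        if u in done:
            continue                    # stale entry, skip it
        done.add(u)
        for v, w in adj[u]:
            if du + w < d[v]:           # Relax(u, v, w)
                d[v] = du + w
                pi[v] = u
                heapq.heappush(heap, (d[v], v))
    return d, pi
```

With a binary heap this runs in O((V + E) log V), matching the complexity summary later in these notes.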

Example
[Figure: the graph after initialization.]

Example
[Figure: red edges represent v.π; black vertices represent the set S.]

Example
[Figure: Extract-Min returns s; the elements adjacent to s are updated.]

Example
[Figure: Extract-Min returns a; the elements adjacent to a are updated.]

Example
[Figure: Extract-Min returns e; the elements adjacent to e are updated.]

Example
[Figure: Extract-Min returns b; the elements adjacent to b are updated.]

Example
[Figure: Extract-Min returns h; the elements adjacent to h are updated.]

Example
[Figure: Extract-Min returns c; no update.]

Example
[Figure: Extract-Min returns d; no update.]

Example
[Figure: Extract-Min returns g; no update.]

Example
[Figure: Extract-Min returns f; no update. The algorithm terminates.]

Correctness of Dijkstra's Algorithm

Theorem .
Upon termination, u.d = δ(s, u) for all u in V (assuming non-negative weights).

Proof
By Lemma .11, once u.d = δ(s, u) holds, it continues to hold. We are going to use the following loop invariant: at the start of each iteration of the while loop (lines 4–8), v.d = δ(s, v) for each vertex v ∈ S.

Proof
Thus we are going to prove that for each u ∈ V, u.d = δ(s, u) when u is inserted into S.

Initialization
Initially S = ∅, so the invariant is trivially true.

Maintenance
We want to show that in each iteration u.d = δ(s, u) for the vertex added to the set S. We use contradiction: suppose not, and let u be the first vertex such that u.d ≠ δ(s, u) when inserted into S. Note that s.d = δ(s, s) = 0 when s is inserted, so u ≠ s; thus S ≠ ∅ just before u is inserted (in fact, s ∈ S).

Proof
Note that there must exist a path from s to u, for otherwise u.d = δ(s, u) = ∞ by Corollary .1 (the no-path property: if there is no path from s to v, then v.d = δ(s, v) = ∞ is an invariant), which would contradict u.d ≠ δ(s, u). Thus there exists a shortest path p between s and u.

Observation
Prior to adding u to S, path p connects a vertex in S, namely s, to a vertex in V − S, namely u.

Proof Proof (continuation) Then, shortest path from s to u: s p 1 x y p u looks like... S u s x y Remark: Either of paths p 1 or p may have no edges. 99 / 108 Proof We claim y.d = δ(s, y) when u is added into S. Proof of the claim 1 Observe that x S. In addition, we know that u is the first vertex for which u.d δ (s, u) when it id added to S 100 / 108

Proof
In addition, we had that x.d = δ(s, x) when x was inserted into S, and edge (x, y) was relaxed at that time!

Remember the convergence property (Lemma .1)? Let p be a shortest path from s to v that ends with edge (u, v); if u.d = δ(s, u) holds at any time prior to calling Relax(u, v, w), then v.d = δ(s, v) holds at all times after the call.

Then, using this convergence property:

    y.d = δ(s, y) = δ(s, x) + w(x, y)

The claim is implied!

Proof
Now we obtain a contradiction to prove that u.d = δ(s, u). Since y appears before u on a shortest path from s to u, and all edges have non-negative weights, we have δ(s, y) ≤ δ(s, u), thus

    y.d = δ(s, y) ≤ δ(s, u) ≤ u.d,

where the last inequality is the upper-bound property (Lemma .11).

Then
But because both vertices u and y were in V − S when u was chosen by Extract-Min, we also have u.d ≤ y.d. Thus

    y.d = δ(s, y) = δ(s, u) = u.d.

Consequently u.d = δ(s, u), which contradicts our choice of u. Conclusion: u.d = δ(s, u) when u is added to S, and the equality is maintained afterwards.

Finally

Termination
At termination Q = ∅, so V − S = ∅, or equivalently S = V. Thus u.d = δ(s, u) for all vertices u ∈ V!

Complexity
The running time is:
   O(V^2) using a linear array for the priority queue.
   O((V + E) log V) using a binary heap.
   O(V log V + E) using a Fibonacci heap.

Exercises
From Cormen's book, solve: .1-1, .1-, .1-, .-1, .-, .-, .-, .-, .-8, .-10