Design and Analysis of Algorithms


CSE 101, Winter 2018 Design and Analysis of Algorithms Lecture 18: Consolidation (DP, Greed, NP-C, Flow) Class URL: http://vlsicad.ucsd.edu/courses/cse0-w8/

Followup on IGO, Annealing

Iterative Global Optimization
S = universe of solutions // aka solution space
cost(s) = cost or objective function
N(s) = neighborhood of a given solution s ∈ S
Iterative Global Optimization:
start with an initial solution s_0
for i = 1 to M // M = time limit, stop criterion, etc.
  generate candidate solution s' ∈ N(s_{i-1})
  decide between s_i = s_{i-1} or s_i = s'
return s_M
// "where you are" == s_M
// "best so far" == best over s_0, ..., s_M
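As a concrete rendering of this loop, here is a minimal Python sketch of iterative global optimization with the simple greedy accept rule (the functions cost and neighbor are problem-specific placeholders, not anything defined on the slide; later slides replace the accept rule with simulated annealing's probabilistic one):

def iterative_global_optimization(s0, cost, neighbor, M):
    """Generic IGO loop: M moves through the neighborhood structure N(s)."""
    current = s0
    best = s0                                   # "best so far"
    for _ in range(M):
        candidate = neighbor(current)           # generate s' in N(s_{i-1})
        if cost(candidate) <= cost(current):    # greedy decide-between rule
            current = candidate
        if cost(current) < cost(best):
            best = current
    return current, best                        # "where you are", "best so far"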

Local, Global Minima

Simulated Annealing (SA) Kirkpatrick, Gelatt, Vecchi, Science (1983): one of the most cited scientific papers ever. SA is one of many metaheuristics used to deal with instances of intractable (NP-hard) combinatorial problems: genetic algorithms (Holland, U. Michigan), tabu search (Glover, U. Colorado), etc. Combinatorial optimization has a physical analogy to the annealing (slow cooling) of metals to produce a perfectly ordered, minimum-energy state: a state is a solution, energy is cost, etc.

Simulated Annealing Basic Idea
Step 1: Initialize. Start with a random initial solution. Initialize a high temperature = a parameter, T.
Step 2: Move. Perturb the current solution to obtain a neighbor solution.
Step 3: Calculate cost change. Calculate the change in solution cost due to the move (minimization: negative change is better, positive change is worse).
Step 4: Accept/Reject. Depending on the cost change, accept or reject the move. The probability of acceptance depends on the current temperature.
Step 5: Update. Update the temperature and current solution. Go to Step 2. Continue until the termination condition ("freezing" or "quenching") is satisfied.

SA Pseudocode http://www.ecs.umass.edu/ece/labs/vlsicad/ece665/slides/simulatedannealing.ppt
Algorithm SIMULATED-ANNEALING
Begin
  temp = INIT-TEMP;
  currentSol = INIT-SOLUTION;
  for i = 1 to M
    candidateSol = NEIGHBOR(currentSol);
    ΔC = COST(candidateSol) − COST(currentSol);
    if (ΔC < 0) then currentSol = candidateSol;
    else, with Pr = e^(−ΔC/temp), currentSol = candidateSol;
    temp = SCHEDULE(temp);
End
What happens when temp = +∞? What happens when temp = 0?
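The pseudocode above translates almost line for line into Python; the following is a minimal sketch (the names init_solution, neighbor, and cost, and the geometric cooling schedule, are illustrative assumptions, not part of the slides):

import math
import random

def simulated_annealing(init_solution, neighbor, cost, M, init_temp, alpha=0.95):
    """Minimal simulated annealing loop.

    init_solution -- function returning a random starting solution
    neighbor      -- function mapping a solution to a random neighbor
    cost          -- objective function to minimize
    M             -- number of moves
    init_temp     -- initial (high) temperature
    alpha         -- geometric cooling factor (one possible SCHEDULE)
    """
    temp = init_temp
    current = init_solution()
    for _ in range(M):
        candidate = neighbor(current)
        delta_c = cost(candidate) - cost(current)
        # Downhill moves are always accepted; uphill moves are accepted with
        # probability e^(-delta_c/temp).  As temp -> +infinity this probability
        # tends to 1 (a random walk); as temp -> 0 it tends to 0 (pure greedy
        # descent), which answers the two questions on the slide above.
        if delta_c < 0 or random.random() < math.exp(-delta_c / temp):
            current = candidate
        temp = alpha * temp        # cooling schedule
    return current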

Simulated Annealing Facts [Figure: cost landscape with an initial state; greed gets stuck in a local optimum, while SA chooses an uphill move with nonzero probability ("hill-climbing").] SA converges to the globally optimal solution with Pr = 1 (in the limit of infinite time, infinitely slow cooling). Fact 1. NEIGHBOR(solution) defines a topology over all solutions in the solution space. Fact 2. At a fixed value of temp, SA behavior corresponds to a homogeneous Markov chain: fixed temp ⇒ fixed matrix of transition probabilities between states.

Simulated Annealing Facts [Same figure as above.] SA converges to the globally optimal solution with Pr = 1 (in the limit of infinite time, infinitely slow cooling). Fact 3. The steady-state (= equilibrium) probability of the Markov chain being in state A is proportional to e^(−cost(A)/temp). When temp → 0, it becomes exponentially more likely to be in the global optimum state ⇒ SA is optimal (in the limit of infinite time). Of course, we spend only a finite amount of time (#moves) at any temperature value. Is cooling the best strategy with finite time? See Boese/Kahng, 1993.
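To see why low temperature favors the global optimum, it helps to spell out the ratio of steady-state probabilities implied by Fact 3 (a standard calculation sketched here; it is not on the slide itself):

\[
\frac{\Pr[\text{state } A]}{\Pr[\text{state } B]}
 = \frac{e^{-\mathrm{cost}(A)/\mathrm{temp}}}{e^{-\mathrm{cost}(B)/\mathrm{temp}}}
 = e^{(\mathrm{cost}(B)-\mathrm{cost}(A))/\mathrm{temp}}
\]

If cost(A) < cost(B), this ratio grows without bound as temp → 0, so in the limit essentially all steady-state probability mass concentrates on minimum-cost (globally optimal) states.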

Optimal SA Temperature Schedules 6-city Traveling Salesperson instance M = 60 steps Where-You-Are (top) Best-So-Far (bottom) (Optimal temperature schedules can be found by DP)

Optimal SA Temperature Schedules 8-vertex Graph Bisection instance M = 60 steps Where-You-Are (top) Best-So-Far (bottom) (Optimal temperature schedules can be found by DP)

About Final Exam

Final Exam: Tuesday 3/20, 7pm-10pm. Outline of exam: ~6 questions, ~60 points total. ~30 points are "mechanical": flow, LP, short answer/TTK. ~0 points are algorithm design questions: Greed, DP. ~0 points on NP-Completeness reduction. A Google Doc with advice exists: linked from Piazza @87. Review sessions on Friday 7pm and Saturday 3pm. Usual Friday 5pm WLH 00 session: canceled this week. A draft seating chart (Peterson 08 and Center 6) has been posted. Errors brought to Heitor's attention by Friday 11:59PM will be corrected in the final seating chart.

Short Answers, TTK

Flavor of Short Answers: past exams (MT, Final). TTK (linked at the top of the course homepage) is being refurbished but is still generally useful. Very simple T/F:
T/F: Algorithm A has runtime that satisfies the recurrence T_A(n) = 4T_A(n/2) + O(1). Algorithm B has runtime that satisfies the recurrence T_B(n) = T_B(n/4) + O(1). Algorithm A is asymptotically faster than Algorithm B.
T/F: If a given flow network has a directed cycle, then the Ford-Fulkerson algorithm will not necessarily find a maximum s-t flow.
T/F: NP-Hard problems are a subset of NP problems for which no polynomial-time solution is known.
T/F: Any instance of the maximum flow problem can be reformulated as an instance of linear programming.
T/F: Given that problem Y is NP-complete, and that problem X is in NP and reduces (polynomially) to Y, then X is also NP-complete.
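For the first T/F item, the Master Theorem settles the comparison; a quick check, assuming the recurrences exactly as written above:

\[
T_A(n) = 4\,T_A(n/2) + O(1) \;\Rightarrow\; T_A(n) = \Theta(n^{\log_2 4}) = \Theta(n^2),
\qquad
T_B(n) = T_B(n/4) + O(1) \;\Rightarrow\; T_B(n) = \Theta(\log n),
\]

so Algorithm A is asymptotically slower, not faster, than Algorithm B.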

NP-Completeness Reduction

NP-Completeness Reduction Example. TSP(G,k): Given an edge-weighted undirected graph G and a number k, does there exist a TSP tour in G with cost ≤ k? TSP-Extension(G,P,k): Given an edge-weighted undirected graph G, a path P in G, and a number k, can P be extended to a complete TSP tour in G that has cost ≤ k? Problem: Give a poly-time reduction of TSP(G,k) to TSP-Extension(G,P,k).

Maximum Flow / Execution of Ford-Fulkerson

Ford-Fulkerson Example (~Lecture) (1) Draw the final residual graph; (2) write down the value of the maximum flow; (3) write down the minimum cut after executing Ford-Fulkerson in the network below, with:
Augmenting Path 1: S-A-D-T (2 units of flow)
Augmenting Path 2: S-C-E-T (1 unit of flow)
Augmenting Path 3: S-B-D-T (2 units of flow)
[Figure: flow network G with source S, sink T, and intermediate vertices A, B, C, D, E; edge capacities not fully recoverable from the transcription.]

Greed and DP

Greedy Analysis/Proof Strategies. Greedy stays ahead: show that after each step of the greedy algorithm, its solution is at least as good as any other algorithm's. (Interval Scheduling in Lecture 8.) Structural: discover a simple structural bound asserting that every possible solution must have a certain value; then show that your algorithm always achieves this bound. (Interval Partitioning in Lecture 8.) Exchange argument: gradually transform any optimal solution to the one found by the greedy algorithm without hurting its quality.

Scheduling to Minimize Lateness (treatment from the Kleinberg-Tardos text, Prof. Kevin Wayne slides)
Minimizing maximum lateness problem: a single resource processes one job at a time. Job j requires t_j units of processing time and is due at time d_j. If job j starts at time s_j, it finishes at time f_j = s_j + t_j. Lateness: ℓ_j = max{0, f_j − d_j}. Goal: schedule all jobs to minimize the maximum lateness L = max_j ℓ_j.
Example (jobs 1-6): t_j = 3, 2, 1, 4, 3, 2 and d_j = 6, 8, 9, 9, 14, 15.
Job scheduled order: 3, 2, 6, 1, 5, 4. Lateness check: lateness = 2 (job 1), lateness = 0, max lateness = 6 (job 4).
[Figure: this schedule on a time axis 0-15, with deadlines d_3 = 9, d_2 = 8, d_6 = 15, d_1 = 6, d_5 = 14, d_4 = 9 marked.]

Minimizing Lateness: Greedy Algorithms. Greedy template: consider jobs in some order.
[Shortest processing time first] Consider jobs in ascending order of processing time t_j. Counterexample: t_j = 1, 10 and d_j = 100, 10.
[Smallest slack] Consider jobs in ascending order of slack d_j − t_j. Counterexample: t_j = 1, 10 and d_j = 2, 10.
[Earliest deadline first] Consider jobs in ascending order of deadline d_j.

Minimizing Lateness: Greedy Algorithm. Greedy algorithm: earliest deadline first.
Sort the n jobs by deadline so that d_1 ≤ d_2 ≤ ... ≤ d_n
t ← 0
for j = 1 to n
  Assign job j to interval [t, t + t_j]
  s_j ← t, f_j ← t + t_j
  t ← t + t_j
output intervals [s_j, f_j]
Example (same jobs as above, t_j = 3, 2, 1, 4, 3, 2 and d_j = 6, 8, 9, 9, 14, 15): the greedy schedule processes jobs in deadline order d_1 = 6, d_2 = 8, d_3 = 9, d_4 = 9, d_5 = 14, d_6 = 15; max lateness = 1.
[Figure: the greedy schedule on a time axis 0-15.]
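A direct Python rendering of the earliest-deadline-first rule (a small sketch; the job data below is the running example from the slides):

def edf_schedule(t, d):
    """Earliest-deadline-first scheduling to minimize maximum lateness.

    t[j], d[j] -- processing time and deadline of job j (0-indexed here).
    Returns the list of (job, start, finish) triples and the max lateness.
    """
    order = sorted(range(len(t)), key=lambda j: d[j])    # ascending deadlines
    time, schedule, max_lateness = 0, [], 0
    for j in order:
        s, f = time, time + t[j]                         # job j runs in [s, f]
        schedule.append((j + 1, s, f))
        max_lateness = max(max_lateness, f - d[j])       # lateness = max(0, f - d)
        time = f
    return schedule, max_lateness

# Running example: max lateness should come out to 1.
t = [3, 2, 1, 4, 3, 2]
d = [6, 8, 9, 9, 14, 15]
print(edf_schedule(t, d))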

Minimizing Lateness: No Idle Time. Observation: there exists an optimal schedule with no idle time. [Figure: a schedule with an idle gap vs. the same three jobs (deadlines 4, 6, 12) packed with no idle time.] Observation: the greedy schedule has no idle time.

Minimizing Lateness: Inversions. Definition: given a schedule S, an inversion is a pair of jobs i and j such that d_i < d_j but j is scheduled before i. [Figure: schedule before the swap, with j scheduled just before i; f_i marks i's finish time.] (As before, we assume jobs are numbered so that d_1 ≤ d_2 ≤ ... ≤ d_n.) Observation: the greedy schedule has no inversions and no idle time.

Minimizing Lateness: Inversions. Definition: given a schedule S, an inversion is a pair of jobs i and j such that i < j but j is scheduled before i. [Figure: the pair j, i before the swap and i, j after the swap; f_i marks the later finish time.] Claim: swapping two consecutive, inverted jobs reduces the number of inversions by one and does not increase the max lateness. Proof: let ℓ be the lateness before the swap, and let ℓ' be the lateness afterwards. Then ℓ'_k = ℓ_k for all k ≠ i, j, and ℓ'_i ≤ ℓ_i. If job j is late:
ℓ'_j = f'_j − d_j   (definition)
     = f_i − d_j    (j now finishes at time f_i)
     ≤ f_i − d_i    (since i < j, d_i ≤ d_j)
     ≤ ℓ_i          (definition)

(Small Lemma) Claim: all schedules with no inversions and no idle time have the same maximum lateness. Proof: two such schedules can differ only in the order in which jobs with identical deadlines are scheduled. Jobs with the same deadline are scheduled consecutively, and the last of these jobs has the largest lateness, independent of the order of these jobs.

Minimizing Lateness: Greed is Optimal. Theorem: the greedy schedule S is optimal. Proof: define S* to be an optimal schedule that has the fewest inversions. We can assume S* has no idle time. If S* has no inversions, then S = S* (the greedy schedule S also has no inversions and no idle time, so by the small lemma the two have the same maximum lateness). If S* has an inversion, let i-j be an adjacent inversion. Swapping i and j does not increase the maximum lateness and strictly decreases the number of inversions. This contradicts the definition of S*.

Job Scheduling With Deadlines and Profits. Given n jobs, each of which takes unit time. Each job has a profit g_i and a deadline d_i. We want to schedule the jobs on a single processor so as to maximize profit. There are n available time slots; think of slot t as beginning at time t − 1 and ending at time t.
Example (jobs sorted in order of profit):
Job i: 1 2 3 4
Deadline d_i (when the job must finish by): 3 3
Profit g_i: 9 7 7

Job Scheduling With Deadlines and Profits. Given n jobs, each of which takes unit time. Each job has a profit g_i and a deadline d_i. We want to schedule the jobs on a single processor so as to maximize profit.
Greedy algorithm:
Sort jobs by profit: g_1 ≥ g_2 ≥ ... ≥ g_n
Initialize S(t) = 0 for t = 1..n // S(t) is the job scheduled in slot t
For i = 1..n
  Schedule job i in the latest possible free slot meeting its deadline
  If there is no such slot, do not schedule job i
Result on the example: t: 1 2 3 4; S(t): 3 0; Total profit = 7 + 7 + 9 = 23

Job Scheduling With Deadlines and Profits. Greedy algorithm:
Sort jobs by profit: g_1 ≥ g_2 ≥ ... ≥ g_n
Initialize S(t) = 0 for t = 1..n // S(t) is the job scheduled in slot t
For i = 1..n
  Schedule job i in the latest possible free slot meeting its deadline
  If there is no such slot, do not schedule job i
Definition: a feasible schedule is promising after stage i if it can be extended to an optimal feasible schedule by adding only jobs from {i+1, ..., n}. Let S_i be the value of S after i stages of the greedy algorithm. Key claim: S_i is promising after stage i, for every 0 ≤ i ≤ n. // which type of Greedy proof is this?
(Same example as above: t: 1 2 3 4; S(t): 3 0; Total profit = 7 + 7 + 9 = 23)
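Here is a small Python sketch of this greedy rule (the four-job data below is illustrative, with assumed deadlines and an assumed fourth profit; it reproduces the total profit of 23):

def greedy_profit_schedule(profits, deadlines):
    """Greedy scheduling of unit-time jobs with deadlines and profits.

    Jobs are considered in order of decreasing profit; each job is placed in
    the latest free slot that meets its deadline, if any.
    Slots are 1..n; S[t] == 0 means slot t is empty.
    """
    n = len(profits)
    order = sorted(range(n), key=lambda i: -profits[i])   # sort by profit, descending
    S = [0] * (n + 1)                                      # S[1..n]; index 0 unused
    for i in order:
        # Latest possible free slot meeting job i's deadline.
        for t in range(min(deadlines[i], n), 0, -1):
            if S[t] == 0:
                S[t] = i + 1                               # record job (1-indexed)
                break
    total = sum(profits[S[t] - 1] for t in range(1, n + 1) if S[t] != 0)
    return S[1:], total

# Hypothetical data in the spirit of the slide's 4-job example:
profits = [9, 7, 7, 2]
deadlines = [3, 1, 3, 2]
print(greedy_profit_schedule(profits, deadlines))   # slots 1..4 and total profit 23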

DP: Weighted Interval Scheduling Treatment from Kleinberg-Tardos text, Prof. Kevin Wayne slides

Weighted Interval Scheduling. Job j starts at s_j, finishes at f_j, and has weight (value) v_j. Two jobs are compatible if they don't overlap. Goal: find a maximum-weight subset of mutually compatible jobs. [Figure: eight jobs a-h laid out on a time axis.]

Lecture 8: Unweighted Interval Scheduling. We saw that the greedy algorithm is optimal if all weights are 1: consider jobs in ascending order of finish time, and add a job to the subset if it is compatible with the previously chosen jobs. Observation: the greedy algorithm can fail if arbitrary weights are allowed. [Figure: two overlapping jobs a and b, one with weight 999 and one with weight 1; greedy by finish time picks the weight-1 job.]

Weighted Interval Scheduling. Notation: label jobs by finishing time, f_1 ≤ f_2 ≤ ... ≤ f_n. Def: p(j) = largest index i < j such that job i is compatible with j. Ex: p(8) = 5, p(7) = 3, p(2) = 0. [Figure: jobs 1-8 on a time axis.]

Dynamic Programming: Binary Choice. Notation: OPT(j) = value of the optimal solution to the problem consisting of job requests 1, 2, ..., j.
Case 1: OPT selects job j. It can't use the incompatible jobs {p(j) + 1, p(j) + 2, ..., j − 1}, and must include the optimal solution to the problem consisting of the remaining compatible jobs 1, 2, ..., p(j).
Case 2: OPT does not select job j. It must include the optimal solution to the problem consisting of the remaining compatible jobs 1, 2, ..., j − 1. (Optimal substructure.)
OPT(j) = 0 if j = 0; otherwise OPT(j) = max{ v_j + OPT(p(j)), OPT(j − 1) }.
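The recurrence turns into a short bottom-up DP; here is a minimal Python sketch (computing p(j) by binary search over the sorted finish times is one standard choice):

import bisect

def weighted_interval_scheduling(jobs):
    """Bottom-up DP for weighted interval scheduling.

    jobs -- list of (s, f, v) triples (start, finish, weight).
    Returns the maximum total weight of mutually compatible jobs.
    """
    jobs = sorted(jobs, key=lambda job: job[1])       # label jobs by finish time
    finishes = [f for (_, f, _) in jobs]
    n = len(jobs)

    # p[j] = largest index i < j (1-indexed) whose finish time is <= s_j.
    p = [0] * (n + 1)
    for j in range(1, n + 1):
        s_j = jobs[j - 1][0]
        p[j] = bisect.bisect_right(finishes, s_j, 0, j - 1)

    # OPT[j] = max{ v_j + OPT[p(j)], OPT[j-1] }, with OPT[0] = 0.
    OPT = [0] * (n + 1)
    for j in range(1, n + 1):
        v_j = jobs[j - 1][2]
        OPT[j] = max(v_j + OPT[p[j]], OPT[j - 1])
    return OPT[n]

# Usage, in the spirit of the earlier two-job counterexample (weights 1 and 999):
print(weighted_interval_scheduling([(0, 2, 1), (1, 10, 999)]))   # -> 999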

BACKUP

Ford-Fulkerson Example (~Lecture) (1) Draw the final residual graph; (2) write down the value of the maximum flow; (3) write down the minimum cut after executing Ford-Fulkerson in the network below, with:
Augmenting Path 1: S-A-D-T (2 units of flow)
Augmenting Path 2: S-C-E-T (1 unit of flow)
Augmenting Path 3: S-B-D-T (2 units of flow)
[Figure: flow network G with source S, sink T, and intermediate vertices A, B, C, D, E; edge capacities not fully recoverable from the transcription.]

Ford-Fulkerson Example [Figure: the original network G (total flow = 0) and the residual graph G_f1 after augmenting along S-A-D-T. Total flow = 0 + 2 = 2.]

Ford-Fulkerson Example [Figure: residual graphs G_f1 (total flow = 2) and G_f2 after augmenting along S-C-E-T. Total flow = 2 + 1 = 3.]

Ford-Fulkerson Example [Figure: residual graphs G_f2 (total flow = 3) and G_f3 after augmenting along S-B-D-T. Total flow = 3 + 2 = 5.]

Ford-Fulkerson Example [Figure: final residual graph G_f3, total flow = 5.] No directed S-T paths exist in G_f3, so the flow cannot be incremented further, and the final/max flow value is 5. The vertices of G_f3 can be divided into two sets, L = vertices reachable from S (including S) = {S, A, B, C, E}, and R = V\L = {D, T}. These two sets define a minimum cut in the network.

Ford-Fulkerson Example [Figure: final residual graph G_f3 (total flow = 5) and the corresponding flow/capacity labels on each edge of the original network.]

Ford-Fulkerson Example [Figure: the network G with the cut (L, R) drawn in.] The flow is upper-bounded by the capacity of any L-R cut. Here we have an L-R cut (L→R edges AD, BD, ET) with capacity 2 + 2 + 1 = 5. We have a flow with value 5, which is therefore maximum. The value of the maximum flow is equal to the capacity of the minimum cut.

Ford-Fulkerson (some notes). Works only with non-negative integer capacities. The value of the S-T flow increases with every iteration (finding an augmenting path in the residual graph). The value of the S-T flow must increase by at least 1 in every iteration, so the maximum number of iterations is equal to the value of the maximum flow. In each iteration we find an S-T path, which takes O(E) time.
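For reference, the whole procedure fits in a few lines of Python; this is a minimal sketch using DFS to find augmenting paths in the residual graph (an adjacency-map, dict-of-dicts representation is assumed for simplicity, and the tiny example network at the end is hypothetical, not the exam network above):

def ford_fulkerson(capacity, s, t):
    """Max flow via Ford-Fulkerson with DFS augmenting paths.

    capacity -- dict of dicts: capacity[u][v] = capacity of edge u->v
    Returns the value of the maximum s-t flow.
    """
    # Residual capacities start out equal to the original capacities.
    residual = {u: dict(nbrs) for u, nbrs in capacity.items()}
    for u in capacity:
        for v in capacity[u]:
            residual.setdefault(v, {}).setdefault(u, 0)   # reverse edges, capacity 0

    def dfs(u, bottleneck, visited):
        if u == t:
            return bottleneck
        visited.add(u)
        for v, cap in residual[u].items():
            if cap > 0 and v not in visited:
                pushed = dfs(v, min(bottleneck, cap), visited)
                if pushed > 0:
                    residual[u][v] -= pushed              # consume forward residual capacity
                    residual[v][u] += pushed              # add reverse residual capacity
                    return pushed
        return 0

    flow = 0
    while True:
        pushed = dfs(s, float('inf'), set())
        if pushed == 0:                                   # no augmenting S-T path remains
            return flow
        flow += pushed

# Hypothetical tiny example:
cap = {'S': {'A': 2, 'B': 2}, 'A': {'T': 2}, 'B': {'T': 1}, 'T': {}}
print(ford_fulkerson(cap, 'S', 'T'))   # -> 3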