Algorithms. NP-Complete Problems. Dong Kyue Kim, Hanyang University


Algorithms. NP-Complete Problems. Dong Kyue Kim, Hanyang University. dqkim@hanyang.ac.kr

The Class P
Definition 13.2 (Polynomially bounded). An algorithm is said to be polynomially bounded if its worst-case complexity is bounded by a polynomial function of the input size. A problem is said to be polynomially bounded if there is a polynomially bounded algorithm for it.
Definition 13.3 (The class P). P is the class of decision problems that are polynomially bounded.

The Class P
Why use a polynomial bound as the criterion for the definition?
- Problems not in P are definitely hard.
- Polynomials have a closure property: any algorithm built from several polynomially bounded algorithms is also polynomially bounded. No smaller class of functions has this property.
- A polynomial bound makes P independent of the particular formal model of computation used. A problem that requires Θ(f(n)) steps on one model may require more than Θ(f(n)) steps on another, but for virtually all of the realistic models, if a problem is polynomially bounded for one, then it is so for the others.

The Class NP
Informal definition: NP is the class of decision problems for which a given proposed solution for a given input can be checked quickly (in polynomial time) to see if it really is a solution. Many decision problems are phrased as existence questions. A proposed solution (a certificate) is simply an object of the appropriate kind; it may or may not satisfy the criteria.

The Class NP
Definition 13.4 (Nondeterministic algorithm). A nondeterministic algorithm has two phases and an output step:
1. Nondeterministic guessing phase: an arbitrary string of characters, s, is written. (A solution is proposed.)
2. Deterministic verifying phase: read the decision problem's input and, optionally, s. Return the value true or false, or possibly loop forever.
3. Output step: if the verifying phase returned true, the algorithm outputs yes. Otherwise, there is no output.

The Class NP
Example: nondeterministic graph coloring (Figure 13.1).
Input: k, n, and the edges of G.
The first phase writes s = c_1, c_2, ..., c_q. The second phase assigns c_i to v_i and:
1. Checks that q = n.
2. Checks that each c_i is in the range 1, ..., k.
3. For each edge v_i v_j, checks that c_i ≠ c_j.
A nondeterministic algorithm is said to be polynomially bounded if there is a polynomial p such that, for each input of size n for which the answer is yes, some execution of the algorithm produces yes in at most p(n) steps.
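
The deterministic verifying phase above can be sketched as a polynomial-time check (a minimal sketch; the edge-list representation of G and 0-based vertex labels are assumptions, not the slides' notation):

```python
def verify_coloring(n, k, edges, s):
    """Deterministic verifying phase for nondeterministic graph coloring.
    n: number of vertices v_0..v_{n-1}; k: number of allowed colors;
    edges: list of (i, j) pairs; s: the guessed string, read as a color list.
    Runs in polynomial time, O(len(s) + len(edges))."""
    if len(s) != n:                             # 1. check q = n
        return False
    if any(not (1 <= c <= k) for c in s):       # 2. each c_i in 1..k
        return False
    return all(s[i] != s[j] for i, j in edges)  # 3. c_i != c_j on every edge

# A proper 2-coloring of the 4-cycle:
print(verify_coloring(4, 2, [(0, 1), (1, 2), (2, 3), (3, 0)], [1, 2, 1, 2]))  # True
```

The guessing phase is the part no deterministic machine is known to match: a real algorithm would have to search the k^n candidate strings itself.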

The Class NP
[Figure 13.1 (the nondeterministic graph coloring example) appeared here.]

The Class NP
Definition 13.5 (The class NP). NP is the class of decision problems for which there is a polynomially bounded nondeterministic algorithm. (NP: Nondeterministic Polynomially bounded.)
Theorem 13.1. The previous sample problems are all in NP.
Theorem 13.2. P ⊆ NP.
Proof. A deterministic algorithm is a special case of a nondeterministic algorithm (one that executes only the second phase).

The Class NP
The big question: P = NP, or P ≠ NP? Is nondeterminism more powerful than determinism? The trouble is that there is an exponential number of strings to check: with a character set of size c, there are c^p(n) strings of length p(n). Can we devise an algorithm that does not have to examine all possible solutions? We do not know good enough tricks yet. There are no polynomially bounded algorithms known for many problems in NP, but no larger-than-polynomial lower bounds have been proved for these problems.

P and NP: Some Sample Problems
Exponential algorithms are useless except for very small inputs. Intractable problems: no efficient algorithms have been identified, yet no one has proved that the problems require more than polynomial time. Many optimization problems are of this nature.

P and NP: Some Sample Problems
Problem 13.6 (Satisfiability).
CNF (conjunctive normal form): a sequence of clauses separated by ∧ (AND).
Clause: a sequence of literals separated by ∨ (OR).
Literal: a propositional variable or the negation of a propositional variable.
Decision problem: is there a truth assignment for the variables in the expression so that the expression has the value true?
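
As a concrete (exponential-time) illustration of the decision problem, a CNF formula can be encoded as a list of clauses, each clause a list of (variable, sign) literals; this encoding is an assumption made for the sketch, not notation from the slides:

```python
from itertools import product

def eval_cnf(clauses, assignment):
    """True iff every clause contains a literal satisfied by the assignment."""
    return all(any(assignment[v] == pos for v, pos in clause)
               for clause in clauses)

def satisfiable(clauses, variables):
    """Brute-force decision: try all 2^n truth assignments."""
    return any(eval_cnf(clauses, dict(zip(variables, bits)))
               for bits in product([False, True], repeat=len(variables)))

# (p OR NOT q) AND (NOT p OR q): satisfied by p = q.
cnf = [[("p", True), ("q", False)], [("p", False), ("q", True)]]
print(satisfiable(cnf, ["p", "q"]))  # True
```

Verifying one proposed assignment (eval_cnf) is polynomial; it is the search over all 2^n assignments that is exponential.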

P and NP: Some Sample Problems
Problem 13.1 (Graph coloring). A coloring of a graph G = (V, E) is a mapping C: V → S, where S is a finite set (of colors), such that if vw ∈ E then C(v) ≠ C(w). The chromatic number of G, χ(G), is the smallest number of colors needed to color G.
Optimization problem: given G, determine χ(G).
Decision problem: given G and an integer k, is there a coloring of G using at most k colors? (Is G k-colorable?)

P and NP: Some Sample Problems
Problem 13.1 (Graph coloring), example: an exam schedule with k time slots.
V: the set of courses.
E: pairs of courses that should not be scheduled at the same time.
The graph G = (V, E) should be k-colorable.

P and NP: Some Sample Problems
Problem 13.2 (Job scheduling with penalties).
n jobs J_1, J_2, ..., J_n, executed one at a time.
Execution times: t_1, t_2, ..., t_n.
Deadlines: d_1, d_2, ..., d_n.
Penalties for missing the deadlines: p_1, p_2, ..., p_n.
A schedule is a permutation π of {1, 2, ..., n}, where J_π(i) is the job done ith. The total penalty for a particular schedule π is P = P_1 + P_2 + ... + P_n, where P_j = p_π(j) if job J_π(j) completes after its deadline d_π(j), and P_j = 0 otherwise.

P and NP: Some Sample Problems
Problem 13.2 (Job scheduling with penalties).
Optimization problem: determine the minimum possible total penalty.
Decision problem: given k, is there a schedule such that P ≤ k?
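
The total penalty of a given schedule is easy to compute, which is what makes a proposed schedule a polynomially checkable certificate (a minimal sketch with made-up data; 0-based job indices are an assumption):

```python
def total_penalty(t, d, p, schedule):
    """Total penalty of a schedule (a permutation of job indices 0..n-1).
    t, d, p: execution times, deadlines, and penalties of the jobs.
    A job incurs its penalty iff it completes after its deadline."""
    time, total = 0, 0
    for j in schedule:
        time += t[j]        # completion time of job j
        if time > d[j]:
            total += p[j]   # deadline missed: pay p_j
    return total

t, d, p = [2, 1, 3], [3, 2, 4], [10, 5, 8]
print(total_penalty(t, d, p, [1, 0, 2]))  # 8: only the last job is late
```

Checking one schedule is O(n); it is choosing among the n! schedules that is hard.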

P and NP: Some Sample Problems
Problem 13.3 (Bin packing).
An unlimited number of bins, each with capacity 1. n objects with sizes s_1, s_2, ..., s_n, where 0 < s_i ≤ 1.
Optimization problem: determine the smallest number of bins needed to pack the objects.
Decision problem: given k, do the objects fit in k bins?
Example: filling orders for a product (e.g., fabric) to be cut from large, standard-sized pieces.

P and NP: Some Sample Problems
Problem 13.4 (Knapsack).
A knapsack of capacity C. n objects with sizes s_1, s_2, ..., s_n and profits p_1, p_2, ..., p_n.
Optimization problem: find the largest total profit of any subset of objects that fits in the knapsack.
Decision problem: given k, is there a subset of objects that fits and has total profit ≥ k?

P and NP: Some Sample Problems
Problem 13.5 (Subset sum).
A positive integer C and n positive integer sizes s_1, s_2, ..., s_n.
Optimization problem: determine the largest subset sum that is ≤ C.
Decision problem: is there a subset whose sum is exactly C?
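
A standard dynamic-programming sketch decides subset sum in O(nC) time. Note that this is pseudo-polynomial (polynomial in the value C, not in its bit length), so it does not place the problem in P:

```python
def subset_sum(sizes, C):
    """Decision: is there a subset of sizes summing to exactly C?
    Tracks the set of sums <= C reachable with a prefix of the sizes."""
    reachable = {0}
    for s in sizes:
        reachable |= {r + s for r in reachable if r + s <= C}
    return C in reachable

print(subset_sum([3, 34, 4, 12, 5, 2], 9))   # True (4 + 5)
print(subset_sum([3, 34, 4, 12, 5, 2], 30))  # False
```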

P and NP: Some Sample Problems
Problem 13.7 (Hamiltonian cycles and Hamiltonian paths). A Hamiltonian cycle in an undirected graph is a simple cycle that passes through every vertex exactly once. (The word circuit is sometimes seen in place of cycle.) A Hamiltonian path in an undirected graph is a simple path that passes through every vertex exactly once.
Decision problem: does a given undirected graph have a Hamiltonian cycle (path)?

P and NP: Some Sample Problems
Problem 13.8 (Traveling salesperson, minimum tour). The salesperson wants to minimize the total traveling cost required to visit all the cities in a territory and return to the starting point.
Optimization problem: given a complete, weighted graph, find a minimum-weight Hamiltonian cycle.
Decision problem: given a complete, weighted graph and an integer k, is there a Hamiltonian cycle with total weight ≤ k?
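
A brute-force sketch of the decision version, trying all (n-1)! tours; the adjacency-matrix representation is an assumption:

```python
from itertools import permutations

def tsp_decision(W, k):
    """Given a complete weighted graph as a symmetric matrix W and an
    integer k, decide: is there a Hamiltonian cycle of total weight <= k?
    Fixes vertex 0 as the start and tries every ordering of the rest."""
    n = len(W)
    for perm in permutations(range(1, n)):
        tour = (0,) + perm + (0,)
        if sum(W[tour[i]][tour[i + 1]] for i in range(n)) <= k:
            return True
    return False

W = [[0, 2, 9, 10],
     [2, 0, 6, 4],
     [9, 6, 0, 3],
     [10, 4, 3, 0]]
print(tsp_decision(W, 18))  # True: the tour 0-1-3-2-0 costs 2+4+3+9 = 18
print(tsp_decision(W, 17))  # False
```

Checking one proposed tour against k is polynomial; enumerating the tours is what blows up.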

NP-Complete Problems: Polynomial Reductions
Suppose we want to solve a problem P, and we have an algorithm for Q. Suppose we also have a function T that takes an input x for P and produces T(x), an input for Q, such that the correct answer for P on x is yes if and only if the correct answer for Q on T(x) is yes. Then, by composing T and the algorithm for Q, we have an algorithm for P. (See Figure 13.2.)

NP-Complete Problems: Polynomial Reductions
[Figure 13.2 (solving P by composing T with an algorithm for Q) appeared here.]

NP-Complete Problems: Polynomial Reductions
Example: a simple reduction.
P: given a sequence of Boolean values, does at least one of them have the value true?
Q: given a sequence of integers, is the maximum of the integers positive?
T(x_1, x_2, ..., x_n) = (y_1, y_2, ..., y_n), where y_i = 1 if x_i = true and y_i = 0 if x_i = false.
Clearly, an algorithm to solve Q, when applied to y_1, y_2, ..., y_n, solves P for x_1, x_2, ..., x_n.
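
The simple reduction above translates directly into code (a minimal sketch; the function names are made up for illustration):

```python
def T(xs):
    """The transformation: booleans (input for P) -> integers (input for Q)."""
    return [1 if x else 0 for x in xs]

def solve_Q(ys):
    """Q: is the maximum of the integers positive?"""
    return max(ys) > 0

def solve_P(xs):
    """P: does at least one value equal true?  Solved by composing T
    with the algorithm for Q, exactly as in the slide."""
    return solve_Q(T(xs))

print(solve_P([False, True, False]))  # True
print(solve_P([False, False]))        # False
```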

NP-Complete Problems: Polynomial Reductions
Definition 13.6 (Polynomial reduction and reducibility). Let T be a function from the input set for a decision problem P into the input set for a decision problem Q. T is a polynomial reduction (also called a polynomial transformation) from P to Q if all of the following hold:
1. T can be computed in polynomially bounded time.
2. For every string x, if x is a yes input for P, then T(x) is a yes input for Q.
3. For every string x, if x is a no input for P, then T(x) is a no input for Q.
It is usually easier to prove the contrapositive of part 3:
3'. For every string x, if T(x) is a yes input for Q, then x is a yes input for P.

NP-Complete Problems: Polynomial Reductions
Definition 13.6 (cont'd). Problem P is polynomially reducible (also called polynomially transformable) to Q if there exists a polynomial transformation from P to Q. (We usually just say P is reducible to Q.) The notation P ≤_P Q is used to indicate that P is reducible to Q. Intuitively, P ≤_P Q means that Q is at least as hard to solve as P.

NP-Complete Problems: Polynomial Reductions
Theorem 13.3. If P ≤_P Q and Q is in P, then P is in P.
Proof. Let p be a polynomial bound on T, let q be a polynomial bound on the algorithm for Q, and let x be an input for P of size n. The size of T(x) is at most p(n). If T(x) is given to the algorithm for Q, it does at most q(p(n)) steps. So the total for the transformation T and the use of the Q algorithm is p(n) + q(p(n)), a polynomial in n.

NP-Complete Problems: Polynomial Reductions
Definition 13.7 (NP-hard and NP-complete). A problem Q is NP-hard if every problem P in NP is reducible to Q; that is, P ≤_P Q. A problem Q is NP-complete if it is in NP and is NP-hard. Being NP-hard constitutes a lower bound on the problem; being in NP constitutes an upper bound.
Theorem 13.4. If any NP-complete problem is in P, then P = NP.
This shows how valuable it would be to find a polynomially bounded algorithm for any NP-complete problem, and how unlikely it is that such an algorithm exists, because there are so many problems in NP for which polynomially bounded algorithms have been sought without success.

NP-Complete Problems: Polynomial Reductions
To show that some problem Q is NP-hard, it is necessary to show that all problems in NP are reducible to Q. How would this be possible? The first proof that a certain problem actually is NP-complete stands as one of the major accomplishments of the field.
Theorem 13.5 (Cook's theorem). The satisfiability problem is NP-complete.
Theorem 13.6. Graph coloring, Hamiltonian cycle, Hamiltonian path, job scheduling with penalties, bin packing, the subset sum problem, the knapsack problem, and the traveling salesperson problem are all NP-complete.

NP-Complete Problems: Polynomial Reductions
To prove that a problem Q ∈ NP is NP-complete, it suffices to prove that some other NP-complete problem is polynomially reducible to Q, because the reducibility relation is transitive. (Be careful about the direction of the reduction!)
To show that Q ∈ NP is NP-complete:
- Choose some known NP-complete problem P.
- Show P ≤_P Q.
Then all problems R ∈ NP satisfy R ≤_P Q (since R ≤_P P and P ≤_P Q). Therefore, Q is NP-complete.

NP-Complete Problems: Polynomial Reductions
Theorem 13.7. The directed Hamiltonian cycle problem is reducible to the undirected Hamiltonian cycle problem.
Proof. Let G = (V, E) be a directed graph with n vertices. G is transformed into the undirected graph G' = (V', E'), where
V' = { v^i : v ∈ V, i = 1, 2, 3 },
E' = { v^1 v^2, v^2 v^3 : v ∈ V } ∪ { v^3 w^1 : vw ∈ E }.
(See Figure 13.3.) The transformation is straightforward, and G' can be constructed in polynomially bounded time: if |V| = n and |E| = m, then |V'| = 3n and |E'| = 2n + m.

NP-Complete Problems: Polynomial Reductions
[Figure 13.3 (each vertex v of G replaced by the path v^1 - v^2 - v^3 in G') appeared here.]

NP-Complete Problems: Polynomial Reductions
Proof (cont'd). We have to show that G has a directed Hamiltonian cycle if and only if G' has an undirected Hamiltonian cycle.
(⇒) Suppose G has a Hamiltonian cycle v_1, v_2, ..., v_n. Then v_1^1, v_1^2, v_1^3, v_2^1, v_2^2, v_2^3, ..., v_n^1, v_n^2, v_n^3 is an undirected Hamiltonian cycle in G'.
(⇐) Suppose G' has a Hamiltonian cycle. The three vertices v^1, v^2, v^3 that correspond to one vertex v of G must be traversed consecutively, in the order v^1, v^2, v^3 or v^3, v^2, v^1, because v^2 cannot be reached from any other vertex in G'.

NP-Complete Problems: Polynomial Reductions
Proof (cont'd). Since the other edges in G' connect vertices with superscripts 1 and 3, if the order in one triple is 1, 2, 3, then the order is 1, 2, 3 for all triples; otherwise, it is 3, 2, 1 for all triples. Since G' is undirected, we may assume that its Hamiltonian cycle is v_{i1}^1, v_{i1}^2, v_{i1}^3, ..., v_{in}^1, v_{in}^2, v_{in}^3. Then v_{i1}, v_{i2}, ..., v_{in} is a directed Hamiltonian cycle for G. The vertex v^2 is introduced to force the vertices that correspond to v to appear together in any cycle in G'.
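
The transformation in the proof is mechanical; here is a sketch (a pair (v, i) stands for v^i, and the arc-list representation of G is an assumption):

```python
def to_undirected(n, arcs):
    """Transform a directed graph G (vertices 0..n-1, arcs (v, w)) into the
    undirected graph G' of Theorem 13.7: each vertex v becomes the path
    (v,1)-(v,2)-(v,3), and each arc v->w becomes the edge (v,3)-(w,1)."""
    vertices = [(v, i) for v in range(n) for i in (1, 2, 3)]
    edges = [((v, 1), (v, 2)) for v in range(n)]    # gadget edges v1-v2
    edges += [((v, 2), (v, 3)) for v in range(n)]   # gadget edges v2-v3
    edges += [((v, 3), (w, 1)) for v, w in arcs]    # arc v->w becomes v3-w1
    return vertices, edges

# A directed 3-cycle: |V'| = 3n = 9 and |E'| = 2n + m = 9, as in the proof.
V2, E2 = to_undirected(3, [(0, 1), (1, 2), (2, 0)])
print(len(V2), len(E2))  # 9 9
```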

Optimization Problems and Decision Problems
Three kinds of problems, listed in order of increasing difficulty:
1. Decision problem: is there a solution better than some given bound?
2. Optimal value: what is the value of a best possible solution?
3. Optimal solution: find a solution that achieves the optimal value.
The optimization problems are at least as hard as the corresponding NP-complete decision problems (they are NP-hard): no polynomial verification algorithm is known that can determine whether a proposed solution is an optimal solution.

Optimization Problems and Decision Problems
What if P = NP? If we had polynomial-time algorithms for the decision problems, could we find the optimal value in polynomial time? (In many cases, we could.)
Example: graph coloring. Assume cancolor(G, k) returns true iff G is k-colorable.
    chromaticNumber(G):
        for (k = 1; k <= n; k++)
            if (cancolor(G, k)) break;
        return k;
If cancolor runs in polynomial time, so does the whole program.
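
A runnable version of the loop above; here cancolor is implemented by brute force (exponential), since no polynomial-time version is known — the slide's point concerns only the outer loop, which adds at most n decision calls:

```python
from itertools import product

def cancolor(n, edges, k):
    """Decision problem: is the n-vertex graph k-colorable? (brute force)"""
    return any(all(c[i] != c[j] for i, j in edges)
               for c in product(range(k), repeat=n))

def chromatic_number(n, edges):
    """Optimal value computed through the decision procedure, as on the slide."""
    for k in range(1, n + 1):
        if cancolor(n, edges, k):
            return k

# The odd cycle C5 needs 3 colors.
print(chromatic_number(5, [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]))  # 3
```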

Optimization Problems and Decision Problems
Example: traveling salesperson. Assume tspbound(G, k) returns true iff there is a tour of cost ≤ k.
    tspMin(G):
        for (k = 1; ; k++)
            if (tspbound(G, k)) break;
        return k;
Let W be the maximum of the edge weights. Since there are n edges in a Hamiltonian cycle, the weight of a minimum tour is at most nW, so the loop does at most nW iterations. As the discussion on input size indicates, this is not good enough to conclude that the program runs in polynomial time: nW can be exponential in the number of bits needed to write down the weights.

Approximation Algorithms
Approximation (or heuristic) algorithms: fast (polynomially bounded) algorithms that give near-optimal solutions.
Consider a particular optimization problem and an input I. FS(I) is the set of feasible solutions for I (objects of the right type, not necessarily optimal).
Example: graph coloring, G = (V, E).
FS(G) = { C : V → {1, ..., |V|} such that C(v) ≠ C(w) if vw ∈ E }, the set of all colorings.

Approximation Algorithms
val(I, x): the value of the optimization parameter achieved by the feasible solution x, for input instance I.
Example: graph coloring. val(G, C) = |C(V)|, the number of colors used.
opt(I) = best { val(I, x) : x ∈ FS(I) }.
An optimal solution for I is an x ∈ FS(I) such that val(I, x) = opt(I).
Approximation algorithm: a polynomial-time algorithm that, when given input I, outputs an element of FS(I).

Approximation Algorithms
Let A be an approximation algorithm, and let A(I) be the feasible solution A chooses for input I. The quality of A on I is measured by the ratio
r_A(I) = val(I, A(I)) / opt(I) for minimization problems,
r_A(I) = opt(I) / val(I, A(I)) for maximization problems.
In both cases r_A(I) ≥ 1, and the closer it is to 1, the better. Consider the worst-case scenarios:
R_A(m) = max { r_A(I) : I such that opt(I) = m },
S_A(n) = max { r_A(I) : I of size n }.

Computer Algorithms (Backtracking & Branch-and-Bound)

Finding Solutions to NP-Complete Problems
Many of the NP-complete problems arise in real-life applications, yet NP-complete problems (as far as we know) cannot be solved in polynomial time. We need techniques for finding solutions to such hard problems in a reasonable amount of time.

Backtracking (from Section 13.5.1 of Michael T. Goodrich & Roberto Tamassia)
Backtracking takes advantage of the inherent structure of the NP-complete problems: a certificate consists of a set of choices, and the certificate can be checked in polynomial time by testing whether or not it demonstrates a successful configuration of a problem instance. Backtracking searches through a large (exponential-size) set of possibilities in a systematic way: it traverses possible search paths to locate solutions or dead ends.

Backtracking
The search space consists of configurations (x, y), where x is the remaining subproblem to be solved and y is the set of choices that have been made to get to this subproblem from the original problem instance.
The backtracking strategy:
- Start from (x, ∅), where x is the original problem instance.
- Generate new configurations by making a small set of additional choices.
- Backtrack at dead ends to another configuration.


Backtracking
To turn the strategy into an actual algorithm:
1. Define a way of selecting the most promising candidate configuration from the frontier set F.
2. Specify the way of expanding a configuration (x, y) into subproblem configurations. The expansion process should, in principle, be able to generate all feasible configurations.
3. Describe how to perform a simple consistency check for a configuration (x, y) that returns "solution found", "dead end", or "continue".
The frontier F can be a stack (depth-first search), a queue (breadth-first search), or another structure.

Backtracking
Example: a backtracking algorithm for CNF-SAT. Let S be a Boolean expression in CNF. A configuration is a pair (S', y), where y is a partial assignment and S' is the formula that results from applying y to S. The most promising choice is the subformula S' with the smallest clause: such a formula would hit a dead end most quickly. Two new subproblems are generated by picking a variable x from a smallest clause in S', and assigning x = 1 and x = 0, respectively.

Backtracking
Example: CNF-SAT (cont'd). To perform a consistency check for an assignment of x in S':
- Reduce any clauses containing x based on the assignment.
- If a resulting clause contains a single literal, y or ¬y, assign y the value that satisfies this clause, and propagate the assignment. Repeat this process until there are no more single-literal clauses.
- If we discover a contradiction, return "dead end".
- If we reduce S' to 1 (true), return "solution found".
- Otherwise, we derive a new formula S' in which each clause has at least two literals, and return "continue".

Backtracking
Example: CNF-SAT (cont'd). The worst-case running time for this algorithm is still exponential, but the backtracking can often speed things up. If every clause in the formula has at most two literals (2-SAT), this algorithm runs in polynomial time.
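
The steps above can be sketched compactly, with unit propagation as the consistency check (the (variable, sign) literal encoding is an assumption made for the sketch):

```python
def simplify(clauses, var, value):
    """Apply var := value; drop satisfied clauses, shrink the rest.
    Returns None if some clause becomes empty (a contradiction)."""
    out = []
    for clause in clauses:
        if (var, value) in clause:                     # clause satisfied
            continue
        reduced = [lit for lit in clause if lit[0] != var]
        if not reduced:                                # dead end
            return None
        out.append(reduced)
    return out

def sat(clauses):
    """Backtracking CNF-SAT.  Literals are (variable, bool) pairs.
    Propagates single-literal clauses, then branches on a variable
    chosen from a smallest clause, as described on the slides."""
    while True:
        if not clauses:
            return True                                # solution found
        unit = next((c for c in clauses if len(c) == 1), None)
        if unit is None:
            break
        clauses = simplify(clauses, unit[0][0], unit[0][1])  # propagate
        if clauses is None:
            return False                               # dead end
    var = min(clauses, key=len)[0][0]                  # smallest clause
    for value in (True, False):
        reduced = simplify(clauses, var, value)
        if reduced is not None and sat(reduced):
            return True
    return False

# (p OR q) AND (NOT p OR q) AND (NOT q OR r)
cnf = [[("p", True), ("q", True)],
       [("p", False), ("q", True)],
       [("q", False), ("r", True)]]
print(sat(cnf))  # True
```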

Branch-and-Bound (from Section 9.7 of Gilles Brassard & Paul Bratley)
Backtracking is not designed for optimization problems: in addition to feasibility conditions, optimization problems have an objective function to be minimized or maximized. Branch-and-bound is an extension of backtracking: the search continues even after a solution is found, until the best solution is found. The algorithm calculates a bound on the possible values of any solutions that might lie farther on in the branch. If such solutions are necessarily worse than the best solution found so far, that branch is pruned.

Branch-and-Bound
Often, the calculated bound is also used to choose which open path looks the most promising. As the bound gets more accurate, the algorithm's efficiency improves.
Example: the assignment problem. Assign n tasks to n agents, each task being performed by a single agent, in such a way that the total cost of executing the n tasks is minimized.

Branch-and-Bound
Example: the assignment problem (cont'd). The cost matrix (c_ij is the cost of task j when performed by agent i):

        1   2   3   4
   a   11  12  18  40
   b   14  15  13  22
   c   11  17  19  23
   d   17  14  20  28

An upper bound on the answer can be obtained by considering one possible solution: a→1, b→2, c→3, d→4, with cost 11 + 15 + 19 + 28 = 73. An optimal solution cannot cost more than this.

Branch-and-Bound
A lower bound can be obtained by adding the smallest elements in each column or row:
11 + 12 + 13 + 22 = 58 (columns),
11 + 13 + 11 + 14 = 49 (rows — not so useful!).
The answer lies somewhere in [58..73].
Branch-and-bound search: start from the root (no assignment). At each level, fix the assignment of one agent. At each node, calculate a bound on the solutions obtainable by completing the current partial assignment. The bound is used to guide the search and also to prune branches (search paths).
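
The bounds above can be checked directly (a sketch; insertion-ordered dicts, i.e. Python 3.7+, are assumed):

```python
costs = {
    "a": [11, 12, 18, 40],
    "b": [14, 15, 13, 22],
    "c": [11, 17, 19, 23],
    "d": [17, 14, 20, 28],
}

# Upper bound: the cost of one feasible assignment, a->1, b->2, c->3, d->4.
upper = sum(row[i] for i, row in enumerate(costs.values()))

# Lower bounds: sum of the smallest element in each row / in each column.
row_lb = sum(min(row) for row in costs.values())
col_lb = sum(min(col) for col in zip(*costs.values()))

print(upper, col_lb, row_lb)  # 73 58 49
```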

Branch-and-Bound
The root is expanded by fixing agent a's task, giving four nodes with lower bounds: a→1: 60, a→2: 58, a→3: 65, a→4: 78*. The node with the lowest lower bound (a→2) is expanded next. The node a→4 is cut off (*) because its lower bound is higher than the upper bound 73.

Branch-and-Bound
Expanding a→2 gives the nodes a→2,b→1: 68; a→2,b→3: 59; and a→2,b→4: 64 (with the open siblings a→1: 60 and a→3: 65, and the pruned a→4: 78*).

Branch-and-Bound
Expanding a→2,b→3 yields the complete solutions a→2,b→3,c→1,d→4 with cost 64 and a→2,b→3,c→4,d→1 with cost 65. Now 64 is a new upper bound.

Branch-and-Bound
With the upper bound 64, the nodes a→3 (65*), a→4 (78*), a→2,b→1 (68*), and a→2,b→3,c→4,d→1 (65*) are cut off. The node a→2,b→4 (64*) can also be eliminated if we only want one solution. The remaining open node is a→1 (60), along with the solution a→2,b→3,c→1,d→4 (64).

Branch-and-Bound
Expanding a→1 gives the nodes a→1,b→2: 68*; a→1,b→3: 61; and a→1,b→4: 66*. The first and last are cut off by the upper bound 64; a→1,b→3 (61) is expanded next.

Branch-and-Bound
Expanding a→1,b→3 yields a→1,b→3,c→2,d→4 with cost 69* and a→1,b→3,c→4,d→2 with cost 61. Now 61 is a new upper bound.

Branch-and-Bound
61 is the optimum (minimum) value, because all the other nodes are now cut off (including the earlier solution a→2,b→3,c→1,d→4, now 64*).