ICS 252 Introduction to Computer Design


ICS 252, Fall 2006. Eli Bozorgzadeh, Computer Science Department, UCI

References and Copyright Textbooks referred: [Mic94] G. De Micheli, Synthesis and Optimization of Digital Circuits, McGraw-Hill, 1994. [CLR90] T. H. Cormen, C. E. Leiserson, R. L. Rivest, Introduction to Algorithms, MIT Press, 1990. [Sar96] M. Sarrafzadeh, C. K. Wong, An Introduction to VLSI Physical Design, McGraw-Hill, 1996. [She99] N. Sherwani, Algorithms for VLSI Physical Design Automation, Kluwer Academic Publishers, 3rd edition, 1999.

References and Copyright (cont.) Slides all from: http://mountains.ece.umn.edu/~kia/courses/ee5301/02-Alg/EE5301-Alg.ppt Slide references: [ Sherwani] Naveed A. Sherwani, 1992 (companion slides to [She99]) [ Keutzer] Kurt Keutzer, Dept. of EECS, UC-Berkeley http://www-cad.eecs.berkeley.edu/~niraj/ee244/index.htm [ Gupta] Rajesh Gupta, UC-San Diego http://www.ics.uci.edu/~rgupta/ics280.html

Reading Assignment Textbook: Chapter 2, Sections 2.1-2.5. Lecture note: Lecture note 2.

Combinatorial Optimization Problems with discrete variables. Examples: SORTING: given N integers, write them in increasing order. KNAPSACK: given a bag of size S, and items {(s1, w1), (s2, w2), ..., (sn, wn)}, where si and wi are the size and value of item i respectively, find a subset of items with maximum overall value that fits in the bag. More examples: http://www.research.compaq.com/src/jcat/ A problem vs. a problem instance. Problem complexity: measures of complexity; complexity cases: average case, worst case; tractability: solvable in polynomial time. [ Gupta]

Algorithm An algorithm defines a procedure for solving a computational problem. Examples: quick sort, bubble sort, insertion sort, heap sort; the dynamic programming method for the knapsack problem. Definition of complexity: run time on deterministic, sequential machines, based on the resources needed to implement the algorithm. Needs a cost model: memory, hardware/gates, communication bandwidth, etc. Example: RAM model with a single processor, where running time ≈ number of operations. [ Gupta]
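The knapsack dynamic program mentioned above can be sketched as follows. This is a minimal illustration, not the course's reference implementation; it assumes integer sizes, and the item list in the usage note is a made-up example.

```python
def knapsack(capacity, items):
    """0/1 knapsack by dynamic programming.

    items: list of (size, value) pairs with integer sizes.
    Returns the maximum total value of a subset of items whose
    sizes sum to at most capacity. Runs in O(n * capacity) time.
    """
    # best[c] = best value achievable with a bag of size c
    best = [0] * (capacity + 1)
    for size, value in items:
        # Scan capacities downward so each item is used at most once.
        for c in range(capacity, size - 1, -1):
            best[c] = max(best[c], best[c - size] + value)
    return best[capacity]
```

For instance, with a bag of size 5 and items {(2, 3), (3, 4), (4, 5)}, the best choice is the first two items, for a total value of 7.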

Algorithm (cont.) Definition of complexity (cont.) Example: Bubble Sort. Scalability with respect to input size is important: how does the running time of an algorithm change when the input size doubles? Running time is a function of input size (n). Examples: n^2 + 3n, 2^n, n log n, ... Generally, large input sizes are of interest (n > 1,000 or even n > 1,000,000). What if I use a better compiler? What if I run the algorithm on a machine that is 10x faster?

for (j = 1; j < N; j++) {
  for (i = j; i < N-1; i++) {
    if (a[i] > a[i+1]) {
      hold = a[i];
      a[i] = a[i+1];
      a[i+1] = hold;
    }
  }
}

[ Gupta]
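The doubling question above can be answered empirically. The sketch below is a standard bubble sort (its indexing differs slightly from the C fragment on the slide) instrumented to count comparisons: on an input of size n it performs n(n-1)/2 comparisons, so doubling n roughly quadruples the work, regardless of compiler or machine speed.

```python
def bubble_sort(a):
    """Standard bubble sort; returns (sorted copy, comparison count).

    After pass j, the largest j+1 elements are in their final
    positions, so each pass scans one fewer element.
    """
    a = list(a)
    n = len(a)
    comparisons = 0
    for j in range(n - 1):
        for i in range(n - 1 - j):
            comparisons += 1
            if a[i] > a[i + 1]:
                a[i], a[i + 1] = a[i + 1], a[i]
    return a, comparisons
```

For n = 100 the count is 4950; for n = 200 it is 19900, close to four times as many.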

Function Growth Examples [Two plots of f(n) for f(n) = 5n, 0.1n^2 + 2n + 40, 2 n log n, and 0.00001 * 2^n: one over small n (1 to about 26, f up to 200), one over large n (10 to about 1010, f up to 10000). The exponential 0.00001 * 2^n eventually dominates all the others.]

Asymptotic Notions Idea: a notion that ignores the constants and describes the trend of a function for large values of the input. Definition: Big-Oh notation: f(n) = O(g(n)) if constants K and n0 can be found such that: for all n ≥ n0, f(n) ≤ K·g(n). g is called an upper bound for f (f is of order g: f will not grow larger than g by more than a constant factor). Examples: 1/3 n^2 = O(n^2) (also O(n^3)); 0.02 n^2 + 127 n + 1923 = O(n^2).
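The definition asks only that some constants K and n0 exist. The sketch below checks one concrete witness for the second example above; the constants K = 1 and n0 = 144 are one valid choice found by hand (many others work).

```python
def f(n):
    """The example function 0.02 n^2 + 127 n + 1923."""
    return 0.02 * n * n + 127 * n + 1923

def is_upper_bounded(K, n0, limit=50000):
    """Check f(n) <= K * n^2 for every n in [n0, limit)."""
    return all(f(n) <= K * n * n for n in range(n0, limit))
```

With K = 1, the bound first holds at n = 144 and keeps holding afterward, which is exactly what f(n) = O(n^2) requires.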

Asymptotic Notions (cont.) Definition (cont.) Big-Omega notation: f(n) = Ω(g(n)) if constants K and n0 can be found such that: for all n ≥ n0, f(n) ≥ K·g(n). g is called a lower bound for f. Big-Theta notation: f(n) = Θ(g(n)) if g is both an upper and a lower bound for f. Describes the growth of a function more accurately than O or Ω. Examples: n^3 + 4n ≠ Θ(n^2); 4n^2 + 1024 = Θ(n^2).

Asymptotic Notions (cont.) How to find the order of a function? Not always easy, especially if you start from an algorithm. Focus on the dominant term: 4n^3 + 100n^2 + log n = O(n^3); n + n log(n) = O(n log(n)). Useful orderings (for a constant K > 1): n! > K^n > n^K > log n > log log n > K; e.g., K^n > n log n > n, and n! > n^10 for large n. What do asymptotic notations mean in practice? If algorithm A has time complexity O(n^2) and algorithm B has time complexity O(n log n), then algorithm B is better (for sufficiently large inputs). If problem P has a lower bound of Ω(n log n), then there is NO WAY you can find an algorithm that solves the problem in O(n) time.

Problem Tractability Problems are classified into easier and harder categories. Class P: a polynomial-time algorithm is known for the problem (hence, it is a tractable problem). Class NP (non-deterministic polynomial time): a polynomial-time solution has not been found yet (and probably does not exist); an exact (optimal) solution can be found using an algorithm with exponential time complexity. Unfortunately, most CAD problems are NP-hard. Be happy with a reasonably good solution. [ Gupta]

Algorithm Types Based on quality of solution and computational effort: deterministic; probabilistic or randomized; approximation; heuristics (e.g., local search). Problem complexity vs. algorithm complexity. [ Gupta]

Deterministic Algorithm Types Algorithms usually used for P problems: exhaustive search! (aka exponential); dynamic programming; divide & conquer (aka hierarchical); greedy; mathematical programming; branch and bound. Algorithms usually used for NP problems (not seeking the optimal solution, but a good one): greedy (aka heuristic); genetic algorithms; simulated annealing; restricting the problem to a special case that is in P.

Graph Definition Graph: a set of objects and their connections. Formal definition: G = (V, E), V = {v1, v2, ..., vn}, E = {e1, e2, ..., em}. V: set of vertices (nodes); E: set of edges (links, arcs). Directed graph: ek = (vi, vj). Undirected graph: ek = {vi, vj}. Weighted graph: w: E → R, where w(ek) is the weight of ek. [Figure: example directed and undirected graphs on vertices a, b, c, d, e.]

Graph Representation: Adjacency List [Figure: the example undirected and directed graphs on vertices a-e, each stored as an array of linked lists; the entry for vertex v holds the list of vertices adjacent to v.]
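An adjacency list can be sketched in a few lines. This is a minimal illustration (a dict of neighbor lists rather than hand-built linked lists); the three-vertex graph in the usage note is a made-up example, not the one in the figure.

```python
def adjacency_list(vertices, edges, directed=False):
    """Build an adjacency-list representation: vertex -> list of neighbors.

    edges: list of (u, v) pairs. For an undirected graph each edge
    is recorded in both endpoints' lists.
    """
    adj = {v: [] for v in vertices}
    for u, v in edges:
        adj[u].append(v)
        if not directed:
            adj[v].append(u)
    return adj
```

For example, the path a-b-c stored undirected gives b the neighbor list [a, c], while the same edge list stored directed leaves c's list empty.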

Graph Representation: Adjacency Matrix [Figure: the same example graphs stored as |V| x |V| 0/1 matrices over vertices a-e; entry (i, j) is 1 iff there is an edge from vertex i to vertex j. The matrix of the undirected graph is symmetric; the directed one generally is not.]

Edge / Vertex Weights in Graphs Edge weights usually represent the cost of an edge. Examples: the distance between two cities; the width of a data bus. Representation: adjacency matrix: instead of 0/1, keep the weight; adjacency list: keep the weight in the linked-list item. Node weights are usually used to enforce some capacity constraint. Examples: the size of gates in a circuit; the delay of operations in a data-dependency graph. [Figures: a small edge-weighted graph and a node-weighted data-dependency graph.]

Hypergraphs Hypergraph definition: similar to graphs, but edges are not between pairs of vertices; each hyperedge connects a set of vertices. Directed / undirected versions are possible. Just like graphs, a node can be the source of (or be connected to) multiple hyperedges. Data structure? [Figure: a graph and a hypergraph on vertices 1-5.]
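One answer to the data-structure question is a pair of incidence maps, which is how circuit netlists are commonly stored. This is a sketch of one reasonable choice, not the only one; the vertex and hyperedge names in the usage note are made up.

```python
def hypergraph(vertices, hyperedges):
    """Store an undirected hypergraph as two maps:
    hyperedge name -> set of member vertices, and
    vertex -> set of incident hyperedge names.

    hyperedges: dict mapping a name to an iterable of vertices.
    """
    edge_members = {name: set(members) for name, members in hyperedges.items()}
    incident = {v: set() for v in vertices}
    for name, members in edge_members.items():
        for v in members:
            incident[v].add(name)
    return edge_members, incident
```

The second map makes "which hyperedges touch this node?" an O(1) lookup, which matters when a node belongs to many hyperedges.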

Graph Search Algorithms Purpose: to visit all the nodes. Algorithms: depth-first search, breadth-first search, topological sort. [Figure: an example graph on vertices A-G with its depth-first and breadth-first visit orders.] [ Sherwani]

Depth-First Search Algorithm

struct vertex { ... int marked; };

dfs ( v )
  v.marked ← 1
  print v
  for each (v, u) ∈ E
    if (u.marked != 1) // not visited yet?
      dfs (u)

Algorithm DEPTH_FIRST_SEARCH ( V, E )
  for each v ∈ V
    v.marked ← 0 // not visited yet
  for each v ∈ V
    if (v.marked == 0)
      dfs (v)
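The same structure translates directly to Python. This sketch returns the visit order instead of printing, and the five-vertex graph in the usage note is a made-up example.

```python
def depth_first_search(vertices, adj):
    """DFS over all vertices, following the slide's structure:
    mark a vertex on first visit, record it, and recurse on
    unmarked neighbors; the outer loop covers disconnected parts.
    """
    marked = {v: False for v in vertices}
    order = []

    def dfs(v):
        marked[v] = True
        order.append(v)  # the "print v" of the pseudocode
        for u in adj[v]:
            if not marked[u]:
                dfs(u)

    for v in vertices:
        if not marked[v]:
            dfs(v)
    return order
```

On the graph a-b, a-c, b-d plus an isolated vertex e, starting the outer loop at a gives the visit order a, b, d, c, e: the isolated vertex is still reached, thanks to the outer loop.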

Breadth-First Search Algorithm

bfs ( v, Q )
  v.marked ← 1
  for each (v, u) ∈ E
    if (u.marked != 1) // not visited yet?
      Q ← Q + u

Algorithm BREADTH_FIRST_SEARCH ( V, E )
  Q ← {v0} // an arbitrary node
  while Q != {}
    v ← Q.pop()
    if (v.marked != 1)
      print v
      bfs (v, Q) // explore successors

There is something wrong with this code. What is it?

Distance in (non-weighted) Graphs Distance dG(u,v): the length of a shortest u--v path in G. [Figure: an example where d(u,v) = 2, while no path joins x and y, so d(x,y) = ∞.]

Moore's Breadth-First Search Algorithm Objective: find d(u,v) for a given pair (u,v), and a shortest path u--v. How does it work? Do BFS, and assign λ(w) the first time you visit a node; λ(w) = depth of w in the BFS. Data structures: Q, a queue containing vertices to be visited; λ(w), length of the shortest path u--w (initially ∞); π(w), parent node of w on the path u--w.

Moore's Breadth-First Search Algorithm Algorithm:
1. Initialize: λ(w) ← ∞ for all w ≠ u; λ(u) ← 0; Q ← Q + u.
2. If Q ≠ φ, x ← pop(Q); else stop: there is no path u--v.
3. For each (x,y) ∈ E: if λ(y) = ∞, then π(y) ← x; λ(y) ← λ(x)+1; Q ← Q + y.
4. If λ(v) = ∞, return to step 2.
5. Follow the π(w) pointers from v to u.

Moore's Breadth-First Search Algorithm [Example graph on u, v1, ..., v10.] Q = u

Node: u v1 v2 v3 v4 v5 v6 v7 v8 v9 v10
λ: 0 ∞ ∞ ∞ ∞ ∞ ∞ ∞ ∞ ∞ ∞
π: - - - - - - - - - - -

Moore's Breadth-First Search Algorithm Q = v1, v2, v5

Node: u v1 v2 v3 v4 v5 v6 v7 v8 v9 v10
λ: 0 1 1 ∞ ∞ 1 ∞ ∞ ∞ ∞ ∞
π: - u u - - u - - - - -

Moore's Breadth-First Search Algorithm Q = v4, v3, v6, v10

Node: u v1 v2 v3 v4 v5 v6 v7 v8 v9 v10
λ: 0 1 1 2 2 1 2 ∞ ∞ ∞ 2
π: - u u v2 v1 u v5 - - - v5

Moore's Breadth-First Search Algorithm Q = v7

Node: u v1 v2 v3 v4 v5 v6 v7 v8 v9 v10
λ: 0 1 1 2 2 1 2 3 ∞ ∞ 2
π: - u u v2 v1 u v5 v4 - - v5

Notes on Moore's Algorithm Why does the problem in the BREADTH_FIRST_SEARCH algorithm above not exist here? Time complexity? Space complexity?
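The five steps of Moore's algorithm can be sketched as follows. This is an illustrative translation (a dict stands in for the λ table, with a missing key meaning ∞), and the small graph in the usage note is a made-up example.

```python
from collections import deque

def moore_bfs(adj, u, v):
    """Moore's BFS: shortest (fewest-edge) u--v path.

    adj: vertex -> list of neighbors.
    Returns (distance, path) or (None, None) if v is unreachable.
    """
    dist = {u: 0}       # lambda(w); a missing key means infinity
    parent = {u: None}  # pi(w)
    q = deque([u])      # Q
    while q:
        x = q.popleft()
        if x == v:                 # step 4: lambda(v) is set, stop early
            break
        for y in adj[x]:
            if y not in dist:      # step 3: lambda(y) == infinity
                dist[y] = dist[x] + 1
                parent[y] = x
                q.append(y)
    if v not in dist:              # step 2 exhausted Q: no u--v path
        return None, None
    path = []                      # step 5: follow pi pointers v -> u
    w = v
    while w is not None:
        path.append(w)
        w = parent[w]
    return dist[v], path[::-1]
```

Because λ is assigned only on first discovery, each vertex enters the queue at most once, giving O(V + E) time, which answers the complexity questions on the slide.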

Distance in Weighted Graphs Why doesn't a simple BFS work any more? Your locally best solution is not your globally best one. Example: edges u--v1 of weight 3, u--v2 of weight 11, and v1--v2 of weight 2. First, v1 and v2 are visited, with λ(v1)=3, π(v1)=u and λ(v2)=11, π(v2)=u. But v2 should be revisited: the path through v1 gives λ(v2)=5, π(v2)=v1.

Dijkstra's Algorithm Objective: find d(u,v) for all vertices v (fixed source u), and the corresponding shortest paths u--v. How does it work? Start from the source, and augment the set of nodes whose shortest path is found; decrease λ(w) from ∞ to d(u,w) as you find shorter distances to w, changing π(w) accordingly. Data structures: S, the set of nodes whose d(u,v) is found; λ(w), current length of the shortest path u--w; π(w), current parent node of w on the path u--w.

Dijkstra's Algorithm Algorithm:
1. Initialize: λ(v) ← ∞ for v ≠ u; λ(u) ← 0; S ← {u}.
2. For each v ∉ S s.t. (ui, v) ∈ E, where ui is the node most recently added to S: if λ(v) > λ(ui) + w(ui, v), then λ(v) ← λ(ui) + w(ui, v); π(v) ← ui. [O(e) over the whole run]
3. Find m = min{λ(v) | v ∉ S} and vj s.t. λ(vj) = m; S ← S ∪ {vj}. [O(v^2) over the whole run]
4. If |S| < |V|, go to step 2.
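The same algorithm can be sketched with a binary heap replacing the slide's linear scan in step 3; that variant runs in O(E log V) instead of O(V^2). The code and the weighted example in the usage note (the u, v1, v2 triangle from the previous slide) are illustrations, not the course's reference implementation.

```python
import heapq

def dijkstra(adj, source):
    """Single-source shortest paths for non-negative edge weights.

    adj: vertex -> list of (neighbor, weight) pairs.
    Returns (dist, parent): the lambda and pi tables, with
    missing keys meaning infinity / unreachable.
    """
    dist = {source: 0}
    parent = {source: None}
    heap = [(0, source)]
    done = set()  # the set S of finalized vertices
    while heap:
        d, x = heapq.heappop(heap)
        if x in done:  # stale heap entry; x was finalized earlier
            continue
        done.add(x)
        for y, w in adj[x]:
            nd = d + w
            if y not in dist or nd < dist[y]:
                dist[y] = nd      # decrease lambda(y)
                parent[y] = x     # update pi(y)
                heapq.heappush(heap, (nd, y))
    return dist, parent
```

On the triangle with w(u,v1) = 3, w(u,v2) = 11, w(v1,v2) = 2, the algorithm correctly revises λ(v2) from 11 down to 5 via v1.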

Dijkstra's Algorithm - why does it work? Proof by contradiction. Suppose v is the first node added to S such that λ(v) > d(u0,v), where d(u0,v) is the length of the real shortest u0--v path. Since λ(v) and d(u0,v) differ, there are different paths with lengths λ(v) and d(u0,v). Consider the path of length d(u0,v), and let x be the first node on this path not yet in S. Then d(u0,v) < λ(v), and d(u0,v) ≥ λ(x) + α, where α ≥ 0 is the length of the remaining x--v portion => λ(x) < λ(v) => contradiction, since the algorithm would have added x to S before v.

Minimum Spanning Tree (MST) Tree (usually undirected): a connected graph with no cycles; |E| = |V| - 1. Spanning tree: a connected subgraph that covers all vertices. If the original graph is not a tree, it has several spanning trees. Minimum spanning tree: a spanning tree with minimum sum of edge weights (among all spanning trees). Example: build a railway system to connect N cities, with the smallest total length of railroad.

Difference Between MST and Shortest Path Why can't Dijkstra solve the MST problem? Shortest path: minimum sum of edge weights to individual nodes. MST: minimum sum of TOTAL edge weights that connect all vertices. Proposal: pick any vertex, run Dijkstra, and note the paths to all nodes (prove no cycles are created). Debunk: show a counterexample, e.g., a triangle a, b, c with edge weights 5, 4, 2. Write it on your slides!

Minimum Spanning Tree Algorithms Basic idea: start from a vertex (or edge), and expand the tree, avoiding loops (i.e., add a safe edge); pick the minimum-weight edge at each step. Known algorithms: Prim: start from a vertex, and expand the connected tree. Kruskal: start with the minimum-weight edge, and keep adding minimum-weight edges while avoiding cycles (build a forest of small trees, and merge them).

Prim's Algorithm for MST Data structures: S, the set of nodes added to the tree so far; S', the set of nodes not added to the tree yet; T, the edges of the MST built so far; λ(w), current length of the shortest edge (v, w) that connects w to the current tree; π(w), potential parent node of w in the final MST (the current parent that connects w to the current tree).

Prim's Algorithm
Initialize S, S', and T:
  S ← {u0} // u0 is any vertex
  S' ← V \ {u0}
  T ← { }
  for each v ∈ S', λ(v) ← ∞
Initialize λ and π for the vertices adjacent to u0:
  for each v ∈ S' s.t. (u0,v) ∈ E,
    λ(v) ← ω((u0,v))
    π(v) ← u0
While (S' ≠ φ)
  Find u ∈ S' s.t. for all v ∈ S', λ(u) ≤ λ(v)
  S ← S ∪ {u}, S' ← S' \ {u}, T ← T ∪ { (π(u), u) }
  For each v ∈ S' s.t. (u, v) ∈ E,
    If λ(v) > ω((u,v)) then
      λ(v) ← ω((u,v))
      π(v) ← u
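Prim's algorithm can be sketched compactly with a heap of candidate edges instead of the explicit λ/π tables. This is an illustrative variant, not a line-by-line translation of the pseudocode above; the triangle in the usage note is the 5/4/2 counterexample from the MST-vs-shortest-path slide.

```python
import heapq

def prim_mst(adj, u0):
    """Prim's MST on a connected undirected graph.

    adj: vertex -> list of (neighbor, weight) pairs.
    Returns (edges, total): the tree edges as (parent, vertex)
    pairs in the order added, and the total tree weight.
    """
    in_tree = {u0}          # the set S
    edges = []              # the set T
    total = 0
    # Candidate edges (weight, parent, vertex) leaving the tree.
    heap = [(w, u0, v) for v, w in adj[u0]]
    heapq.heapify(heap)
    while heap:
        w, p, v = heapq.heappop(heap)
        if v in in_tree:    # stale candidate; v was added already
            continue
        in_tree.add(v)
        edges.append((p, v))
        total += w
        for x, wx in adj[v]:
            if x not in in_tree:
                heapq.heappush(heap, (wx, v, x))
    return edges, total
```

On the triangle with w(a,b) = 5, w(a,c) = 4, w(b,c) = 2, Prim from a picks a-c and then c-b for a total of 6, whereas the Dijkstra tree from a would use a-b and a-c for a total of 9.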

Other Graph Algorithms of Interest... Min-cut partitioning; graph coloring; maximum clique, independent set; min-cut algorithms; Steiner trees; matching; ...