EE582 Physical Design Automation of VLSI Circuits and Systems


1 EE582 Prof. Dae Hyun Kim School of Electrical Engineering and Computer Science Washington State University Partitioning

2 What We Will Study Partitioning Practical examples Problem definition Deterministic algorithms Kernighan-Lin (KL) Fiduccia-Mattheyses (FM) h-metis Stochastic algorithms Simulated-annealing 2

3 Example Source: 3

4 Example ASIC Placement Source: Nam, Modern Circuit Placement 4

5 VLSI Circuits # interconnections Intra-module: many Inter-module: few Source: Davis, A Stochastic Wire-Length Distribution for Gigascale Integration (GSI) Part I: Derivation and Validation, TCAD 98 5

6 Problem Definition Given A set of cells: T = {c_1, c_2, ..., c_n}, |T| = n. A set of edges (netlist): R = {e_1, e_2, ..., e_m}, |R| = m. Cell size: s(c_i). Edge weight: w(e_j). # partitions: k (k-way partitioning), P = {P_1, ..., P_k}. Minimum partition size: b ≤ s(P_i). Balancing factor: max(s(P_i)) - min(s(P_j)) ≤ B. Graph representation: edges / hyper-edges. Find k partitions P = {P_1, ..., P_k} that minimize the cut size: Σ w(e) over all edges e = (u_1, ..., u_j) whose cells do not all lie in the same partition. 6
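The cut-size objective above is easy to state in code. The sketch below is illustrative only and not from the slides: it assumes a hyperedge list edges[e] of member cell indices, a per-edge weight array, and a cell-to-partition map part.

```cpp
#include <cstddef>
#include <unordered_set>
#include <vector>

// Cut size of a k-way partition: sum of w(e) over all hyperedges whose cells
// span more than one partition (part[c] = partition id of cell c).
double cutSize(const std::vector<std::vector<int>>& edges,  // e -> member cells
               const std::vector<double>& weight,           // e -> w(e)
               const std::vector<int>& part) {              // c -> partition id
    double cut = 0.0;
    for (std::size_t e = 0; e < edges.size(); ++e) {
        std::unordered_set<int> blocks;
        for (int c : edges[e]) blocks.insert(part[c]);
        if (blocks.size() > 1) cut += weight[e];            // edge is cut
    }
    return cut;
}
```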

7 Problem Definition A set of cells T = {c_1, c_2, ..., c_n}, |T| = n. [Figure: example netlist with cells c_1-c_8 and edges e_1-e_6] 7

8 Problem Definition A set of edges (netlist, connectivity) R = {e_1, e_2, ..., e_m}, |R| = m. [Figure: the same example netlist with cells c_1-c_8 and edges e_1-e_6] 8

9 k-way Partitioning k = 2, |P_i| = 4. [Figure: two 2-way partitionings of the example netlist] Left: P_1 = {c_1, c_2, c_3, c_4}, P_2 = {c_5, c_6, c_7, c_8}, cut size = 3. Right: P_1 = {c_1, c_2, c_3, c_5}, P_2 = {c_4, c_6, c_7, c_8}, cut size = 3. 9

10 Kernighan-Lin (KL) Algorithm Problem definition Given A set of vertices (cell list): V = {c_1, c_2, ..., c_2n}, |V| = 2n. A set of two-pin edges (netlist): E = {e_1, e_2, ..., e_m}, |E| = m. Weight of each edge: w(e_j). Vertex size: s(c_i) = 1. Constraints # partitions: 2 (two-way partitioning), P = {A, B}. Balanced partitioning: |A| = |B| = n. Minimize Cutsize 10

11 Kernighan-Lin (KL) Algorithm Cost function: cutsize = Σ w(e) for e ∈ Ψ, where Ψ is the cut set. Here Ψ = {e_2, e_4, e_5}, so Cutsize = w(e_2) + w(e_4) + w(e_5). [Figure: example graph with cells c_1-c_6 and edges e_1-e_5] A = {c_1, c_2, c_3}, B = {c_4, c_5, c_6} 11

12 Kernighan-Lin (KL) Algorithm Algorithm (flow): find an initial solution; improve the solution; if it is the best seen so far, keep it; repeat while there is still improvement; otherwise end. 12

13 Kernighan-Lin (KL) Algorithm Iterative improvement Current partition: (A, B). Optimal partition: (A*, B*). There exist subsets X ⊆ A and Y ⊆ B such that swapping X and Y turns the current partition into the optimal one. How can we find X and Y? 13

14 Kernighan-Lin (KL) Algorithm Iterative improvement Find a pair of vertices such that swapping the two vertices reduces the cutsize. [Figure: swapping cells c_a and c_b reduces the cutsize from 4 to 3] 14

15 Kernighan-Lin (KL) Algorithm Gain computation For each cell a External cost = E_a = Σ w(e_av) for all v ∈ B = Σ w(external edges) Internal cost = I_a = Σ w(e_av) for all v ∈ A = Σ w(internal edges) D-value (a) = D_a = E_a - I_a For each pair (a, b) Gain = g_ab = D_a + D_b - 2w(e_ab) g_ab = {(E_a - w(e_ab)) - I_a} + {(E_b - w(e_ab)) - I_b} = D_a + D_b - 2w(e_ab) [Figure: example pair a ∈ A, b ∈ B] E_a = 4, I_a = 2, D_a = 2; E_b = 1, I_b = 0, D_b = 1 15
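The D-values and the pair gain can be computed directly from these definitions. The following sketch assumes a dense symmetric weight matrix w[i][j] (0 if no edge) and a side indicator inA[i]; all names are illustrative, not from the slides.

```cpp
#include <vector>

// D_a = E_a - I_a for every cell, where E_a sums weights to the other side
// and I_a sums weights to the same side.
std::vector<double> dValues(const std::vector<std::vector<double>>& w,
                            const std::vector<bool>& inA) {
    int n = (int)w.size();
    std::vector<double> D(n, 0.0);
    for (int a = 0; a < n; ++a)
        for (int v = 0; v < n; ++v) {
            if (v == a) continue;
            if (inA[v] != inA[a]) D[a] += w[a][v];   // external cost E_a
            else                  D[a] -= w[a][v];   // internal cost I_a
        }
    return D;
}

// Gain of swapping a (in A) with b (in B): g_ab = D_a + D_b - 2 w(a, b).
double pairGain(const std::vector<std::vector<double>>& w,
                const std::vector<double>& D, int a, int b) {
    return D[a] + D[b] - 2.0 * w[a][b];
}
```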

16 Kernighan-Lin (KL) Algorithm Find a best-gain pair among all the gate pairs. [Figure: example with A = {c_1, c_5, c_6}, B = {c_2, c_3, c_4} and a table of E_a, I_a, D_a for each cell] g_12 = -2, g_13 = 0, g_14 = +2, g_52 = -1, g_53 = -1, g_54 = -1, g_62 = -1, g_63 = -1, g_64 = -1. g_ab = D_a + D_b - 2w(e_ab). Cutsize = 3 16

17 Kernighan-Lin (KL) Algorithm Swap and Lock After swapping, we lock the swapped cells. The locked cells will not be moved further. [Figure: the best-gain pair c_1 and c_4 has been swapped, giving A = {c_4, c_5, c_6}, B = {c_1, c_2, c_3}, together with the updated E_a, I_a, D_a table] Cutsize = 1 17

18 Kernighan-Lin (KL) Algorithm Update of the D-value Update the D-value of the cells affected by the move. For x ∈ A - {a}: D'_x = E'_x - I'_x = {E_x - w(e_xb) + w(e_xa)} - {I_x + w(e_xb) - w(e_xa)} = (E_x - I_x) + 2w(e_xa) - 2w(e_xb) = D_x + 2w(e_xa) - 2w(e_xb) For y ∈ B - {b}: D'_y = (E_y - I_y) + 2w(e_yb) - 2w(e_ya) = D_y + 2w(e_yb) - 2w(e_ya) [Figure: cells x and y connected to the swapped pair a and b] 18
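Under the same assumptions as the previous sketch (dense weight matrix w, side indicator inA, D-value vector D), the incremental update might look like this; the names are again illustrative.

```cpp
// Incremental D-value update after swapping a (from A) with b (from B):
// cells that stay in A gain 2w(x,a) - 2w(x,b); cells that stay in B gain
// 2w(y,b) - 2w(y,a).
void updateD(const std::vector<std::vector<double>>& w,
             const std::vector<bool>& inA,
             std::vector<double>& D, int a, int b) {
    for (int x = 0; x < (int)w.size(); ++x) {
        if (x == a || x == b) continue;
        if (inA[x]) D[x] += 2.0 * w[x][a] - 2.0 * w[x][b];  // x stays in A
        else        D[x] += 2.0 * w[x][b] - 2.0 * w[x][a];  // x stays in B
    }
}
```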

19 Kernighan-Lin (KL) Algorithm Update [Figure: the example graph after swapping c_1 and c_4, with old and new D-values; the updated values are D_c2 = -1, D_c3 = -1, D_c5 = -2, D_c6 = -2] D'_x = D_x + 2·w(e_xa) - 2·w(e_xb), D'_y = D_y + 2·w(e_yb) - 2·w(e_ya) 19

20 Kernighan-Lin (KL) Algorithm Gain computation and pair selection [Figure: the example after the first swap] D_c2 = -1, D_c3 = -1, D_c5 = -2, D_c6 = -2. g_52 = -3, g_53 = -3, g_62 = -3, g_63 = -3. g_ab = D_a + D_b - 2w(e_ab). Cutsize = 1 20

21 Kernighan-Lin (KL) Algorithm Swap and update [Figure: a best-gain pair (gain -3) is chosen, swapped, and locked; the D-values of the remaining free cells are updated: D_c2: -1 → +1, D_c5: -2 → 0] Cutsize = 1 D'_x = D_x + 2·w(e_xa) - 2·w(e_xb), D'_y = D_y + 2·w(e_yb) - 2·w(e_ya) 21

22 Kernighan-Lin (KL) Algorithm Swap and update [Figure: after the second swap, the remaining free cells are c_2 (D = +1) and c_5 (D = 0)] Cutsize = 4 D'_x = D_x + 2·w(e_xa) - 2·w(e_xb), D'_y = D_y + 2·w(e_yb) - 2·w(e_ya) 22

23 Kernighan-Lin (KL) Algorithm Gain computation [Figure: remaining free cells c_2 (D = +1) and c_5 (D = 0)] g_52 = +1. g_ab = D_a + D_b - 2w(e_ab). Cutsize = 4 23

24 Kernighan-Lin (KL) Algorithm Swap [Figure: the partitions after the third swap] Cutsize = 3 24

25 Kernighan-Lin (KL) Algorithm Cutsize trace: Initial: 3; g_1 = +2, after 1st swap: 1; g_2 = -3, after 2nd swap: 4; g_3 = +1, after 3rd swap: 3. The best prefix of the swap sequence is the first swap alone (maximum cumulative gain), so that partition is kept. [Figure: the best partition found] Cutsize = 1 25

26 Kernighan-Lin (KL) Algorithm Algorithm (a single iteration) 1. V = {c_1, c_2, ..., c_2n}, {A, B}: initial partition 2. Compute D_v for all v ∈ V. queue = {}, i = 1, A' = A, B' = B 3. Compute gains and choose the best-gain pair (a_i, b_i). queue += (a_i, b_i), A' = A' - {a_i}, B' = B' - {b_i} 4. If A' and B' are empty, go to step 5. Otherwise, update D for A' and B' and go to step 3. 5. Find the k maximizing G = Σ_{i=1}^{k} g_i 26
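Putting the pieces together, one pass of the algorithm (steps 1-5) could be sketched as below. It reuses the hypothetical dValues(), pairGain() and updateD() helpers from the earlier sketches and is meant as an unoptimized illustration, not a reference implementation.

```cpp
#include <utility>
#include <vector>

// One KL pass on a dense weight matrix. Chooses n best-gain pairs, then
// realizes only the prefix of swaps with the largest cumulative gain G.
double klPass(const std::vector<std::vector<double>>& w, std::vector<bool>& inA) {
    int n2 = (int)w.size();                   // 2n cells
    std::vector<double> D = dValues(w, inA);
    std::vector<bool> locked(n2, false);
    std::vector<std::pair<int,int>> swaps;    // queue of chosen pairs
    std::vector<double> gains;
    for (int i = 0; i < n2 / 2; ++i) {
        int bestA = -1, bestB = -1; double bestG = -1e300;
        for (int a = 0; a < n2; ++a) {
            if (locked[a] || !inA[a]) continue;
            for (int b = 0; b < n2; ++b) {
                if (locked[b] || inA[b]) continue;
                double g = pairGain(w, D, a, b);
                if (g > bestG) { bestG = g; bestA = a; bestB = b; }
            }
        }
        updateD(w, inA, D, bestA, bestB);     // update free cells as if swapped
        locked[bestA] = locked[bestB] = true;
        swaps.push_back({bestA, bestB});
        gains.push_back(bestG);
    }
    // Step 5: pick k maximizing G = g_1 + ... + g_k, then realize those swaps.
    double best = 0.0, run = 0.0; int bestK = 0;
    for (int k = 0; k < (int)gains.size(); ++k) {
        run += gains[k];
        if (run > best) { best = run; bestK = k + 1; }
    }
    for (int k = 0; k < bestK; ++k) {
        inA[swaps[k].first] = false;
        inA[swaps[k].second] = true;
    }
    return best;                              // cutsize reduction of this pass
}
```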

27 Kernighan-Lin (KL) Algorithm Algorithm (overall) 1. Run a single iteration. 2. Get the best partitioning result in the iteration. 3. Unlock all the cells. 4. Re-start the iteration. Use the best partitioning result for the initial partitioning of the next iteration. Stop criteria Max. # iterations Max. runtime Δ Cutsize between the two consecutive iterations. Target cutsize 27
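The overall loop then simply repeats passes until a pass yields no positive total gain or an iteration limit is reached; a minimal sketch, reusing the hypothetical klPass() above:

```cpp
// Outer KL loop: each pass starts from the best result of the previous pass
// (all cells implicitly unlocked again inside klPass).
void klPartition(const std::vector<std::vector<double>>& w,
                 std::vector<bool>& inA, int maxIter = 50) {
    for (int it = 0; it < maxIter; ++it) {
        double G = klPass(w, inA);   // applies the best prefix of swaps
        if (G <= 0.0) break;         // stop: no further cutsize reduction
    }
}
```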

28 Kernighan-Lin (KL) Algorithm Complexity analysis 1. V = {c_1, c_2, ..., c_2n}, {A, B}: initial partition 2. Compute D_v for all v ∈ V. queue = {}, i = 1, A' = A, B' = B 3. Compute gains and choose the best-gain pair (a_i, b_i). queue += (a_i, b_i), A' = A' - {a_i}, B' = B' - {b_i} 4. If A' and B' are empty, go to step 5. Otherwise, update D for A' and B' and go to step 3. 5. Find the k maximizing G = Σ_{i=1}^{k} g_i 28

29 Kernighan-Lin (KL) Algorithm Complexity of the D-value computation External cost (a) = E_a = Σ w(e_av) for all v ∈ B Internal cost (a) = I_a = Σ w(e_av) for all v ∈ A D-value (a) = D_a = E_a - I_a For each cell (node) a For each net connected to cell a Compute E_a and I_a Practically O(n), n: # nodes [Figure: example with E_a = 4, I_a = 2, D_a = 2; E_b = 1, I_b = 0, D_b = 1] 29

30 Kernighan-Lin (KL) Algorithm Complexity of the gain computation g_ab = D_a + D_b - 2w(e_ab) For each pair (a, b) compute g_ab = D_a + D_b - 2w(e_ab) Complexity: O((2n - 2i + 2)^2) 30

31 Kernighan-Lin (KL) Algorithm Complexity of the D-value update for swapping a and b D'_x = D_x + 2w(e_xa) - 2w(e_xb) D'_y = D_y + 2w(e_yb) - 2w(e_ya) For a (and b) For each cell x connected to cell a (and b) Update D_x and D_y Practically O(1) 31

32 Kernighan-Lin (KL) Algorithm Complexity analysis 1. V = {c_1, c_2, ..., c_2n}, {A, B}: initial partition 2. Compute D_v for all v ∈ V: O(n). queue = {}, i = 1, A' = A, B' = B 3. Compute gains and choose the best-gain pair (a_i, b_i): O((2n - 2i + 2)^2). queue += (a_i, b_i), A' = A' - {a_i}, B' = B' - {b_i} 4. If A' and B' are empty, go to step 5. Otherwise, update D for A' and B' (O(1)) and go to step 3. Loop: # iterations = n. 5. Find the k maximizing G = Σ_{i=1}^{k} g_i Overall: O(n^3) 32

33 Kernighan-Lin (KL) Algorithm Reduce the runtime The most expensive step: gain computation (O(n^2)) Compute the gain of each pair: g_ab = D_a + D_b - 2w(e_ab) How to expedite the process Sort the cells in decreasing order of the D-value: D_a1 ≥ D_a2 ≥ D_a3 ≥ ... in partition A, D_b1 ≥ D_b2 ≥ D_b3 ≥ ... in partition B. Keep the max. gain (g_max) found so far. When computing the gain of (a_l, b_m): if D_al + D_bm < g_max, we don't need to compute the gain for any pair (a_k, b_p) s.t. k > l and p > m, because g_kp ≤ D_ak + D_bp ≤ D_al + D_bm. Practically, pair selection takes O(1). Complexity: O(n·log n) for sorting. 33
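A sketch of the pruned pair search described above, under the same dense-matrix assumptions as the earlier sketches (illustrative names only):

```cpp
#include <algorithm>
#include <utility>
#include <vector>

// Best-gain pair with pruning: cells on each side are sorted by decreasing D;
// once D_a + D_b can no longer beat the best gain found so far, the remaining
// pairs in that row (and all later rows) are skipped.
std::pair<int,int> bestPair(const std::vector<std::vector<double>>& w,
                            const std::vector<double>& D,
                            std::vector<int> aCells, std::vector<int> bCells) {
    auto byD = [&](int x, int y) { return D[x] > D[y]; };
    std::sort(aCells.begin(), aCells.end(), byD);            // O(n log n)
    std::sort(bCells.begin(), bCells.end(), byD);
    double gMax = -1e300; std::pair<int,int> best{-1, -1};
    for (int a : aCells) {
        if (!bCells.empty() && D[a] + D[bCells.front()] <= gMax) break;
        for (int b : bCells) {
            if (D[a] + D[b] <= gMax) break;                   // prune this row
            double g = D[a] + D[b] - 2.0 * w[a][b];
            if (g > gMax) { gMax = g; best = {a, b}; }
        }
    }
    return best;
}
```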

34 Kernighan-Lin (KL) Algorithm Complexity analysis 1. V = {c_1, c_2, ..., c_2n}, {A, B}: initial partition 2. Compute D_v for all v ∈ V: O(n). queue = {}, i = 1, A' = A, B' = B 3. Compute gains and choose the best-gain pair (a_i, b_i): O(n log n). queue += (a_i, b_i), A' = A' - {a_i}, B' = B' - {b_i} 4. If A' and B' are empty, go to step 5. Otherwise, update D for A' and B' (O(1)) and go to step 3. Loop: # iterations = n. 5. Find the k maximizing G = Σ_{i=1}^{k} g_i Overall: O(n^2 log n) 34

35 Questions Can the KL algorithm handle hyperedges? unbalanced partitions? fixed cells (e.g., cell k should be in partition A)? 35

36 Questions Intentionally left blank 36

37 Fiduccia-Mattheyses (FM) Algorithm Handles Hyperedges [Figure: a two-pin edge vs. a hyperedge] Imbalance (unequal partition sizes) Runtime: O(n) Idea Move a cell instead of swapping two cells. 37

38 Fiduccia-Mattheyses (FM) Algorithm Algorithm (flow): find an initial solution; improve the solution; if it is the best seen so far, keep it; repeat while there is still improvement; otherwise end. 38

39 Fiduccia-Mattheyses (FM) Algorithm Definitions Cutstate(net) uncut: the net has all the cells in a single partition. cut: the net has cells in both partitions. Gain of cell: # nets by which the cutsize will decrease if the cell were to be moved. Balance criterion: To avoid having all cells migrate to one block, require r·|V| - s_max ≤ |A| ≤ r·|V| + s_max, where |A| + |B| = |V|, r is the balance factor, and s_max is the max cell size. Base cell: The cell selected for movement. The max-gain cell that doesn't violate the balance criterion. 39

40 Fiduccia-Mattheyses (FM) Algorithm Definitions (continued) Distribution (net): (A(n), B(n)) A(n): # cells connected to n in A B(n): # cells connected to n in B Critical net A net is critical if it has a cell that, if moved, will change the net's cutstate. cut to uncut uncut to cut The distribution of the net is (0,x), (1,x), (x,0), or (x,1). Distribution (n1) = (1, 1) Distribution (n2) = (0, 1) Distribution (n3) = (0, 2) Distribution (n4) = (2, 1) Distribution (n5) = (1, 1) Distribution (n6) = (2, 0) 40

41 Fiduccia-Mattheyses (FM) Algorithm Critical net Moving a cell connected to the net changes the cutstate of the net. [Figure: four examples of critical nets across blocks A and B] 41

42 Fiduccia-Mattheyses (FM) Algorithm Algorithm 1. Gain computation Compute the gain of each cell move 2. Select a base cell (a max-gain cell) 3. Move and lock the base cell and update gain. 42

43 Fiduccia-Mattheyses (FM) Algorithm Gain computation F(c): From_block (either A or B) T(c): To_block (either B or A) gain = g(c) = FS(c) - TE(c) FS(c) = |P| where P = {n : c ∈ n and dis(n) = (F(c), T(c)) = (1, x)} TE(c) = |P| where P = {n : c ∈ n and dis(n) = (F(c), T(c)) = (x, 0)} [Figure: blocks A and B] 43

44 Fiduccia-Mattheyses (FM) Algorithm Gain computation [Figure: example with six cells c_1-c_6 split between blocks A and B and nets m, q, k, p, j] gain = g(c) = FS(c) - TE(c) FS(c): # nets whose dis. = (F(c), T(c)) = (1, x) TE(c): # nets whose dis. = (F(c), T(c)) = (x, 0) For each net n connected to c: if F(n) = 1, g(c)++; if T(n) = 0, g(c)--; Distribution of the nets (A, B): m: (3, 0), q: (2, 1), k: (1, 1), p: (1, 1), j: (0, 2) Gain g(c_1) = 0 - 1 = -1 g(c_2) = 2 - 1 = +1 g(c_3) = 0 - 1 = -1 g(c_4) = 1 - 1 = 0 g(c_5) = 1 - 1 = 0 g(c_6) = 1 - 0 = +1 44
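The per-cell FM gains can be computed from the net distributions exactly as in the loop above. The sketch below is illustrative: nets[n] lists the cells on net n and side[c] is 0 for block A and 1 for block B; none of these names come from the slides.

```cpp
#include <vector>

// g(c) = FS(c) - TE(c): a net contributes +1 if c is the only cell in its
// From block (moving c uncuts the net) and -1 if the net has no cell in c's
// To block (moving c would cut the net).
std::vector<int> fmGains(const std::vector<std::vector<int>>& nets,
                         const std::vector<int>& side, int numCells) {
    std::vector<int> gain(numCells, 0);
    for (const auto& net : nets) {
        int count[2] = {0, 0};                     // distribution (A(n), B(n))
        for (int c : net) count[side[c]]++;
        for (int c : net) {
            int from = side[c], to = 1 - from;
            if (count[from] == 1) gain[c]++;       // F(n) = 1
            if (count[to]   == 0) gain[c]--;       // T(n) = 0
        }
    }
    return gain;
}
```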

45 Fiduccia-Mattheyses (FM) Algorithm Select a base (best-gain) cell. Balance factor: 0.4 [Figure: the same example with cell sizes 1, 3, 4, 2, 3, 5] Gain g(c_1) = 0 - 1 = -1 g(c_2) = 2 - 1 = +1 g(c_3) = 0 - 1 = -1 g(c_4) = 1 - 1 = 0 g(c_5) = 1 - 1 = 0 g(c_6) = 1 - 0 = +1 S(A) = 9, S(B) = 9 Area criterion: [0.4·18 - 5, 0.4·18 + 5] = [2.2, 12.2] 45

46 Fiduccia-Mattheyses (FM) Algorithm Before the move, update the gain of all the other cells. [Figure: the same example with cell sizes and nets m, q, k, p, j] 46

47 Fiduccia-Mattheyses (FM) Algorithm Original code for gain update F: From_block of the base cell T: To_block of the base cell For each net n connected to the base cell If T(n) = 0 gain(c)++; // for c ∈ n Else if T(n) = 1 gain(c)--; // for c ∈ n & T F(n)--; T(n)++; If F(n) = 0 gain(c)--; // for c ∈ n Else if F(n) = 1 gain(c)++; // for c ∈ n & F 47

48 Fiduccia-Mattheyses (FM) Algorithm Gain update (reformulated) F: From_block of the base cell T: To_block of the base cell For each net n connected to the base cell If T(n) = 0 gain(c)++; // for c ∈ n Else if T(n) = 1 gain(c)--; // for c ∈ n & T If F(n) = 1 gain(c)--; // for c ∈ n Else if F(n) = 2 gain(c)++; // for c ∈ n & F F(n)--; T(n)++; 48

49 Fiduccia-Mattheyses (FM) Algorithm Gain update Case 1) T(n) = 0 [Figure: example nets between blocks F and T showing the gain change of each cell when the base cell is moved] 49

50 Fiduccia-Mattheyses (FM) Algorithm Gain update Case 2) T(n) = 1 [Figure: example nets between blocks F and T showing the gain change of each cell when the base cell is moved] 50

51 Fiduccia-Mattheyses (FM) Algorithm Instead of enumerating all the cases, we consider the following combinations: F(n) = 1, F(n) = 2, or F(n) ≥ 3, and T(n) = 0, T(n) = 1, or T(n) ≥ 2, giving the nine cases (1)-(9). [Table: Δg(c) in F and Δg(c) in T for each combination of F(n) and T(n), cases (1)-(9)] 51

52 Fiduccia-Mattheyses (FM) Algorithm Conversion from the table to a source code. [Table: Δg(c) in F and Δg(c) in T for each combination of F(n) and T(n), cases (1)-(9)] 52

53 Fiduccia-Mattheyses (FM) Algorithm Conversion from the table to a source code. [Table: Δg(c) in F and Δg(c) in T vs. F(n), T(n)] T(n) = 0 : g(c)++; // c ∈ n & F 53

54 Fiduccia-Mattheyses (FM) Algorithm Conversion from the table to a source code. [Table: Δg(c) in F and Δg(c) in T vs. F(n), T(n)] T(n) = 1 : g(c)--; // c ∈ n & T 54

55 Fiduccia-Mattheyses (FM) Algorithm Conversion from the table to a source code. [Table: Δg(c) in F and Δg(c) in T vs. F(n), T(n)] F(n) = 1 : g(c)--; // c ∈ n & T 55

56 Fiduccia-Mattheyses (FM) Algorithm Conversion from the table to a source code. [Table: Δg(c) in F and Δg(c) in T vs. F(n), T(n)] F(n) = 2 : g(c)++; // c ∈ n & F 56

57 Fiduccia-Mattheyses (FM) Algorithm Gain update F: From_block of the base cell T: To_block of the base cell For each net n connected to the base cell If T(n) = 0 gain(c)++; // for c ∈ n If T(n) = 1 gain(c)--; // for c ∈ n If F(n) = 1 gain(c)--; // for c ∈ n If F(n) = 2 gain(c)++; // for c ∈ n F(n)--; T(n)++; 57
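The gain-update loop above can be turned into a runnable routine as follows. This is an illustrative reading, not the original FM implementation; netsOfCell, nets, count, side, gain, and locked are assumed data structures (count[n] holds the net's distribution (A(n), B(n))).

```cpp
#include <array>
#include <vector>

// Move the base cell from block F to block T and incrementally update the
// gains of the free (unlocked) cells on its nets, following the checks above.
void moveBaseCell(int base,
                  const std::vector<std::vector<int>>& netsOfCell, // cell -> nets
                  const std::vector<std::vector<int>>& nets,       // net  -> cells
                  std::vector<std::array<int,2>>& count,           // net  -> (A(n), B(n))
                  std::vector<int>& side,                          // cell -> 0 (A) or 1 (B)
                  std::vector<int>& gain,
                  std::vector<bool>& locked) {
    const int F = side[base], T = 1 - F;
    locked[base] = true;                       // lock before updating neighbors
    for (int n : netsOfCell[base]) {
        if (count[n][T] == 0) {                // net entirely in F before the move
            for (int c : nets[n]) if (!locked[c]) gain[c]++;
        } else if (count[n][T] == 1) {         // exactly one cell already in T
            for (int c : nets[n]) if (!locked[c] && side[c] == T) gain[c]--;
        }
        if (count[n][F] == 1) {                // base is the only F cell on the net
            for (int c : nets[n]) if (!locked[c]) gain[c]--;
        } else if (count[n][F] == 2) {         // one F cell will remain after the move
            for (int c : nets[n]) if (!locked[c] && side[c] == F) gain[c]++;
        }
        count[n][F]--; count[n][T]++;          // update the net's distribution
    }
    side[base] = T;                            // the base cell is now in T
}
```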

58 Fiduccia-Mattheyses (FM) Algorithm Before the move, update the gain of all the other cells. [Figure: the example circuit annotated with the table cases that apply to each net] Gain (before move) g(c_1) = 0 - 1 = -1 g(c_2) = 2 - 1 = +1 g(c_3) = 0 - 1 = -1 g(c_4) = 1 - 1 = 0 g(c_5) = 1 - 1 = 0 g(c_6) = 1 - 0 = +1 Gain (after move) g(c_1) = 0 g(c_3) = +1 g(c_4) = 0 - 1 = -1 g(c_5) = 0 - 2 = -2 g(c_6) = 1 - 2 = -1 58

59 Fiduccia-Mattheyses (FM) Algorithm Move and lock the base cell. [Figure: the example circuit after moving the base cell] Gain (after move) g(c_1) = 0 g(c_3) = +1 g(c_4) = 0 - 1 = -1 g(c_5) = 0 - 2 = -2 g(c_6) = 1 - 2 = -1 59

60 Fiduccia-Mattheyses (FM) Algorithm Choose the next base cell (except the locked cells). Update the gain of the other cells. Move and lock the base cell. Repeat this process. Find the best move sequence. 60

61 Fiduccia-Mattheyses (FM) Algorithm Complexity analysis 1. Gain computation Compute the gain of each cell move For each net n connected to c: if F(n) = 1, g(c)++; if T(n) = 0, g(c)--; Practically O(# cells or # nets) 2. Select a base cell O(1) 3. Move and lock the base cell and update gain. Practically O(1) # iterations: # cells Overall: practically linear, O(# cells or # nets) 61

62 Simulated Annealing Borrowed from the thermal annealing process (obtaining low-energy states of a solid by heating and slow cooling) 62

63 Simulated Annealing Algorithm T = T_0 (initial temperature) S = S_0 (initial solution) Time = 0 repeat Call Metropolis(S, T, M); Time = Time + M; T = α·T; // α: cooling rate (α < 1) M = β·M; until (Time ≥ maxtime); 63

64 Simulated Annealing Algorithm Metropolis(S, T, M) // M: # iterations repeat NewS = neighbor(S); // get a new solution by perturbation Δh = cost(NewS) - cost(S); If ((Δh < 0) or (random < e^(-Δh/T))) S = NewS; // accept the new solution M = M - 1; until (M == 0) 64

65 Simulated Annealing Cost function for partition (A, B) Imbalance(A, B) = |Size(A) - Size(B)| Cutsize(A, B) = Σ w_n for n ∈ Ψ Cost = W_c·Cutsize(A, B) + W_s·Imbalance(A, B) W_c and W_s: weighting factors Neighbor(S) Solution perturbation Example: move a free cell. 65
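A minimal simulated-annealing bipartitioner following the template of slides 63-65 might look like the sketch below; the cost is W_c·Cutsize + W_s·Imbalance and a neighbor is obtained by moving one randomly chosen cell. All parameter values, the unit net weights, and the helper names are assumptions for illustration.

```cpp
#include <cmath>
#include <random>
#include <vector>

// Simulated-annealing bipartitioning sketch: accept a worse neighbor with
// probability e^(-dh/T); cool by T = alpha*T after every batch of moves.
double anneal(const std::vector<std::vector<int>>& nets,   // net -> member cells
              const std::vector<double>& cellSize,         // cell -> size
              std::vector<int>& side,                      // cell -> 0 (A) or 1 (B)
              double Wc = 1.0, double Ws = 1.0,
              double T = 100.0, double alpha = 0.95,       // cooling rate < 1
              int movesPerT = 1000, int maxMoves = 100000) {
    std::mt19937 rng(12345);
    std::uniform_real_distribution<double> uni(0.0, 1.0);
    std::uniform_int_distribution<int> pick(0, (int)side.size() - 1);

    auto cost = [&]() {
        double cut = 0.0, sz[2] = {0.0, 0.0};
        for (const auto& net : nets) {                     // cutsize (unit weights)
            bool inA = false, inB = false;
            for (int c : net) (side[c] ? inB : inA) = true;
            if (inA && inB) cut += 1.0;
        }
        for (int c = 0; c < (int)side.size(); ++c) sz[side[c]] += cellSize[c];
        return Wc * cut + Ws * std::fabs(sz[0] - sz[1]);   // weighted sum
    };

    double cur = cost();
    for (int moves = 0; moves < maxMoves; T *= alpha) {    // cooling schedule
        for (int m = 0; m < movesPerT && moves < maxMoves; ++m, ++moves) {
            int c = pick(rng);
            side[c] = 1 - side[c];                         // perturb: move cell c
            double dh = cost() - cur;
            if (dh < 0 || uni(rng) < std::exp(-dh / T))
                cur += dh;                                 // accept the new solution
            else
                side[c] = 1 - side[c];                     // reject: undo the move
        }
    }
    return cur;                                            // final cost
}
```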

66 hmetis Clustering-based partitioning Coarsening (grouping) by clustering Uncoarsening and refinement for cut-size minimization 66

67 hmetis Coarsening [Figure: a group of cells is merged into a single cluster node, producing a smaller hypergraph] 67

68 hmetis Coarsening Reduces the problem size Make sub-problems smaller and easier. Better runtime Higher probability for optimality Finds circuit hierarchy 68

69 hmetis Algorithm 1. Coarsening 2. Initial solution generation - Run partitioning for the top-level clusters. 3. Uncoarsening and refinement - Flatten clusters at each level (uncoarsening). - Apply partitioning algorithms to refine the solution. 69

70 hmetis Three coarsening methods. 70

71 hmetis Re-clustering (V- and v-cycles) Different clustering gives different cutsizes. 71

72 hmetis Final results 72

73 hmetis Final results 73
