EE582 Physical Design Automation of VLSI Circuits and Systems
1 EE582, Prof. Dae Hyun Kim, School of Electrical Engineering and Computer Science, Washington State University. Topic: Partitioning.
2 What We Will Study. Partitioning: practical examples and problem definition. Deterministic algorithms: Kernighan-Lin (KL), Fiduccia-Mattheyses (FM), hMetis. Stochastic algorithms: simulated annealing.
3 Example. (Figure; source omitted.)
4 VLSI Circuits. Number of interconnections: intra-module, many; inter-module, few.
5 Problem Definition. Given: a set of cells T = {c1, c2, ..., cn}, |T| = n; a set of edges (netlist) R = {e1, e2, ..., em}, |R| = m; cell sizes s(ci); edge weights w(ej); the number of partitions k (k-way partitioning), P = {P1, ..., Pk}; a minimum partition size b <= s(Pi); a balancing factor max(s(Pi)) / min(s(Pj)) <= B; a graph representation using edges or hyper-edges. Find k partitions P = {P1, ..., Pk} minimizing the cut size: the sum of w(e) over all (hyper-)edges e = (u1, ..., ud) whose cells span more than one partition, i.e., p(ui) != p(uj) for some i, j.
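The cut-size objective above can be stated directly in code. A minimal sketch (the net names, cells, and partition map below are illustrative, not taken from the slides):

```python
def cut_size(nets, part, weight=None):
    """Sum the weights of hyperedges whose cells span more than one partition.

    nets:   dict mapping net name -> iterable of cells
    part:   dict mapping cell -> partition id
    weight: optional dict mapping net name -> weight (default 1 per net)
    """
    total = 0
    for name, cells in nets.items():
        blocks = {part[c] for c in cells}
        if len(blocks) > 1:  # the net is cut
            total += weight[name] if weight else 1
    return total

# Illustrative 2-way example: only net n2 spans both blocks.
nets = {"n1": ["c1", "c2", "c3"], "n2": ["c3", "c4"], "n3": ["c4", "c5"]}
part = {"c1": 0, "c2": 0, "c3": 0, "c4": 1, "c5": 1}
print(cut_size(nets, part))  # 1
```

The same function handles weighted nets by passing a weight map.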
6 Problem Definition. A set of cells T = {c1, c2, ..., cn}, |T| = n. (Figure: cells c1-c8 connected by edges e1-e6.)
7 Problem Definition. A set of edges (netlist, connectivity) R = {e1, e2, ..., em}, |R| = m. (Same figure: cells c1-c8, edges e1-e6.)
8 k-way Partitioning. k = 2, |Pi| = 4. Two example bipartitions of the same circuit: P1 = {c1, c2, c3, c4}, P2 = {c5, c6, c7, c8} with cut size 3; and P1 = {c1, c2, c3, c5}, P2 = {c4, c6, c7, c8}, also with cut size 3.
9 Kernighan-Lin (KL) Algorithm. Problem definition. Given: a set of vertices (cell list) V = {c1, c2, ..., c2n}, |V| = 2n; a set of two-pin edges (netlist) E = {e1, e2, ..., em}, |E| = m; a weight w(ej) for each edge; unit vertex sizes s(ci) = 1. Constraints: two partitions (two-way partitioning), P = {A, B}; balanced partitioning, |A| = |B| = n. Objective: minimize the cutsize.
10 Kernighan-Lin (KL) Algorithm. Cost function: cutsize = sum of w(e) over all e in the cut set Ψ. Example: Ψ = {e2, e4, e5}, so cutsize = w(e2) + w(e4) + w(e5). (Figure: A = {c1, c2, c3}, B = {c4, c5, c6}.)
11 Kernighan-Lin (KL) Algorithm. Overall flow: find an initial solution; improve the solution; if the new solution is the best so far, keep it; repeat until no further improvement, then stop.
12 Kernighan-Lin (KL) Algorithm. Iterative improvement. If (A, B) is the current partition and (A*, B*) the optimal one, there exist subsets X of A and Y of B such that swapping X and Y turns (A, B) into (A*, B*). How can we find X and Y?
13 Kernighan-Lin (KL) Algorithm. Iterative improvement: find a pair of vertices (ca, cb) such that swapping the two vertices reduces the cutsize. (Figure: swapping ca and cb reduces the cutsize from 4 to 3.)
14 Kernighan-Lin (KL) Algorithm. Gain computation. External cost: Ea = sum of w(e_av) over all v in B (= the number of external edges if w(e) = 1). Internal cost: Ia = sum of w(e_av) over all v in A (= the number of internal edges if w(e) = 1). D-value: Da = Ea - Ia. Gain of swapping a and b: g_ab = Da + Db - 2 w(e_ab). Derivation: g_ab = {(Ea - w(e_ab)) - Ia} + {(Eb - w(e_ab)) - Ib} = Da + Db - 2 w(e_ab). (Figure example: Ea = 4, Ia = 2, Da = 2; Eb = 1, Ib = 0, Db = 1.)
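The external/internal costs, D-values, and swap gain above translate directly into code. A small sketch for weighted two-pin edges (the four-cell demo graph is illustrative):

```python
def ed_values(adj, part, a):
    """Return (E_a, I_a, D_a) for cell a.
    adj: cell -> {neighbor: edge weight} (symmetric); part: cell -> block."""
    E = sum(w for v, w in adj[a].items() if part[v] != part[a])  # external cost
    I = sum(w for v, w in adj[a].items() if part[v] == part[a])  # internal cost
    return E, I, E - I

def gain(adj, part, a, b):
    """KL gain of swapping a (in A) with b (in B): D_a + D_b - 2*w(a,b)."""
    Da = ed_values(adj, part, a)[2]
    Db = ed_values(adj, part, b)[2]
    return Da + Db - 2 * adj[a].get(b, 0)

# Illustrative graph: edges (c1,c2), (c1,c4), (c3,c4), unit weights.
adj = {"c1": {"c2": 1, "c4": 1}, "c2": {"c1": 1},
       "c3": {"c4": 1}, "c4": {"c1": 1, "c3": 1}}
part = {"c1": "A", "c3": "A", "c2": "B", "c4": "B"}
print(ed_values(adj, part, "c1"))       # (2, 0, 2)
print(gain(adj, part, "c1", "c4"))      # 2: swapping c1 and c4 cuts 2 fewer edges
```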
15 Kernighan-Lin (KL) Algorithm. Find the best-gain pair among all cell pairs (a in A = {c1, c5, c6}, b in B = {c2, c3, c4}), using g_ab = Da + Db - 2 w(e_ab). Gains: g12 = -2, g13 = 0, g14 = +2, g52 = -1, g53 = -1, g54 = -1, g62 = -1, g63 = -1, g64 = -1. Best pair: (c1, c4). Cutsize = 3. (Figure with the Ea, Ia, Da table for c1-c6.)
16 Kernighan-Lin (KL) Algorithm. Swap and lock: after swapping, lock the swapped cells; locked cells are not moved again within the pass. After swapping c1 and c4 (A = {c4, c5, c6}, B = {c1, c2, c3}), cutsize = 1.
17 Kernighan-Lin (KL) Algorithm. Update of the D-values: update the D-value of each free cell affected by the move. For x in A connected to the swapped pair (a, b): E'x = Ex + w(e_xa) - w(e_xb) and I'x = Ix - w(e_xa) + w(e_xb), so D'x = E'x - I'x = (Ex - Ix) + 2 w(e_xa) - 2 w(e_xb) = Dx + 2 w(e_xa) - 2 w(e_xb). Symmetrically, for y in B: D'y = Dy + 2 w(e_yb) - 2 w(e_ya).
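The incremental update above avoids recomputing every D-value from scratch. A sketch using the same adjacency-map representation (the graph, initial D-values, and `locked` set are illustrative):

```python
def update_d_values(adj, part, D, a, b, locked):
    """Incremental D update after swapping a (from A) with b (from B):
    D'_x = D_x + 2w(x,a) - 2w(x,b) for free x in A,
    D'_y = D_y + 2w(y,b) - 2w(y,a) for free y in B.
    `part` is the partition map before the swap; it is updated at the end."""
    for v in D:
        if v in locked or v in (a, b):
            continue
        wva, wvb = adj[v].get(a, 0), adj[v].get(b, 0)
        if part[v] == part[a]:          # v stays on a's old side
            D[v] += 2 * wva - 2 * wvb
        else:                           # v stays on b's old side
            D[v] += 2 * wvb - 2 * wva
    part[a], part[b] = part[b], part[a]

# Illustrative graph: edges (c1,c2), (c1,c4), (c3,c4), unit weights.
adj = {"c1": {"c2": 1, "c4": 1}, "c2": {"c1": 1},
       "c3": {"c4": 1}, "c4": {"c1": 1, "c3": 1}}
part = {"c1": "A", "c3": "A", "c2": "B", "c4": "B"}
D = {"c1": 2, "c2": 1, "c3": 1, "c4": 2}           # initial E - I values
update_d_values(adj, part, D, "c1", "c4", locked={"c1", "c4"})
print(D["c3"], D["c2"])  # -1 -1
```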
18 Kernighan-Lin (KL) Algorithm. Update example, using D'x = Dx + 2 w(e_xa) - 2 w(e_xb) for x in A and D'y = Dy + 2 w(e_yb) - 2 w(e_ya) for y in B. D(c1) = 1 and D(c4) = 1 (both now locked); updated free cells: D'(c2) = -1, D'(c3) = -1, D'(c5) = -2, D'(c6) = -2.
19 Kernighan-Lin (KL) Algorithm. Gain computation and pair selection (second step). With D(c2) = -1, D(c3) = -1, D(c5) = -2, D(c6) = -2 and g_ab = Da + Db - 2 w(e_ab): g52 = -3, g53 = -3, g62 = -3, g63 = -3. Cutsize = 1.
20 Kernighan-Lin (KL) Algorithm. Swap and update (second swap: c6 and c3, gain -3). The remaining free cells become D'(c2) = -1 + 2 = +1 and D'(c5) = -2 + 2 = 0.
21 Kernighan-Lin (KL) Algorithm. Swap and update, continued. After the second swap the cutsize is 4; the free cells have D(c2) = +1 and D(c5) = 0.
22 Kernighan-Lin (KL) Algorithm. Gain computation for the last free pair: g52 = D(c5) + D(c2) - 2 w(e52) = 0 + 1 - 0 = +1. Cutsize = 4.
23 Kernighan-Lin (KL) Algorithm. Swap c5 and c2. Cutsize = 3.
24 Kernighan-Lin (KL) Algorithm. Cutsize trace: initial 3; g1 = +2, after 1st swap 1; g2 = -3, after 2nd swap 4; g3 = +1, after 3rd swap 3. The best cumulative gain is reached after the first swap alone, so the result of this pass is the partition after swap 1, with cutsize 1.
25 Kernighan-Lin (KL) Algorithm. Algorithm (a single iteration): 1. V = {c1, c2, ..., c2n}; {A, B}: initial partition. 2. Compute Dv for all v in V; queue = {}, i = 1, A' = A, B' = B. 3. Compute the gains and choose the best-gain pair (ai, bi); queue += (ai, bi), A' = A' - {ai}, B' = B' - {bi}. 4. If A' and B' are empty, go to step 5; otherwise, update D for A' and B' and go to step 3. 5. Find the k maximizing G = sum_{i=1..k} gi and apply the first k swaps.
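The five steps above can be sketched as one KL pass. This is a naive reference version that recomputes pair gains explicitly; the four-cell demo graph is illustrative:

```python
def kl_pass(adj, part):
    """One KL pass: tentatively swap best-gain free pairs until all cells are
    locked, then commit the prefix of swaps with maximum cumulative gain.
    adj: cell -> {neighbor: weight} (symmetric); part: cell -> 'A' or 'B'."""
    part = dict(part)
    D = {c: sum(w if part[v] != part[c] else -w for v, w in adj[c].items())
         for c in adj}
    free_a = [c for c in adj if part[c] == "A"]
    free_b = [c for c in adj if part[c] == "B"]
    swaps, gains = [], []
    while free_a and free_b:
        g, a, b = max((D[x] + D[y] - 2 * adj[x].get(y, 0), x, y)
                      for x in free_a for y in free_b)
        swaps.append((a, b))
        gains.append(g)
        free_a.remove(a)
        free_b.remove(b)
        for x in free_a:   # D'_x = D_x + 2w(x,a) - 2w(x,b)
            D[x] += 2 * adj[x].get(a, 0) - 2 * adj[x].get(b, 0)
        for y in free_b:   # D'_y = D_y + 2w(y,b) - 2w(y,a)
            D[y] += 2 * adj[y].get(b, 0) - 2 * adj[y].get(a, 0)
    # keep the prefix k maximizing G = g_1 + ... + g_k (k may be 0)
    best_g, best_k, run = 0, 0, 0
    for k, g in enumerate(gains, 1):
        run += g
        if run > best_g:
            best_g, best_k = run, k
    for a, b in swaps[:best_k]:
        part[a], part[b] = part[b], part[a]
    return part

# Illustrative graph: edges (c1,c2), (c1,c4), (c3,c4); initial cutsize 3.
adj = {"c1": {"c2": 1, "c4": 1}, "c2": {"c1": 1},
       "c3": {"c4": 1}, "c4": {"c1": 1, "c3": 1}}
part = {"c1": "A", "c3": "A", "c2": "B", "c4": "B"}
result = kl_pass(adj, part)
cut = sum(w for c in adj for v, w in adj[c].items() if result[c] != result[v]) // 2
print(cut)  # 1
```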
26 Kernighan-Lin (KL) Algorithm. Algorithm (overall): 1. Run a single iteration. 2. Take the best partitioning result found in the iteration. 3. Unlock all cells. 4. Re-start the iteration, using the best partitioning result as the initial partition. Stop criteria: a maximum number of iterations, a maximum runtime, or a small change in cutsize between two consecutive iterations.
27 Kernighan-Lin (KL) Algorithm. Complexity analysis of the single-iteration algorithm, step by step: step 2, computing Dv for all v in V, takes O(n).
28 Kernighan-Lin (KL) Algorithm. Complexity of the D-value computation. Ea = sum of w(e_av) over v in B; Ia = sum of w(e_av) over v in A; Da = Ea - Ia. For each cell a, and for each net connected to a, accumulate Ea and Ia: practically O(n) overall. (Example: Ea = 4, Ia = 2, Da = 2; Eb = 1, Ib = 0, Db = 1.)
29 Kernighan-Lin (KL) Algorithm. Complexity of the gain computation. For each pair (a, b), compute g_ab = Da + Db - 2 w(e_ab): O((n - i)^2) at step i.
30 Kernighan-Lin (KL) Algorithm. Complexity of the D-value update. D'x = Dx + 2 w(e_xa) - 2 w(e_xb); D'y = Dy + 2 w(e_yb) - 2 w(e_ya). For a (and b), for each cell x connected to a (and b), update Dx (and Dy): practically O(1) per swap.
31 Kernighan-Lin (KL) Algorithm. Complexity analysis. Step 1 (initial partition): O(1); step 2 (D-values): O(n); step 3 (gain computation and pair selection): O((n - i)^2); step 4 (D update): O(1); step 5 (finding the best prefix of G = sum_{i=1..k} gi): O(n). The loop over steps 3-4 runs n times, so a single iteration costs O(n^3).
32 Kernighan-Lin (KL) Algorithm. Reduce the runtime. The most expensive step is the gain computation, O(n^2) per selection: g_ab = Da + Db - 2 w(e_ab) for every pair. To expedite it: sort the cells of each side in decreasing order of D-value (D_a1 >= D_a2 >= D_a3 >= ..., D_b1 >= D_b2 >= ...); keep the maximum gain g_max found so far; when examining a pair (a_l, b_m), if D_al + D_bm < g_max, then no pair (a_k, b_p) with k > l and p > m can beat g_max, since the gain never exceeds D_a + D_b. Practically the selection then takes O(1) after the O(n log n) sorting.
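The sorted-list pruning can be sketched as follows; `best_pair`, the lists, and the weight function are illustrative names (both D-lists are assumed non-empty and sorted in decreasing D order):

```python
def best_pair(D_A, D_B, w):
    """Find a max-gain (a, b) pair from D-values sorted in decreasing order.
    D_A, D_B: lists of (cell, D) sorted by D descending; w(a, b): edge weight.
    Once D_a + D_b <= g_max, no later pair in either list can do better,
    because the gain D_a + D_b - 2*w(a, b) never exceeds D_a + D_b."""
    g_max, best = float("-inf"), None
    for a, Da in D_A:
        if Da + D_B[0][1] <= g_max:    # even the best remaining b cannot win
            break
        for b, Db in D_B:
            if Da + Db <= g_max:       # all later b's have smaller D: prune
                break
            g = Da + Db - 2 * w(a, b)
            if g > g_max:
                g_max, best = g, (a, b)
    return best, g_max

# Illustrative data: only (a1, b1) share an edge (weight 1).
D_A = [("a1", 3), ("a2", 1)]
D_B = [("b1", 2), ("b2", 0)]
edges = {("a1", "b1"): 1}
pair, g = best_pair(D_A, D_B, lambda a, b: edges.get((a, b), 0))
print(pair, g)  # ('a1', 'b1') 3
```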
33 Kernighan-Lin (KL) Algorithm. Complexity analysis with sorted selection: step 3 now costs O(n log n) (dominated by sorting), so a single iteration costs O(n^2 log n).
34 Questions. (Intentionally left blank.)
35 Fiduccia-Mattheyses (FM) Algorithm. Handles hyperedges (not just two-pin edges) and imbalance (unequal partition sizes). Runtime: O(n).
36 Fiduccia-Mattheyses (FM) Algorithm. Definitions. Cutstate of a net: uncut if the net has all its cells in a single partition; cut if it has cells in both partitions. Gain of a cell: the number of nets by which the cutsize would decrease if the cell were moved. Balance criterion (to avoid having all cells migrate to one block): with balance factor r and maximum cell size s_max, r|V| - s_max <= |A| <= r|V| + s_max, where |A| + |B| = |V|. Base cell: the cell selected for movement, i.e., the maximum-gain cell that does not violate the balance criterion.
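The balance criterion is a single interval test on the would-be size of block A. A sketch (the cell sizes and the r and s_max values below are illustrative, not taken from the slides' figure):

```python
def balance_ok(sizes, part, cell, r, s_max):
    """True iff moving `cell` to the other block keeps size(A) inside
    [r*|V| - s_max, r*|V| + s_max], where |V| is the total cell size.
    sizes: cell -> size; part: cell -> 'A' or 'B'."""
    total = sum(sizes.values())
    size_a = sum(s for c, s in sizes.items() if part[c] == "A")
    new_a = size_a - sizes[cell] if part[cell] == "A" else size_a + sizes[cell]
    return r * total - s_max <= new_a <= r * total + s_max

# Illustrative instance: size(A) = size(B) = 9, total 18.
sizes = {"c1": 1, "c2": 3, "c3": 5, "c4": 4, "c5": 2, "c6": 3}
part = {"c1": "A", "c2": "A", "c3": "A", "c4": "B", "c5": "B", "c6": "B"}
print(balance_ok(sizes, part, "c2", r=0.4, s_max=5))    # True: 6 in [2.2, 12.2]
print(balance_ok(sizes, part, "c1", r=0.5, s_max=0.5))  # False: 8 outside [8.5, 9.5]
```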
37 Fiduccia-Mattheyses (FM) Algorithm. Definitions (continued). Distribution of a net n: (A(n), B(n)), where A(n) is the number of cells of n in A and B(n) the number in B. Critical net: a net is critical if it has a cell whose move would change the net's cutstate (cut to uncut, or uncut to cut); equivalently, its distribution is (0, x), (1, x), (x, 0), or (x, 1).
38 Fiduccia-Mattheyses (FM) Algorithm. Critical net: moving a cell connected to the net changes the cutstate of the net. (Figure: four A/B distribution cases.)
39 Fiduccia-Mattheyses (FM) Algorithm. Algorithm: 1. Gain computation: compute the gain of each cell move. 2. Select a base cell. 3. Move and lock the base cell, and update the gains.
40 Fiduccia-Mattheyses (FM) Algorithm. Gain computation. F(c): the From-block of c (either A or B); T(c): the To-block (the other one). Gain g(c) = FS(c) - TE(c), where FS(c) = |{n : c in n and the distribution of n, viewed as (F(c), T(c)), is (1, x)}| and TE(c) = |{n : c in n and the distribution is (x, 0)}|.
41 Fiduccia-Mattheyses (FM) Algorithm. Gain computation example. (Figure: A = {c1, c2, c3}, B = {c4, c5, c6}; nets m, q, k, p, j.) For each net n connected to c: if F(n) = 1, g(c)++; if T(n) = 0, g(c)--. Distributions (A, B): m: (3, 0); q: (2, 1); k: (1, 1); p: (1, 1); j: (0, 2). Gains: g(c1) = 0 - 1 = -1; g(c2) = 2 - 1 = +1; g(c3) = 0 - 1 = -1; g(c4) = 1 - 1 = 0; g(c5) = 1 - 1 = 0; g(c6) = 1 - 0 = +1.
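FS(c) and TE(c) can be computed in one sweep over the nets. A sketch; the demo netlist reuses the slide's net names, but its exact connectivity is an assumption (the figure was not recoverable), so the printed gains differ from the slide's:

```python
def fm_gains(nets, part):
    """FM cell gains g(c) = FS(c) - TE(c). For each net n on c, with c's own
    side as the From-block: F(n) = 1 means moving c uncuts n (g++);
    T(n) = 0 means moving c cuts n (g--).
    nets: net -> list of cells; part: cell -> 'A' or 'B'."""
    dist = {n: (sum(1 for c in cs if part[c] == "A"),
                sum(1 for c in cs if part[c] == "B")) for n, cs in nets.items()}
    gains = {c: 0 for cs in nets.values() for c in cs}
    for n, cs in nets.items():
        a_n, b_n = dist[n]
        for c in cs:
            f, t = (a_n, b_n) if part[c] == "A" else (b_n, a_n)
            if f == 1:      # c is the only cell of n on its side
                gains[c] += 1
            if t == 0:      # n lies entirely on c's side
                gains[c] -= 1
    return gains

# Assumed netlist (net names only echo the slide).
nets = {"m": ["c1", "c2", "c3"], "q": ["c2", "c3", "c4"],
        "k": ["c2", "c5"], "j": ["c5", "c6"]}
part = {"c1": "A", "c2": "A", "c3": "A", "c4": "B", "c5": "B", "c6": "B"}
g = fm_gains(nets, part)
print(g)  # {'c1': -1, 'c2': 0, 'c3': -1, 'c4': 1, 'c5': 0, 'c6': -1}
```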
42 Fiduccia-Mattheyses (FM) Algorithm. Select a base (best-gain) cell. Balance factor r = 0.4; cell sizes (figure): 1, 3, 4, 2, 3, 5, with S(A) = 9, S(B) = 9, and s_max = 5. Area criterion: [0.4*18 - 5, 0.4*18 + 5] = [2.2, 12.2]. Gains as before: g(c1) = -1, g(c2) = +1, g(c3) = -1, g(c4) = 0, g(c5) = 0, g(c6) = +1; the base cell is a maximum-gain cell (c2 or c6) whose move keeps the block sizes within the criterion.
43 Fiduccia-Mattheyses (FM) Algorithm. Before the move, update the gains of the other cells. (Figure: same circuit, base cell highlighted.)
44 Fiduccia-Mattheyses (FM) Algorithm. Original code for the gain update. F: From-block of the base cell; T: To-block. For each net n connected to the base cell: if T(n) = 0, gain(c)++ for every c in n; else if T(n) = 1, gain(c)-- for the c in n that lies in T. Then F(n)--; T(n)++. After the update: if F(n) = 0, gain(c)-- for every c in n; else if F(n) = 1, gain(c)++ for the c in n that lies in F.
45 Fiduccia-Mattheyses (FM) Algorithm. Equivalent gain update with the distribution updated last. For each net n connected to the base cell: if T(n) = 0, gain(c)++ for every c in n; else if T(n) = 1, gain(c)-- for the c in n in T. If F(n) = 1, gain(c)-- for every c in n; else if F(n) = 2, gain(c)++ for the c in n in F. Then F(n)--; T(n)++.
46 Fiduccia-Mattheyses (FM) Algorithm. Gain update, case 1: T(n) = 0 before the move. (Figure: gain changes of the cells on the net for each sub-case.)
47 Fiduccia-Mattheyses (FM) Algorithm. Gain update, case 2: T(n) = 1 before the move. (Figure: gain changes of the cells on the net for each sub-case.)
48 Fiduccia-Mattheyses (FM) Algorithm. Instead of enumerating all the cases, consider the combinations F(n) = 1, F(n) = 2, F(n) >= 3 crossed with T(n) = 0, T(n) = 1, T(n) >= 2, giving nine cases (1)-(9). (Table: Δg(c) for cells in F and for cells in T in each case.)
49 Fiduccia-Mattheyses (FM) Algorithm. Conversion from the Δg table to source code, one rule per case.
50 Fiduccia-Mattheyses (FM) Algorithm. Conversion, rule 1: T(n) = 0 : g(c)++; // for c in n & F
51 Fiduccia-Mattheyses (FM) Algorithm. Conversion, rule 2: T(n) = 1 : g(c)--; // for c in n & T
52 Fiduccia-Mattheyses (FM) Algorithm. Conversion, rule 3: F(n) = 1 : g(c)--; // for c in n & T
53 Fiduccia-Mattheyses (FM) Algorithm. Conversion, rule 4: F(n) = 2 : g(c)++; // for c in n & F
54 Fiduccia-Mattheyses (FM) Algorithm. Gain update (final form). F: From-block of the base cell; T: To-block. For each net n connected to the base cell: if T(n) = 0, gain(c)++ for c in n; if T(n) = 1, gain(c)-- for c in n (the cell in T); if F(n) = 1, gain(c)-- for c in n; if F(n) = 2, gain(c)++ for c in n (the cell left in F). Then F(n)--; T(n)++.
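The final update rules can be sketched as a move routine. `fm_move` and its argument layout are illustrative names, and the demo netlist is an assumed one, not the slides' exact circuit:

```python
def fm_move(nets, part, gains, dist, base, locked):
    """Move `base` to the other block, applying the gain-update rules to the
    free cells on its nets. dist: net -> [A(n), B(n)] (mutated in place)."""
    fi = 0 if part[base] == "A" else 1   # index of the From-block in dist[n]
    ti = 1 - fi
    locked.add(base)
    for n, cs in nets.items():
        if base not in cs:
            continue
        if dist[n][ti] == 0:             # T(n) = 0: the move will cut n
            for c in cs:
                if c not in locked:
                    gains[c] += 1
        elif dist[n][ti] == 1:           # T(n) = 1: the move uncuts n
            for c in cs:
                if c not in locked and part[c] != part[base]:
                    gains[c] -= 1
        if dist[n][fi] == 1:             # F(n) = 1: n is uncut after the move
            for c in cs:
                if c not in locked:
                    gains[c] -= 1
        elif dist[n][fi] == 2:           # F(n) = 2: one cell will remain in F
            for c in cs:
                if c not in locked and part[c] == part[base]:
                    gains[c] += 1
        dist[n][fi] -= 1
        dist[n][ti] += 1
    part[base] = "B" if part[base] == "A" else "A"

# Assumed netlist; gains and distributions precomputed via FS - TE.
nets = {"m": ["c1", "c2", "c3"], "q": ["c2", "c3", "c4"],
        "k": ["c2", "c5"], "j": ["c5", "c6"]}
part = {"c1": "A", "c2": "A", "c3": "A", "c4": "B", "c5": "B", "c6": "B"}
gains = {"c1": -1, "c2": 0, "c3": -1, "c4": 1, "c5": 0, "c6": -1}
dist = {"m": [3, 0], "q": [2, 1], "k": [1, 1], "j": [0, 2]}
fm_move(nets, part, gains, dist, "c4", set())
print(gains["c2"], gains["c3"], dist["q"], part["c4"])  # -1 -2 [3, 0] A
```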
55 Fiduccia-Mattheyses (FM) Algorithm. Gain update example (base cell c2). Gains before the move: g(c1) = -1, g(c2) = +1, g(c3) = -1, g(c4) = 0, g(c5) = 0, g(c6) = +1. Gains after the move: g(c1) = 0, g(c3) = +1, g(c4) = -1, g(c5) = -2, g(c6) = -1.
56 Fiduccia-Mattheyses (FM) Algorithm. Move and lock the base cell (c2). Gains after the move: g(c1) = 0, g(c3) = +1, g(c4) = -1, g(c5) = -2, g(c6) = -1.
57 Fiduccia-Mattheyses (FM) Algorithm. Choose the next base cell (excluding the locked cells), update the gains of the other cells, then move and lock the base cell. Repeat this process, and finally keep the best move sequence (the best prefix of moves, as in KL).
58 Fiduccia-Mattheyses (FM) Algorithm. Complexity analysis. 1. Gain computation: for each cell c and each net n connected to c, if F(n) = 1, g(c)++; if T(n) = 0, g(c)--: practically O(#cells + #nets). 2. Selecting a base cell: O(1). 3. Moving, locking, and updating gains: practically O(1) per move. With one iteration per cell, the total complexity is linear in the circuit size.
59 Simulated Annealing. Borrowed from the physical (chemical) annealing process.
60 Simulated Annealing. Algorithm: T = T0 (initial temperature); S = S0 (initial solution); Time = 0; repeat { Metropolis(S, T, M); Time = Time + M; T = α * T; // α: cooling rate (α < 1) M = β * M; } until Time >= maxTime.
61 Simulated Annealing. Metropolis(S, T, M): // M: number of iterations repeat { NewS = Neighbor(S); // get a new solution by perturbation Δh = cost(NewS) - cost(S); if (Δh < 0) or (random < e^(-Δh/T)) S = NewS; // accept the new solution M = M - 1; } until M == 0.
62 Simulated Annealing. Cost function for a partition (A, B): Imbalance(A, B) = |Size(A) - Size(B)|; Cutsize(A, B) = Σ w_n over n ∈ Ψ (the cut set); Cost = Wc * Cutsize(A, B) + Ws * Imbalance(A, B), where Wc and Ws are weighting factors. Neighbor(S): a solution perturbation, for example moving one free cell.
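The outer cooling loop, the Metropolis inner loop, and the cost function combine into a compact bipartitioner. A sketch; all parameter defaults (T0, alpha, beta, M0, the weights, the demo netlist) are illustrative, not from the slides:

```python
import math
import random

def sa_bipartition(nets, sizes, T0=5.0, alpha=0.9, beta=1.0, M0=50,
                   maxtime=2000, Wc=1.0, Ws=0.5, seed=0):
    """Simulated-annealing 2-way partitioning with
    cost = Wc*cutsize + Ws*|size(A) - size(B)|.
    The neighbor move flips one cell to the other block."""
    rng = random.Random(seed)
    cells = sorted({c for cs in nets.values() for c in cs})
    part = {c: rng.choice("AB") for c in cells}

    def cost(p):
        cut = sum(1 for cs in nets.values() if len({p[c] for c in cs}) > 1)
        imb = abs(sum(sizes[c] if p[c] == "A" else -sizes[c] for c in cells))
        return Wc * cut + Ws * imb

    T, time, M = T0, 0, M0
    cur = cost(part)
    while time < maxtime:
        for _ in range(int(M)):                       # Metropolis(S, T, M)
            c = rng.choice(cells)
            part[c] = "B" if part[c] == "A" else "A"  # Neighbor: flip one cell
            dh = cost(part) - cur
            if dh < 0 or rng.random() < math.exp(-dh / T):
                cur += dh                             # accept
            else:
                part[c] = "B" if part[c] == "A" else "A"  # reject: undo
        time += int(M)
        T *= alpha                                    # cool down
        M *= beta
    return part, cur

# Two triangles bridged by one net; the best cost is 1.0 (cut 1, balanced).
nets = {"e1": ["c1", "c2"], "e2": ["c2", "c3"], "e3": ["c3", "c1"],
        "e4": ["c4", "c5"], "e5": ["c5", "c6"], "e6": ["c6", "c4"],
        "e7": ["c1", "c4"]}
sizes = {c: 1 for c in ["c1", "c2", "c3", "c4", "c5", "c6"]}
part, final_cost = sa_bipartition(nets, sizes)
```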
63 hMetis. Clustering-based partitioning: coarsening (grouping) by clustering, then uncoarsening and refinement for cut-size minimization.
64 hMetis. Coarsening. (Figure: before and after one coarsening step; cells c1, c2, and c4 are merged into cluster c4'.)
65 hMetis. Coarsening reduces the problem size: it makes the sub-problems smaller and easier, improves runtime, gives a higher probability of reaching an optimal solution, and reveals the circuit hierarchy.
66 hMetis. Algorithm: 1. Coarsening. 2. Initial solution generation: run partitioning on the top-level clusters. 3. Uncoarsening and refinement: flatten the clusters at each level (uncoarsening) and apply partitioning algorithms to refine the solution.
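Coarsening is commonly done by matching and merging strongly connected vertices. A sketch of one generic level of heavy-edge matching on a graph, not hMetis's exact scheme (string vertex names are assumed so merged clusters can be labeled):

```python
def coarsen(adj):
    """One coarsening level by heavy-edge matching: repeatedly match an
    unmatched vertex with its heaviest unmatched neighbor and merge the pair.
    adj: vertex -> {neighbor: weight} (symmetric).
    Returns (coarse adjacency, mapping fine vertex -> coarse vertex)."""
    matched, mapping = set(), {}
    for v in sorted(adj):                  # deterministic visit order
        if v in matched:
            continue
        cands = [(w, u) for u, w in adj[v].items() if u not in matched]
        if cands:
            _, u = max(cands)              # heaviest free neighbor
            matched.update((v, u))
            mapping[v] = mapping[u] = v + "+" + u
        else:
            matched.add(v)
            mapping[v] = v                 # no free neighbor: stays alone
    coarse = {}
    for v, nbrs in adj.items():
        cv = mapping[v]
        for u, w in nbrs.items():
            cu = mapping[u]
            if cu != cv:                   # drop edges internal to a cluster
                coarse.setdefault(cv, {})
                coarse[cv][cu] = coarse[cv].get(cu, 0) + w
    return coarse, mapping

# Illustrative graph: a-b is the heaviest edge, d hangs off c.
adj = {"a": {"b": 2, "c": 1}, "b": {"a": 2, "c": 1},
       "c": {"a": 1, "b": 1, "d": 1}, "d": {"c": 1}}
coarse, mapping = coarsen(adj)
print(coarse)  # {'a+b': {'c+d': 2}, 'c+d': {'a+b': 2}}
```

Parallel fine edges between the same two clusters are summed into one weighted coarse edge, which is why cut sizes are preserved across levels.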
67 hMetis. Three coarsening methods. (Figure.)
68 hMetis. Re-clustering (V- and v-cycles): different clusterings give different cutsizes, so the coarsening and refinement are repeated.
69 hMetis. Final results. (Figure.)
70 hMetis. Final results, continued. (Figure.)
More informationRecap: Prefix Sums. Given A: set of n integers Find B: prefix sums 1 / 86
Recap: Prefix Sums Given : set of n integers Find B: prefix sums : 3 1 1 7 2 5 9 2 4 3 3 B: 3 4 5 12 14 19 28 30 34 37 40 1 / 86 Recap: Parallel Prefix Sums Recursive algorithm Recursively computes sums
More informationMethods for finding optimal configurations
CS 1571 Introduction to AI Lecture 9 Methods for finding optimal configurations Milos Hauskrecht milos@cs.pitt.edu 5329 Sennott Square Search for the optimal configuration Optimal configuration search:
More informationRecall: Matchings. Examples. K n,m, K n, Petersen graph, Q k ; graphs without perfect matching
Recall: Matchings A matching is a set of (non-loop) edges with no shared endpoints. The vertices incident to an edge of a matching M are saturated by M, the others are unsaturated. A perfect matching of
More informationRecursion: Introduction and Correctness
Recursion: Introduction and Correctness CSE21 Winter 2017, Day 7 (B00), Day 4-5 (A00) January 25, 2017 http://vlsicad.ucsd.edu/courses/cse21-w17 Today s Plan From last time: intersecting sorted lists and
More informationUpdating PageRank. Amy Langville Carl Meyer
Updating PageRank Amy Langville Carl Meyer Department of Mathematics North Carolina State University Raleigh, NC SCCM 11/17/2003 Indexing Google Must index key terms on each page Robots crawl the web software
More informationSequential Equivalence Checking without State Space Traversal
Sequential Equivalence Checking without State Space Traversal C.A.J. van Eijk Design Automation Section, Eindhoven University of Technology P.O.Box 53, 5600 MB Eindhoven, The Netherlands e-mail: C.A.J.v.Eijk@ele.tue.nl
More informationIntroduction to Complexity Theory
Introduction to Complexity Theory Read K & S Chapter 6. Most computational problems you will face your life are solvable (decidable). We have yet to address whether a problem is easy or hard. Complexity
More informationLocal Search (Greedy Descent): Maintain an assignment of a value to each variable. Repeat:
Local Search Local Search (Greedy Descent): Maintain an assignment of a value to each variable. Repeat: I I Select a variable to change Select a new value for that variable Until a satisfying assignment
More informationThe minimum G c cut problem
The minimum G c cut problem Abstract In this paper we define and study the G c -cut problem. Given a complete undirected graph G = (V ; E) with V = n, edge weighted by w(v i, v j ) 0 and an undirected
More informationDivide-and-Conquer Algorithms Part Two
Divide-and-Conquer Algorithms Part Two Recap from Last Time Divide-and-Conquer Algorithms A divide-and-conquer algorithm is one that works as follows: (Divide) Split the input apart into multiple smaller
More informationDistributed Optimization: Analysis and Synthesis via Circuits
Distributed Optimization: Analysis and Synthesis via Circuits Stephen Boyd Prof. S. Boyd, EE364b, Stanford University Outline canonical form for distributed convex optimization circuit intepretation primal
More informationLecture 7: Dynamic Programming I: Optimal BSTs
5-750: Graduate Algorithms February, 06 Lecture 7: Dynamic Programming I: Optimal BSTs Lecturer: David Witmer Scribes: Ellango Jothimurugesan, Ziqiang Feng Overview The basic idea of dynamic programming
More informationModule 7-2 Decomposition Approach
Module 7-2 Decomposition Approach Chanan Singh Texas A&M University Decomposition Approach l Now we will describe a method of decomposing the state space into subsets for the purpose of calculating the
More informationTime Analysis of Sorting and Searching Algorithms
Time Analysis of Sorting and Searching Algorithms CSE21 Winter 2017, Day 5 (B00), Day 3 (A00) January 20, 2017 http://vlsicad.ucsd.edu/courses/cse21-w17 Big-O : How to determine? Is f(n) big-o of g(n)?
More informationAlgorithms. NP -Complete Problems. Dong Kyue Kim Hanyang University
Algorithms NP -Complete Problems Dong Kyue Kim Hanyang University dqkim@hanyang.ac.kr The Class P Definition 13.2 Polynomially bounded An algorithm is said to be polynomially bounded if its worst-case
More informationLect4: Exact Sampling Techniques and MCMC Convergence Analysis
Lect4: Exact Sampling Techniques and MCMC Convergence Analysis. Exact sampling. Convergence analysis of MCMC. First-hit time analysis for MCMC--ways to analyze the proposals. Outline of the Module Definitions
More informationChapter 4 Beyond Classical Search 4.1 Local search algorithms and optimization problems
Chapter 4 Beyond Classical Search 4.1 Local search algorithms and optimization problems CS4811 - Artificial Intelligence Nilufer Onder Department of Computer Science Michigan Technological University Outline
More informationMinimum Cost Flow Based Fast Placement Algorithm
Graduate Theses and Dissertations Graduate College 2012 Minimum Cost Flow Based Fast Placement Algorithm Enlai Xu Iowa State University Follow this and additional works at: http://lib.dr.iastate.edu/etd
More informationCSCE 750 Final Exam Answer Key Wednesday December 7, 2005
CSCE 750 Final Exam Answer Key Wednesday December 7, 2005 Do all problems. Put your answers on blank paper or in a test booklet. There are 00 points total in the exam. You have 80 minutes. Please note
More informationECS231: Spectral Partitioning. Based on Berkeley s CS267 lecture on graph partition
ECS231: Spectral Partitioning Based on Berkeley s CS267 lecture on graph partition 1 Definition of graph partitioning Given a graph G = (N, E, W N, W E ) N = nodes (or vertices), E = edges W N = node weights
More informationComplexity. Complexity Theory Lecture 3. Decidability and Complexity. Complexity Classes
Complexity Theory 1 Complexity Theory 2 Complexity Theory Lecture 3 Complexity For any function f : IN IN, we say that a language L is in TIME(f(n)) if there is a machine M = (Q, Σ, s, δ), such that: L
More informationCS 273 Prof. Serafim Batzoglou Prof. Jean-Claude Latombe Spring Lecture 12 : Energy maintenance (1) Lecturer: Prof. J.C.
CS 273 Prof. Serafim Batzoglou Prof. Jean-Claude Latombe Spring 2006 Lecture 12 : Energy maintenance (1) Lecturer: Prof. J.C. Latombe Scribe: Neda Nategh How do you update the energy function during the
More informationOptimization Methods via Simulation
Optimization Methods via Simulation Optimization problems are very important in science, engineering, industry,. Examples: Traveling salesman problem Circuit-board design Car-Parrinello ab initio MD Protein
More informationA pruning pattern list approach to the permutation flowshop scheduling problem
A pruning pattern list approach to the permutation flowshop scheduling problem Takeshi Yamada NTT Communication Science Laboratories, 2-4 Hikaridai, Seika-cho, Soraku-gun, Kyoto 619-02, JAPAN E-mail :
More informationQ520: Answers to the Homework on Hopfield Networks. 1. For each of the following, answer true or false with an explanation:
Q50: Answers to the Homework on Hopfield Networks 1. For each of the following, answer true or false with an explanation: a. Fix a Hopfield net. If o and o are neighboring observation patterns then Φ(
More informationNUMBER SYSTEMS. Number theory is the study of the integers. We denote the set of integers by Z:
NUMBER SYSTEMS Number theory is the study of the integers. We denote the set of integers by Z: Z = {..., 3, 2, 1, 0, 1, 2, 3,... }. The integers have two operations defined on them, addition and multiplication,
More informationLearning MN Parameters with Approximation. Sargur Srihari
Learning MN Parameters with Approximation Sargur srihari@cedar.buffalo.edu 1 Topics Iterative exact learning of MN parameters Difficulty with exact methods Approximate methods Approximate Inference Belief
More informationCMU Lecture 11: Markov Decision Processes II. Teacher: Gianni A. Di Caro
CMU 15-781 Lecture 11: Markov Decision Processes II Teacher: Gianni A. Di Caro RECAP: DEFINING MDPS Markov decision processes: o Set of states S o Start state s 0 o Set of actions A o Transitions P(s s,a)
More informationMetaheuristics and Local Search
Metaheuristics and Local Search 8000 Discrete optimization problems Variables x 1,..., x n. Variable domains D 1,..., D n, with D j Z. Constraints C 1,..., C m, with C i D 1 D n. Objective function f :
More informationCSED233: Data Structures (2017F) Lecture4: Analysis of Algorithms
(2017F) Lecture4: Analysis of Algorithms Daijin Kim CSE, POSTECH dkim@postech.ac.kr Running Time Most algorithms transform input objects into output objects. The running time of an algorithm typically
More informationAbility to Count Messages Is Worth Θ( ) Rounds in Distributed Computing
Ability to Count Messages Is Worth Θ( ) Rounds in Distributed Computing Tuomo Lempiäinen Aalto University, Finland LICS 06 July 7, 06 @ New York / 0 Outline Introduction to distributed computing Different
More informationLast time: Summary. Last time: Summary
1 Last time: Summary Definition of AI? Turing Test? Intelligent Agents: Anything that can be viewed as perceiving its environment through sensors and acting upon that environment through its effectors
More informationPart I. C. M. Bishop PATTERN RECOGNITION AND MACHINE LEARNING CHAPTER 8: GRAPHICAL MODELS
Part I C. M. Bishop PATTERN RECOGNITION AND MACHINE LEARNING CHAPTER 8: GRAPHICAL MODELS Probabilistic Graphical Models Graphical representation of a probabilistic model Each variable corresponds to a
More informationInference in Bayesian Networks
Andrea Passerini passerini@disi.unitn.it Machine Learning Inference in graphical models Description Assume we have evidence e on the state of a subset of variables E in the model (i.e. Bayesian Network)
More informationAlgorithms and Programming I. Lecture#1 Spring 2015
Algorithms and Programming I Lecture#1 Spring 2015 CS 61002 Algorithms and Programming I Instructor : Maha Ali Allouzi Office: 272 MSB Office Hours: T TH 2:30:3:30 PM Email: mallouzi@kent.edu The Course
More informationPart 1. The Review of Linear Programming
In the name of God Part 1. The Review of Linear Programming 1.2. Spring 2010 Instructor: Dr. Masoud Yaghini Outline Introduction Basic Feasible Solutions Key to the Algebra of the The Simplex Algorithm
More informationBayesian Learning in Undirected Graphical Models
Bayesian Learning in Undirected Graphical Models Zoubin Ghahramani Gatsby Computational Neuroscience Unit University College London, UK http://www.gatsby.ucl.ac.uk/ Work with: Iain Murray and Hyun-Chul
More informationMarkov Chain Monte Carlo The Metropolis-Hastings Algorithm
Markov Chain Monte Carlo The Metropolis-Hastings Algorithm Anthony Trubiano April 11th, 2018 1 Introduction Markov Chain Monte Carlo (MCMC) methods are a class of algorithms for sampling from a probability
More informationCapacity Releasing Diffusion for Speed and Locality
000 00 002 003 004 005 006 007 008 009 00 0 02 03 04 05 06 07 08 09 020 02 022 023 024 025 026 027 028 029 030 03 032 033 034 035 036 037 038 039 040 04 042 043 044 045 046 047 048 049 050 05 052 053 054
More information