Estimating the Capacity of the 2-D Hard Square Constraint Using Generalized Belief Propagation
Navin Kashyap (joint work with Eric Chan, Mahdi Jafari Siavoshani, Sidharth Jaggi and Pascal Vontobel, Chinese University of Hong Kong)

Conference on Applied Mathematics, University of Hong Kong, August 23
1-D RLL Constraints

Let 0 ≤ d < k be fixed integers (k is allowed to be ∞).

Definition. A binary sequence x_1 x_2 ... x_n ∈ {0,1}^n satisfies the (one-dimensional) (d,k)-runlength-limited (RLL) constraint if any pair of successive 1s in the sequence is separated by at least d and at most k 0s.

Codes consisting of sequences satisfying a (d,k)-RLL constraint are used for writing information on magnetic and optical recording devices such as hard drives and CDs/DVDs. The maximum rate of such a code is given by the capacity of the (d,k)-RLL constraint, defined as
$$C_{d,k} = \lim_{n \to \infty} \frac{1}{n} \log_2 Z_n^{(d,k)},$$
where $Z_n^{(d,k)}$ denotes the number of binary length-n sequences satisfying the constraint.
The 1-D (1,∞)-RLL Constraint

A binary sequence satisfies the (1,∞)-RLL constraint if it does not contain 1s in adjacent (i.e., consecutive) positions.

Some easy facts about Z_n := Z_n^{(1,∞)}:
- Z_n, n = 1, 2, 3, ..., forms a Fibonacci sequence: Z_1 = 2, Z_2 = 3, and Z_n = Z_{n-1} + Z_{n-2} for all n ≥ 3
- $C_{1,\infty} := \lim_{n \to \infty} \frac{1}{n} \log_2 Z_n = \log_2 \frac{1+\sqrt{5}}{2} \approx 0.6942$
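As a quick sanity check, the Fibonacci recursion can be run numerically; this is a small illustrative script (mine, not from the talk), and the convergence of (1/n) log₂ Z_n to log₂((1+√5)/2) is visible already for moderate n.

```python
import math

def Z(n):
    """Number of binary length-n sequences with no two adjacent 1s."""
    a, b = 2, 3                        # Z_1 = 2, Z_2 = 3
    if n == 1:
        return a
    for _ in range(n - 2):
        a, b = b, a + b                # Z_n = Z_{n-1} + Z_{n-2}
    return b

for n in (10, 100, 1000):
    print(n, math.log2(Z(n)) / n)      # converges to the capacity
print("C_{1,inf} =", math.log2((1 + math.sqrt(5)) / 2))   # 0.6942...
```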
The 2-D Hard Square Constraint

Definition. A binary m × n array satisfies the 2-D hard square constraint (also called the 2-D (1,∞)-RLL constraint) if no row or column of the array contains 1s in adjacent positions.

Each such array can also be viewed as an independent set in the m × n grid graph.
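To make the definition concrete, here is a small checker (illustrative, not from the talk) that tests whether a given binary array satisfies the hard square constraint.

```python
def is_hard_square(A):
    """True iff no row or column of the 0/1 array A has two adjacent 1s."""
    m, n = len(A), len(A[0])
    for i in range(m):
        for j in range(n):
            if A[i][j] == 1:
                if j + 1 < n and A[i][j + 1] == 1:   # adjacent in a row
                    return False
                if i + 1 < m and A[i + 1][j] == 1:   # adjacent in a column
                    return False
    return True

print(is_hard_square([[1, 0, 1], [0, 1, 0]]))   # True
print(is_hard_square([[1, 1, 0], [0, 0, 0]]))   # False
```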
The Hard-Square Entropy Constant

Let Z_{m,n} denote the number of such m × n arrays. It can be shown (for example, using subadditivity arguments) that the limit
$$\eta = \lim_{m,n \to \infty} \frac{1}{mn} \log_2 Z_{m,n}$$
exists. This limit is called the hard-square entropy constant.

Open Problem: Determine the hard-square entropy constant η.

What is known:
- Various upper and lower bounds, resulting in the numerical estimate η ≈ 0.5878911617 [Justesen & Forchhammer, 1999]
- η is computable to within an accuracy of 1/N in time polynomial in N [Pavlov, 2010]
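The limit can be approached numerically with a standard row-by-row transfer-matrix computation (a well-known technique, not part of the talk; the function name and grid sizes below are my own choices):

```python
import math
import numpy as np

def log2_Z(m, n):
    """log2 of the number of m x n hard-square arrays, via a row transfer matrix."""
    rows = [r for r in range(1 << n) if r & (r >> 1) == 0]   # rows with no adjacent 1s
    T = np.array([[float((r & s) == 0) for s in rows] for r in rows])
    v, log2Z = np.ones(len(rows)), 0.0
    for _ in range(m - 1):
        v = T @ v                            # stack another compatible row
        log2Z += math.log2(v.max())
        v /= v.max()                         # rescale to avoid overflow
    return log2Z + math.log2(v.sum())

for m in (4, 8, 12, 16):
    print(m, log2_Z(m, m) / (m * m))         # finite-size estimates of eta
```

With free boundaries the finite-size values decay only slowly toward η as m grows, which is one reason direct computation of η is hard.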
Binary Ising Model on the 2-D Lattice

- m × n grid with mn vertices and 2mn − (m + n) edges
- Variable x_i ∈ {0,1} at each vertex i
- Pairwise function f : {0,1} × {0,1} → R_+

This defines a joint distribution on {0,1}^{m×n}:
$$p(\mathbf{x}) = \frac{1}{Z} \prod_{i \sim j} f(x_i, x_j)$$

Quantity of interest:
$$Z = \sum_{\mathbf{x}} \prod_{i \sim j} f(x_i, x_j),$$
called the partition function.
Special Case: Hard-Square Model

$$f(a,b) = \mathbf{1}_{(a,b) \neq (1,1)}$$

The partition function here is precisely Z_{m,n}.
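For tiny grids the partition function can be evaluated by brute force, confirming that with this choice of f, Z counts exactly the hard-square arrays (again an illustrative sketch of my own):

```python
import itertools, math

def brute_force_Z(m, n, f):
    """Sum over all configurations of the product of f over grid edges."""
    Z = 0.0
    for x in itertools.product((0, 1), repeat=m * n):
        X = [x[i * n:(i + 1) * n] for i in range(m)]      # reshape to m x n
        w = math.prod(f(X[i][j], X[i][j + 1])
                      for i in range(m) for j in range(n - 1))     # horizontal edges
        w *= math.prod(f(X[i][j], X[i + 1][j])
                       for i in range(m - 1) for j in range(n))    # vertical edges
        Z += w
    return Z

f = lambda a, b: 0.0 if (a, b) == (1, 1) else 1.0
print(brute_force_Z(3, 3, f))   # 63.0 = Z_{3,3}, the number of 3x3 hard-square arrays
```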
Gibbs Free Energy

Define the energy of a configuration x ∈ {0,1}^{m×n} to be
$$E(\mathbf{x}) = -\sum_{i \sim j} \log f(x_i, x_j),$$
so that $p(\mathbf{x}) = \frac{1}{Z} \exp(-E(\mathbf{x}))$.

For an arbitrary probability distribution b(x) on {0,1}^{m×n}, define
- the average energy $U(b) = \sum_{\mathbf{x}} b(\mathbf{x}) E(\mathbf{x})$
- the entropy $H(b) = -\sum_{\mathbf{x}} b(\mathbf{x}) \log b(\mathbf{x})$
- the Gibbs free energy $F(b) = U(b) - H(b)$
A Variational Principle

$$-\log Z = \min_b F(b)$$

Proof: Write
$$F(b) = -\log Z + \sum_{\mathbf{x}} b(\mathbf{x}) \log \frac{b(\mathbf{x})}{p(\mathbf{x})} = -\log Z + D(b \,\|\, p).$$
Since D(b‖p) ≥ 0 with equality iff b = p, the minimum value is −log Z, attained at b = p.
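A numerical illustration of this principle on the 2 × 2 hard-square model (where Z = 7): evaluating F at p recovers −log Z exactly, while any other distribution on the support gives a larger value. The script is my own sketch of the identity above.

```python
import itertools, math

f = lambda a, b: 0.0 if (a, b) == (1, 1) else 1.0
edges = [(0, 1), (1, 3), (3, 2), (2, 0)]          # the 2x2 grid is a 4-cycle
configs = list(itertools.product((0, 1), repeat=4))
weight = {x: math.prod(f(x[i], x[j]) for i, j in edges) for x in configs}
Z = sum(weight.values())                          # Z = 7

def F(b):
    """Gibbs free energy U(b) - H(b), with the convention 0 log 0 = 0."""
    return sum(b[x] * (math.log(b[x]) - math.log(weight[x]))
               for x in configs if b[x] > 0)

p = {x: w / Z for x, w in weight.items()}
print(F(p), -math.log(Z))                         # equal: F(p) = -log Z
supp = [x for x in configs if weight[x] > 0]
b_unif = {x: (1 / len(supp) if weight[x] > 0 else 0.0) for x in configs}
print(F(b_unif) >= -math.log(Z))                  # True: F(b) = -log Z + D(b||p)
```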
The Bethe Free Energy

Let (b_{i,j}(x_i, x_j)) and (b_i(x_i)) be beliefs defined for all edges i ∼ j and vertices i, respectively. These must satisfy the following:
- the b_{i,j}'s and b_i's are probability mass functions on {0,1}² and {0,1}, respectively
- $\sum_{x_j} b_{i,j}(x_i, x_j) = b_i(x_i)$ (consistency)

We then define
- the Bethe average energy
$$U_B(\{b_{i,j}\}, \{b_i\}) = -\sum_{i \sim j} \sum_{(a,b) \in \{0,1\}^2} b_{i,j}(a,b) \log f(a,b)$$
- the Bethe entropy
$$H_B(\{b_{i,j}\}, \{b_i\}) = \sum_{i \sim j} H(b_{i,j}) - \sum_i (d_i - 1) H(b_i),$$
where d_i denotes the degree of the vertex i
- the Bethe free energy $F_B(\{b_{i,j}\}, \{b_i\}) = U_B(\{b_{i,j}\}, \{b_i\}) - H_B(\{b_{i,j}\}, \{b_i\})$
The Bethe Approximation

$$-\log Z_B := \min_{\{b_{i,j}\}, \{b_i\}} F_B(\{b_{i,j}\}, \{b_i\})$$

Theorem (Yedidia, Freeman and Weiss, 2001). Stationary points of the Bethe free energy functional correspond to the beliefs at fixed points of the belief propagation algorithm.
Belief Propagation (The Sum-Product Algorithm)

[Factor-graph figure: variable node i, factor node a, with messages m_{i→a} and m_{a→i} passed along the edges]

Message update rules:
$$m_{i \to a}(x_i) = \prod_{c \in N(i) \setminus a} m_{c \to i}(x_i), \qquad m_{a \to i}(x_i) = \sum_{x_j} f(x_i, x_j)\, m_{j \to a}(x_j)$$

Beliefs:
$$b_i(x_i) \propto \prod_{a \in N(i)} m_{a \to i}(x_i), \qquad b_a(x_i, x_j) \propto f(x_i, x_j)\, m_{i \to a}(x_i)\, m_{j \to a}(x_j)$$

(The use of ∝ indicates that the beliefs must be normalized to sum to 1.)
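These rules specialize nicely to the hard-square model. Below is a sketch (my own code, written with vertex-to-vertex messages, the pairwise-model form of the factor-graph rules above; it assumes m, n ≥ 2) that runs sum-product on an m × n grid and then evaluates the Bethe free energy at the resulting beliefs. For the hard-square f, log f = 0 wherever the beliefs are supported, so U_B = 0 and log Z_B = H_B.

```python
import numpy as np

def bethe_log_Z(m, n, iters=500):
    """Sum-product on the m x n grid for the hard-square model, then log Z_B = -F_B."""
    f = np.array([[1.0, 1.0], [1.0, 0.0]])          # f(a,b) = 1 unless a = b = 1
    def nbrs(u):
        i, j = u
        return [(a, b) for a, b in ((i-1, j), (i+1, j), (i, j-1), (i, j+1))
                if 0 <= a < m and 0 <= b < n]
    nodes = [(i, j) for i in range(m) for j in range(n)]
    msg = {(u, v): np.full(2, 0.5) for u in nodes for v in nbrs(u)}
    for _ in range(iters):                           # parallel message updates
        msg = {(u, v): (lambda p: p / p.sum())(
                   f.T @ np.prod([msg[w, u] for w in nbrs(u) if w != v], axis=0))
               for (u, v) in msg}

    def H(b):                                        # entropy, with 0 log 0 = 0
        b = b[b > 0]
        return -(b * np.log(b)).sum()

    logZ = 0.0
    for u in nodes:                                  # vertex beliefs b_i
        b = np.prod([msg[w, u] for w in nbrs(u)], axis=0)
        logZ -= (len(nbrs(u)) - 1) * H(b / b.sum())
    for (u, v) in msg:
        if u < v:                                    # each edge once; edge belief b_ij
            bu = np.prod([msg[w, u] for w in nbrs(u) if w != v], axis=0)
            bv = np.prod([msg[w, v] for w in nbrs(v) if w != u], axis=0)
            B = f * np.outer(bu, bv)
            logZ += H((B / B.sum()).ravel())         # U_B = 0: log f = 0 on the support
    return logZ

m = n = 20
print(bethe_log_Z(m, n) / (m * n * np.log(2)))       # Bethe estimate; compare eta = 0.5879
```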
How Good is the Bethe Approximation?

For beliefs {b_{i,j}} and {b_i} at a fixed point of the BP algorithm, define
$$Z_{\mathrm{BP}}(\{b_{i,j}\}, \{b_i\}) := \exp\big(-F_B(\{b_{i,j}\}, \{b_i\})\big)$$

Theorem (Wainwright, Jaakkola and Willsky, 2003). At any fixed point of the BP algorithm,
$$\frac{Z}{Z_{\mathrm{BP}}(\{b_{i,j}\}, \{b_i\})} = \sum_{\mathbf{x}} \frac{\prod_{i \sim j} b_{i,j}(x_i, x_j)}{\prod_i b_i(x_i)^{d_i - 1}}$$

Theorem (Chertkov and Chernyak, 2006). At any fixed point of the BP algorithm,
$$\frac{Z}{Z_{\mathrm{BP}}(\{b_{i,j}\}, \{b_i\})} = 1 + \text{a finite series of correction terms}$$
How Good is the Bethe Approximation?

Theorem (Ruozzi, 2012). For the binary Ising model considered here, Z ≥ Z_B.
Numerical Results for the Hard-Square Model

[Plot omitted; from Sabato and Molkaraie, 2012. Recall: η ≈ 0.5878911617.]
Regions and Region Graphs

[Region-graph figure: 2×2 regions such as {1,2,4,5} at the top level, shared edges such as {4,5} in the middle level, and shared vertices such as {5} at the bottom level]

R = R_0 ∪ R_1 ∪ R_2, where
- R_0: all the 2×2 subgrids
- R_1: all the non-boundary edges (intersections of regions in R_0)
- R_2: all the non-boundary vertices (intersections of regions in R_1)
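The three region sets are easy to enumerate programmatically; a small sketch of my own, following the definitions above, for an m × n grid:

```python
def regions(m, n):
    """R0: 2x2 subgrids; R1: their pairwise edge-intersections; R2: intersections of R1."""
    R0 = [frozenset({(i, j), (i, j+1), (i+1, j), (i+1, j+1)})
          for i in range(m - 1) for j in range(n - 1)]
    R1 = {A & B for A in R0 for B in R0 if len(A & B) == 2}   # shared (non-boundary) edges
    R2 = {A & B for A in R1 for B in R1 if len(A & B) == 1}   # shared (non-boundary) vertices
    return R0, R1, R2

R0, R1, R2 = regions(3, 3)
print(len(R0), len(R1), len(R2))   # 4 4 1: four 2x2 subgrids, four inner edges, one inner vertex
```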
Region-Based Beliefs

Beliefs b_R(x_R) are defined for all regions R ∈ R. These must satisfy the following:
- each b_R is a probability mass function on {0,1}^{|R|}, where |R| denotes the size (number of vertices) of the region R;
- for regions P ⊂ R, we have $\sum_{x_{R \setminus P}} b_R(x_R) = b_P(x_P)$ (consistency)
Region-Based Free Energy

Given a set of beliefs {b_R : R ∈ R}, we define for each region R ∈ R:
- the average energy of R:
$$U_R = -\sum_{x_R \in \{0,1\}^{|R|}} b_R(x_R) \sum_{i,j \in R : i \sim j} \log f(x_i, x_j)$$
- the entropy of b_R:
$$H_R = -\sum_{x_R} b_R(x_R) \log b_R(x_R)$$
- the free energy of R: $F_R = U_R - H_R$

The region-based free energy of the model (R, {b_R}) is defined as
$$F_{\mathcal{R}}(\{b_R\}) = \sum_{R \in R_0} F_R - \sum_{R \in R_1} F_R + \sum_{R \in R_2} F_R$$
The Kikuchi Approximation

$$-\log Z_{\mathcal{R}} = \min_{\{b_R\}} F_{\mathcal{R}}(\{b_R\})$$

Theorem (Yedidia, Freeman and Weiss, 2001). Stationary points of the region-based free energy functional correspond to the beliefs at fixed points of a generalized belief propagation (GBP) algorithm.
GBP Message Updates (The Parent-to-Child Algorithm)

[Region-graph figure, highlighting the message from parent region (4,5,8,9) to child region (5,9), together with incoming messages drawn in red and green]

$$m_{(4,5,8,9) \to (5,9)}(x_5, x_9) = \sum_{x_4, x_8} f(x_4, x_5)\, f(x_4, x_8)\, f(x_8, x_9) \times \frac{\text{(red messages)}}{\text{(green messages)}}$$
GBP Message Updates (The Parent-to-Child Algorithm)

[Region-graph figure, highlighting the message from parent region (4,5) to child region (5)]

$$m_{(4,5) \to (5)}(x_5) = \sum_{x_4} f(x_4, x_5) \times \text{(red messages)}$$
GBP Beliefs (The Parent-to-Child Algorithm)

[Region-graph figure]

$$b_{(4,5,8,9)}(x_4, x_5, x_8, x_9) \propto f(x_4, x_5)\, f(x_4, x_8)\, f(x_5, x_9)\, f(x_8, x_9) \times \text{(red messages)}$$
GBP Beliefs (The Parent-to-Child Algorithm)

[Region-graph figure]

$$b_{(4,5)}(x_4, x_5) \propto f(x_4, x_5) \times \text{(red messages)}$$
GBP Beliefs (The Parent-to-Child Algorithm)

[Region-graph figure]

$$b_{(5)}(x_5) \propto \text{(red messages)}$$
How Good is the Kikuchi Approximation?

Looking at the hard-square model again...

[Plot omitted; from Sabato and Molkaraie, 2012. Recall: η ≈ 0.5878911617.]
What We Conjecture

Conjecture (Chan et al., ISIT '14). For the binary Ising model considered here,
$$\frac{1}{mn} \log Z - \frac{1}{mn} \log Z_{\mathcal{R}} = o(1),$$
where o(1) is a positive term that goes to 0 as m, n → ∞. In other words, we conjecture that
$$\frac{Z}{Z_{\mathcal{R}}} = \exp\big(mn \cdot o(1)\big).$$
Attacking the Conjecture: The Opening Gambit

For beliefs {b_R} at a fixed point of the GBP algorithm, define
$$Z_{\mathrm{GBP}}(\{b_R\}) := \exp\big(-F_{\mathcal{R}}(\{b_R\})\big)$$

Theorem (Chan et al., ISIT '14). At a fixed point of the GBP algorithm,
$$\frac{Z}{Z_{\mathrm{GBP}}(\{b_R\})} = \sum_{\mathbf{x}} \frac{\prod_{R \in R_0 \cup R_2} b_R(x_R)}{\prod_{R \in R_1} b_R(x_R)}$$

Compare this with:

Theorem (Wainwright, Jaakkola and Willsky, 2003). At any fixed point of the BP algorithm,
$$\frac{Z}{Z_{\mathrm{BP}}(\{b_{i,j}\}, \{b_i\})} = \sum_{\mathbf{x}} \frac{\prod_{i \sim j} b_{i,j}(x_i, x_j)}{\prod_i b_i(x_i)^{d_i - 1}}$$
Question to be Addressed

Question. For the binary Ising model considered here, is it true that the beliefs {b_R} at a fixed point of GBP satisfy
$$\frac{1}{mn} \log \sum_{\mathbf{x}} \frac{\prod_{R \in R_0 \cup R_2} b_R(x_R)}{\prod_{R \in R_1} b_R(x_R)} = o(1),$$
where o(1) is a positive term that goes to 0 as m, n → ∞?
What We Can Prove...

Theorem (Chan et al., ISIT '14). For a binary Ising model of size at most 5 × 5, or of size 3 × n (or n × 3), we have
$$\sum_{\mathbf{x}} \frac{\prod_{R \in R_0 \cup R_2} b_R(x_R)}{\prod_{R \in R_1} b_R(x_R)} \geq 1$$
at any fixed point of GBP. Consequently, Z ≥ Z_{GBP}({b_R}) at any fixed point of GBP.
A Key Tool: Log-Supermodularity

A function g : {0,1}^k → R_+ is called log-supermodular if
$$g(\mathbf{x})\, g(\mathbf{y}) \leq g(\mathbf{x} \wedge \mathbf{y})\, g(\mathbf{x} \vee \mathbf{y})$$
for all x, y ∈ {0,1}^k.

Log-supermodularity is preserved under
- multiplication: g_1, g_2 log-supermodular ⟹ g_1 g_2 log-supermodular
- marginalization: g log-supermodular ⟹ $\sum_{x_1} g(\mathbf{x})$ log-supermodular (this follows from the Ahlswede-Daykin four functions theorem)
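Both closure properties are easy to spot-check numerically (my own sketch; the quadratic exponent with nonnegative coefficients is one convenient way to generate a log-supermodular g):

```python
import itertools, math, random

def is_log_supermodular(g, k, tol=1e-12):
    """Check g(x)g(y) <= g(x AND y) g(x OR y) for all x, y in {0,1}^k."""
    for x in itertools.product((0, 1), repeat=k):
        for y in itertools.product((0, 1), repeat=k):
            meet = tuple(a & b for a, b in zip(x, y))
            join = tuple(a | b for a, b in zip(x, y))
            if g(x) * g(y) > g(meet) * g(join) + tol:
                return False
    return True

random.seed(1)
J = {(i, j): random.random() for i in range(3) for j in range(i + 1, 3)}  # J_ij >= 0
g = lambda x: math.exp(sum(J[i, j] * x[i] * x[j] for (i, j) in J))
print(is_log_supermodular(g, 3))          # True: exp of a supermodular quadratic

g1 = lambda x: g((0,) + x) + g((1,) + x)  # marginalize out x_1
print(is_log_supermodular(g1, 2))         # True (four functions theorem)
```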
Log-Supermodularity of Functions with Binary Inputs

A function f : {0,1}² → R_+ is log-supermodular iff
$$f(01)\, f(10) \leq f(00)\, f(11).$$

If f : {0,1}² → R_+ is not log-supermodular, then the function
$$\tilde{f}(a,b) = f(a, 1-b)$$
is log-supermodular.
A Local Transformation

A binary Ising model defined by local functions f is equivalent to a binary Ising model defined by f̃:
- The partition functions are equal: Z(f) = Z(f̃)
- For each set of beliefs {b_R}, there exists a corresponding set {b̃_R} such that
$$\sum_{\mathbf{x}} \frac{\prod_{R \in R_0 \cup R_2} b_R(x_R)}{\prod_{R \in R_1} b_R(x_R)} = \sum_{\mathbf{x}} \frac{\prod_{R \in R_0 \cup R_2} \tilde{b}_R(x_R)}{\prod_{R \in R_1} \tilde{b}_R(x_R)}$$
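For the grid, one way to realize the equality of partition functions is to flip the variable at every odd-parity vertex; since the grid is bipartite, each edge then sees exactly one flipped endpoint. A brute-force check on the 3 × 3 grid follows (my own sketch; the slide does not spell out the flipping convention, so the bipartition convention below is an assumption):

```python
import itertools

def Z_grid(m, n, pair):
    """Brute-force Z on the m x n grid; pair() gets the even-parity endpoint first."""
    total = 0.0
    for x in itertools.product((0, 1), repeat=m * n):
        X = [x[i * n:(i + 1) * n] for i in range(m)]
        w = 1.0
        for i in range(m):
            for j in range(n):
                for (a, b) in (((i, j), (i, j + 1)), ((i, j), (i + 1, j))):
                    if b[0] < m and b[1] < n:            # edge stays inside the grid
                        u, v = (a, b) if (a[0] + a[1]) % 2 == 0 else (b, a)
                        w *= pair(X[u[0]][u[1]], X[v[0]][v[1]])
        total += w
    return total

f  = lambda a, b: 0.0 if (a, b) == (1, 1) else 1.0
ft = lambda a, b: f(a, 1 - b)              # the flipped, log-supermodular version
print(Z_grid(3, 3, f), Z_grid(3, 3, ft))   # 63.0 63.0 -- equal, as claimed
```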
Ising Models with Log-Supermodular Local Functions

Lemma. In a binary Ising model with log-supermodular local functions, the BP and GBP message update rules preserve log-supermodularity of messages.

Thus, if BP and GBP are initialized with log-supermodular messages, then the messages in all subsequent iterations of BP and GBP remain log-supermodular, and the BP-based beliefs b_{i,j}, b_i and the GBP-based beliefs b_R are all log-supermodular.
The 2 × 2 Grid

The Kikuchi approximation is exact: Z_R = Z.

Theorem. For the 2 × 2 grid, at any fixed point of BP, we have
$$\frac{Z}{Z_{\mathrm{BP}}(\{b_{i,j}\}, \{b_i\})} = 1 + \frac{\Delta_{1,2}\, \Delta_{2,3}\, \Delta_{3,4}\, \Delta_{4,1}}{\prod_i b_i(0)\, b_i(1)},$$
where $\Delta_{i,j} = b_{i,j}(00)\, b_{i,j}(11) - b_{i,j}(01)\, b_{i,j}(10)$.
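The theorem can be checked numerically: run sum-product on the 4-cycle, form the fixed-point beliefs, and compare Z/Z_BP against the Δ expression. This is a verification sketch of my own; Z_{2,2} = 7 supplies the exact Z, and U_B = 0 for the hard-square f, so Z_BP = exp(H_B).

```python
import math
import numpy as np

f = np.array([[1.0, 1.0], [1.0, 0.0]])             # hard-square pairwise function
N = 4                                              # 2x2 grid = the 4-cycle 0-1-2-3-0
nxt = lambda i: (i + 1) % N
prv = lambda i: (i - 1) % N

# Sum-product on the cycle: msg[i, j] is the message from i to its neighbour j.
msg = {(i, j): np.full(2, 0.5) for i in range(N) for j in (nxt(i), prv(i))}
for _ in range(2000):
    msg = {(i, j): (lambda p: p / p.sum())(
               f.T @ msg[prv(i) if j == nxt(i) else nxt(i), i])
           for (i, j) in msg}

b1 = {}                                            # vertex beliefs b_i
for i in range(N):
    b = msg[prv(i), i] * msg[nxt(i), i]
    b1[i] = b / b.sum()
b2 = {}                                            # edge beliefs b_{i,i+1}
for i in range(N):
    B = f * np.outer(msg[prv(i), i], msg[nxt(nxt(i)), nxt(i)])
    b2[i] = B / B.sum()

H = lambda b: -(b[b > 0] * np.log(b[b > 0])).sum() # entropy with 0 log 0 = 0
logZBP = sum(H(b2[i].ravel()) for i in range(N)) - sum(H(b1[i]) for i in range(N))
Z, ZBP = 7.0, math.exp(logZBP)                     # Z_{2,2} = 7 exactly

delta = [b2[i][0, 0] * b2[i][1, 1] - b2[i][0, 1] * b2[i][1, 0] for i in range(N)]
rhs = 1 + math.prod(delta) / math.prod(b1[i][0] * b1[i][1] for i in range(N))
print(Z / ZBP, rhs)                                # the two values agree
```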
Proof of 2 × 2 Theorem

Start with Wainwright-Jaakkola-Willsky (every vertex of the 2 × 2 grid has degree d_i = 2):
$$\frac{Z}{Z_{\mathrm{BP}}(\{b_{i,j}\}, \{b_i\})} = \sum_{x_1, \ldots, x_4} \frac{\prod_{i \sim j} b_{i,j}(x_i, x_j)}{\prod_i b_i(x_i)} = \sum_{x_1, \ldots, x_4} \prod_i b_i(x_i) \prod_{i \sim j} \frac{b_{i,j}(x_i, x_j)}{b_i(x_i)\, b_j(x_j)}$$

Verify that
$$\frac{b_{i,j}(x_i, x_j)}{b_i(x_i)\, b_j(x_j)} = 1 + \frac{s(x_i)\, s(x_j)\, \Delta_{i,j}}{b_i(x_i)\, b_j(x_j)},$$
where s(0) = −1 and s(1) = +1. Hence,
$$\frac{Z}{Z_{\mathrm{BP}}(\{b_{i,j}\}, \{b_i\})} = \sum_{x_1, \ldots, x_4} \prod_i b_i(x_i) \prod_{i \sim j} \left(1 + \frac{s(x_i)\, s(x_j)\, \Delta_{i,j}}{b_i(x_i)\, b_j(x_j)}\right)$$
Proof (cont'd)

Expand out the product $\prod_{i \sim j} \left(1 + \frac{s(x_i) s(x_j) \Delta_{i,j}}{b_i(x_i)\, b_j(x_j)}\right)$. Note that
$$\sum_{x_1, \ldots, x_4} \prod_i b_i(x_i) = \prod_i \sum_{x_i} b_i(x_i) = 1,$$
and that the term picking up all four edge factors equals
$$\left(\prod_{i \sim j} \Delta_{i,j}\right) \sum_{x_1, \ldots, x_4} \prod_i \frac{1}{b_i(x_i)} = \left(\prod_{i \sim j} \Delta_{i,j}\right) \prod_i \left(\frac{1}{b_i(0)} + \frac{1}{b_i(1)}\right) = \prod_{i \sim j} \Delta_{i,j} \prod_i \frac{1}{b_i(0)\, b_i(1)},$$
since each vertex lies on exactly two of the four edges (so $s(x_i)^2 = 1$ and $b_i(x_i)/b_i(x_i)^2 = 1/b_i(x_i)$), and $1/b_i(0) + 1/b_i(1) = (b_i(0) + b_i(1))/(b_i(0) b_i(1)) = 1/(b_i(0) b_i(1))$. All other terms vanish: any other subset of edges leaves some vertex i appearing in exactly one chosen factor, and $\sum_{x_i} s(x_i) = 0$.
The 3 × 3 Grid

Theorem. For the 3 × 3 grid, at any fixed point of GBP, the ratio Z/Z_GBP({b_R}) is given by
$$1 + b_5(0) \left(\frac{\Delta(0)}{b_{51}(00)\, b_{51}(01)}\right)^4 + b_5(1) \left(\frac{\Delta(1)}{b_{51}(10)\, b_{51}(11)}\right)^4,$$
where $\Delta(x) = b_{512}(x00)\, b_{512}(x11) - b_{512}(x01)\, b_{512}(x10)$.
Proof of 3 × 3 Theorem

Start with
$$\frac{Z}{Z_{\mathrm{GBP}}(\{b_R\})} = \sum_{x_1, \ldots, x_9} \frac{b(x_{1526})\, b(x_{1537})\, b(x_{2548})\, b(x_{3549})\, b(x_5)}{b(x_{15})\, b(x_{25})\, b(x_{35})\, b(x_{45})}$$

Marginalize out x_6, x_7, x_8, x_9:
$$\frac{Z}{Z_{\mathrm{GBP}}(\{b_R\})} = \sum_{x_1, \ldots, x_5} \frac{b(x_{152})\, b(x_{153})\, b(x_{254})\, b(x_{354})\, b(x_5)}{b(x_{15})\, b(x_{25})\, b(x_{35})\, b(x_{45})}$$

(Here 5 is the central vertex of the 3 × 3 grid, 1-4 are its neighbours, and 6-9 are the corners.)
Proof (cont'd)

Now, define
$$B(\mathbf{x}) = \frac{b(x_{15})\, b(x_{25})\, b(x_{35})\, b(x_{45})}{b(x_5)^3}$$
and write Z/Z_GBP({b_R}) as
$$\sum_{x_1, \ldots, x_5} B(\mathbf{x}) \cdot \frac{b(x_{152})\, b(x_5)}{b(x_{15})\, b(x_{25})} \cdot \frac{b(x_{153})\, b(x_5)}{b(x_{15})\, b(x_{35})} \cdot \frac{b(x_{254})\, b(x_5)}{b(x_{25})\, b(x_{45})} \cdot \frac{b(x_{354})\, b(x_5)}{b(x_{35})\, b(x_{45})}$$

Next, verify that
$$\frac{b(x_{i5j})\, b(x_5)}{b(x_{i5})\, b(x_{j5})} = 1 + \frac{s(x_i)\, s(x_j)\, \Delta_{5ij}(x_5)}{b(x_{i5})\, b(x_{j5})},$$
where s(0) = −1, s(1) = +1, and Δ_{5ij}(x_5) is defined from b(x_{i5j}) exactly as Δ(x_5) is defined from b_{512} in the theorem statement. Plug this back into the expression for Z/Z_GBP({b_R}) and simplify.
Proof (cont'd)

Upon simplification (writing Δ(x_5) for the common value of the Δ_{5ij}(x_5), which coincide at a fixed point), we obtain
$$\frac{Z}{Z_{\mathrm{GBP}}(\{b_R\})} = 1 + \sum_{x_5} \frac{\Delta(x_5)^4}{b(x_5)^3} \sum_{x_1, \ldots, x_4} \prod_{i=1}^{4} \frac{1}{b(x_{i5})}.$$
This further simplifies to
$$\frac{Z}{Z_{\mathrm{GBP}}(\{b_R\})} = 1 + \sum_{x_5} \frac{\Delta(x_5)^4}{b(x_5)^3} \left(\frac{1}{b(x_5 0)} + \frac{1}{b(x_5 1)}\right)^4 = 1 + \sum_{x_5} b(x_5) \left(\frac{\Delta(x_5)}{b(x_5 0)\, b(x_5 1)}\right)^4.$$
References

[1] C. L. Chan, Estimating the Partition Function of Binary Pairwise Graphical Models Using Generalized Belief Propagation, M.Phil. thesis, Dept. of Information Engineering, Chinese University of Hong Kong, Aug.

[2] C. L. Chan, M. Jafari Siavoshani, S. Jaggi, N. Kashyap, and P. O. Vontobel, "Generalized Belief Propagation for Estimating the Partition Function of the 2D Ising Model," in Proc. IEEE Int. Symp. Inf. Theory (ISIT 2015), Hong Kong, China, June 2015.