1 Graphical Models Foundations of Data Analysis Torsten Möller Möller/Mori 1
2 Reading Chapter 8 Pattern Recognition and Machine Learning by Bishop some slides from Russell and Norvig AIMA2e Möller/Mori 2
3 Outline Probabilistic Models Bayesian Networks Markov Random Fields Inference Möller/Mori 3
4 Probabilistic Models We now turn our focus to probabilistic models for pattern recognition Probabilities express beliefs about uncertain events, useful for decision making, combining sources of information Key quantity in probabilistic reasoning is the joint distribution p(x_1, x_2, ..., x_K) where x_1 to x_K are all variables in the model Address two problems: Inference: answering queries given the joint distribution Learning: deciding what the joint distribution is (involves inference) All inference and learning problems involve manipulations of the joint distribution Möller/Mori 4
5 Reminder - Three Tricks Bayes rule: p(Y | X) = p(X | Y) p(Y) / p(X) = α p(X | Y) p(Y) Marginalization: p(X) = Σ_y p(X, Y = y) or p(X) = ∫ p(X, Y = y) dy Product rule: p(X, Y) = p(X) p(Y | X) All 3 work with extra conditioning, e.g.: p(X | Z) = Σ_y p(X, Y = y | Z) and p(Y | X, Z) = α p(X | Y, Z) p(Y | Z) Möller/Mori 5
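These three tricks are mechanical operations on a joint probability table. Below is a minimal Python/NumPy sketch (the 2x2 joint values are hypothetical) checking that Bayes rule with normalizer α = 1/p(x) recovers p(y | x):

```python
import numpy as np

# Hypothetical 2x2 joint table p(X, Y): rows index X, columns index Y.
joint = np.array([[0.30, 0.10],
                  [0.20, 0.40]])

p_x = joint.sum(axis=1)              # marginalization: p(x) = sum_y p(x, y)
p_y = joint.sum(axis=0)
p_y_given_x = joint / p_x[:, None]   # product rule: p(x, y) = p(x) p(y|x)
p_x_given_y = joint / p_y[None, :]

# Bayes rule: p(y|x) = alpha p(x|y) p(y), with alpha = 1 / p(x).
unnorm = p_x_given_y * p_y[None, :]  # entry (x, y) holds p(x|y) p(y) = p(x, y)
bayes = unnorm / unnorm.sum(axis=1, keepdims=True)
assert np.allclose(bayes, p_y_given_x)
```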
6 Joint Distribution Consider a model with 3 boolean random variables: cavity, catch, toothache. Full joint (values as in the AIMA dentist example):
             toothache            ¬toothache
           catch   ¬catch       catch   ¬catch
cavity     .108    .012         .072    .008
¬cavity    .016    .064         .144    .576
Can answer query such as p(¬cavity | toothache) Möller/Mori 6
7 Joint Distribution With the same joint table: p(¬cavity | toothache) = p(¬cavity, toothache) / p(toothache) = (0.016 + 0.064) / (0.108 + 0.012 + 0.016 + 0.064) = 0.08 / 0.20 = 0.4 Möller/Mori 7
8 Joint Distribution In general, to answer a query on random variables Q = Q_1, ..., Q_N given evidence E = e, with E = E_1, ..., E_M, e = e_1, ..., e_M, and H denoting the remaining (hidden) variables: p(Q | E = e) = p(Q, E = e) / p(E = e) = Σ_h p(Q, E = e, H = h) / Σ_{q,h} p(Q = q, E = e, H = h) Möller/Mori 8
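As a concrete sketch of this inference by enumeration, here is the dentist query from the earlier slides in NumPy; the joint table values are the standard AIMA ones, and catch plays the role of the hidden variable H:

```python
import numpy as np

# Full joint p(Cavity, Toothache, Catch) from the AIMA dentist example;
# axis order: (cavity, toothache, catch), index 0 = False, 1 = True.
joint = np.zeros((2, 2, 2))
joint[1, 1, 1] = 0.108; joint[1, 1, 0] = 0.012
joint[1, 0, 1] = 0.072; joint[1, 0, 0] = 0.008
joint[0, 1, 1] = 0.016; joint[0, 1, 0] = 0.064
joint[0, 0, 1] = 0.144; joint[0, 0, 0] = 0.576

# p(~cavity | toothache): fix toothache=1, sum out catch, normalize over cavity.
p_cav_tooth = joint[:, 1, :].sum(axis=1)        # unnormalized p(Cavity, toothache)
p_cav_given_tooth = p_cav_tooth / p_cav_tooth.sum()
print(p_cav_given_tooth[0])                      # 0.4
```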
9 Problems The joint distribution is large: e.g. with K boolean random variables, 2^K entries Inference is slow, previous summations take O(2^K) time Learning is difficult, data needed for 2^K parameters Analogous problems for continuous random variables Möller/Mori 9
10 Reminder - Independence The full model over Cavity, Toothache, Catch, Weather decomposes into {Cavity, Toothache, Catch} and {Weather} A and B are independent iff p(A | B) = p(A) or p(B | A) = p(B) or p(A, B) = p(A) p(B) p(Toothache, Catch, Cavity, Weather) = p(Toothache, Catch, Cavity) p(Weather) 32 entries reduced to 12 (Weather takes one of 4 values) Absolute independence powerful but rare Dentistry is a large field with hundreds of variables, none of which are independent. What to do? Möller/Mori 10
11 Reminder - Conditional Independence p(Toothache, Cavity, Catch) has 2^3 - 1 = 7 independent entries If I have a cavity, the probability that the probe catches in it doesn't depend on whether I have a toothache: (1) P(catch | toothache, cavity) = P(catch | cavity) The same independence holds if I haven't got a cavity: (2) P(catch | toothache, ¬cavity) = P(catch | ¬cavity) Catch is conditionally independent of Toothache given Cavity: p(Catch | Toothache, Cavity) = p(Catch | Cavity) Equivalent statements: p(Toothache | Catch, Cavity) = p(Toothache | Cavity) and p(Toothache, Catch | Cavity) = p(Toothache | Cavity) p(Catch | Cavity), i.e. Toothache ⊥ Catch | Cavity Möller/Mori 11
12 Conditional Independence contd. Write out full joint distribution using chain rule: p(Toothache, Catch, Cavity) = p(Toothache | Catch, Cavity) p(Catch, Cavity) = p(Toothache | Catch, Cavity) p(Catch | Cavity) p(Cavity) = p(Toothache | Cavity) p(Catch | Cavity) p(Cavity), i.e. 2 + 2 + 1 = 5 independent numbers In many cases, the use of conditional independence greatly reduces the size of the representation of the joint distribution Möller/Mori 12
13 Graphical Models Graphical models provide a visual depiction of a probabilistic model Conditional independence assumptions can be seen in the graph Inference and learning algorithms can be expressed in terms of graph operations We will look at 2 types of graph (can be combined): Directed graphs: Bayesian networks Undirected graphs: Markov Random Fields Factor graphs (won't cover) Möller/Mori 13
14 Bayesian Networks A simple, graphical notation for conditional independence assertions and hence for compact specification of full joint distributions Syntax: a set of nodes, one per variable a directed, acyclic graph (link means "directly influences") a conditional distribution for each node given its parents: p(X_i | pa(X_i)) In the simplest case, conditional distribution represented as a conditional probability table (CPT) giving the distribution over X_i for each combination of parent values Möller/Mori 14
15 Example Graph: Weather (isolated); Cavity → Toothache, Cavity → Catch Topology of network encodes conditional independence assertions: Weather is independent of the other variables Toothache and Catch are conditionally independent given Cavity Möller/Mori 15
16 Example I'm at work, neighbor John calls to say my alarm is ringing, but neighbor Mary doesn't call. Sometimes it's set off by minor earthquakes. Is there a burglar? Variables: Burglar, Earthquake, Alarm, JohnCalls, MaryCalls Network topology reflects causal knowledge: A burglar can set the alarm off An earthquake can set the alarm off The alarm can cause Mary to call The alarm can cause John to call Möller/Mori 16
17 Example contd. Network: Burglary → Alarm ← Earthquake, Alarm → JohnCalls, Alarm → MaryCalls, with CPTs (values as in the standard AIMA burglary example):
P(B) = .001    P(E) = .002
P(A | B, E): B=T,E=T: .95; B=T,E=F: .94; B=F,E=T: .29; B=F,E=F: .001
P(J | A): A=T: .90; A=F: .05
P(M | A): A=T: .70; A=F: .01
Möller/Mori 17
18 Compactness A CPT for Boolean X_i with k Boolean parents has 2^k rows for the combinations of parent values Each row requires one number p for X_i = true (the number for X_i = false is just 1 - p) If each variable has no more than k parents, the complete network requires O(n · 2^k) numbers i.e., grows linearly with n, vs. O(2^n) for the full joint distribution For burglary net, 1 + 1 + 4 + 2 + 2 = 10 numbers (vs. 2^5 - 1 = 31) Möller/Mori 18
19 Global Semantics Global semantics defines the full joint distribution as the product of the local conditional distributions: P(x_1, ..., x_n) = Π_{i=1}^n P(x_i | pa(x_i)) e.g., P(j ∧ m ∧ a ∧ ¬b ∧ ¬e) = P(j | a) P(m | a) P(a | ¬b, ¬e) P(¬b) P(¬e) ≈ 0.00063 Möller/Mori 19
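A sketch of the global semantics in code, using the CPT values listed on slide 17 (the standard AIMA numbers); the joint probability of any full assignment is just a product of local CPT lookups:

```python
# CPTs from the burglary network (standard AIMA values).
P_B = 0.001
P_E = 0.002
P_A = {(True, True): 0.95, (True, False): 0.94,
       (False, True): 0.29, (False, False): 0.001}   # p(a | b, e)
P_J = {True: 0.90, False: 0.05}                       # p(j | a)
P_M = {True: 0.70, False: 0.01}                       # p(m | a)

# Global semantics: p(j, m, a, ~b, ~e) = p(j|a) p(m|a) p(a|~b,~e) p(~b) p(~e)
p = P_J[True] * P_M[True] * P_A[(False, False)] * (1 - P_B) * (1 - P_E)
print(p)   # ~0.000628
```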
20 Constructing Bayesian Networks Need a method such that a series of locally testable assertions of conditional independence guarantees the required global semantics 1. Choose an ordering of variables X_1, ..., X_n 2. For i = 1 to n: add X_i to the network, select parents from X_1, ..., X_{i-1} such that p(X_i | pa(X_i)) = p(X_i | X_1, ..., X_{i-1}) This choice of parents guarantees the global semantics: p(X_1, ..., X_n) = Π_{i=1}^n p(X_i | X_1, ..., X_{i-1}) (chain rule) = Π_{i=1}^n p(X_i | pa(X_i)) (by construction) Möller/Mori 20
21 Example Suppose we choose the ordering M, J, A, B, E Nodes so far: MaryCalls, JohnCalls P(J | M) = P(J)? Möller/Mori 21
22 Example Suppose we choose the ordering M, J, A, B, E Nodes so far: MaryCalls, JohnCalls, Alarm P(J | M) = P(J)? No P(A | J, M) = P(A | J)? P(A | J, M) = P(A)? Möller/Mori 22
23 Example Suppose we choose the ordering M, J, A, B, E Nodes so far: MaryCalls, JohnCalls, Alarm, Burglary P(J | M) = P(J)? No P(A | J, M) = P(A | J)? P(A | J, M) = P(A)? No P(B | A, J, M) = P(B | A)? P(B | A, J, M) = P(B)? Möller/Mori 23
24 Example Suppose we choose the ordering M, J, A, B, E Nodes so far: MaryCalls, JohnCalls, Alarm, Burglary, Earthquake P(J | M) = P(J)? No P(A | J, M) = P(A | J)? P(A | J, M) = P(A)? No P(B | A, J, M) = P(B | A)? Yes P(B | A, J, M) = P(B)? No P(E | B, A, J, M) = P(E | A)? P(E | B, A, J, M) = P(E | A, B)? Möller/Mori 24
25 Example Suppose we choose the ordering M, J, A, B, E Nodes: MaryCalls, JohnCalls, Alarm, Burglary, Earthquake P(J | M) = P(J)? No P(A | J, M) = P(A | J)? P(A | J, M) = P(A)? No P(B | A, J, M) = P(B | A)? Yes P(B | A, J, M) = P(B)? No P(E | B, A, J, M) = P(E | A)? No P(E | B, A, J, M) = P(E | A, B)? Yes Möller/Mori 25
26 Example contd. Resulting network over MaryCalls, JohnCalls, Alarm, Burglary, Earthquake Deciding conditional independence is hard in noncausal directions (Causal models and conditional independence seem hardwired for humans!) Assessing conditional probabilities is hard in noncausal directions Network is less compact: 1 + 2 + 4 + 2 + 4 = 13 numbers needed Möller/Mori 26
27 Example - Car Insurance Network variables: Age, GoodStudent, RiskAversion, SeniorTrain, SocioEcon, Mileage, VehicleYear, ExtraCar, DrivingSkill, MakeModel, DrivingHist, Antilock, DrivQuality, Airbag, CarValue, HomeBase, AntiTheft, Ruggedness, Accident, OwnDamage, Theft, Cushioning, OtherCost, OwnCost, MedicalCost, LiabilityCost, PropertyCost Möller/Mori 27
28 Example - Polynomial Regression Graph: w with children t_1, ..., t_N Bayesian polynomial regression model Observations t = (t_1, ..., t_N) Vector of coefficients w Inputs x and noise variance σ² were assumed fixed, not stochastic and hence not in model Joint distribution: p(t, w) = p(w) Π_{n=1}^N p(t_n | w) Möller/Mori 28
29 Plates A shorthand for writing repeated nodes such as the t_n uses plates: the graph with w and children t_1, ..., t_N is drawn as w and a single node t_n inside a plate labelled N Möller/Mori 29
30 Deterministic Model Parameters Can also include deterministic parameters (not stochastic) as small nodes Bayesian polynomial regression model: p(t, w | x, α, σ²) = p(w | α) Π_{n=1}^N p(t_n | w, x_n, σ²) Möller/Mori 30
31 Observations In polynomial regression, we assumed we had a training set of N pairs (x_n, t_n) Convention is to use shaded nodes for observed random variables Möller/Mori 31
32 Predictions Suppose we wished to predict the value t̂ for a new input x̂ The Bayesian network used for this inference task extends the regression model with nodes x̂ and t̂, with t̂ depending on w, x̂ and σ² Möller/Mori 32
33 Specifying Distributions - Discrete Variables Earlier we saw the use of conditional probability tables (CPT) for specifying a distribution over discrete random variables with discrete-valued parents (e.g. the burglary network CPTs above, such as P(M | A): A=T: .70; A=F: .01) For a variable with no parents, with K possible states: p(x | µ) = Π_{k=1}^K µ_k^{x_k}, using a 1-of-K representation, e.g. for binary b: p(b | µ) = µ_1^{b_1} µ_2^{b_2} Möller/Mori 33
34 Specifying Distributions - Discrete Variables cont. With two variables x_1, x_2 can have two cases: Dependent (x_1 → x_2): p(x_1, x_2 | µ) = p(x_1 | µ) p(x_2 | x_1, µ) = (Π_{k=1}^K µ_{k1}^{x_{1k}}) (Π_{k=1}^K Π_{j=1}^K µ_{kj2}^{x_{1k} x_{2j}}), with K² - 1 free parameters in µ Independent: p(x_1, x_2 | µ) = p(x_1 | µ) p(x_2 | µ) = (Π_{k=1}^K µ_{k1}^{x_{1k}}) (Π_{k=1}^K µ_{k2}^{x_{2k}}), with 2(K - 1) free parameters in µ Möller/Mori 34
35 Chains of Nodes With M nodes, could form a chain x_1 → x_2 → ... → x_M as shown above Number of parameters is: (K - 1) for x_1, plus (M - 1) K(K - 1) for the others Compare to: K^M - 1 for fully connected graph, M(K - 1) for graph with no edges (all independent) Möller/Mori 35
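These parameter counts are easy to sanity-check numerically; a tiny sketch with the arbitrary choices K = 3 states and M = 10 nodes:

```python
K, M = 3, 10
chain = (K - 1) + (M - 1) * K * (K - 1)   # chain: 2 + 9*6 = 56 parameters
full = K ** M - 1                          # fully connected: 59048
indep = M * (K - 1)                        # no edges (all independent): 20
print(chain, full, indep)
```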
36 Sharing Parameters Upper graph: separate parameters µ_1, µ_2, ..., µ_M for x_1, x_2, ..., x_M Another way to reduce the number of parameters is sharing parameters (a.k.a. tying of parameters) Lower graph reuses the same µ for nodes 2..M µ is a random variable in this network, could also be deterministic (K - 1) + K(K - 1) parameters Möller/Mori 36
37 Specifying Distributions - Continuous Variables One common type of conditional distribution for continuous variables is the linear-Gaussian: p(x_i | pa_i) = N(x_i ; Σ_{j ∈ pa_i} w_ij x_j + b_i, v_i) e.g. with one parent Harvest, the mean cost is linear in h: p(c | h) = N(c; 0.5h + 5, 1) [Figure: P(c | h, subsidy) plotted against Cost c and Harvest h] Möller/Mori 37
38 Linear Gaussian Interesting fact: if all nodes in a Bayesian network are linear-Gaussian, the joint distribution is a multivariate Gaussian p(x_i | pa_i) = N(x_i ; Σ_{j ∈ pa_i} w_ij x_j + b_i, v_i) p(x_1, ..., x_N) = Π_{i=1}^N N(x_i ; Σ_{j ∈ pa_i} w_ij x_j + b_i, v_i) Each factor looks like exp(-(x_i - w_i^T x_{pa_i})²), so the product is another quadratic form With no links in graph, end up with diagonal covariance matrix With fully connected graph, end up with full covariance matrix Möller/Mori 38
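As a sketch of this fact, the mean and covariance of the joint Gaussian can be built up node by node with the recursions from Bishop Sec. 8.1.4; the 3-node chain and its weights below are hypothetical:

```python
import numpy as np

# A hypothetical linear-Gaussian chain x1 -> x2 -> x3:
#   x1 ~ N(b1, v1),  x_i | x_{i-1} ~ N(w_i x_{i-1} + b_i, v_i)
w = [None, 0.5, -1.0]; b = [1.0, 0.0, 2.0]; v = [1.0, 0.5, 0.2]

# Recursions: E[x_i] = w_i E[x_{i-1}] + b_i,
#   cov(x_i, x_j) = w_i cov(x_{i-1}, x_j) for j < i,
#   var(x_i) = v_i + w_i^2 var(x_{i-1}).
mu = np.zeros(3); Sigma = np.zeros((3, 3))
mu[0] = b[0]; Sigma[0, 0] = v[0]
for i in (1, 2):
    mu[i] = w[i] * mu[i - 1] + b[i]
    for j in range(i):
        Sigma[i, j] = Sigma[j, i] = w[i] * Sigma[i - 1, j]
    Sigma[i, i] = v[i] + w[i] ** 2 * Sigma[i - 1, i - 1]
print(mu, Sigma)   # full (non-diagonal) covariance: the chain has links
```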
39 Conditional Independence in Bayesian Networks Recall again that a and b are conditionally independent given c (a ⊥ b | c) if p(a | b, c) = p(a | c) or equivalently p(a, b | c) = p(a | c) p(b | c) Before we stated that links in a graph mean "directly influences" We now develop a correct notion of links, in terms of the conditional independences they represent This will be useful for general-purpose inference methods Möller/Mori 39
40 A Tale of Three Graphs - Part 1 Graph: a ← c → b The graph above means p(a, b, c) = p(a | c) p(b | c) p(c) p(a, b) = Σ_c p(a | c) p(b | c) p(c) ≠ p(a) p(b) in general So a and b not independent Möller/Mori 40
41 A Tale of Three Graphs - Part 1 However, conditioned on c: p(a, b | c) = p(a, b, c) / p(c) = p(a | c) p(b | c) p(c) / p(c) = p(a | c) p(b | c) So a ⊥ b | c Möller/Mori 41
42 A Tale of Three Graphs - Part 1 Note the path from a to b through c in the graph When c is not observed, path is open, a and b not independent When c is observed, path is blocked, a and b independent In this case c is tail-to-tail with respect to this path Möller/Mori 42
43 A Tale of Three Graphs - Part 2 Graph: a → c → b The graph above means p(a, b, c) = p(a) p(c | a) p(b | c) Again a and b not independent Möller/Mori 43
44 A Tale of Three Graphs - Part 2 However, conditioned on c: p(a, b | c) = p(a, b, c) / p(c) = p(a) p(c | a) p(b | c) / p(c) = p(a | c) p(b | c), using Bayes rule p(a) p(c | a) = p(a | c) p(c) So a ⊥ b | c Möller/Mori 44
45 A Tale of Three Graphs - Part 2 As before, consider the path from a to b in the graph When c is not observed, path is open, a and b not independent When c is observed, path is blocked, a and b independent In this case c is head-to-tail with respect to this path Möller/Mori 45
46 A Tale of Three Graphs - Part 3 Graph: a → c ← b The graph above means p(a, b, c) = p(a) p(b) p(c | a, b) p(a, b) = Σ_c p(a) p(b) p(c | a, b) = p(a) p(b) This time a and b are independent Möller/Mori 46
47 A Tale of Three Graphs - Part 3 However, conditioned on c: p(a, b | c) = p(a, b, c) / p(c) = p(a) p(b) p(c | a, b) / p(c) ≠ p(a | c) p(b | c) in general So a ⊥ b | c does not hold Möller/Mori 47
48 A Tale of Three Graphs - Part 3 Frustratingly, the behaviour here is different When c is not observed, path is blocked, a and b independent When c is observed, path is unblocked, a and b not independent In this case c is head-to-head with respect to this path Situation is in fact more complex: the path is unblocked if any descendant of c is observed Möller/Mori 48
49 Part 3 - Intuition Graph: B → G ← F Binary random variables B (battery charged), F (fuel tank full), G (fuel gauge reads full) B and F independent But if we observe G = 0 (false) things change e.g. p(F = 0 | G = 0, B = 0) could be less than p(F = 0 | G = 0), as B = 0 explains away the fact that the gauge reads empty Recall that p(F | G, B) = p(F | G) is another way of writing F ⊥ B | G, which does not hold here Möller/Mori 49
50 D-separation A general statement of conditional independence For sets of nodes A, B, C, check all paths from A to B in the graph If all paths are blocked, then A ⊥ B | C A path is blocked if: Arrows meet head-to-tail or tail-to-tail at a node in C, or Arrows meet head-to-head at a node, and neither that node nor any of its descendants is in C Möller/Mori 50
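Path-blocking checks like these can be automated. The sketch below uses NetworkX's d-separation routine (named d_separated in NetworkX 2.4 through 3.2, renamed is_d_separator in 3.3) on the burglary network:

```python
import networkx as nx

# Burglary network as a DAG.
G = nx.DiGraph([("B", "A"), ("E", "A"), ("A", "J"), ("A", "M")])

print(nx.d_separated(G, {"B"}, {"E"}, set()))   # True: head-to-head at A blocks
print(nx.d_separated(G, {"B"}, {"E"}, {"A"}))   # False: observing A unblocks
print(nx.d_separated(G, {"B"}, {"E"}, {"J"}))   # False: descendant of A observed
print(nx.d_separated(G, {"J"}, {"M"}, {"A"}))   # True: tail-to-tail at A, A in C
```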
51 Naive Bayes Graph: z with children x_1, ..., x_D Commonly used naive Bayes classification model Class label z, features x_1, ..., x_D Model assumes features independent given class label Tail-to-tail at z, blocks path between features Möller/Mori 51
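A minimal sketch of this factorization in code (all probability values hypothetical); the posterior over z is a product of per-feature factors precisely because the features are tail-to-tail at z:

```python
import numpy as np

# Naive Bayes: p(z, x_1..x_D) = p(z) prod_d p(x_d | z).
# prior[k] = p(z=k); cond[d][k, v] = p(x_d = v | z = k)  (hypothetical numbers).
prior = np.array([0.6, 0.4])
cond = [np.array([[0.8, 0.2], [0.3, 0.7]]),    # feature 1
        np.array([[0.5, 0.5], [0.9, 0.1]])]    # feature 2

def posterior(x):
    """p(z | x) via the factored joint; features independent given z."""
    p = prior.copy()
    for d, v in enumerate(x):
        p *= cond[d][:, v]
    return p / p.sum()

print(posterior([1, 0]))   # classify an observation x = (1, 0)
```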
52 Learning Parameters When all random variables are observed in training data, relatively straightforward Distribution factors, all factors observed e.g. Maximum likelihood used to set parameters of each distribution p(x_i | pa_i) separately When some random variables are not observed, it's tricky This is a common case Expectation-maximization (later) is a method for this Möller/Mori 52
53 Conditional Independence in Graphs Recall that for Bayesian networks, conditional independence was a bit complicated: d-separation with head-to-head links We would like to construct a graphical representation such that conditional independence is straightforward path checking Möller/Mori 53
54 Markov Random Fields Markov random fields (MRFs) contain one node per variable Undirected graph over these nodes Conditional independence will be given by simple separation: blockage by observing a node on a path e.g. in the graph above, A ⊥ B | C Möller/Mori 54
55 MRF Factorization Remember that graphical models define a factorization of the joint distribution What should be the factorization so that we end up with the simple conditional independence check? For x_i and x_j not connected by an edge in the graph: x_i ⊥ x_j | x_{\{i,j\}}, where x_{\{i,j\}} denotes all other nodes So there should not be any factor ψ(x_i, x_j) in the factorized form of the joint Möller/Mori 55
56 Cliques A clique in a graph is a subset of nodes such that there is a link between every pair of nodes in the subset A maximal clique is a clique for which one cannot add another node and have the set remain a clique [Example graph on x_1, x_2, x_3, x_4] Möller/Mori 56
57 MRF Joint Distribution Note that nodes in a clique cannot be made conditionally independent from each other So defining factors ψ(·) on nodes in a clique is safe The joint distribution for a Markov random field is: p(x_1, ..., x_K) = (1/Z) Π_C ψ_C(x_C) where x_C is the set of nodes in clique C, and the product runs over all maximal cliques Each ψ_C(x_C) ≥ 0 Z is a normalization constant Möller/Mori 57
58 MRF Joint Distribution Example For the 4-node graph whose maximal cliques are {x_1, x_2, x_3} and {x_2, x_3, x_4}, the joint distribution is: p(x_1, ..., x_4) = (1/Z) Π_C ψ_C(x_C) = (1/Z) ψ_123(x_1, x_2, x_3) ψ_234(x_2, x_3, x_4) Note that maximal cliques subsume smaller ones: ψ_123(x_1, x_2, x_3) could include ψ_12(x_1, x_2), though sometimes smaller cliques are explicitly used for clarity Möller/Mori 58
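For a graph this small, the factorization and the partition function can be checked by brute force; a sketch with random nonnegative potentials for the two maximal cliques:

```python
import itertools
import numpy as np

# The 4-node example: maximal cliques {x1,x2,x3} and {x2,x3,x4}, binary states.
# Hypothetical nonnegative potentials, indexed by the clique's variable values.
psi_123 = np.random.rand(2, 2, 2) + 0.1
psi_234 = np.random.rand(2, 2, 2) + 0.1

# Partition function: sum the unnormalized product over all 2^4 joint states.
Z = sum(psi_123[x1, x2, x3] * psi_234[x2, x3, x4]
        for x1, x2, x3, x4 in itertools.product((0, 1), repeat=4))

def p(x1, x2, x3, x4):
    return psi_123[x1, x2, x3] * psi_234[x2, x3, x4] / Z

# Sanity check: the normalized joint sums to one.
assert np.isclose(sum(p(*s) for s in itertools.product((0, 1), repeat=4)), 1.0)
```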
59 MRF Joint - Terminology The joint distribution for a Markov random field is: p(x_1, ..., x_K) = (1/Z) Π_C ψ_C(x_C) Each ψ_C(x_C) is called a potential function Z, the normalization constant, is called the partition function: Z = Σ_x Π_C ψ_C(x_C) Z is very costly to compute, since it is a sum/integral over all possible states for all variables in x Don't always need to evaluate it though, it will cancel when computing conditional probabilities Möller/Mori 59
60 Hammersley-Clifford The definition of the joint: p(x_1, ..., x_K) = (1/Z) Π_C ψ_C(x_C) Note that we started with particular conditional independences We then formulated the factorization based on clique potentials This formulation resulted in the right conditional independences The converse is true as well: any strictly positive distribution with the conditional independences given by the undirected graph can be represented using a product of clique potentials This is the Hammersley-Clifford theorem Möller/Mori 60
61 Energy Functions Often use the exponential, which is non-negative, to define potential functions: ψ_C(x_C) = exp{-E_C(x_C)} Minus sign by convention E_C(x_C) is called an energy function From physics, low energy = high probability This exponential representation is known as the Boltzmann distribution Möller/Mori 61
62 Energy Functions - Intuition Joint distribution nicely rearranges as p(x_1, ..., x_K) = (1/Z) Π_C ψ_C(x_C) = (1/Z) exp{-Σ_C E_C(x_C)} Intuition about potential functions: ψ_C are describing good (low energy) sets of states for adjacent nodes An example of this is next Möller/Mori 62
63 Image Denoising Consider the problem of trying to correct (denoise) an image that has been corrupted Assume image is binary Observed (noisy) pixel values y_i ∈ {-1, +1} Unobserved true pixel values x_i ∈ {-1, +1} Möller/Mori 63
64 Image Denoising - Graphical Model Cliques containing each true pixel value x_i ∈ {-1, +1} and observed value y_i ∈ {-1, +1}: Observed pixel value is usually same as true pixel value Energy function -η x_i y_i, η > 0, lower energy (better) if x_i = y_i Cliques containing adjacent true pixel values x_i, x_j: Nearby pixel values are usually the same Energy function -β x_i x_j, β > 0, lower energy (better) if x_i = x_j Möller/Mori 64
65 Image Denoising - Graphical Model Complete energy function: E(x, y) = -β Σ_{\{i,j\}} x_i x_j - η Σ_i x_i y_i Joint distribution: p(x, y) = (1/Z) exp{-E(x, y)} Or, as potential functions ψ_n(x_i, x_j) = exp(β x_i x_j), ψ_p(x_i, y_i) = exp(η x_i y_i): p(x, y) = (1/Z) Π_{\{i,j\}} ψ_n(x_i, x_j) Π_i ψ_p(x_i, y_i) Möller/Mori 65
66 Image Denoising - Inference The denoising query is argmax_x p(x | y) Two approaches: Iterated conditional modes (ICM): hill climbing in x, one variable x_i at a time Simple to compute, Markov blanket is just the observation plus neighbouring pixels Graph cuts: formulate as max-flow/min-cut problem, exact inference (for this graph) Möller/Mori 66
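A minimal ICM sketch for this model (the hyperparameters β, η and the sweep count are arbitrary choices, not from the slides); each update minimizes the pixel's local energy given its Markov blanket:

```python
import numpy as np

def icm_denoise(y, beta=2.0, eta=1.0, n_sweeps=5):
    """Iterated conditional modes for the Ising-style denoising model.
    y: numpy int array with entries in {-1, +1}. A rough sketch, not tuned."""
    x = y.copy()
    H, W = x.shape
    for _ in range(n_sweeps):
        for i in range(H):
            for j in range(W):
                # The local energy only involves the Markov blanket: the
                # observation y[i,j] and the 4-connected neighbours.
                nbr = sum(x[a, b] for a, b in
                          ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                          if 0 <= a < H and 0 <= b < W)
                # E(x_ij = s) = -beta * s * nbr - eta * s * y[i,j]; pick the
                # state with lower energy (hill climbing on p(x | y)).
                x[i, j] = 1 if beta * nbr + eta * y[i, j] >= 0 else -1
    return x
```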
67 Converting Directed Graphs into Undirected Graphs Consider a simple directed chain graph x_1 → x_2 → ... → x_N: p(x) = p(x_1) p(x_2 | x_1) p(x_3 | x_2) ... p(x_N | x_{N-1}) Can convert to undirected graph p(x) = (1/Z) ψ_{1,2}(x_1, x_2) ψ_{2,3}(x_2, x_3) ... ψ_{N-1,N}(x_{N-1}, x_N) where ψ_{1,2} = p(x_1) p(x_2 | x_1), all other ψ_{k-1,k} = p(x_k | x_{k-1}), and Z = 1 Möller/Mori 67
68 Converting Directed Graphs into Undirected Graphs The chain was straightforward because for each conditional p(x_i | pa_i), the nodes {x_i} ∪ pa_i were contained in one clique Hence we could define that clique potential to include that conditional For a general directed graph we can force this to occur by "marrying the parents": Add links between all parents in pa_i This process is known as moralization, creating a moral graph Möller/Mori 68
69 Strong Morals Start with the directed graph on the left (x_1, x_2, x_3 all parents of x_4) Add undirected edges between all parents of each node Remove directionality from original edges Möller/Mori 69
70 Constructing Potential Functions Initialize all potential functions to be 1 With the moral graph, for each p(x_i | pa_i), there is at least one clique which contains all of {x_i} ∪ pa_i Multiply p(x_i | pa_i) into the potential function for one of these cliques Z = 1 again since: p(x) = Π_C ψ_C(x_C) = Π_i p(x_i | pa_i) which is already normalized Möller/Mori 70
71 Equivalence Between Graph Types Note that the moralized undirected graph loses some of the conditional independence statements of the directed graph Directed graph (A → C ← B): A ⊥ B, but not A ⊥ B | C; cannot be represented using an undirected graph Undirected graph (4-cycle A - C - B - D - A): not A ⊥ B, but A ⊥ B | C ∪ D and C ⊥ D | A ∪ B; cannot be represented using a directed graph Möller/Mori 71
72 Inference Inference is the process of answering queries such as p(x_n | x_e = e) We will focus on computing marginal posterior distributions over single variables x_n using p(x_n | x_e = e) ∝ p(x_n, x_e = e) The proportionality constant can be obtained by enforcing Σ_{x_n} p(x_n | x_e = e) = 1 Möller/Mori 72
73 Inference on a Chain Consider a simple undirected chain For inference, we want to compute p(x_n, x_e = e) First, we'll show how to compute p(x_n) p(x_n, x_e = e) will be a simple modification of this Möller/Mori 73
74 Inference on a Chain The naive method of computing the marginal p(x_n) is to write down the factored form of the joint, and marginalize (sum out) all other variables: p(x_n) = Σ_{x_1} ... Σ_{x_{n-1}} Σ_{x_{n+1}} ... Σ_{x_N} p(x) = Σ_{x_1} ... Σ_{x_{n-1}} Σ_{x_{n+1}} ... Σ_{x_N} (1/Z) Π_C ψ_C(x_C) This would be slow: O(K^N) work if each variable could take K values Möller/Mori 74
75 Inference on a Chain However, due to the factorization, terms in this summation can be rearranged nicely This will lead to efficient algorithms Möller/Mori 75
76 Simple Algebra This efficiency comes from a very simple distributivity: ab + ac = a(b + c) Or the more complicated version: Σ_{i=1}^n Σ_{j=1}^n a_i b_j = a_1 b_1 + a_1 b_2 + ... + a_n b_n = (a_1 + ... + a_n)(b_1 + ... + b_n) Much faster to do the right hand side (2(n - 1) additions, 1 multiplication) than the left hand side (n² multiplications, n² - 1 additions) Möller/Mori 76
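The same identity, checked numerically (a throwaway sketch with random values):

```python
import numpy as np

a, b = np.random.rand(1000), np.random.rand(1000)
# n^2 products on the left vs. two sums and one product on the right:
assert np.isclose(np.outer(a, b).sum(), a.sum() * b.sum())
```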
77 A Simple Chain First consider a chain with 3 nodes, and computing p(x_3): p(x_3) = Σ_{x_1} Σ_{x_2} ψ_12(x_1, x_2) ψ_23(x_2, x_3) = Σ_{x_2} ψ_23(x_2, x_3) [Σ_{x_1} ψ_12(x_1, x_2)] Möller/Mori 77
78 Performing the sums p(x_3) = Σ_{x_2} ψ_23(x_2, x_3) [Σ_{x_1} ψ_12(x_1, x_2)] For example, if the x_i are binary (rows indexed by the first argument, columns by the second):
ψ_12(x_1, x_2) = [a b; c d]    ψ_23(x_2, x_3) = [s t; u v]
Σ_{x_1} ψ_12(x_1, x_2) = [a+c, b+d] =: µ_12(x_2)
ψ_23(x_2, x_3) µ_12(x_2) = [s(a+c) t(a+c); u(b+d) v(b+d)]
p(x_3) = [s(a+c) + u(b+d), t(a+c) + v(b+d)]
Möller/Mori 78
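The same computation in NumPy, with hypothetical potential entries standing in for (a, b, c, d) and (s, t, u, v); the einsum line verifies the rearrangement against the naive double sum:

```python
import numpy as np

# Potentials for a 3-node binary chain: psi12[x1, x2] and psi23[x2, x3].
psi12 = np.array([[1.0, 2.0],
                  [3.0, 0.5]])
psi23 = np.array([[0.7, 1.2],
                  [2.0, 0.4]])

mu12 = psi12.sum(axis=0)                       # sum over x1: message mu_12(x2)
p_x3 = (psi23 * mu12[:, None]).sum(axis=0)     # sum over x2 of psi23 * mu12
p_x3 /= p_x3.sum()                             # normalize (this is 1/Z)
print(p_x3)

# Same result from the naive double sum over the full joint:
naive = np.einsum('ab,bc->c', psi12, psi23)
assert np.allclose(p_x3, naive / naive.sum())
```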
79 Complexity of Inference There were two types of operations: Summation Σ_{x_1} ψ_12(x_1, x_2): K × K numbers in ψ_12, takes O(K²) time Multiplication ψ_23(x_2, x_3) µ_12(x_2): again O(K²) work For a chain of length N, we repeat these operations N - 1 times each O(NK²) work, versus O(K^N) for naive evaluation Möller/Mori 79
80 More complicated chain Now consider a 5 node chain, again asking for p(x_3): p(x_3) = Σ_{x_1} Σ_{x_2} Σ_{x_4} Σ_{x_5} ψ_12(x_1, x_2) ψ_23(x_2, x_3) ψ_34(x_3, x_4) ψ_45(x_4, x_5) = [Σ_{x_2} ψ_23(x_2, x_3) Σ_{x_1} ψ_12(x_1, x_2)] [Σ_{x_4} ψ_34(x_3, x_4) Σ_{x_5} ψ_45(x_4, x_5)] Each of these factors resembles the previous, and can be computed efficiently Again O(NK²) work Möller/Mori 80
81 Message Passing The factors can be thought of as messages being passed between nodes in the graph: µ_α messages flow left to right, µ_β messages flow right to left µ_12(x_2) = Σ_{x_1} ψ_12(x_1, x_2) is a message passed from node x_1 to node x_2 containing all information in node x_1 In general, µ_{k-1,k}(x_k) = Σ_{x_{k-1}} ψ_{k-1,k}(x_{k-1}, x_k) µ_{k-2,k-1}(x_{k-1}) Possible to do so because of conditional independence Möller/Mori 81
82 Computing All Marginals Computing one marginal p(x_n) takes O(NK²) time Naively running the same algorithm for all nodes in a chain would take O(N²K²) time But this isn't necessary: the same messages can be reused at all nodes in the chain Pass all messages from one end of the chain to the other, then pass all messages in the other direction too Can compute the marginal at any node by multiplying the two messages delivered to that node 2(N - 1)K² work, twice as much as for just one node Möller/Mori 82
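A sketch of the full forward/backward pass for a chain with arbitrary pairwise potentials; the per-node normalization stands in for the explicit 1/Z:

```python
import numpy as np

def chain_marginals(psis):
    """All node marginals for a chain with pairwise potentials psis[k] of
    shape (K, K) between x_k and x_{k+1}. O(N K^2) total; a sketch."""
    N = len(psis) + 1
    K = psis[0].shape[0]
    fwd = [np.ones(K)]                  # mu_alpha messages, left to right
    for psi in psis:
        fwd.append(fwd[-1] @ psi)       # mu(x_{k+1}) = sum_{x_k} mu(x_k) psi
    bwd = [np.ones(K)]                  # mu_beta messages, right to left
    for psi in reversed(psis):
        bwd.append(psi @ bwd[-1])
    bwd = bwd[::-1]
    marginals = []
    for n in range(N):
        m = fwd[n] * bwd[n]             # multiply the two incoming messages
        marginals.append(m / m.sum())
    return marginals

# e.g. chain_marginals([psi12, psi23]) recovers p(x3) from the worked example,
# along with p(x1) and p(x2), in one forward and one backward sweep.
```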
83 Including Evidence If a node x_{k-1} = e is observed, simply clamp it to the observed value rather than summing: µ_{k-1,k}(x_k) = Σ_{x_{k-1}} ψ_{k-1,k}(x_{k-1}, x_k) µ_{k-2,k-1}(x_{k-1}) becomes µ_{k-1,k}(x_k) = ψ_{k-1,k}(x_{k-1} = e, x_k) µ_{k-2,k-1}(x_{k-1} = e) Möller/Mori 83
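Clamping drops neatly into the same message-passing code: multiply each observed node's messages by an indicator instead of summing it out. A sketch extending chain_marginals above:

```python
import numpy as np

def chain_marginals_with_evidence(psis, evidence):
    """Like chain_marginals, but evidence maps node index -> observed state.
    Clamping = an indicator factor on each observed node (a sketch)."""
    N = len(psis) + 1
    K = psis[0].shape[0]
    node = [np.ones(K) for _ in range(N)]
    for n, e in evidence.items():
        ind = np.zeros(K); ind[e] = 1.0
        node[n] = ind                   # clamp: only the observed value survives
    fwd = [node[0]]
    for k, psi in enumerate(psis):
        fwd.append((fwd[-1] @ psi) * node[k + 1])
    bwd = [np.ones(K)]
    for k in range(len(psis) - 1, -1, -1):
        bwd.append(psis[k] @ (bwd[-1] * node[k + 1]))
    bwd = bwd[::-1]
    # Normalizing per node yields p(x_n | evidence).
    return [(f * b) / (f * b).sum() for f, b in zip(fwd, bwd)]
```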
84 Trees The algorithm for a tree-structured graph is very similar to that for chains Formulation in PRML uses factor graphs, we'll just give the intuition here Consider calculating the marginal p(x_n) for the center node of the graph at right Treat x_n as the root of the tree, pass messages from leaf nodes up to the root Möller/Mori 84
85 Trees Message passing similar to that in chains, but possibly multiple messages reaching a node With multiple messages, multiply them together As before, sum out variables: µ_{k-1,k}(x_k) = Σ_{x_{k-1}} ψ_{k-1,k}(x_{k-1}, x_k) µ_{k-2,k-1}(x_{k-1}) Known as the sum-product algorithm Complexity still O(NK²) Möller/Mori 85
86 Most Likely Configuration A similar algorithm exists for finding argmax_{x_1,...,x_N} p(x_1, ..., x_N) Replace summation operations with maximize operations Maximum of products at each node Known as max-sum, since one often takes the log-probability to avoid underflow errors Möller/Mori 86
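A sketch of the max-product variant for chains (equivalently max-sum in log space), with back-pointers to recover the argmax configuration:

```python
import numpy as np

def chain_map(psis):
    """Most likely joint configuration of a chain via max-product with
    back-pointers; replace sums with maxes, then trace back. A sketch."""
    K = psis[0].shape[0]
    msg = np.ones(K)
    back = []
    for psi in psis:
        scores = msg[:, None] * psi          # scores[x_k, x_{k+1}]
        back.append(scores.argmax(axis=0))   # best x_k for each x_{k+1}
        msg = scores.max(axis=0)             # max replaces the sum
    states = [int(msg.argmax())]
    for bp in reversed(back):                # trace back-pointers
        states.append(int(bp[states[-1]]))
    return states[::-1]
```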
87 General Graphs Junction tree algorithm is an exact inference method for arbitrary graphs A particular tree structure defined over cliques of variables Inference ends up being exponential in maximum clique size Therefore slow in many cases Approximate inference techniques: Loopy belief propagation: run the message passing scheme (sum-product) for a while Sometimes works, not guaranteed to converge Variational methods: approximate desired distribution using analytically simple forms, find parameters to make these forms similar to actual desired distribution (Ch. 10, we won't cover) Sampling methods: represent desired distribution with a set of samples; as more samples are used, obtain a more accurate representation (Ch. 11, we will cover) Möller/Mori 87
88 Conclusion Readings: Ch. 8 Graphical models depict conditional independence assumptions Directed graphs (Bayesian networks) Factorization of joint distribution as conditional on node given parents Undirected graphs (Markov random fields) Factorization of joint distribution as clique potential functions Inference algorithm sum-product, based on local message passing Works for tree-structured graphs Non-tree-structured graphs, either slow exact or approximate inference Möller/Mori 88