EE562 ARTIFICIAL INTELLIGENCE FOR ENGINEERS Lecture 16, 6/1/2005 University of Washington, Department of Electrical Engineering Spring 2005 Instructor: Professor Jeff A. Bilmes
Uncertainty & Bayesian Networks Chapter 13/14
Outline
Inference; Independence and Bayes' Rule
Chapter 14: Syntax; Semantics; Parameterized Distributions; Inference in Bayesian Networks
On the final
Same format as the midterm: closed book/closed notes.
Might test on all material of the quarter, including today (i.e., Chapters 1-9, 13, 14), but will not test on fuzzy logic.
Will be weighted towards the latter half of the course, though.
Homework
Last HW of the quarter. Due next Wed, June 1st, in class:
Chapter 13: 13.3, 13.7, 13.16
Chapter 14: 14.2, 14.3, 14.10
Bayesian Networks Chapter 14
Bayesian networks
A simple, graphical notation for conditional independence assertions, and hence for compact specification of full joint distributions.
Syntax:
a set of nodes, one per variable
a directed, acyclic graph (link ≈ "directly influences")
a conditional distribution for each node given its parents: P(X_i | Parents(X_i))
In the simplest case, the conditional distribution is represented as a conditional probability table (CPT) giving the distribution over X_i for each combination of parent values.
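For concreteness, here is a minimal Python sketch (not from the lecture) of a single CPT stored as a lookup table, using the alarm node from the textbook's burglary example; the probability values are illustrative placeholders.

# A CPT as a lookup table: P(Alarm = true | Burglary, Earthquake),
# indexed by the parent values. Numbers are illustrative placeholders.
alarm_cpt = {
    (True,  True):  0.95,
    (True,  False): 0.94,
    (False, True):  0.29,
    (False, False): 0.001,
}

def p_alarm(alarm, burglary, earthquake):
    """P(Alarm = alarm | Burglary = burglary, Earthquake = earthquake)."""
    p_true = alarm_cpt[(burglary, earthquake)]
    return p_true if alarm else 1.0 - p_true

print(p_alarm(True, burglary=True, earthquake=False))   # 0.94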
Example contd.
Semantics
The full joint distribution is defined as the product of the local conditional distributions:
P(X_1, ..., X_n) = ∏_{i=1}^{n} P(X_i | Parents(X_i))
e.g., P(j ∧ m ∧ a ∧ ¬b ∧ ¬e) = P(j|a) P(m|a) P(a|¬b, ¬e) P(¬b) P(¬e)
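As a worked check of the product rule above, a short Python sketch that multiplies the five local conditional probabilities together; the numeric values are illustrative placeholders for the alarm-network CPT entries, not values given in the lecture.

# Evaluating the example joint entry directly as a product of local conditionals.
p_j_given_a     = 0.90        # P(j | a)
p_m_given_a     = 0.70        # P(m | a)
p_a_given_nb_ne = 0.001       # P(a | ¬b, ¬e)
p_not_b         = 1 - 0.001   # P(¬b)
p_not_e         = 1 - 0.002   # P(¬e)

joint = p_j_given_a * p_m_given_a * p_a_given_nb_ne * p_not_b * p_not_e
print(joint)   # ≈ 0.00063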
Local Semantics
Local semantics: each node is conditionally independent of its nondescendants given its parents.
Thm: local semantics ⇔ global semantics
Example: car diagnosis
Initial evidence: car won't start.
Testable variables (green), "broken, so fix it" variables (orange).
Hidden variables (gray) ensure sparse structure, reduce parameters.
Example: car insurance
Compact conditional distributions
CPT grows exponentially with the number of parents.
CPT becomes infinite with a continuous-valued parent or child.
Solution: canonical distributions that are defined compactly.
Deterministic nodes are the simplest case: X = f(Parents(X)), for some deterministic function f (could be a logical form).
E.g., boolean functions: NorthAmerican ⇔ Canadian ∨ US ∨ Mexican
E.g., numerical relationships among continuous variables
Compact conditional distributions, contd.
Noisy-OR distributions model multiple non-interacting causes:
1) Parents U_1, ..., U_k include all possible causes
2) Independent failure probability q_i for each cause alone
⇒ P(X | U_1, ..., U_j, ¬U_{j+1}, ..., ¬U_k) = 1 − ∏_{i=1}^{j} q_i
Number of parameters is linear in the number of parents.
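A minimal Python sketch of the noisy-OR computation, assuming a hypothetical Fever node with causes Cold, Flu, and Malaria and illustrative failure probabilities; none of these names or numbers come from the lecture.

# Noisy-OR: q_i is the probability that cause U_i alone fails to produce X.
def noisy_or(q_active):
    """P(X=true) given that the causes with these failure probs are true, rest false."""
    p_all_fail = 1.0
    for q in q_active:
        p_all_fail *= q
    return 1.0 - p_all_fail

# e.g., Fever with causes Cold, Flu, Malaria and failure probabilities 0.6, 0.2, 0.1
print(noisy_or([0.6, 0.2]))        # P(Fever | cold, flu, ¬malaria) = 1 - 0.6*0.2 = 0.88
print(noisy_or([0.6, 0.2, 0.1]))   # P(Fever | cold, flu, malaria)  = 0.988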
Hybrid (discrete + continuous) networks
Discrete (Subsidy? and Buys?); continuous (Harvest and Cost).
Option 1: discretization (possibly large errors and large CPTs)
Option 2: finitely parameterized canonical families, e.g., Gaussians and logistic distributions (as used in neural networks)
Continuous variables with discrete + continuous parents (e.g., Cost)
Discrete variables with continuous parents (e.g., Buys?)
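A minimal sketch, with entirely made-up parameter values, of the two canonical hybrid cases named above: a linear Gaussian for the continuous child Cost given Harvest and Subsidy, and a logistic model for the discrete child Buys? given Cost. The function names and parameters are hypothetical.

import math, random

def sample_cost(harvest, subsidy):
    """Continuous child, discrete+continuous parents: a linear Gaussian for Cost,
    with separate (slope, intercept, sigma) for Subsidy true/false."""
    slope, intercept, sigma = (-1.0, 10.0, 1.0) if subsidy else (-0.5, 12.0, 1.5)
    return random.gauss(slope * harvest + intercept, sigma)

def p_buys(cost):
    """Discrete child, continuous parent: a logistic (sigmoid) model of P(Buys=true | Cost)."""
    midpoint, steepness = 8.0, 1.5
    return 1.0 / (1.0 + math.exp(steepness * (cost - midpoint)))

print(sample_cost(harvest=5.0, subsidy=True))
print(p_buys(cost=6.0))   # cheaper than the midpoint, so probability > 0.5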
Inference
by enumeration
by variable elimination
by stochastic simulation
by Markov chain Monte Carlo
Inference Tasks
Simple queries: compute a posterior marginal, P(X_i | E=e), e.g., P(NoGas | Gauge=empty, Lights=on, Starts=false)
Conjunctive queries: P(X_i, X_j | E=e) = P(X_i | E=e) P(X_j | X_i, E=e)
Optimal decisions: decision networks include utility information; probabilistic inference is required for P(outcome | action, evidence)
Value of information: which evidence to seek next?
Sensitivity analysis: which probability values are most critical?
Explanation: why do I need a new starter motor?
Inference By Enumeration
Enumeration Algorithm
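The lecture's pseudocode and evaluation tree are not reproduced here; as a stand-in, a minimal Python sketch of inference by enumeration (sum the full joint over the hidden variables, then normalize), using an illustrative burglary-style network with placeholder probabilities.

# Nodes listed parents-before-children; each entry is (name, parents, CPT for node=True).
NETWORK = [
    ("Burglary",   [],                         {(): 0.001}),
    ("Earthquake", [],                         {(): 0.002}),
    ("Alarm",      ["Burglary", "Earthquake"], {(True, True): 0.95, (True, False): 0.94,
                                                (False, True): 0.29, (False, False): 0.001}),
    ("JohnCalls",  ["Alarm"],                  {(True,): 0.90, (False,): 0.05}),
    ("MaryCalls",  ["Alarm"],                  {(True,): 0.70, (False,): 0.01}),
]

def prob(cpt, parents, value, env):
    """P(node = value | parent values taken from env)."""
    p = cpt[tuple(env[x] for x in parents)]
    return p if value else 1 - p

def enumerate_all(nodes, env):
    """Sum of the joint over all unassigned variables, given the partial assignment env."""
    if not nodes:
        return 1.0
    (name, parents, cpt), rest = nodes[0], nodes[1:]
    if name in env:
        return prob(cpt, parents, env[name], env) * enumerate_all(rest, env)
    return sum(prob(cpt, parents, v, env) * enumerate_all(rest, {**env, name: v})
               for v in (True, False))

def enumeration_ask(query, evidence):
    """Posterior distribution P(query | evidence) by full enumeration."""
    dist = {v: enumerate_all(NETWORK, {**evidence, query: v}) for v in (True, False)}
    total = sum(dist.values())
    return {v: p / total for v, p in dist.items()}

print(enumeration_ask("Burglary", {"JohnCalls": True, "MaryCalls": True}))
# ≈ {True: 0.284, False: 0.716} with these illustrative numbers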
Evaluation Tree
Inference by Variable Elimination
Variable Elimination: Basic operations
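A minimal Python sketch, under an assumed factor representation (a variable list plus a table keyed by value tuples), of the two basic operations variable elimination needs: pointwise product of factors and summing a variable out. The example factors at the end are illustrative, not taken from the lecture.

from itertools import product

def pointwise_product(f1, f2):
    """Join two factors on their shared variables."""
    vars1, t1 = f1
    vars2, t2 = f2
    out_vars = vars1 + [v for v in vars2 if v not in vars1]
    table = {}
    for values in product([True, False], repeat=len(out_vars)):
        env = dict(zip(out_vars, values))
        table[values] = (t1[tuple(env[v] for v in vars1)] *
                         t2[tuple(env[v] for v in vars2)])
    return out_vars, table

def sum_out(var, factor):
    """Eliminate var from a factor by summing over its values."""
    vars_, table = factor
    keep = [v for v in vars_ if v != var]
    out = {}
    for values, p in table.items():
        env = dict(zip(vars_, values))
        key = tuple(env[v] for v in keep)
        out[key] = out.get(key, 0.0) + p
    return keep, out

# Illustrative factors f(Alarm, Burglary) and f(JohnCalls, Alarm); eliminate Alarm.
f_alarm = (["Alarm", "Burglary"], {(True, True): 0.94, (True, False): 0.001,
                                   (False, True): 0.06, (False, False): 0.999})
f_john  = (["JohnCalls", "Alarm"], {(True, True): 0.90, (True, False): 0.05,
                                    (False, True): 0.10, (False, False): 0.95})
print(sum_out("Alarm", pointwise_product(f_alarm, f_john)))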
Variable Elimination: Algorithm
Irrelevant variables
Irrelevant variables, continued:
Complexity of exact inference
Inference by stochastic simulation
Sampling from empty network
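A minimal Python sketch of prior (forward) sampling from a network with no evidence: sample each variable in topological order given the already-sampled values of its parents. The tiny Cloudy -> Rain network and its probabilities are made up for illustration.

import random

def sample_prior():
    cloudy = random.random() < 0.5                 # P(Cloudy) = 0.5
    p_rain = 0.8 if cloudy else 0.2                # P(Rain | Cloudy)
    rain = random.random() < p_rain
    return {"Cloudy": cloudy, "Rain": rain}

# Each call returns one event sampled from the joint P(Cloudy, Rain).
samples = [sample_prior() for _ in range(10000)]
print(sum(s["Rain"] for s in samples) / len(samples))   # ≈ P(Rain) = 0.5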
Example
Rejection Sampling
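A minimal Python sketch of rejection sampling on the same kind of tiny example: draw prior samples and keep only those consistent with the evidence, then estimate the query from the kept samples. The probabilities are illustrative.

import random

def sample_prior():
    cloudy = random.random() < 0.5
    rain = random.random() < (0.8 if cloudy else 0.2)
    return cloudy, rain

# Query: P(Rain | Cloudy=true). Reject every sample whose Cloudy value disagrees.
kept = [rain for cloudy, rain in (sample_prior() for _ in range(100000)) if cloudy]
print(sum(kept) / len(kept))   # ≈ P(Rain | Cloudy=true) = 0.8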
Analysis of rejection sampling
Likelihood weighting
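A minimal Python sketch of likelihood weighting: evidence variables are fixed rather than sampled, and each sample is weighted by the likelihood of the evidence given its sampled parents. The Rain -> WetGrass example and its numbers are made up for illustration.

import random

P_RAIN = 0.3
P_WET_GIVEN = {True: 0.9, False: 0.1}     # P(WetGrass=true | Rain)

def weighted_sample():
    rain = random.random() < P_RAIN        # sample the non-evidence variable as usual
    weight = P_WET_GIVEN[rain]             # evidence WetGrass=true contributes its likelihood
    return rain, weight

num = den = 0.0
for _ in range(100000):
    rain, w = weighted_sample()
    den += w
    if rain:
        num += w
print(num / den)   # ≈ P(Rain | WetGrass=true) = 0.27 / (0.27 + 0.07) ≈ 0.794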
MCMC
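A minimal Python sketch of MCMC in the form of Gibbs sampling: with the evidence clamped, repeatedly resample each non-evidence variable from its distribution given its Markov blanket, and estimate the query from the long-run fraction of states. The Cloudy -> Rain -> WetGrass chain and its probabilities are illustrative, not from the lecture.

import random

P_CLOUDY = 0.5
P_RAIN = {True: 0.8, False: 0.2}      # P(Rain=true | Cloudy)
P_WET  = {True: 0.9, False: 0.1}      # P(WetGrass=true | Rain); evidence: WetGrass=true

def bernoulli(p_true, p_false):
    """Sample True with probability p_true / (p_true + p_false)."""
    return random.random() < p_true / (p_true + p_false)

cloudy, rain = True, True             # arbitrary initial state for the non-evidence variables
count_rain, n = 0, 50000
for _ in range(n):
    # Resample Cloudy from P(Cloudy | its Markov blanket), here just Rain.
    p_c_true  = P_CLOUDY * (P_RAIN[True] if rain else 1 - P_RAIN[True])
    p_c_false = (1 - P_CLOUDY) * (P_RAIN[False] if rain else 1 - P_RAIN[False])
    cloudy = bernoulli(p_c_true, p_c_false)
    # Resample Rain from P(Rain | Cloudy, WetGrass=true).
    p_r_true  = P_RAIN[cloudy] * P_WET[True]
    p_r_false = (1 - P_RAIN[cloudy]) * P_WET[False]
    rain = bernoulli(p_r_true, p_r_false)
    count_rain += rain
print(count_rain / n)                 # long-run fraction ≈ P(Rain | WetGrass=true) = 0.9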
Summary
Bayesian networks provide a natural representation for (causally induced) conditional independence.
Topology + CPTs = compact representation of the joint distribution.
Generally easy for domain experts to construct.
Exact inference by variable elimination: polytime on polytrees, NP-hard on general graphs; space can be exponential as well.
Sampling approaches can help, as they only do approximate inference.
Take my Graphical Models class if you are more interested (much more theoretical depth).