Bayesian networks: Modeling

CS194-10 Fall 2011, Lecture 21

Outline

- Overview of Bayes nets
- Syntax and semantics
- Examples
- Compact conditional distributions

Learning with complex probability models

Learning cannot succeed without imposing some prior structure on the hypothesis space (by constraint or by preference).

Generative models P(X | θ) support MLE, MAP, and Bayesian learning for domains that (approximately) reflect the assumptions underlying the model:
- Naive Bayes: conditional independence of attributes given the class value
- Mixture models: the domain has a flat, discrete category structure
- All i.i.d. models: the model doesn't change over time
- Etc.

We would like to express arbitrarily complex and flexible prior knowledge:
- Some attributes depend on others
- Categories have hierarchical structure; objects may be mixtures of several categories
- Observations at time t may depend on earlier observations
- Etc.

Bayesian networks

A simple, graphical notation for conditional independence assertions among a predefined set of random variables X_j, j = 1, ..., D, and hence for compact specification of arbitrary joint distributions.

Syntax:
- a set of nodes, one per variable
- a directed, acyclic graph (link = "directly influences")
- a set of parameters for each node given its parents: θ(X_j | Parents(X_j))

In the simplest case, the parameters consist of a conditional probability table (CPT) giving the distribution over X_j for each combination of parent values.

Example

Topology of the network encodes conditional independence assertions:

[Network: Cavity → Toothache, Cavity → Catch; Weather is a disconnected node]

Weather is independent of the other variables. Toothache and Catch are conditionally independent given Cavity.

Example

I'm at work; neighbor John calls to say my alarm is ringing, but neighbor Mary doesn't call. Sometimes the alarm is set off by minor earthquakes. Is there a burglar?

Variables: Burglary, Earthquake, Alarm, JohnCalls, MaryCalls

Network topology reflects causal knowledge:
- A burglar can set the alarm off
- An earthquake can set the alarm off
- The alarm can cause Mary to call
- The alarm can cause John to call

Example contd.

[Network: Burglary → Alarm ← Earthquake; Alarm → JohnCalls, Alarm → MaryCalls]

P(B) = 0.001    P(E) = 0.002

B  E  | P(A | B, E)
T  T  | 0.95
T  F  | 0.94
F  T  | 0.29
F  F  | 0.001

A | P(J | A)        A | P(M | A)
T | 0.90            T | 0.70
F | 0.05            F | 0.01
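
To make the representation concrete, here is a minimal sketch of how this network could be encoded in Python. The dict-of-CPTs layout and the name burglary_net are illustrative assumptions, not part of the slides; each node maps to its parent list and a table giving P(node = true) for each combination of parent values.

```python
# A minimal sketch (assumed encoding, not from the slides): each node maps
# to (parent list, CPT), where the CPT maps a tuple of parent values to
# P(node = True | parent values).
burglary_net = {
    "B": ([], {(): 0.001}),
    "E": ([], {(): 0.002}),
    "A": (["B", "E"], {(True, True): 0.95, (True, False): 0.94,
                       (False, True): 0.29, (False, False): 0.001}),
    "J": (["A"], {(True,): 0.90, (False,): 0.05}),
    "M": (["A"], {(True,): 0.70, (False,): 0.01}),
}
```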

Compactness

A CPT for Boolean X_j with L Boolean parents has 2^L rows for the combinations of parent values. Each row requires one parameter p for X_j = true (the parameter for X_j = false is just 1 − p).

If each variable has no more than L parents, the complete network requires O(D 2^L) parameters, i.e., grows linearly with D, vs. O(2^D) for the full joint distribution.

For the burglary net, 1 + 1 + 4 + 2 + 2 = 10 parameters (vs. 2^5 − 1 = 31).
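
The parameter count can be checked mechanically. A small sketch, assuming the parent lists of the burglary net (the `parents` dict is a hypothetical helper, not from the slides):

```python
# Each Boolean node with L parents contributes 2**L free parameters.
parents = {"B": [], "E": [], "A": ["B", "E"], "J": ["A"], "M": ["A"]}
n_net = sum(2 ** len(ps) for ps in parents.values())  # 1 + 1 + 4 + 2 + 2 = 10
n_joint = 2 ** len(parents) - 1                       # 2**5 - 1 = 31
print(n_net, n_joint)                                 # 10 31
```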

Global semantics

Global semantics defines the full joint distribution as the product of the local conditional distributions:

P(x_1, ..., x_D) = ∏_{j=1}^{D} θ(x_j | parents(x_j))

e.g., P(j ∧ m ∧ a ∧ ¬b ∧ ¬e)
  = θ(j | a) θ(m | a) θ(a | ¬b, ¬e) θ(¬b) θ(¬e)
  = 0.9 × 0.7 × 0.001 × 0.999 × 0.998
  ≈ 0.00063

Theorem: θ(X_j | Parents(X_j)) = P(X_j | Parents(X_j))
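
A minimal sketch of the global semantics in code, reusing the hypothetical burglary_net dict from the earlier example:

```python
def joint_prob(net, assignment):
    """Global semantics: multiply theta(x_j | parents(x_j)) over all nodes.
    `net` follows the burglary_net sketch above; `assignment` maps every
    variable to True/False."""
    p = 1.0
    for var, (parents, cpt) in net.items():
        p_true = cpt[tuple(assignment[u] for u in parents)]
        p *= p_true if assignment[var] else 1.0 - p_true
    return p

# P(j ^ m ^ a ^ ~b ^ ~e) = 0.9 * 0.7 * 0.001 * 0.999 * 0.998 ~ 0.00063
print(joint_prob(burglary_net, {"B": False, "E": False,
                                "A": True, "J": True, "M": True}))
```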

Local semantics

Local semantics: each node is conditionally independent of its nondescendants given its parents.

[Figure: node X with parents U_1, ..., U_m, children Y_1, ..., Y_n, and the children's other parents Z_1j, ..., Z_nj]

Theorem: local semantics ⇔ global semantics

Markov blanket

Each node is conditionally independent of all others given its Markov blanket: parents + children + children's parents.

[Figure: the Markov blanket of X comprises U_1, ..., U_m, Y_1, ..., Y_n, and Z_1j, ..., Z_nj]
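
A short illustrative helper (assumed, not from the slides) that reads the Markov blanket off the parent lists:

```python
def markov_blanket(x, parents):
    """Markov blanket of x: parents, children, and children's other parents.
    `parents` maps each node to its parent list."""
    children = [v for v, ps in parents.items() if x in ps]
    blanket = set(parents[x]) | set(children)
    for c in children:
        blanket |= set(parents[c])
    blanket.discard(x)
    return blanket

parents = {"B": [], "E": [], "A": ["B", "E"], "J": ["A"], "M": ["A"]}
print(markov_blanket("A", parents))  # {'B', 'E', 'J', 'M'}
```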

Constructing Bayesian networks

Need a method such that a series of locally testable assertions of conditional independence guarantees the required global semantics (a code sketch follows after this slide):

1. Choose an ordering of variables X_1, ..., X_D
2. For j = 1 to D:
   - add X_j to the network
   - select parents from X_1, ..., X_{j−1} such that
     P(X_j | Parents(X_j)) = P(X_j | X_1, ..., X_{j−1}),
     i.e., X_j is conditionally independent of its other predecessors given its parents

This choice of parents guarantees the global semantics:

P(X_1, ..., X_D) = ∏_{j=1}^{D} P(X_j | X_1, ..., X_{j−1})   (chain rule)
                 = ∏_{j=1}^{D} P(X_j | Parents(X_j))        (by construction)
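
A sketch of this construction loop, assuming a hypothetical oracle same_conditional that tests whether a candidate parent set yields the same conditional distribution as conditioning on all predecessors (in practice such a test would be estimated from data or elicited from an expert):

```python
from itertools import combinations

def build_network(order, same_conditional):
    """For each variable in the chosen ordering, pick a smallest set of
    predecessors that works as its parent set.  `same_conditional(x, cand,
    preds)` is a hypothetical oracle returning True iff
    P(x | cand) = P(x | preds)."""
    parents = {}
    for j, x in enumerate(order):
        preds = order[:j]
        for k in range(len(preds) + 1):  # try smaller parent sets first
            cand = next((list(s) for s in combinations(preds, k)
                         if same_conditional(x, list(s), preds)), None)
            if cand is not None:
                parents[x] = cand
                break
    return parents
```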

Example: Car diagnosis

Initial evidence: car won't start.
Testable variables (green), "broken, so fix it" variables (orange); hidden variables (gray) ensure sparse structure and reduce parameters.

[Network nodes: battery age, alternator broken, fanbelt broken, battery dead, no charging, battery meter, battery flat, no oil, no gas, fuel line blocked, starter broken, lights, oil light, gas gauge, dipstick, car won't start]

Example: Car insurance

[Network nodes: Age, GoodStudent, RiskAversion, SeniorTrain, SocioEcon, Mileage, VehicleYear, ExtraCar, DrivingSkill, MakeModel, DrivingHist, Antilock, DrivQuality, Airbag, CarValue, HomeBase, AntiTheft, Ruggedness, Accident, OwnDamage, Theft, Cushioning, OtherCost, OwnCost, MedicalCost, LiabilityCost, PropertyCost]

Compact conditional distributions

A CPT grows exponentially with the number of parents, and becomes infinite with a continuous-valued parent or child.

Solution: canonical distributions that are defined compactly.

Deterministic nodes are the simplest case: X = f(Parents(X)) for some function f.
E.g., Boolean functions: NorthAmerican ⇔ Canadian ∨ US ∨ Mexican
E.g., numerical relationships among continuous variables:
  ∂LakeLevel/∂t = inflow + precipitation − outflow − evaporation
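
For instance, the NorthAmerican node above needs no probabilistic parameters at all; a trivial sketch:

```python
def north_american(canadian, us, mexican):
    # Deterministic node: the child's value is a fixed Boolean function
    # of its parents, so the "CPT" has no free parameters.
    return canadian or us or mexican
```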

Compact conditional distributions contd.

Noisy-OR distributions model multiple noninteracting causes:
1) Parents U_1, ..., U_L include all causes (can add a leak node)
2) Independent failure probability q_l for each cause alone

P(X | U_1, ..., U_M true, U_{M+1}, ..., U_L false) = 1 − ∏_{l=1}^{M} q_l

Cold  Flu  Malaria | P(Fever) | P(¬Fever)
 F     F     F     |  0.0     |  1.0
 F     F     T     |  0.9     |  0.1
 F     T     F     |  0.8     |  0.2
 F     T     T     |  0.98    |  0.02 = 0.2 × 0.1
 T     F     F     |  0.4     |  0.6
 T     F     T     |  0.94    |  0.06 = 0.6 × 0.1
 T     T     F     |  0.88    |  0.12 = 0.6 × 0.2
 T     T     T     |  0.988   |  0.012 = 0.6 × 0.2 × 0.1

The number of parameters is linear in the number of parents.
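
A minimal noisy-OR sketch; the function name and the cause-to-q_l dict layout are illustrative assumptions, with the failure probabilities taken from the fever table above:

```python
def noisy_or(q, active_causes):
    """Noisy-OR: P(X = true) = 1 - product of failure probabilities q_l
    over the causes that are currently true.  `q` maps cause -> q_l."""
    p_all_fail = 1.0
    for cause in active_causes:
        p_all_fail *= q[cause]
    return 1.0 - p_all_fail

q = {"Cold": 0.6, "Flu": 0.2, "Malaria": 0.1}  # failure probs from the table
print(noisy_or(q, {"Cold", "Flu"}))            # 1 - 0.6*0.2 = 0.88
```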

Hybrid (discrete+continuous) networks

Discrete variables (Subsidy? and Buys?); continuous variables (Harvest and Cost).

[Network: Subsidy? → Cost ← Harvest; Cost → Buys?]

Option 1: discretization; possibly large errors and large CPTs
Option 2: finitely parameterized canonical families
1) Continuous variable with discrete+continuous parents (e.g., Cost)
2) Discrete variable with continuous parents (e.g., Buys?)

Continuous child variables

Need one conditional density function for the child variable given continuous parents, for each possible assignment to the discrete parents.

Most common is the linear Gaussian model, e.g.:

P(Cost = c | Harvest = h, Subsidy? = true)
  = N(a_t h + b_t, σ_t²)(c)
  = (1 / (σ_t √(2π))) exp(−(1/2) ((c − (a_t h + b_t)) / σ_t)²)

Mean Cost varies linearly with Harvest; the variance is fixed. Linear variation is unreasonable over the full range but works OK if the likely range of Harvest is narrow.
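
A direct transcription of this density into Python (a sketch; the parameters a, b, sigma stand for the per-assignment values a_t, b_t, σ_t):

```python
import math

def linear_gaussian_density(c, h, a, b, sigma):
    """Density of Cost = c given Harvest = h under N(a*h + b, sigma^2)."""
    mu = a * h + b
    z = (c - mu) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2.0 * math.pi))
```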

Continuous child variables contd.

[Figure: surface plot of P(c | h, subsidy) against Cost c and Harvest h]

An all-continuous network with LG distributions has a full joint distribution that is a multivariate Gaussian.

A discrete+continuous LG network is a conditional Gaussian network, i.e., a multivariate Gaussian over all continuous variables for each combination of discrete variable values.

Discrete variable w/ continuous parents

Probability of Buys? given Cost should be a "soft" threshold:

[Figure: P(Buys? = true | Cost = c) as a soft threshold in c]

The probit distribution uses the integral of the Gaussian:

Φ(x) = ∫_{−∞}^{x} N(0, 1)(t) dt

P(Buys? = true | Cost = c) = Φ((−c + µ) / σ)

Why the probit?

1. It's sort of the right shape.
2. It can be viewed as a hard threshold whose location is subject to noise:

[Figure: Cost plus Gaussian Noise feeds a hard threshold that determines Buys?]

Discrete variable contd.

The sigmoid (or logit) distribution is also used in neural networks:

P(Buys? = true | Cost = c) = 1 / (1 + exp(−2 (−c + µ) / σ))

The sigmoid has a similar shape to the probit but much longer tails:

[Figure: logit vs. probit curves for P(buys | c) against Cost c]
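
Both soft thresholds are one-liners; a sketch using only Python's standard library, with µ = 6, σ = 1 chosen purely for illustration:

```python
import math

def probit_buys(c, mu, sigma):
    # Phi((-c + mu) / sigma), with the Gaussian CDF Phi written via erf:
    # Phi(x) = 0.5 * (1 + erf(x / sqrt(2)))
    return 0.5 * (1.0 + math.erf((mu - c) / (sigma * math.sqrt(2.0))))

def logit_buys(c, mu, sigma):
    # 1 / (1 + exp(-2(-c + mu)/sigma)): similar shape, longer tails
    return 1.0 / (1.0 + math.exp(-2.0 * (mu - c) / sigma))

for c in (4.0, 6.0, 8.0):
    print(c, probit_buys(c, 6.0, 1.0), logit_buys(c, 6.0, 1.0))
```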

Summary (representation)

- Bayes nets provide a natural representation for (causally induced) conditional independence
- Topology + CPTs = compact representation of the joint distribution ⇒ fast learning from few examples
- Generally easy for (non)experts to construct
- Canonical distributions (e.g., noisy-OR, linear Gaussian) give a compact representation of CPTs ⇒ faster learning from fewer examples