CS 2750: Machine Learning. Bayesian Networks. Prof. Adriana Kovashka, University of Pittsburgh. March 14, 2016


Plan for today and next week
Today and next time: Bayesian networks (Bishop Sec. 8.1); conditional independence (Bishop Sec. 8.2).
Next week: Markov random fields (Bishop Sec. 8.3.1-2); hidden Markov models (Bishop Sec. 13.1-2); expectation maximization (Bishop Ch. 9).

Graphical Models
If no assumption of independence is made, then an exponential number of parameters must be estimated for sound probabilistic inference, and no realistic amount of training data is sufficient to estimate so many parameters. If a blanket assumption of conditional independence is made, efficient training and inference are possible, but such a strong assumption is rarely warranted. Graphical models use directed or undirected graphs over a set of random variables to explicitly specify variable dependencies, allowing less restrictive independence assumptions while limiting the number of parameters that must be estimated.
Bayesian networks: directed acyclic graphs indicate causal structure.
Markov networks: undirected graphs capture general dependencies.
Slide credit: Ray Mooney

Learning Graphical Models
Structure learning: learn the graph structure of the network.
Parameter learning: learn the real-valued parameters of the network (CPTs for Bayes nets, potential functions for Markov nets).
Slide credit: Ray Mooney

Parameter Learning
If values for all variables are observed during training, the parameters can be estimated directly from frequency counts over the training data. If there are hidden variables, some form of gradient descent or Expectation Maximization (EM) must be used to estimate the distributions involving the hidden variables.
Adapted from Ray Mooney
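A minimal sketch of the fully observed case (hypothetical variable names, with optional add-alpha smoothing): estimate a boolean CPT such as P(A | B, E) by counting how often the child is true under each parent configuration.

```python
from collections import Counter

def estimate_cpt(data, child, parents, alpha=1.0):
    """Estimate P(child=True | parents) from fully observed boolean records by counting.
    `data` is a list of dicts, e.g. {"B": False, "E": False, "A": True}.
    `alpha` is a Laplace smoothing pseudo-count."""
    joint, parent_totals = Counter(), Counter()
    for record in data:
        pa = tuple(record[p] for p in parents)
        parent_totals[pa] += 1
        if record[child]:
            joint[pa] += 1
    # add-alpha smoothing over the two possible child values
    return {pa: (joint[pa] + alpha) / (parent_totals[pa] + 2 * alpha)
            for pa in parent_totals}

# Toy usage: estimate P(Alarm | Burglary, Earthquake) from three records
toy = [{"B": False, "E": False, "A": False},
       {"B": True,  "E": False, "A": True},
       {"B": False, "E": False, "A": False}]
print(estimate_cpt(toy, child="A", parents=("B", "E")))
```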

Bayesian Networks Directed Acyclic Graph (DAG)

Bayesian Networks General Factorization
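The general factorization this slide refers to writes the joint distribution as a product of one conditional per node, each conditioned on that node's parents pa_k in the DAG:

```latex
p(x_1, \dots, x_K) = \prod_{k=1}^{K} p(x_k \mid \mathrm{pa}_k)
```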

Bayesian Networks
Directed Acyclic Graph (DAG). Nodes are random variables; edges indicate causal influences. (Example network: Burglary, Earthquake, Alarm, JohnCalls, MaryCalls.)
Slide credit: Ray Mooney

Conditional Probability Tables
Each node has a conditional probability table (CPT) that gives the probability of each of its values given every possible combination of values for its parents (the conditioning case). Roots (sources) of the DAG, which have no parents, are given prior probabilities.
P(B) = .001 (Burglary)      P(E) = .002 (Earthquake)
B E  P(A)
T T  .95
T F  .94
F T  .29
F F  .001
A  P(J)        A  P(M)
T  .90         T  .70
F  .05         F  .01
Slide credit: Ray Mooney

CPT Comments
The probability of false is not given since each row must sum to 1. The example requires 10 parameters rather than 2^5 - 1 = 31 for specifying the full joint distribution. The number of parameters in the CPT for a node is exponential in the number of parents. Slide credit: Ray Mooney
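To see where the count of 10 comes from, add the independent entries of each CPT in the burglary network:

```latex
\underbrace{1}_{P(B)} + \underbrace{1}_{P(E)} + \underbrace{4}_{P(A \mid B, E)}
+ \underbrace{2}_{P(J \mid A)} + \underbrace{2}_{P(M \mid A)} = 10
\qquad \text{vs.} \qquad 2^5 - 1 = 31 .
```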

Bayes Net Inference
Given known values for some evidence variables, determine the posterior probability of some query variables. Example: given that John calls, what is the probability that there is a Burglary? John calls 90% of the time there is an Alarm, and the Alarm detects 94% of Burglaries, so people generally think the probability should be fairly high. However, this ignores the prior probability of John calling.
Slide credit: Ray Mooney

Bayes Net Inference
Example: given that John calls, what is the probability that there is a Burglary? John also calls 5% of the time when there is no Alarm. So over 1,000 days we expect 1 Burglary, and John will probably call. However, he will also call with a false report about 50 times on average, so the call is roughly 50 times more likely to be a false report: P(Burglary | JohnCalls) ~ 0.02. (Relevant CPT entries: P(B) = .001; P(J | A): A = T gives .90, A = F gives .05.)
Slide credit: Ray Mooney
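A minimal sketch of answering this query by exact enumeration over the network's factorization, using the CPT numbers from the earlier slide (an illustrative script, not the course's code); it gives P(Burglary | JohnCalls) of roughly 0.016, consistent with the rough 0.02 above.

```python
from itertools import product

# CPTs from the slides: P(B), P(E), P(A|B,E), P(J|A), P(M|A)
P_B = {True: 0.001, False: 0.999}
P_E = {True: 0.002, False: 0.998}
P_A = {(True, True): 0.95, (True, False): 0.94,
       (False, True): 0.29, (False, False): 0.001}
P_J = {True: 0.90, False: 0.05}   # P(JohnCalls=True | Alarm)
P_M = {True: 0.70, False: 0.01}   # P(MaryCalls=True | Alarm)

def joint(b, e, a, j, m):
    """Joint probability via the Bayes net factorization."""
    pa = P_A[(b, e)]
    return (P_B[b] * P_E[e]
            * (pa if a else 1 - pa)
            * (P_J[a] if j else 1 - P_J[a])
            * (P_M[a] if m else 1 - P_M[a]))

# P(Burglary=True | JohnCalls=True): sum out Earthquake, Alarm, MaryCalls
num = sum(joint(True, e, a, True, m) for e, a, m in product([True, False], repeat=3))
den = sum(joint(b, e, a, True, m) for b, e, a, m in product([True, False], repeat=4))
print(num / den)   # ~0.016
```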

Bayesian Curve Fitting (1) Polynomial

Bayesian Curve Fitting (2) Plate

Bayesian Curve Fitting (3) Input variables and explicit hyperparameters

Bayesian Curve Fitting Learning Condition on data

Bayesian Curve Fitting: Prediction. Predictive distribution (equation shown on the slide; see the note below).
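For reference, the predictive distribution on this slide integrates the polynomial weights w out against their posterior (written here in Bishop's notation and omitting the hyperparameters, so treat the exact form as an assumption):

```latex
p(\hat{t} \mid \hat{x}, \mathbf{x}, \mathbf{t})
  = \int p(\hat{t} \mid \hat{x}, w)\, p(w \mid \mathbf{x}, \mathbf{t})\, \mathrm{d}w
```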

Generative vs. Discriminative Models
Generative approach: model the class-conditional density p(x | y) and the prior p(y) (or the joint p(x, y)), then use Bayes' theorem to obtain p(y | x). Discriminative approach: model p(y | x) directly.

Generative Models Causal process for generating images

Discrete Variables (1) General joint distribution (two K-state variables): K^2 - 1 parameters. Independent joint distribution: 2(K - 1) parameters.

Discrete Variables (2) General joint distribution over M variables: K^M - 1 parameters. M-node Markov chain: K - 1 + (M - 1)K(K - 1) parameters.
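As a concrete check (my own numbers, K = 2 states and M = 5 variables):

```latex
K^M - 1 = 2^5 - 1 = 31
\qquad \text{vs.} \qquad
K - 1 + (M - 1) K (K - 1) = 1 + 4 \cdot 2 \cdot 1 = 9 .
```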

Discrete Variables: Bayesian Parameters (1)

Discrete Variables: Bayesian Parameters (2) Shared prior

Conditional Independence
a is independent of b given c: p(a | b, c) = p(a | c). Equivalently, p(a, b | c) = p(a | c) p(b | c). Notation: a ⊥ b | c.

Conditional Independence: Example 1. Node c is tail-to-tail for the path from a to b: with c unobserved, the path is unblocked and makes a and b dependent.

Conditional Independence: Example 1. Node c is tail-to-tail for the path from a to b: observing c blocks the path, thus making a and b conditionally independent.

Conditional Independence: Example 2. Node c is head-to-tail for the path from a to b: with c unobserved, the path is unblocked and makes a and b dependent.

Conditional Independence: Example 2. Node c is head-to-tail for the path from a to b: observing c blocks the path, thus making a and b conditionally independent.

Conditional Independence: Example 3. Node c is head-to-head for the path from a to b: with c unobserved, it blocks the path, thus making a and b (marginally) independent. Note: this is the opposite of Example 1 with c unobserved.

Conditional Independence: Example 3. Node c is head-to-head for the path from a to b: observing c unblocks the path, thus making a and b conditionally dependent. Note: this is the opposite of Example 1 with c observed.
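A one-line check of the head-to-head case, using the factorization this graph implies, p(a, b, c) = p(a) p(b) p(c | a, b): marginalizing out the unobserved c gives

```latex
p(a, b) = \sum_{c} p(a)\, p(b)\, p(c \mid a, b) = p(a)\, p(b),
```

so a and b are marginally independent, whereas conditioning on c gives p(a, b | c) proportional to p(a) p(b) p(c | a, b), which in general does not factor into a function of a times a function of b.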

Am I out of fuel?
B = Battery (0 = flat, 1 = fully charged)
F = Fuel Tank (0 = empty, 1 = full)
G = Fuel Gauge Reading (0 = empty, 1 = full)
(The prior and conditional probabilities, and the resulting p(G), are given as equations on the slide.)

Am I out of fuel? The probability of an empty tank is increased by observing G = 0.

Am I out of fuel? The probability of an empty tank is reduced by observing B = 0. This is referred to as explaining away.
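Assuming the standard numbers Bishop uses for this example (p(B=1) = p(F=1) = 0.9 and the usual conditional table for G), which did not survive the transcription and are therefore an assumption here, the quantities behind these two slides are:

```latex
p(F=0) = 0.1, \qquad
p(F=0 \mid G=0) \approx 0.257, \qquad
p(F=0 \mid G=0, B=0) \approx 0.111 .
```

Observing the flat battery explains away the empty gauge reading: the posterior probability of an empty tank drops back toward (but stays above) its prior.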

D-separation
A, B, and C are non-intersecting subsets of nodes in a directed graph. A path from A to B is blocked if it contains a node such that either (a) the arrows on the path meet either head-to-tail or tail-to-tail at the node, and the node is in the set C, or (b) the arrows meet head-to-head at the node, and neither the node nor any of its descendants is in the set C. If all paths from A to B are blocked, A is said to be d-separated from B by C. If A is d-separated from B by C, the joint distribution over all variables in the graph satisfies A ⊥ B | C.
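A small brute-force sketch of these two blocking rules (my own illustration, fine for the toy graphs in this lecture but not tuned for large networks): it enumerates the undirected paths between two nodes and checks each against rules (a) and (b).

```python
def descendants(edges, node):
    """All descendants of `node` in a DAG given as a set of (parent, child) edges."""
    kids = {c for p, c in edges if p == node}
    out = set(kids)
    for k in kids:
        out |= descendants(edges, k)
    return out

def undirected_paths(edges, start, end, path=None):
    """Yield simple paths from start to end, ignoring edge direction."""
    path = [start] if path is None else path
    if start == end:
        yield path
        return
    neighbors = {c for p, c in edges if p == start} | {p for p, c in edges if c == start}
    for n in neighbors:
        if n not in path:
            yield from undirected_paths(edges, n, end, path + [n])

def path_blocked(edges, path, observed):
    """True if some intermediate node blocks the path under rule (a) or (b)."""
    for i in range(1, len(path) - 1):
        prev, node, nxt = path[i - 1], path[i], path[i + 1]
        head_to_head = (prev, node) in edges and (nxt, node) in edges
        if head_to_head:
            # rule (b): a head-to-head node blocks unless it or a descendant is observed
            if node not in observed and not (descendants(edges, node) & observed):
                return True
        elif node in observed:
            # rule (a): an observed head-to-tail or tail-to-tail node blocks
            return True
    return False

def d_separated(edges, a, b, observed):
    """Single node a is d-separated from b by `observed` iff every path is blocked."""
    return all(path_blocked(edges, p, observed) for p in undirected_paths(edges, a, b))

# Burglary network from the earlier slides
edges = {("Burglary", "Alarm"), ("Earthquake", "Alarm"),
         ("Alarm", "JohnCalls"), ("Alarm", "MaryCalls")}
print(d_separated(edges, "Burglary", "Earthquake", set()))       # True  (collider unobserved)
print(d_separated(edges, "Burglary", "Earthquake", {"Alarm"}))   # False (collider observed)
print(d_separated(edges, "JohnCalls", "MaryCalls", {"Alarm"}))   # True  (fork observed)
```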

D-separation: Example

D-separation: I.I.D. Data. The x_i are conditionally independent given the shared parameter. Are the x_i marginally independent?

Naïve Bayes. Conditioned on the class z, the distributions of the input variables x_1, ..., x_D are independent. Are x_1, ..., x_D marginally independent?

The Markov Blanket. Factors independent of x_i cancel between numerator and denominator. The parents, children, and co-parents of x_i form its Markov blanket, the minimal set of nodes that isolates x_i from the rest of the graph.
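The numerator and denominator referred to here come from writing the conditional of x_i given all the other variables and substituting the factorization (a standard derivation, sketched in Bishop's notation):

```latex
p(x_i \mid x_{\setminus i})
  = \frac{p(x_1, \dots, x_M)}{\int p(x_1, \dots, x_M)\, \mathrm{d}x_i}
  = \frac{\prod_k p(x_k \mid \mathrm{pa}_k)}{\int \prod_k p(x_k \mid \mathrm{pa}_k)\, \mathrm{d}x_i}
```

Every factor that does not mention x_i cancels; what remains involves only x_i's parents, its children, and its children's other parents (the co-parents).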

Bayes Nets vs. Markov Nets Bayes nets represent a subclass of joint distributions that capture non-cyclic causal dependencies between variables. A Markov net can represent any joint distribution. Slide credit: Ray Mooney

Markov Chains In general: First-order Markov chain:

Markov Chains: Second-order Markov chain:
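The factorizations that appear as equations on these two slides are, in Bishop's notation:

```latex
p(x_1, \dots, x_N) = p(x_1) \prod_{n=2}^{N} p(x_n \mid x_1, \dots, x_{n-1})                  % general
p(x_1, \dots, x_N) = p(x_1) \prod_{n=2}^{N} p(x_n \mid x_{n-1})                              % first-order
p(x_1, \dots, x_N) = p(x_1)\, p(x_2 \mid x_1) \prod_{n=3}^{N} p(x_n \mid x_{n-1}, x_{n-2})   % second-order
```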

Markov Random Fields Undirected graph over a set of random variables, where an edge represents a dependency. The Markov blanket of a node, X, in a Markov Net is the set of its neighbors in the graph (nodes that have an edge connecting to X). Every node in a Markov Net is conditionally independent of every other node given its Markov blanket. Slide credit: Ray Mooney

Markov Random Fields Markov Blanket A node is conditionally independent of all other nodes conditioned only on the neighboring nodes.

Cliques and Maximal Cliques Clique Maximal Clique

Distribution for a Markov Network
The distribution of a Markov net is most compactly described in terms of a set of potential functions, φ_k, one for each clique k in the graph. For each joint assignment of values to the variables in clique k, φ_k assigns a non-negative real value that represents the compatibility of these values. The joint distribution of a Markov net is then defined by:
P(x_1, x_2, ..., x_n) = (1/Z) ∏_k φ_k(x_{k})
where x_{k} represents the joint assignment of the variables in clique k, and Z is a normalizing constant that makes the joint distribution sum to 1:
Z = Σ_x ∏_k φ_k(x_{k})
Slide credit: Ray Mooney
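A tiny illustration of this definition (my own two-variable example, not from the slides): one clique over two binary variables with a potential that rewards agreement, normalized by Z.

```python
from itertools import product

# One clique {x1, x2} with a potential that rewards equal values (an Ising-style choice)
def phi(x1, x2):
    return 2.0 if x1 == x2 else 1.0

values = [0, 1]

# Normalizing constant: sum the product of clique potentials over all joint assignments
Z = sum(phi(x1, x2) for x1, x2 in product(values, repeat=2))   # = 6.0

def p(x1, x2):
    """Joint distribution P(x1, x2) = (1/Z) * phi(x1, x2)."""
    return phi(x1, x2) / Z

print(p(0, 0), p(0, 1))   # 1/3 and 1/6; the four probabilities sum to 1
```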

Illustration: Image De-Noising (1) Original Image Noisy Image

Illustration: Image De-Noising (2). y_i in {+1, -1}: pixel labels in the observed noisy image; x_i in {+1, -1}: pixel labels in the noise-free image; i indexes pixels.

Illustration: Image De-Noising (3) Noisy Image Restored Image (ICM)
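For completeness, a rough sketch of the ICM (iterated conditional modes) update used to produce the restored image, assuming the usual Ising-style energy E(x, y) = h·Σ_i x_i − β·Σ_{i,j} x_i x_j − η·Σ_i x_i y_i from Bishop Sec. 8.3.3 (the slide itself only shows the images, so this energy and the parameter values below are assumptions).

```python
import numpy as np

def icm_denoise(y, h=0.0, beta=1.0, eta=2.1, sweeps=10):
    """Greedy ICM on an Ising-style MRF: y and the returned x are 2-D arrays of +/-1."""
    x = y.copy()
    rows, cols = x.shape
    for _ in range(sweeps):
        for i in range(rows):
            for j in range(cols):
                # sum of the 4-connected neighbours of pixel (i, j)
                nb = sum(x[a, b] for a, b in ((i-1, j), (i+1, j), (i, j-1), (i, j+1))
                         if 0 <= a < rows and 0 <= b < cols)
                # local energy for x_ij = +1 vs. -1, from the assumed energy function;
                # keep whichever value gives the lower energy
                e_plus = h - beta * nb - eta * y[i, j]
                e_minus = -h + beta * nb + eta * y[i, j]
                x[i, j] = 1 if e_plus < e_minus else -1
    return x

# Toy usage: a small +/-1 "image" with about 10% of the pixels flipped
rng = np.random.default_rng(0)
clean = np.ones((8, 8), dtype=int)
noisy = np.where(rng.random((8, 8)) < 0.1, -clean, clean)
print(icm_denoise(noisy))
```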