Artificial Intelligence: Cognitive Agents

Artificial Intelligence: Cognitive Agents. AI, Uncertainty & Bayesian Networks. 2015-03-10 / 03-12. Kim, Byoung-Hee, Biointelligence Laboratory, Seoul National University. http://bi.snu.ac.kr

A Bayesian network is a graphical model for probabilistic relationships among a set of variables.

ACM Turing Award (the "Nobel Prize in Computing"), 2011 winner: Judea Pearl (UCLA), for fundamental contributions to artificial intelligence through the development of a calculus for probabilistic and causal reasoning, including the invention of Bayesian networks. Pearl's accomplishments have redefined the term "thinking machine" over the past 30 years. A BN mimics the neural activities of the human brain, constantly exchanging messages without benefit of a supervisor.

Notes. Related chapters in the textbook (AIMA 3rd ed. by Russell and Norvig): Ch. 13 Quantifying Uncertainty; Ch. 14 Probabilistic Reasoning (14.1-14.2); Ch. 20 Learning Probabilistic Models (20.1-20.2). On the reference "A Tutorial on Learning with Bayesian Networks" by David Heckerman: it is very technical but covers insights and comprehensive background on Bayesian networks; this lecture covers its Introduction section. This lecture is an easier introductory tutorial: both the textbook material and Heckerman's tutorial are fairly mathematical, whereas this lecture covers the basic concepts and tools needed to understand Bayesian networks.

Contents. Bayesian Networks: Introduction (motivating example; decomposing a joint distribution of variables; d-separation; a mini Turing test in causal conversation; correlation & causation). AI & Uncertainty. Bayesian Networks in Detail (d-separation revisited in detail; probability & Bayesian inference; inference in & learning of Bayesian networks; BNs as AI tools and their advantages).

Bayesian Networks: Introduction

Causality, Dependency. From correlation to causality. Qualitative methods: see the slide from K. Mohan & J. Pearl, UAI'12 Tutorial on Graphical Models for Causal Inference. Quantitative methods: Granger causality index; Google Trends (http://www.google.com/trends/); Google's CausalImpact R package for causal inference in time series. Official posting: http://googleopensource.blogspot.kr/2014/09/causalimpact-new-opensource-package.html. Introductory article (in English): https://gigaom.com/2014/09/11/googlehas-open-sourced-a-tool-for-inferring-cause-fromcorrelations/

Slide from K. Mohan & J. Pearl, UAI'12 Tutorial on Graphical Models for Causal Inference. We assume binary states for all the variables, e.g., Season: dry or rainy; Sprinkler: ON or OFF.

Slides from K. Mohan & J. Pearl, UAI'12 Tutorial on Graphical Models for Causal Inference: a sequence of figures developing the sprinkler example as a directed acyclic graph (DAG); the figures themselves are not transcribed here.

Correlation & Causation. Correlation does not imply causation. Example: a chart that correlates the number of pirates with global temperature; the two variables are correlated, but one does not cause the other. Another example: a correlation between ice cream consumption and crime, where the actual cause of both is temperature.

Example: Designing a Bayesian Network. My own design of conditional probability tables for the network SEASON -> SPRINKLER, SEASON -> RAIN, SPRINKLER -> WET, RAIN -> WET, WET -> SLIPPERY:

P(SEASON): DRY 0.6, RAINY 0.4
P(SPRINKLER | SEASON): ON given DRY 0.8, given RAINY 0.1; OFF given DRY 0.2, given RAINY 0.9
P(RAIN | SEASON): YES given DRY 0.3, given RAINY 0.8; NO given DRY 0.7, given RAINY 0.2
P(WET | SPRINKLER, RAIN): YES given (ON, YES) 0.8, (ON, NO) 0.1, (OFF, YES) 0.8, (OFF, NO) 0.1; NO 0.2, 0.9, 0.2, 0.9
P(SLIPPERY | WET): YES given WET=YES 0.8, given WET=NO 0.1; NO 0.2, 0.9
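As a concrete illustration (not part of the original slides), here is a minimal Python sketch that encodes these hand-designed CPTs as dictionaries and evaluates the joint probability of one complete assignment by multiplying the local conditional probabilities; the variable and table names are chosen for illustration only.

```python
# Minimal sketch: the hand-designed CPTs above, encoded as Python dicts.
P_season = {"DRY": 0.6, "RAINY": 0.4}
P_sprinkler = {("ON", "DRY"): 0.8, ("ON", "RAINY"): 0.1,
               ("OFF", "DRY"): 0.2, ("OFF", "RAINY"): 0.9}
P_rain = {("YES", "DRY"): 0.3, ("YES", "RAINY"): 0.8,
          ("NO", "DRY"): 0.7, ("NO", "RAINY"): 0.2}
P_wet = {("YES", "ON", "YES"): 0.8, ("YES", "ON", "NO"): 0.1,
         ("YES", "OFF", "YES"): 0.8, ("YES", "OFF", "NO"): 0.1,
         ("NO", "ON", "YES"): 0.2, ("NO", "ON", "NO"): 0.9,
         ("NO", "OFF", "YES"): 0.2, ("NO", "OFF", "NO"): 0.9}
P_slippery = {("YES", "YES"): 0.8, ("YES", "NO"): 0.1,
              ("NO", "YES"): 0.2, ("NO", "NO"): 0.9}

def joint(season, sprinkler, rain, wet, slippery):
    """P(season, sprinkler, rain, wet, slippery) as a product of the local CPTs."""
    return (P_season[season]
            * P_sprinkler[(sprinkler, season)]
            * P_rain[(rain, season)]
            * P_wet[(wet, sprinkler, rain)]
            * P_slippery[(slippery, wet)])

# Example: rainy season, sprinkler off, rain, a wet and slippery sidewalk.
print(joint("RAINY", "OFF", "YES", "YES", "YES"))  # 0.4*0.9*0.8*0.8*0.8 = 0.18432
```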

Example: Designing a Bayesian Network. Tool: GeNIe (https://dslpitt.org/genie/). GeNIe (Graphical Network Interface) is the graphical interface to SMILE, a fully portable Bayesian inference engine in C++. Inference is then performed on the designed Bayesian network.

AI & Uncertainty

Probability. Probability plays a central role in modern pattern recognition; it is the main tool for dealing with uncertainty. All of probabilistic inference and learning amounts to repeated application of the sum rule and the product rule. Random variables: variables + probability.

Artificial Intelligence (AI). The objective of AI is to build intelligent computers. We want intelligent, adaptive, robust behavior. Often hand programming is not possible. Solution? Get the computer to program itself, by showing it examples of the behavior we want. This is the learning approach to AI.

(Traditional) AI. Knowledge & reasoning: work with facts/assertions; develop rules of logical inference. Planning: work with applicability/effects of actions; develop searches for actions which achieve goals/avert disasters. Expert systems: develop by hand a set of rules for examining inputs, updating internal states and generating outputs.

Probabilistic AI: emphasis on noisy measurements, approximation in hard cases, learning, and algorithmic issues. The power of learning: automatic system building; old expert systems needed hand coding of knowledge and of output semantics, whereas learning automatically constructs rules and supports all types of queries. Probabilistic databases: traditional DB technology cannot answer queries about items that were never loaded into the dataset; UAI models are like probabilistic databases.

Uncertainty and Artificial Intelligence (UAI). Probabilistic methods can be used to: make decisions given partial information about the world; account for noisy sensors or actuators; explain phenomena not part of our models; describe inherently stochastic behavior in the world.

Other Names for UAI. Machine learning (ML), data mining, applied statistics, adaptive (stochastic) signal processing, probabilistic planning/reasoning... Some differences: data mining almost always uses large data sets, statistics almost always small ones; data mining, planning, and decision theory often have no internal parameters to be learned; statistics often has no algorithm to run! ML/UAI algorithms are rarely online and rarely scale to huge data (this is changing now).

Learning in AI. Learning is most useful when the structure of the task is not well understood but can be characterized by a dataset with strong statistical regularity. It is also useful in adaptive or dynamic situations where the task (or its parameters) is constantly changing. Currently, these are challenging topics of machine learning and data mining research.

Probabilistic AI. Let inputs = X, correct answers = Y, outputs of our machine = Z. Learning: estimation of p(X, Y). The central object of interest is the joint distribution. The main difficulty is compactly representing it and robustly learning its shape given noisy samples.

Probabilistic Graphical Models (PGMs). Probabilistic graphical models represent large joint distributions compactly using a set of local relationships specified by a graph. Each random variable in our model corresponds to a graph node.

Probabilistic Graphical Models (PGMs). There are useful properties in using probabilistic graphical models: a simple way to visualize the structure of a probabilistic model; insights into the properties of the model; and complex computations (for inference and learning) can be expressed in terms of graphical manipulations of the underlying mathematical expressions.

Directed graph vs. undirected graph. Both are (probabilistic) graphical models: both specify a factorization (how to express the joint distribution) and define a set of conditional independence properties. Bayesian networks (BN, directed): parent-child links, local conditional distributions. Markov random fields (MRF, undirected): maximal cliques, potential functions.

Bayesian Networks in Detail


Designing a Bayesian Network Model. TakeHeart II: a decision support system for clinical cardiovascular risk assessment.

Inference in a Bayesian Network Model. Given an assignment of a subset of variables (evidence) in a BN, estimate the posterior distribution over another subset of unobserved variables of interest. Inference can be viewed as message passing along the network.

Bayesian Networks. The joint distribution defined by a graph is given by the product of the conditional distributions of each node conditioned on its parent nodes:

p(x) = Π_{k=1}^{K} p(x_k | Pa(x_k))

where Pa(x_k) denotes the set of parents of x_k. For example, p(x_1, x_2, ..., x_7) factorizes according to the example DAG. Without a given DAG structure, the usual chain rule can be applied to obtain the joint distribution, but the computational cost is much higher.
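To make the cost remark concrete, here is a small Python sketch (an illustration, not from the slides) that counts the free parameters needed for binary variables under a full joint table versus a factorization with given parent sets; the seven-node parent counts are a hypothetical example in which no node has more than three parents.

```python
# Free parameters for K binary variables:
# full joint table: 2^K - 1; factorized BN: sum over nodes of 2^(#parents).
def full_joint_params(num_vars):
    return 2 ** num_vars - 1

def factorized_params(parent_counts):
    # A binary node with m binary parents needs 2^m free parameters
    # (one P(x_k = 1 | parent configuration) per configuration).
    return sum(2 ** m for m in parent_counts)

# Hypothetical 7-node network: three root nodes, then nodes with 3, 2, 1, 2 parents.
parent_counts = [0, 0, 0, 3, 2, 1, 2]
print(full_joint_params(7))               # 127
print(factorized_params(parent_counts))   # 1+1+1+8+4+2+4 = 21
```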

Bayes Theorem.

p(Y | X) = p(X | Y) p(Y) / p(X)

where p(X | Y) is the likelihood, p(Y) the prior, p(Y | X) the posterior, and the normalizing constant is

p(X) = Σ_Y p(X | Y) p(Y)

In short: posterior ∝ likelihood × prior.
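As a quick numerical illustration (hypothetical numbers, not from the slides): suppose a disease has prior p(D=1) = 0.01 and a test has likelihoods p(T=1 | D=1) = 0.9 and p(T=1 | D=0) = 0.05; the sketch below applies Bayes' theorem to get the posterior p(D=1 | T=1).

```python
# Bayes' theorem on a hypothetical binary disease/test example.
prior_d1 = 0.01            # p(D=1)
lik_t1_given_d1 = 0.90     # p(T=1 | D=1)
lik_t1_given_d0 = 0.05     # p(T=1 | D=0)

# Normalizing constant p(T=1) = sum over D of p(T=1 | D) p(D)
p_t1 = lik_t1_given_d1 * prior_d1 + lik_t1_given_d0 * (1 - prior_d1)

posterior_d1 = lik_t1_given_d1 * prior_d1 / p_t1
print(posterior_d1)  # ~0.154: still fairly low despite a positive test, because the prior is small
```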

Bayes Theorem. Figure from Figure 1 in (Adams et al., 2013), obtained from http://journal.frontiersin.org/article/10.3389/fpsyt.2013.00047/full

Probabilities: Frequentist vs. Bayesian. Frequentist: w is a fixed parameter determined by an estimator. Maximum likelihood: error function = - log p(D | w). Error bars are obtained from the distribution of possible data sets (bootstrap, cross-validation). Bayesian: p(w | D) = p(D | w) p(w) / p(D), where p(D | w) is the likelihood. Here w has a probability distribution expressing the uncertainty in the parameters, and prior knowledge enters through the prior p(w): noninformative (uniform) priors, Laplace correction in estimating priors. Computation: Monte Carlo methods, variational Bayes, EP. (See the article "Where do probabilities come from?" on page 491 of the textbook (Russell and Norvig, 2010) for more discussion.)

Conditional Independence. Conditional independence simplifies both the structure of a model and the computations. An important feature of graphical models is that conditional independence properties of the joint distribution can be read directly from the graph without having to perform any analytical manipulations. The general framework for this is called d-separation.

Three example graphs, 1st case. When none of the variables are observed, node c is tail-to-tail and the path from a to b is unblocked, so a and b are in general dependent. When the variable c is observed, the conditioned node blocks the path from a to b, causing a and b to become (conditionally) independent.

Three example graphs, 2nd case. When none of the variables are observed, node c is head-to-tail and the path from a to b is unblocked. When the variable c is observed, the conditioned node blocks the path from a to b, causing a and b to become (conditionally) independent.

Three example graphs, 3rd case. Node c is head-to-head. When c is unobserved, it blocks the path and the variables a and b are independent. Conditioning on c unblocks the path and renders a and b dependent.

Three example graphs: the fuel gauge example. B = battery, F = fuel, G = electric fuel gauge (a rather unreliable gauge). Checking the fuel gauge (and seeing it read empty) makes an empty tank more likely. Checking the battery as well (and finding it flat) then makes an empty tank less likely than when only the fuel gauge was observed: the flat battery explains away the gauge reading (explaining away).

d-separation. A tail-to-tail or head-to-tail node blocks a path only if it is observed; otherwise the path through it is unblocked. A head-to-head node blocks a path if it is unobserved, but if the node itself and/or at least one of its descendants is observed, the path becomes unblocked. d-separation: all paths between the two sets of nodes are blocked given the conditioning set; in that case the joint distribution satisfies the corresponding conditional independence between the concerned variables.
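The path-blocking rules above have an equivalent formulation via the moralized ancestral graph, which is easy to code. The sketch below (an illustration, not from the slides; the DAG and node names are hypothetical) tests whether two sets of nodes are d-separated given a conditioning set.

```python
# A minimal d-separation test via the moralized ancestral graph.
# The DAG is given as {node: list of parents}.
from collections import deque

def ancestors(dag, nodes):
    """All ancestors of `nodes`, including the nodes themselves."""
    seen, stack = set(nodes), list(nodes)
    while stack:
        for p in dag.get(stack.pop(), []):
            if p not in seen:
                seen.add(p)
                stack.append(p)
    return seen

def d_separated(dag, xs, ys, zs):
    """True iff every path between xs and ys is blocked given zs."""
    keep = ancestors(dag, set(xs) | set(ys) | set(zs))
    # Moralize the ancestral subgraph: connect co-parents, drop edge directions.
    adj = {n: set() for n in keep}
    for child in keep:
        parents = [p for p in dag.get(child, []) if p in keep]
        for p in parents:
            adj[child].add(p)
            adj[p].add(child)
        for i, p in enumerate(parents):        # "marry" the parents
            for q in parents[i + 1:]:
                adj[p].add(q)
                adj[q].add(p)
    # Remove the conditioning set, then look for any remaining path.
    reachable = set(xs) - set(zs)
    frontier = deque(reachable)
    while frontier:
        n = frontier.popleft()
        for m in adj.get(n, ()):
            if m not in zs and m not in reachable:
                reachable.add(m)
                frontier.append(m)
    return not (reachable & set(ys))

# Sprinkler-style DAG from the earlier example.
dag = {"SEASON": [], "SPRINKLER": ["SEASON"], "RAIN": ["SEASON"],
       "WET": ["SPRINKLER", "RAIN"], "SLIPPERY": ["WET"]}
print(d_separated(dag, {"SPRINKLER"}, {"RAIN"}, set()))              # False: path via SEASON is open
print(d_separated(dag, {"SPRINKLER"}, {"RAIN"}, {"SEASON"}))         # True
print(d_separated(dag, {"SPRINKLER"}, {"RAIN"}, {"SEASON", "WET"}))  # False: explaining away via WET
```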

d-separation, examples. (a) a is not independent of b given c: the head-to-head node e is unblocked, because its descendant c is in the conditioning set, and the tail-to-tail node f is unblocked. (b) a is independent of b given f: the head-to-head node e is blocked, and the tail-to-tail node f is blocked.

d-separation. Another example of conditional independence and d-separation: i.i.d. (independent, identically distributed) data. Problem: finding the posterior distribution for the mean of a univariate Gaussian distribution. Every path between the observation nodes is blocked when the mean is given, so the observations D = {x_1, ..., x_N} are independent given the mean. (If the mean is integrated out, the observations are in general no longer independent.)

d-separation: the naïve Bayes model. Key assumption: conditioned on the class z, the distributions of the input variables x_1, ..., x_D are independent. Given inputs {x_1, ..., x_N} with their class labels, we can fit the naïve Bayes model to the training data using maximum likelihood, assuming that the data are drawn independently from the model.

d-separation: Markov blanket (or Markov boundary). When dealing with the conditional distribution of x_i given all the other variables, consider the minimal set of nodes that isolates x_i from the rest of the graph. This set, comprising the parents, children, and co-parents of x_i, is called the Markov blanket.

Probability Distributions. Discrete variables: beta, Bernoulli, binomial; Dirichlet, multinomial. Continuous variables: normal (Gaussian), Student-t. Exponential family & conjugacy: many probability densities on x can be represented in the same form,

p(x | η) = h(x) g(η) exp{ η^T u(x) }

and there are conjugate families of density functions having matching forms: beta & binomial, Dirichlet & multinomial, normal & normal.

Inference in Graphical Models. Inference in graphical models: given evidence (some nodes are clamped to observed values), we wish to compute the posterior distributions of other nodes. Inference algorithms on graphical structures share a main idea: propagation of local messages. Exact inference: sum-product algorithm, max-product algorithm, junction tree algorithm. Approximate inference: loopy belief propagation with a message-passing schedule, variational methods, sampling methods (Monte Carlo methods).
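To make the idea of posterior computation concrete, here is the simplest possible exact inference routine, brute-force enumeration over a tiny hypothetical Rain/Sprinkler/Wet network (a sketch, not from the slides; the CPT numbers are illustrative). Real engines such as SMILE use message passing or junction trees rather than enumeration, which scales exponentially.

```python
from itertools import product

# Tiny hypothetical network: RAIN -> WET <- SPRINKLER, all binary (True/False).
p_rain = {True: 0.2, False: 0.8}
p_sprinkler = {True: 0.3, False: 0.7}
p_wet = {(True, True): 0.99, (True, False): 0.9,   # keyed by (rain, sprinkler)
         (False, True): 0.8, (False, False): 0.05}

def joint(rain, sprinkler, wet):
    p = p_rain[rain] * p_sprinkler[sprinkler]
    p_w = p_wet[(rain, sprinkler)]
    return p * (p_w if wet else 1.0 - p_w)

def posterior_rain_given_wet(wet_value=True):
    """P(RAIN | WET = wet_value) by enumerating all complete assignments."""
    unnorm = {r: 0.0 for r in (True, False)}
    for rain, sprinkler in product((True, False), repeat=2):
        unnorm[rain] += joint(rain, sprinkler, wet_value)
    z = sum(unnorm.values())
    return {r: v / z for r, v in unnorm.items()}

print(posterior_rain_given_wet(True))  # P(rain) rises from the prior 0.2 to about 0.46
```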

Learning Parameters of Bayesian Networks. The parameters are the probabilities in the conditional probability tables (CPTs) for all the variables in the network, e.g., the entries of P(SEASON) and P(RAIN | SEASON) to be filled in. Learning parameters: assuming that the structure is fixed, i.e., designed or learned, we need data (observed instances), and estimation is based on relative frequencies from data plus belief. Example: coin toss, estimating the probability of heads in various ways. (1) The principle of indifference: heads and tails are equally probable, P(heads) = 1/2. (2) If we tossed a coin 10,000 times and it landed heads 3,373 times, we would estimate the probability of heads to be about 0.3373.

Learning Parameters of Bayesian Networks. Learning parameters (continued): estimation based on relative frequencies from data plus belief. Example: an A-match soccer game between Korea and Japan. How probable, do you think, is it that Korea would win? A: 0.85 (a Korean), B: 0.3 (a Japanese). (3) This probability is not a ratio, and it is not a relative frequency, because the game cannot be repeated many times under exactly the same conditions: it is a degree of belief, or subjective probability. Usual method: estimate the probability distribution of a variable X based on a relative frequency together with a belief concerning that relative frequency.

Learning Parameters of Bayesian Networks. Simple counting solution (Bayesian point of view): parameter estimation for a single node, assuming local parameter independence. For a binary variable (for example, a coin toss) with prior Beta(a, b), after we have observed m heads and N - m tails the posterior is Beta(a + m, b + N - m) and P(X = heads) = (a + m) / (a + b + N) (conjugacy of the beta and binomial distributions).
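For instance (hypothetical numbers): with prior Beta(2, 2), observing m = 3 heads out of N = 10 tosses gives posterior Beta(5, 9) and P(X = heads) = (2 + 3) / (2 + 2 + 10) = 5/14 ≈ 0.357, a compromise between the prior value 0.5 and the empirical frequency 0.3.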

Learning Parameters of Bayesian Networks. Simple counting solution (Bayesian point of view), continued. For a multinomial variable (for example, a die toss) with prior Dirichlet(a_1, a_2, ..., a_d): P(X = k) = a_k / N, where N = Σ_k a_k. After observing state i, the posterior is Dirichlet(a_1, ..., a_i + 1, ..., a_d) (conjugacy of the Dirichlet and multinomial distributions). For an entire network, we simply iterate over its nodes. In the case of incomplete data (in real data, many of the variable values may be incorrect or missing), the usual approximate solution is given by Gibbs sampling or the EM (expectation maximization) technique.

Learning Parameters of Bayesian Networks. Another viewpoint: smoothing. Laplace smoothing (or additive smoothing), given observed counts (x_1, x_2, ..., x_d) for the d states of a variable X:

P(X = k) = (x_k + α) / (N + αd),  k = 1, ..., d  (α = α_1 = α_2 = ... = α_d)

where N is the total number of observations. From a Bayesian point of view, this corresponds to the expected value of the posterior distribution, using a symmetric Dirichlet distribution with parameter α as a prior. Additive smoothing is commonly a component of naive Bayes classifiers.
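A minimal sketch of this estimator (illustrative, not from the slides; the counts are made up):

```python
def smoothed_estimate(counts, alpha=1.0):
    """Additive (Laplace) smoothing: P(X = k) = (x_k + alpha) / (N + alpha * d)."""
    n, d = sum(counts), len(counts)
    return [(x + alpha) / (n + alpha * d) for x in counts]

# Hypothetical counts for a three-state variable observed 10 times.
print(smoothed_estimate([7, 3, 0]))             # [0.615..., 0.307..., 0.0769...]
print(smoothed_estimate([7, 3, 0], alpha=0.0))  # raw relative frequencies; unseen state gets zero
```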

Learning the Graph Structure. Learning the graph structure itself from data requires: a space of possible structures, and a measure that can be used to score each structure. From a Bayesian viewpoint: a score for each model. Tough points: marginalization over latent variables leads to a challenging computational problem, and exploring the space of structures can also be problematic, since the number of different graph structures grows exponentially with the number of nodes. Usually we resort to heuristics: local score based, global score based, conditional independence test based, and so on.

Bayesian Networks as Tools for AI.
Learning: extracting and encoding knowledge from data. Knowledge is represented as probabilistic relationships among variables, causal relationships, and a network of variables; BNs provide a common framework for machine learning models, both supervised and unsupervised.
Knowledge representation & reasoning: Bayesian networks can be constructed from prior knowledge alone, and the constructed model can be used for reasoning based on probabilistic inference methods.
Expert systems: uncertain expert knowledge can be encoded into a Bayesian network. The DAG is hand-constructed by domain experts, and the conditional probabilities are then assessed by the expert, learned from data, or obtained using a combination of both techniques. Bayesian network-based expert systems are popular.
Planning: in a somewhat different form, known as decision graphs or influence diagrams. We do not cover this direction here.

Advantages of Bayesian Networks for Data Analysis.
Ability to handle missing data, because the model encodes dependencies among all variables.
Learning causal relationships: can be used to gain understanding about a problem domain and to predict the consequences of intervention.
Having both causal and probabilistic semantics: an ideal representation for combining prior knowledge (which often comes in causal form) and data.
An efficient and principled approach for avoiding the overfitting of data, via Bayesian statistical methods in conjunction with Bayesian networks.
(Summary from the abstract of D. Heckerman's tutorial on BNs; read its Introduction section for detailed explanations.)

References
K. Mohan & J. Pearl, UAI'12 Tutorial on Graphical Models for Causal Inference.
S. Roweis, MLSS'05 Lecture on Probabilistic Graphical Models.
C. M. Bishop, Pattern Recognition and Machine Learning, 2006: Chapter 1, Chapter 2, Chapter 8 (Graphical Models).
D. Heckerman, A Tutorial on Learning with Bayesian Networks.
R. E. Neapolitan, Learning Bayesian Networks, Pearson Prentice Hall, 2004.

More Textbooks and Courses: https://www.coursera.org/course/pgm : Probabilistic Graphical Models by D. Koller.

APPENDIX

Learning Parameters of Bayesian Networks (appendix). Consider a variable X together with a parameter node F: the probability distribution of F represents our belief concerning the relative frequency with which X equals k.
Case 1: X is a binary variable. F has a beta distribution; X has a Bernoulli or binomial distribution. E.g., if F ~ Beta(a, b), then P(X = 1) = a / N, where N = a + b.
Case 2: X is a multinomial variable. F has a Dirichlet distribution; X has a multinomial distribution. E.g., if F ~ Dirichlet(a_1, a_2, ..., a_d), then P(X = k) = a_k / N, where N = Σ_k a_k.
Laplace smoothing (or additive smoothing), given observed frequencies (x_1, x_2, ..., x_d): P(X = k) = (x_k + α) / (N + αd), k = 1, ..., d (α = α_1 = α_2 = ... = α_d).

Graphical interpretation of Bayes' theorem. Given structure: p(x, y) = p(x) p(y | x). We observe the value of y; the goal is to infer the posterior distribution over x, p(x | y). The marginal distribution p(x) is a prior over the latent variable x. We can evaluate the marginal distribution p(y) = Σ_{x'} p(y | x') p(x'), and by Bayes' theorem we can then calculate p(x | y) = p(y | x) p(x) / p(y).

d-separation: directed factorization. Think of the graph as a filter on distributions: does a distribution p(x) admit the factorization implied by the graph? If we present to the filter the set of all possible distributions p(x) over the set of variables X, then the subset of distributions that are passed by the filter is denoted DF (directed factorization). For a fully connected graph, the set DF contains all possible distributions; for a fully disconnected graph, it contains only the joint distributions which factorize into the product of the marginal distributions over the variables.

Gaussian distribution:

N(x | μ, σ²) = 1 / (2πσ²)^{1/2} · exp{ -(x - μ)² / (2σ²) }

Multivariate Gaussian:

N(x | μ, Σ) = 1 / ((2π)^{D/2} |Σ|^{1/2}) · exp{ -(1/2) (x - μ)^T Σ^{-1} (x - μ) }
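A small numpy sketch (illustrative, not from the slides) evaluating these two densities directly from the formulas above:

```python
import numpy as np

def gaussian_pdf(x, mu, sigma2):
    """Univariate N(x | mu, sigma^2)."""
    return np.exp(-(x - mu) ** 2 / (2.0 * sigma2)) / np.sqrt(2.0 * np.pi * sigma2)

def mvn_pdf(x, mu, cov):
    """Multivariate N(x | mu, Sigma) for a length-D vector x."""
    d = x.shape[0]
    diff = x - mu
    norm = 1.0 / np.sqrt((2.0 * np.pi) ** d * np.linalg.det(cov))
    return norm * np.exp(-0.5 * diff @ np.linalg.solve(cov, diff))

print(gaussian_pdf(0.0, 0.0, 1.0))                   # ~0.3989 = 1 / sqrt(2*pi)
print(mvn_pdf(np.zeros(2), np.zeros(2), np.eye(2)))  # ~0.1592 = 1 / (2*pi)
```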