Forward algorithm vs. particle filtering

Particle Filtering

- Sometimes X is too big to use exact inference
  - X may be too big to even store B(X)
  - E.g. X is continuous
  - |X|^2 may be too big to do updates
- Solution: approximate inference
  - Track samples of X, not all values
  - Samples are called particles
  - Time per step is linear in the number of samples
  - But: number needed may be large
  - In memory: list of particles
- This is how robot localization works in practice
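A minimal sketch of one particle-filter step, assuming the transition sampler and observation model are supplied as functions (all names here are hypothetical, not from the lecture):

```python
import random

def particle_filter_step(particles, evidence, sample_transition, obs_likelihood):
    """One time step of particle filtering over a hidden state X.

    particles:         list of current samples of X (approximating B(X))
    evidence:          the observation e_t at this step
    sample_transition: function x -> x', samples from p(x' | x)
    obs_likelihood:    function (e, x') -> p(e | x')
    """
    # Elapse time: move each particle forward by sampling the transition model
    moved = [sample_transition(x) for x in particles]

    # Observe: weight each particle by how well it explains the evidence
    weights = [obs_likelihood(evidence, x) for x in moved]

    # Resample: draw N new particles in proportion to the weights
    total = sum(weights)
    if total == 0:                 # all particles inconsistent with the evidence
        return moved               # degenerate case; real systems reinitialize
    probs = [w / total for w in weights]
    return random.choices(moved, weights=probs, k=len(particles))
```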

Forward algorithm vs. particle filtering

Forward algorithm
- Elapse of time: B'(X_t) = Σ_{x_{t-1}} p(x_t | x_{t-1}) B(x_{t-1})
- Observe: B(X_t) ∝ p(e_t | X_t) B'(X_t)
- Renormalize: make B(x_t) sum up to 1

Particle filtering
- Elapse of time: x ---> x', sampled from p(x' | x)
- Observe: w(x') = p(e_t | x')
- Resample: resample N particles in proportion to their weights
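For comparison, a minimal sketch of the exact forward-algorithm update over a small discrete state space, assuming dict-of-dicts transition and observation tables (hypothetical structure):

```python
def forward_step(belief, evidence, transition, observation):
    """Exact forward-algorithm update B(X_{t-1}) -> B(X_t).

    belief:      dict mapping each state x to B(x_{t-1})
    evidence:    the observation e_t
    transition:  transition[xp][x] = p(X_t = x | X_{t-1} = xp)
    observation: observation[x][e] = p(e | X_t = x)
    """
    states = list(belief)

    # Elapse time: B'(x_t) = sum over x_{t-1} of p(x_t | x_{t-1}) B(x_{t-1})
    predicted = {x: sum(transition[xp][x] * belief[xp] for xp in states)
                 for x in states}

    # Observe: weight by p(e_t | x_t)
    unnorm = {x: observation[x][evidence] * predicted[x] for x in states}

    # Renormalize so the belief sums to 1
    z = sum(unnorm.values())
    return {x: p / z for x, p in unnorm.items()}
```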

Today

- Speech recognition: a massive HMM!
- Introduction to machine learning

Speech and Language

- Speech technologies
  - Automatic speech recognition (ASR)
  - Text-to-speech synthesis (TTS)
  - Dialog systems
- Language processing technologies
  - Machine translation
  - Information extraction
  - Web search, question answering
  - Text classification, spam filtering, etc.

Digitizing Speech

The Input

- Speech input is an acoustic waveform
- Graphs from Simon Arnfield's web tutorial on speech, Sheffield: http://www.psyc.leeds.ac.uk/research/cogn/speech/tutorial/


The Input

- Frequency gives pitch; amplitude gives volume
  - Sampling at ~8 kHz (phone), ~16 kHz (mic)
- Fourier transform of the wave is displayed as a spectrogram
  - Darkness indicates energy at each frequency
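As a rough illustration of how a spectrogram can be computed, a short-time Fourier transform sketch with assumed frame and hop sizes (not the course's feature extractor):

```python
import numpy as np

def spectrogram(signal, frame_len=400, hop=160):
    """Magnitude spectrogram of a 1-D numpy audio signal.

    At a 16 kHz sampling rate, frame_len=400 and hop=160 correspond to
    25 ms windows every 10 ms (arbitrary but common choices).
    """
    window = np.hanning(frame_len)
    frames = [signal[start:start + frame_len] * window
              for start in range(0, len(signal) - frame_len + 1, hop)]
    # One FFT per frame; rows = time slices, columns = frequency bins
    return np.abs(np.fft.rfft(np.stack(frames), axis=1))
```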

Acoustic Feature Sequence

- Time slices are translated into acoustic feature vectors (~39 real numbers per slice)
- These are the observations; now we need the hidden states X

State Space

- p(E | X) encodes which acoustic vectors are appropriate for each phoneme (each kind of sound)
- The transition model p(X' | X) encodes how sounds can be strung together
- We will have one state for each sound in each word
- From some state x, we can only:
  - Stay in the same state (e.g. speaking slowly)
  - Move to the next position in the word
  - At the end of the word, move to the start of the next word
- We build a little state graph for each word and chain them together to form our state space X (a rough sketch follows below)
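A minimal sketch of such a per-word left-to-right state graph, assuming a phone list for the word and an arbitrary self-loop probability (names are illustrative only):

```python
def word_state_graph(word, phones, self_loop_prob=0.5):
    """Build transition probabilities for one word's left-to-right HMM.

    States are (word, position) pairs, one per phone in the word.
    Each state may stay put (self-loop) or advance to the next phone; the
    final phone instead moves to a 'word end' state, from which a language
    model would choose the start of the next word.
    """
    transitions = {}   # (state, next_state) -> probability
    for i, _ in enumerate(phones):
        state = (word, i)
        nxt = (word, i + 1) if i + 1 < len(phones) else (word, "end")
        transitions[(state, state)] = self_loop_prob        # speaking slowly
        transitions[(state, nxt)] = 1.0 - self_loop_prob    # move on
    return transitions

# e.g. word_state_graph("need", ["n", "iy", "d"])
```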

HMMs for Speech

Transitions with Bigrams

Decoding

- While there are some practical issues, finding the words given the acoustics is an HMM inference problem
- We want to know which state sequence x_{1:T} is most likely given the evidence e_{1:T}:

  x*_{1:T} = argmax_{x_{1:T}} p(x_{1:T} | e_{1:T}) = argmax_{x_{1:T}} p(x_{1:T}, e_{1:T})

- From the sequence x, we can simply read off the words
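This arg max is typically computed with the Viterbi algorithm; a minimal sketch over a small discrete state space, assuming dict-based model tables (real systems work in log space to avoid underflow):

```python
def viterbi(evidence, states, start, transition, observation):
    """Most likely hidden state sequence x_{1:T} given evidence e_{1:T}.

    start:       start[x]          = p(X_1 = x)
    transition:  transition[xp][x] = p(X_t = x | X_{t-1} = xp)
    observation: observation[x][e] = p(e | X_t = x)
    """
    # best[x] = probability of the best path ending in state x
    best = {x: start[x] * observation[x][evidence[0]] for x in states}
    back = []
    for e in evidence[1:]:
        prev = best
        best, pointers = {}, {}
        for x in states:
            xp = max(states, key=lambda s: prev[s] * transition[s][x])
            pointers[x] = xp
            best[x] = prev[xp] * transition[xp][x] * observation[x][e]
        back.append(pointers)
    # Follow back-pointers from the best final state
    last = max(states, key=lambda s: best[s])
    path = [last]
    for pointers in reversed(back):
        path.append(pointers[path[-1]])
    return list(reversed(path))
```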

Machine Learning

- Up until now: how to reason in a model and how to make optimal decisions
- Machine learning: how to acquire a model on the basis of data / experience
  - Learning parameters (e.g. probabilities)
  - Learning structure (e.g. BN graphs)
  - Learning hidden concepts (e.g. clustering)

Parameter Estimation

- Estimating the distribution of a random variable
- Elicitation: ask a human
- Empirically: use training data (learning!)
  - E.g.: for each outcome x, look at the empirical rate of that value:

    p_ML(x) = count(x) / total samples

  - This is the estimate that maximizes the likelihood of the data:

    L(x, θ) = Π_i p_θ(x_i)
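A minimal sketch of this relative-frequency (maximum likelihood) estimate:

```python
from collections import Counter

def mle_estimate(samples):
    """Relative-frequency estimate: p_ML(x) = count(x) / total samples."""
    counts = Counter(samples)
    total = len(samples)
    return {x: c / total for x, c in counts.items()}

# e.g. mle_estimate(["r", "r", "b"]) -> {"r": 2/3, "b": 1/3}
```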

Estimation: Smoothing

- Relative frequencies are the maximum likelihood estimates (MLEs):

  θ_ML = argmax_θ p(X | θ) = argmax_θ Π_i p_θ(X_i)

  p_ML(x) = count(x) / total samples

- In Bayesian statistics, we think of the parameters as just another random variable, with its own distribution:

  θ_MAP = argmax_θ p(θ | X) = argmax_θ p(X | θ) p(θ) / p(X) = argmax_θ p(X | θ) p(θ)

Estimation: Laplace Smoothing

- Laplace's estimate: pretend you saw every outcome once more than you actually did:

  p_LAP(x) = (c(x) + 1) / Σ_x' [c(x') + 1] = (c(x) + 1) / (N + |X|)

Estimation: Laplace Smoothing

- Laplace's estimate (extended): pretend you saw every outcome k extra times:

  p_LAP,k(x) = (c(x) + k) / (N + k|X|)

  - What's Laplace with k = 0?
  - k is the strength of the prior
- Laplace for conditionals: smooth each condition independently:

  p_LAP,k(x | y) = (c(x, y) + k) / (c(y) + k|X|)
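A minimal sketch of Laplace smoothing with strength k, assuming the full outcome space is known up front:

```python
from collections import Counter

def laplace_estimate(samples, outcomes, k=1):
    """p_LAP,k(x) = (c(x) + k) / (N + k|X|) over a known outcome space."""
    counts = Counter(samples)
    denom = len(samples) + k * len(outcomes)
    return {x: (counts[x] + k) / denom for x in outcomes}

# k = 0 recovers the maximum likelihood estimate;
# a large k pulls the estimate toward the uniform distribution.
```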

Example: Spam Filter

- Input: email
- Output: spam/ham
- Setup:
  - Get a large collection of example emails, each labeled "spam" or "ham"
  - Note: someone has to hand-label all this data!
  - Want to learn to predict labels of new, future emails
- Features: the attributes used to make the ham / spam decision
  - Words: FREE!
  - Text patterns: $dd, CAPS
  - Non-text: SenderInContacts

Example: Digit Recognition

- Input: images / pixel grids
- Output: a digit 0-9
- Setup:
  - Get a large collection of example images, each labeled with a digit
  - Note: someone has to hand-label all this data!
  - Want to learn to predict labels of new, future digit images
- Features: the attributes used to make the digit decision
  - Pixels: (6,8) = ON
  - Shape patterns: NumComponents, AspectRatio, NumLoops

A Digit Recognizer

- Input: pixel grids
- Output: a digit 0-9

Naive Bayes for Digits

- Simple version:
  - One feature F_{ij} for each grid position <i,j>
  - Possible feature values are on / off, based on whether intensity is more or less than 0.5 in the underlying image
  - Each input maps to a feature vector, e.g.

    F_{0,0} = 0, F_{0,1} = 0, F_{0,2} = 1, F_{0,3} = 1, F_{0,4} = 0, ..., F_{15,15} = 0

  - Here: lots of features, each is binary valued
- Naive Bayes model:

  p(Y | F_{0,0} ... F_{15,15}) ∝ p(Y) Π_{i,j} p(F_{i,j} | Y)

- What do we need to learn?
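As an illustration of the feature map, a sketch assuming a 16x16 grid of grayscale intensities in [0, 1]:

```python
def pixel_features(image, threshold=0.5):
    """Map a 16x16 grid of intensities in [0, 1] to binary on/off features.

    image: list of 16 rows, each a list of 16 floats.
    Returns a dict {(i, j): 0 or 1}, one feature F_ij per grid position.
    """
    return {(i, j): 1 if image[i][j] > threshold else 0
            for i in range(len(image))
            for j in range(len(image[0]))}
```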

General Naive Bayes

- A general naive Bayes model:

  p(Y, F_1 ... F_n) = p(Y) Π_i p(F_i | Y)

  - |Y| parameters for the prior p(Y)
  - n × |F| × |Y| parameters for the p(F_i | Y) tables (vs. |Y| × |F|^n values for the full joint)
- We only specify how each feature depends on the class
- Total number of parameters is linear in n

Inference for Naive Bayes

- Goal: compute the posterior over causes
  - Step 1: get the joint probability of causes and evidence:

    p(Y, f_1 ... f_n) = [ p(y_1, f_1 ... f_n), p(y_2, f_1 ... f_n), ..., p(y_k, f_1 ... f_n) ]
                      = [ p(y_1) Π_i p(f_i | y_1), p(y_2) Π_i p(f_i | y_2), ..., p(y_k) Π_i p(f_i | y_k) ]

  - Step 2: get the probability of the evidence, p(f_1 ... f_n)
  - Step 3: renormalize to obtain p(Y | f_1 ... f_n)
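A minimal sketch of these three steps, assuming dict-based prior and conditional tables (the data format is an assumption, not the course's code):

```python
def naive_bayes_posterior(features, prior, cpts):
    """Compute p(Y | f_1 ... f_n) for a naive Bayes model.

    features: dict {feature_name: observed value f_i}
    prior:    dict {y: p(y)}
    cpts:     dict {feature_name: {y: {value: p(value | y)}}}
    """
    # Step 1: joint probability of each cause with the evidence
    joint = {}
    for y, p_y in prior.items():
        p = p_y
        for f, value in features.items():
            p *= cpts[f][y][value]
        joint[y] = p

    # Step 2: probability of the evidence (sum over causes)
    evidence = sum(joint.values())

    # Step 3: renormalize
    return {y: p / evidence for y, p in joint.items()}
```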

General Naive Bayes

- What do we need in order to use naive Bayes?
  - Inference (you know this part)
    - Start with a bunch of conditionals, p(Y) and the p(F_i | Y) tables
    - Use standard inference to compute p(Y | F_1 ... F_n)
    - Nothing new here
  - Estimates of local conditional probability tables
    - p(Y), the prior over labels
    - p(F_i | Y) for each feature (evidence variable)
    - These probabilities are collectively called the parameters of the model and denoted by θ
    - Up until now, we assumed these appeared by magic, but they typically come from training data: we'll look at this now
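A minimal sketch of how those parameters could be estimated from labeled data by relative frequencies (the data format is an assumption; the Laplace smoothing from the earlier slides could be layered on top):

```python
from collections import Counter, defaultdict

def train_naive_bayes(examples):
    """Estimate p(Y) and p(F_i | Y) by counting.

    examples: list of (features, label) pairs, where features is a
              dict {feature_name: value}.
    Returns (prior, cpts) in the format assumed by naive_bayes_posterior above.
    """
    label_counts = Counter(label for _, label in examples)
    prior = {y: c / len(examples) for y, c in label_counts.items()}

    # counts[f][y][value] = number of examples labeled y where feature f = value
    counts = defaultdict(lambda: defaultdict(Counter))
    for features, label in examples:
        for f, value in features.items():
            counts[f][label][value] += 1

    cpts = {f: {y: {v: c / label_counts[y] for v, c in value_counts.items()}
                for y, value_counts in by_label.items()}
            for f, by_label in counts.items()}
    return prior, cpts
```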

Examples: CPTs

p(Y):
  Y:   1    2    3    4    5    6    7    8    9    0
  p:  0.1  0.1  0.1  0.1  0.1  0.1  0.1  0.1  0.1  0.1

p(F_{3,1} = on | Y):
  Y:   1     2     3     4     5     6     7     8     9     0
  p:  0.01  0.05  0.05  0.30  0.80  0.90  0.05  0.60  0.50  0.80

p(F_{5,5} = on | Y):
  Y:   1     2     3     4     5     6     7     8     9     0
  p:  0.05  0.01  0.90  0.80  0.90  0.90  0.25  0.85  0.60  0.80

Important Concepts

- Data: labeled instances, e.g. emails marked spam/ham
  - Training set
  - Held-out set
  - Test set
- Features: attribute-value pairs which characterize each x
- Experimentation cycle
  - Learn parameters (e.g. model probabilities) on the training set
  - (Tune hyperparameters on the held-out set)
  - Compute accuracy on the test set
  - Very important: never "peek" at the test set!
- Evaluation
  - Accuracy: fraction of instances predicted correctly
- Overfitting and generalization
  - Want a classifier which does well on test data
  - Overfitting: fitting the training data very closely, but not generalizing well
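A minimal sketch of the data split and accuracy evaluation in that experimentation cycle (split fractions and function names are arbitrary assumptions):

```python
import random

def split_data(examples, train_frac=0.8, heldout_frac=0.1, seed=0):
    """Shuffle labeled examples and split into training, held-out, and test sets."""
    shuffled = examples[:]
    random.Random(seed).shuffle(shuffled)
    n_train = int(train_frac * len(shuffled))
    n_heldout = int(heldout_frac * len(shuffled))
    train = shuffled[:n_train]
    heldout = shuffled[n_train:n_train + n_heldout]
    test = shuffled[n_train + n_heldout:]     # never peek at this until the very end
    return train, heldout, test

def accuracy(classifier, dataset):
    """Fraction of instances predicted correctly."""
    return sum(classifier(x) == y for x, y in dataset) / len(dataset)
```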