CS 343: Artificial Intelligence


CS 343: Artificial Intelligence Particle Filters and Applications of HMMs Prof. Scott Niekum The University of Texas at Austin [These slides based on those of Dan Klein and Pieter Abbeel for CS188 Intro to AI at UC Berkeley. All CS188 materials are available at http://ai.berkeley.edu.]

Recap: Reasoning Over Time
Markov models: a chain X_1 -> X_2 -> X_3 -> X_4 over states {rain, sun}; the weather stays the same with probability 0.7 and switches with probability 0.3.
Hidden Markov models: hidden states X_1 ... X_5 with observations E_1 ... E_5 and emission model P(E | X):
X     E            P
rain  umbrella     0.9
rain  no umbrella  0.1
sun   umbrella     0.2
sun   no umbrella  0.8
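As a point of reference for the recap slides below, here is a minimal sketch of this weather HMM in Python; the dictionary names (STATES, TRANSITION, EMISSION) are illustrative, not from the original slides.

```python
# A minimal sketch of the weather HMM on this slide (dictionary names are illustrative).
STATES = ["rain", "sun"]

# Transition model P(X_{t+1} | X_t): stay with probability 0.7, switch with 0.3.
TRANSITION = {
    "rain": {"rain": 0.7, "sun": 0.3},
    "sun":  {"rain": 0.3, "sun": 0.7},
}

# Emission model P(E_t | X_t), taken from the table above.
EMISSION = {
    "rain": {"umbrella": 0.9, "no umbrella": 0.1},
    "sun":  {"umbrella": 0.2, "no umbrella": 0.8},
}
```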

Recap: Forward Algo - Passage of Time
Assume we have current belief P(X | evidence to date): B(X_t) = P(X_t | e_{1:t})
Then, after one time step passes: P(X_{t+1} | e_{1:t}) = sum_{x_t} P(X_{t+1} | x_t) P(x_t | e_{1:t})
Or compactly: B'(X_{t+1}) = sum_{x_t} P(X_{t+1} | x_t) B(x_t)
Basic idea: beliefs get pushed through the transitions. With the B notation, we have to be careful about which time step t the belief is about and what evidence it includes.
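A minimal sketch of this elapse-time update, assuming the TRANSITION dictionary from the earlier sketch and beliefs represented as dicts from states to probabilities:

```python
def elapse_time(belief, transition):
    """Push the belief through the transition model:
    B'(X_{t+1}) = sum over x_t of P(X_{t+1} | x_t) * B(x_t)."""
    new_belief = {x: 0.0 for x in transition}
    for x, p in belief.items():
        for x_next, p_trans in transition[x].items():
            new_belief[x_next] += p * p_trans
    return new_belief
```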

Recap: Forward Algo - Passage of Time
As time passes, uncertainty accumulates (transition model: ghosts usually go clockwise).
[Figure: belief grids at T = 1, T = 2, T = 5]

Recap: Forward Algo - Observation
Assume we have current belief P(X | previous evidence): B'(X_t) = P(X_t | e_{1:t-1})
Then, after evidence comes in: P(X_t | e_{1:t}) ∝ P(e_t | X_t) P(X_t | e_{1:t-1})
Or, compactly: B(X_t) ∝ P(e_t | X_t) B'(X_t)
Basic idea: beliefs are reweighted by the likelihood of the evidence. Unlike the passage of time, we have to renormalize.
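A matching sketch of the observation update, again assuming the EMISSION dictionary from the earlier sketch:

```python
def observe(belief, evidence, emission):
    """Reweight the belief by the evidence likelihood and renormalize:
    B(X_t) is proportional to P(e_t | X_t) * B'(X_t)."""
    weighted = {x: emission[x][evidence] * p for x, p in belief.items()}
    total = sum(weighted.values())
    return {x: w / total for x, w in weighted.items()}
```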

Recap: Forward Algo - Observation
As we get observations, beliefs get reweighted and uncertainty decreases.
[Figure: belief grid before and after an observation]

Recap: The Forward Algorithm
We are given evidence at each time and want to know B(X_t) = P(X_t | e_{1:t}).
We can derive the following update: P(x_t | e_{1:t}) ∝ P(e_t | x_t) sum_{x_{t-1}} P(x_t | x_{t-1}) P(x_{t-1} | e_{1:t-1})
We can normalize as we go if we want to have P(x | e) at each time step, or just once at the end.

Recap: Online Filtering w/ Forward Algo
Elapse time: compute P(X_t | e_{1:t-1}). Observe: compute P(X_t | e_{1:t}).
Belief: <P(rain), P(sun)>
<0.5, 0.5>    prior on X_1
<0.82, 0.18>  observe E_1 (umbrella)
<0.63, 0.37>  elapse time
<0.88, 0.12>  observe E_2 (umbrella)
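The trace above can be reproduced with the two helpers sketched earlier; this is a sketch that assumes both observations are umbrella sightings, which is consistent with the numbers on the slide.

```python
# Reproducing the trace on this slide with the helpers sketched earlier.
belief = {"rain": 0.5, "sun": 0.5}               # prior on X_1:  <0.5, 0.5>
belief = observe(belief, "umbrella", EMISSION)   # observe E_1:   <0.82, 0.18>
belief = elapse_time(belief, TRANSITION)         # elapse time:   <0.63, 0.37>
belief = observe(belief, "umbrella", EMISSION)   # observe E_2:   <0.88, 0.12>
```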

Particle Filtering

Particle Filtering
Filtering: approximate solution. Sometimes X is too big to use exact inference; X may be too big to even store B(X), e.g. when X is continuous.
Solution: approximate inference. Track samples of X, not all values. Samples are called particles. Time per step is linear in the number of samples, but the number needed may be large. In memory: a list of particles, not states.
[Figure: grid of approximate probabilities 0.0 0.1 0.0 / 0.0 0.0 0.2 / 0.0 0.2 0.5]
This is how robot localization works in practice. Particle is just a new name for sample.

Representation: Particles
Our representation of P(X) is now a list of N particles (samples). Generally, N << |X| (but not in project 4); storing a map from X to counts would defeat the point.
P(x) is approximated by the number of particles with value x, so many x may have P(x) = 0. More particles, more accuracy. For now, all particles have a weight of 1.
Particle filtering uses three repeated steps: elapse time, observe (both similar to exact filtering), and resample.
Particles: (3,3) (2,3) (3,3) (3,3) (1,2) (3,3) (3,3) (2,3)
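A small sketch of how the particle list on this slide approximates P(x) by counts; the helper name is illustrative.

```python
from collections import Counter

# The particle list shown on this slide.
particles = [(3, 3), (2, 3), (3, 3), (3, 3), (1, 2), (3, 3), (3, 3), (2, 3)]

def belief_from_particles(particles):
    """Approximate P(x) by the fraction of particles with value x;
    any x with no particles gets probability 0."""
    counts = Counter(particles)
    n = len(particles)
    return {x: c / n for x, c in counts.items()}

# belief_from_particles(particles) -> {(3,3): 0.625, (2,3): 0.25, (1,2): 0.125}
```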

Example: Elapse Time
Policy: ghosts always move up (or stay in place if already at top).
Belief over possible ghost positions at time t. New belief at time t+1?

Example: Elapse Time
Policy: ghosts always move up (or stay in place if already at top).
Belief over possible ghost positions at time t. New belief at time t+1.

Particle Filtering: Elapse Time
Each particle is moved by sampling its next position from the transition model. Sample frequencies reflect the transition probabilities: here, most samples move clockwise, but some move in another direction or stay in place.
This captures the passage of time. With enough samples, the result is close to the exact values before and after (consistent).
Particles (before): (3,3) (2,3) (3,3) (3,3) (1,2) (3,3) (3,3) (2,3)
Particles (after): (2,3) (3,1) (3,3) (1,3) (2,3) (2,2)
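A minimal sketch of this elapse-time step for particles; transition_dist is a hypothetical stand-in for the ghost transition model, assumed to return a dict of successor probabilities.

```python
import random

def particle_elapse_time(particles, transition_dist):
    """Move each particle by sampling its successor from the transition model.
    transition_dist(x) is assumed to return a dict {next_position: probability}."""
    new_particles = []
    for x in particles:
        dist = transition_dist(x)
        positions = list(dist.keys())
        weights = list(dist.values())
        new_particles.append(random.choices(positions, weights=weights, k=1)[0])
    return new_particles
```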

Example: Observe
[Figure: belief over possible ghost positions before observation; observation and evidence likelihoods p(e | X); new belief after observation?]

Example: Observe
[Figure: belief over possible ghost positions before observation; observation and evidence likelihoods p(e | X); new belief after observation]

Particle Filtering: Observe
Slightly trickier: don't sample the observation, fix it. Similar to likelihood weighting, downweight samples based on the evidence.
Particles: (2,3) (3,1) (3,3) (1,3) (2,3) (2,2)
As before, the probabilities don't sum to one, since all have been downweighted.
Particles: (2,3) w=.2, (3,1) w=.4, (3,3) w=.4, (1,3) w=.1, (2,3) w=.2, (2,2) w=.4
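A sketch of this weighting step; emission_prob(e, x) is a hypothetical placeholder for the sensor model P(e | x).

```python
def particle_observe(particles, evidence, emission_prob):
    """Fix the evidence and downweight each particle by its likelihood,
    as in likelihood weighting; returns (particle, weight) pairs."""
    return [(x, emission_prob(evidence, x)) for x in particles]
```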

Particle Filtering: Resample
Rather than tracking weighted samples, we resample: N times, we choose from our weighted sample distribution (i.e., draw with replacement). This is equivalent to renormalizing the distribution.
Now the update is complete for this time step; continue with the next one.
Particles: (2,3) w=.2, (3,1) w=.4, (3,3) w=.4, (1,3) w=.1, (2,3) w=.2, (2,2) w=.4
(New) Particles: (2,2) (2,3) (3,3) (1,3) (2,3)
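A sketch of resampling with replacement from the weighted particles; the function name is illustrative.

```python
import random

def particle_resample(weighted_particles, n=None):
    """Draw n new, unweighted particles with replacement, in proportion to the
    weights; this implicitly renormalizes the distribution."""
    positions = [x for x, w in weighted_particles]
    weights = [w for x, w in weighted_particles]
    n = len(positions) if n is None else n
    return random.choices(positions, weights=weights, k=n)
```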

Recap: Particle Filtering
Particles: track samples of states rather than an explicit distribution. Repeat three steps: elapse, weight, resample.
[Figure: the particle lists from the previous slides, shown after each step]

Moderate Number of Particles

One Particle

Huge Number of Particles

Robot Localization
In robot localization, we know the map, but not the robot's position. Observations may be vectors of range finder readings. The state space and readings are typically continuous (it works basically like a very fine grid), so we cannot store B(X). Particle filtering is a main technique.

Particle Filter Localization (Sonar)

Robot Mapping
SLAM: Simultaneous Localization And Mapping. We do not know the map or our location; the state consists of position AND map! Main techniques: Kalman filtering (Gaussian HMMs) and particle methods.
DP-SLAM, Ron Parr

Particle Filter SLAM Video 1

Dynamic Bayes Nets

Dynamic Bayes Nets (DBNs)
We want to track multiple variables over time, using multiple sources of evidence.
Idea: repeat a fixed Bayes net structure at each time step; variables from time t can condition on those from t-1.
[Figure: DBN unrolled for t = 1, 2, 3, with ghost variables G_t^a, G_t^b and evidence variables E_t^a, E_t^b at each time step]
Dynamic Bayes nets are a generalization of HMMs.

Exact Inference in DBNs
Variable elimination applies to dynamic Bayes nets. Procedure: unroll the network for T time steps, then eliminate variables until P(X_T | e_{1:T}) is computed.
Online belief updates: eliminate all variables from the previous time step; store factors for the current time only.

DBN Particle Filters
A particle is a complete sample for a time step.
Initialize: generate prior samples for the t=1 Bayes net. Example particle: G_1^a = (3,3), G_1^b = (5,3).
Elapse time: sample a successor for each particle. Example successor: G_2^a = (2,3), G_2^b = (6,3).
Observe: weight each entire sample by the likelihood of the evidence conditioned on the sample. Likelihood: P(E_1^a | G_1^a) * P(E_1^b | G_1^b).
Resample: select samples (tuples of values) in proportion to their likelihood (weight).
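A minimal sketch of the observe step for a DBN particle, treating the particle as a dict over the joint variables; emission_prob(e, g) is again a hypothetical placeholder for P(E | G).

```python
def dbn_particle_weight(particle, evidence, emission_prob):
    """Weight an entire DBN particle (a joint sample such as
    {'Ga': (3, 3), 'Gb': (5, 3)}) by the product of the evidence likelihoods.
    emission_prob(e, g) is a hypothetical placeholder for P(E | G)."""
    weight = 1.0
    for var, g in particle.items():
        weight *= emission_prob(evidence[var], g)
    return weight
```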

Most Likely Explanation

HMMs: MLE Queries
HMMs are defined by: states X, observations E, an initial distribution P(X_1), transitions P(X_t | X_{t-1}), and emissions P(E_t | X_t).
New query: most likely explanation, argmax_{x_{1:t}} P(x_{1:t} | e_{1:t}). New method: the Viterbi algorithm.
Question: why not just apply filtering and predict the most likely value of each variable separately?

State Trellis
State trellis: graph of states and transitions over time.
[Figure: trellis with sun and rain states at each of four time steps]
Each arc represents some transition x_{t-1} -> x_t and has weight P(x_t | x_{t-1}) P(e_t | x_t). Each path is a sequence of states, and the product of weights on a path is that sequence's probability along with the evidence.
The forward algorithm computes sums over all paths to each node; Viterbi computes best paths. There are exponentially many paths, but dynamic programming can find the best path in linear time!

Forward / Viterbi Algorithms
[Figure: the same sun/rain trellis]
Forward Algorithm (Sum): f_t[x_t] = P(e_t | x_t) sum_{x_{t-1}} P(x_t | x_{t-1}) f_{t-1}[x_{t-1}]
Viterbi Algorithm (Max): m_t[x_t] = P(e_t | x_t) max_{x_{t-1}} P(x_t | x_{t-1}) m_{t-1}[x_{t-1}]
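A compact sketch of Viterbi over such a trellis, reusing the dictionary conventions (STATES, TRANSITION, EMISSION) assumed in the earlier sketches; the function and variable names are illustrative.

```python
def viterbi(observations, states, initial, transition, emission):
    """Viterbi on the state trellis: same recursion as the forward algorithm,
    but with a max over predecessors instead of a sum; returns the most
    likely state sequence given the observations."""
    # m[x]: probability of the best path ending in state x at the current time.
    m = {x: initial[x] * emission[x][observations[0]] for x in states}
    backpointers = []
    for e in observations[1:]:
        prev_m, m, ptr = m, {}, {}
        for x in states:
            best_prev = max(states, key=lambda xp: prev_m[xp] * transition[xp][x])
            m[x] = prev_m[best_prev] * transition[best_prev][x] * emission[x][e]
            ptr[x] = best_prev
        backpointers.append(ptr)
    # Follow the back pointers from the best final state to recover the path.
    last = max(m, key=m.get)
    path = [last]
    for ptr in reversed(backpointers):
        path.append(ptr[path[-1]])
    return list(reversed(path))

# Example with the weather HMM sketched earlier:
# viterbi(["umbrella", "umbrella"], STATES, {"rain": 0.5, "sun": 0.5}, TRANSITION, EMISSION)
```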

Speech Recognition

Speech Recognition in Action

Digitizing Speech

Speech waveforms
Speech input is an acoustic waveform.
[Figure: waveform of "speech lab" segmented into phones s p ee ch l a b. Figure: Simon Arnfield, http://www.psyc.leeds.ac.uk/research/cogn/speech/tutorial/]

Spectral Analysis
Frequency gives pitch; amplitude gives volume. Sampling at ~8 kHz (phone), ~16 kHz (mic); kHz = 1000 cycles/sec.
The Fourier transform of the wave is displayed as a spectrogram; darkness indicates energy at each frequency.
[Figure: spectrogram of "s p ee ch l a b" with frequency and amplitude axes. Human ear figure: depion.blogspot.com]

Acoustic Feature Sequence
Time slices are translated into acoustic feature vectors (~39 real numbers per slice): ... e_12, e_13, e_14, e_15, e_16, ...
These are the observations E; now we need the hidden states X.

Speech State Space
HMM specification: P(E | X) encodes which acoustic vectors are appropriate for each phoneme (each kind of sound); P(X | X') encodes how sounds can be strung together.
State space: we will have one state for each sound in each word. Mostly, states advance sound by sound. Build a little state graph for each word and chain them together to form the state space X.

States in a Word

Transitions with a Bigram Model
Training counts:
  198015222    the first
  194623024    the same
  168504105    the following
  158562063    the world
   14112454    the door
  -----------------------------------
  23135851162  the *
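A small sketch of how these counts turn into bigram transition probabilities (maximum-likelihood estimates); the variable names are illustrative.

```python
# Maximum-likelihood bigram transition estimates from the counts above.
counts_after_the = {
    "first": 198015222,
    "same": 194623024,
    "following": 168504105,
    "world": 158562063,
    "door": 14112454,
}
total_the = 23135851162  # count of "the *"

p_next_given_the = {w: c / total_the for w, c in counts_after_the.items()}
# e.g. p_next_given_the["first"] ~= 0.0086
```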

Decoding
Finding the words given the acoustics is an HMM inference problem: which state sequence x_{1:T} is most likely given the evidence e_{1:T}?
x*_{1:T} = argmax_{x_{1:T}} P(x_{1:T} | e_{1:T})
From the sequence x, we can simply read off the words.

Next Time: Value of Information