CS 188: Artificial Intelligence Fall 2011

CS 188: Artificial Intelligence, Fall 2011
Lecture 20: HMMs / Speech / ML
11/8/2011
Dan Klein, UC Berkeley

Today
- HMMs: demo bonanza!
- Most likely explanation queries
- Speech recognition: a massive HMM! (Details of this section not required)
- Start machine learning

Demo Bonanza!  [DEMO: Stationary]

Recap: Reasoning Over Time
- Markov models
  [Diagram: chain X_1 -> X_2 -> X_3 -> X_4, with transition probabilities 0.7 (stay) and 0.3 (switch)]
- Hidden Markov models
  [Diagram: hidden chain X_1 ... X_5, each X_t emitting an observation E_t]
  Emission model P(E | X):
    X      E            P
    rain   umbrella     0.9
    rain   no umbrella  0.1
    sun    umbrella     0.2
    sun    no umbrella  0.8
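For reference, the joint distribution an HMM of this form defines (a standard identity, not written on the slide itself):

$$ P(X_{1:T}, E_{1:T}) = P(X_1)\, P(E_1 \mid X_1) \prod_{t=2}^{T} P(X_t \mid X_{t-1})\, P(E_t \mid X_t) $$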

Recap: Filtering  [DEMO: Exact Filtering]
- Elapse time: compute P(X_t | e_{1:t-1})
- Observe: compute P(X_t | e_{1:t})
- Example belief updates over (rain, sun):
    <0.5, 0.5>    prior on X_1
    <0.82, 0.18>  observe E_1
    <0.63, 0.37>  elapse time
    <0.88, 0.12>  observe E_2

Recap: Particle Filtering  [DEMO: Localization / SLAM]
- Particles: track samples of states rather than an explicit distribution
- Elapse -> Weight -> Resample
  [Figure: particles such as (2,3), (1,2), (3,1), (1,3), (2,2) moving on a grid, weighted by the evidence (w = .9, .4, .2, .1, ...), then resampled into new particles (2,2), (2,3), (1,3), (2,3)]
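As a concrete check of the belief numbers above, here is a minimal exact-filtering sketch for the two-state umbrella HMM from the recap. The state names, transition probability (0.7 stay / 0.3 switch), and emission table are read off the recap figures; all variable and function names are mine.

```python
# Exact filtering on the two-state umbrella HMM from the recap slide.
STATES = ["rain", "sun"]
TRANSITION = {"rain": {"rain": 0.7, "sun": 0.3},
              "sun":  {"rain": 0.3, "sun": 0.7}}
EMISSION = {"rain": {"umbrella": 0.9, "no umbrella": 0.1},
            "sun":  {"umbrella": 0.2, "no umbrella": 0.8}}

def elapse_time(belief):
    """Compute P(X_t | e_{1:t-1}) from P(X_{t-1} | e_{1:t-1})."""
    return {x: sum(belief[xp] * TRANSITION[xp][x] for xp in STATES)
            for x in STATES}

def observe(belief, evidence):
    """Compute P(X_t | e_{1:t}): weight by P(e_t | X_t), then normalize."""
    weighted = {x: belief[x] * EMISSION[x][evidence] for x in STATES}
    z = sum(weighted.values())
    return {x: w / z for x, w in weighted.items()}

belief = {"rain": 0.5, "sun": 0.5}        # prior on X_1
belief = observe(belief, "umbrella")      # -> roughly <0.82, 0.18>
belief = elapse_time(belief)              # -> roughly <0.63, 0.37>
belief = observe(belief, "umbrella")      # -> roughly <0.88, 0.12>
print(belief)
```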

State Trellis
- State trellis: graph of states and transitions over time
- Each arc represents some transition
- Each arc has a weight
- Each path is a sequence of states
- The product of the weights on a path is that sequence's probability
- Can think of the Forward (and now Viterbi) algorithms as computing sums over all paths (Forward) or the best path (Viterbi) in this graph

Forward / Viterbi Algorithms
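The two recursions are easiest to see in code. The sketch below is my own (the lecture's slide for this step is a figure): forward sums over all paths in the trellis, viterbi maximizes and keeps back-pointers so the best state sequence can be read off.

```python
def forward(evidence, states, init, trans, emit):
    """Return P(X_T = x, e_{1:T}) for each state x."""
    alpha = {x: init[x] * emit[x][evidence[0]] for x in states}
    for e in evidence[1:]:
        alpha = {x: emit[x][e] * sum(alpha[xp] * trans[xp][x] for xp in states)
                 for x in states}
    return alpha

def viterbi(evidence, states, init, trans, emit):
    """Return the most likely state sequence given the evidence."""
    m = {x: init[x] * emit[x][evidence[0]] for x in states}
    backpointers = []
    for e in evidence[1:]:
        bp, new_m = {}, {}
        for x in states:
            best_prev = max(states, key=lambda xp: m[xp] * trans[xp][x])
            bp[x] = best_prev
            new_m[x] = m[best_prev] * trans[best_prev][x] * emit[x][e]
        backpointers.append(bp)
        m = new_m
    # Trace the best path backwards from the best final state.
    path = [max(states, key=lambda x: m[x])]
    for bp in reversed(backpointers):
        path.append(bp[path[-1]])
    return list(reversed(path))
```

For example, reusing the dictionaries from the filtering sketch above, viterbi(["umbrella", "umbrella", "no umbrella"], STATES, {"rain": 0.5, "sun": 0.5}, TRANSITION, EMISSION) returns the most likely weather sequence for that evidence.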

Speech and Language
- Speech technologies
  - Automatic speech recognition (ASR)
  - Text-to-speech synthesis (TTS)
  - Dialog systems
- Language processing technologies
  - Machine translation
  - Information extraction
  - Web search, question answering
  - Text classification, spam filtering, etc.

Digitizing Speech

Speech in an Hour
- Speech input is an acoustic waveform
  [Figure: waveform of "s p ee ch l a b", with the l-to-a transition marked]
- Graphs from Simon Arnfield's web tutorial on speech, Sheffield:
  http://www.psyc.leeds.ac.uk/research/cogn/speech/tutorial/

Spectral Analysis
- Frequency gives pitch; amplitude gives volume
  - Sampling at ~8 kHz (phone), ~16 kHz (mic) (kHz = 1000 cycles/sec)
- Fourier transform of the wave, displayed as a spectrogram
  - Darkness indicates energy at each frequency
  [Figure: spectrogram of "s p ee ch l a b"]
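A rough sketch of how such a spectrogram is computed (my own illustration, not part of the lecture): slice the sampled waveform into short overlapping frames, window each frame, and take the magnitude of its Fourier transform.

```python
import numpy as np

def spectrogram(signal, sample_rate=16000, frame_ms=25, step_ms=10):
    """signal: 1-D numpy array of samples. Returns one row of frequency
    magnitudes per time slice; 'darkness' in the slide's picture
    corresponds to large magnitudes here."""
    frame_len = int(sample_rate * frame_ms / 1000)   # samples per frame
    step = int(sample_rate * step_ms / 1000)         # hop between frames
    window = np.hamming(frame_len)
    frames = [signal[i:i + frame_len] * window
              for i in range(0, len(signal) - frame_len + 1, step)]
    return np.abs(np.fft.rfft(np.array(frames), axis=1))
```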

Part of [ae] from "lab"
- Complex wave repeating nine times
- Plus a smaller wave that repeats 4x for every large cycle
- Large wave: frequency of 250 Hz (9 times in 0.036 seconds)
- Small wave: roughly 4 times this, or roughly 1000 Hz
[demo]

Acoustic Feature Sequence
- Time slices are translated into acoustic feature vectors (~39 real numbers per slice)
  ... e_12, e_13, e_14, e_15, e_16 ...
- These are the observations; now we need the hidden states X
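Written out, the arithmetic behind those two frequencies:

$$ f_{\text{large}} = \frac{9 \text{ cycles}}{0.036 \text{ s}} = 250 \text{ Hz}, \qquad f_{\text{small}} \approx 4 \times 250 \text{ Hz} = 1000 \text{ Hz} $$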

State Space
- P(E | X) encodes which acoustic vectors are appropriate for each phoneme (each kind of sound)
- P(X | X') encodes how sounds can be strung together
- We will have one state for each sound in each word
- From some state x, we can only:
  - Stay in the same state (e.g. speaking slowly)
  - Move to the next position in the word
  - At the end of the word, move to the start of the next word
- We build a little state graph for each word and chain them together to form our state space X (see the sketch below)

HMMs for Speech
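A toy sketch of that chained word graph. The phoneme lists, the uniform word-to-word transitions, and the probability values are all my own placeholders; the next slide replaces the uniform word-to-word arcs with bigram statistics.

```python
def build_state_graph(word_phonemes, p_stay=0.5):
    """word_phonemes: dict mapping each word to its list of sounds.
    Returns transition probabilities over (word, position) states."""
    transitions = {}
    words = list(word_phonemes)
    for word, phones in word_phonemes.items():
        for i in range(len(phones)):
            state = (word, i)
            arcs = {state: p_stay}                  # stay (e.g. speaking slowly)
            if i + 1 < len(phones):
                arcs[(word, i + 1)] = 1 - p_stay    # next position in the word
            else:
                # End of the word: move to the start of some next word
                # (uniform here; bigram probabilities in a real system).
                for nxt in words:
                    arcs[(nxt, 0)] = (1 - p_stay) / len(words)
            transitions[state] = arcs
    return transitions

# Hypothetical example with made-up phoneme lists:
graph = build_state_graph({"speech": ["s", "p", "ee", "ch"], "lab": ["l", "a", "b"]})
```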

Transitions with Bigrams
Training counts (figure from Huang et al., page 618):
    198015222    the first
    194623024    the same
    168504105    the following
    158562063    the world
     14112454    the door
  -----------------------
  23135851162    the *

Decoding
- While there are some practical issues, finding the words given the acoustics is an HMM inference problem
- We want to know which state sequence x_{1:T} is most likely given the evidence e_{1:T}:
    x*_{1:T} = arg max_{x_{1:T}} P(x_{1:T} | e_{1:T})
- From the sequence x, we can simply read off the words
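Assuming those counts are turned into relative frequencies (the usual recipe, though the slide does not spell it out), the first row gives, for example:

$$ P(\text{first} \mid \text{the}) = \frac{\text{count}(\text{the first})}{\text{count}(\text{the } *)} = \frac{198015222}{23135851162} \approx 0.0086 $$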

End of Part II!
- Now we're done with our unit on probabilistic reasoning
- Last part of class: machine learning

Machine Learning
- Up until now: how to reason in a model and how to make optimal decisions
- Machine learning: how to acquire a model on the basis of data / experience
  - Learning parameters (e.g. probabilities)
  - Learning structure (e.g. BN graphs)
  - Learning hidden concepts (e.g. clustering)

Parameter Estimation
Data: r g g r g g r g g r r g g g g
- Estimating the distribution of a random variable
- Elicitation: ask a human (why is this hard?)
- Empirically: use training data (learning!)
  - E.g.: for each outcome x, look at the empirical rate of that value:
      P_ML(x) = count(x) / total samples
    e.g. for the sample r r g: P_ML(r) = 2/3
  - This is the estimate that maximizes the likelihood of the data

Estimation: Smoothing
- Relative frequencies are the maximum likelihood estimates
- In Bayesian statistics, we think of the parameters as just another random variable, with its own distribution
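A small sketch of that relative-frequency estimate applied to the red/green data shown on the slide (the Counter-based implementation is mine):

```python
from collections import Counter

data = "r g g r g g r g g r r g g g g".split()
counts = Counter(data)           # 5 r's, 10 g's
total = len(data)                # 15 samples
p_ml = {x: c / total for x, c in counts.items()}
print(p_ml)                      # {'r': 0.333..., 'g': 0.666...}
```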

Estimation: Laplace Smoothing
- Laplace's estimate: pretend you saw every outcome once more than you actually did
    P_LAP(x) = (count(x) + 1) / (N + |X|)
  Example data: H H T
- Can derive this estimate with Dirichlet priors (see cs281a)

Estimation: Laplace Smoothing
- Laplace's estimate (extended): pretend you saw every outcome k extra times
    P_LAP,k(x) = (count(x) + k) / (N + k|X|)
  Example data: H H T
- What's Laplace with k = 0?
- k is the strength of the prior
- Laplace for conditionals: smooth each condition independently:
    P_LAP,k(x | y) = (count(x, y) + k) / (count(y) + k|X|)
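A short sketch (my own) matching those formulas, checked against the slide's coin data:

```python
from collections import Counter

def laplace_estimate(data, outcomes, k=1):
    """Laplace smoothing with strength k: pretend every outcome
    occurred k extra times, then take relative frequencies."""
    counts = Counter(data)
    n = len(data)
    return {x: (counts[x] + k) / (n + k * len(outcomes)) for x in outcomes}

# The slide's data H H T with k = 1 gives P(H) = 3/5, P(T) = 2/5;
# with k = 0 this falls back to the maximum-likelihood estimate.
print(laplace_estimate(["H", "H", "T"], ["H", "T"], k=1))
```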