Particle Filtering
• Sometimes |X| is too big to use exact inference
  - |X| may be too big to even store B(X), e.g. X is continuous
  - |X|² may be too big to do updates
• Solution: approximate inference
  - Track samples of X, not all values
  - Samples are called particles
  - Time per step is linear in the number of samples
  - But: the number needed may be large
  - In memory: a list of particles
• This is how robot localization works in practice
Forward Algorithm vs. Particle Filtering
• Forward algorithm
  - Elapse of time: B'(X_t) = Σ_{x_{t-1}} p(X_t | x_{t-1}) B(x_{t-1})
  - Observe: B(X_t) ∝ p(e_t | X_t) B'(X_t)
  - Renormalize: make B(x_t) sum to 1
• Particle filtering
  - Elapse of time: move each particle x → x' by sampling x' from p(X' | x)
  - Observe: weight each particle, w(x) = p(e_t | x)
  - Resample: draw N new particles in proportion to the weights
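A minimal sketch of one particle-filtering update in Python, assuming user-supplied transition_sample and emission_prob functions (hypothetical names, not from the slides):

import random

def particle_filter_step(particles, evidence, transition_sample, emission_prob):
    # Elapse time: move each particle x -> x' by sampling from p(x' | x).
    particles = [transition_sample(x) for x in particles]
    # Observe: weight each particle by the evidence likelihood, w(x) = p(e_t | x).
    weights = [emission_prob(evidence, x) for x in particles]
    # Resample: draw N new particles in proportion to the weights.
    return random.choices(particles, weights=weights, k=len(particles))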
Today
• Speech recognition: a massive HMM!
• Introduction to machine learning
Speech and Language
• Speech technologies
  - Automatic speech recognition (ASR)
  - Text-to-speech synthesis (TTS)
  - Dialog systems
• Language processing technologies
  - Machine translation
  - Information extraction
  - Web search, question answering
  - Text classification, spam filtering, etc.
Digitizing Speech
The Input
• Speech input is an acoustic waveform
  - Graphs from Simon Arnfield's web tutorial on speech, Sheffield: http://www.psyc.leeds.ac.uk/research/cogn/speech/tutorial/
The Input
• Frequency gives pitch; amplitude gives volume
  - Sampling at ~8 kHz (phone), ~16 kHz (microphone)
• The Fourier transform of the wave is displayed as a spectrogram
  - Darkness indicates energy at each frequency
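As a rough illustration of the Fourier-transform step, a short-time spectrogram can be sketched with NumPy; the frame and hop sizes below are illustrative assumptions for 16 kHz audio, not values from the slides:

import numpy as np

def spectrogram(wave, frame_len=400, hop=160):
    # wave: 1-D NumPy array of samples; ~25 ms frames every 10 ms at 16 kHz (assumed).
    frames = [wave[i:i + frame_len] * np.hamming(frame_len)
              for i in range(0, len(wave) - frame_len + 1, hop)]
    # Magnitude of each windowed frame's FFT: energy per frequency bin per time slice.
    return np.array([np.abs(np.fft.rfft(f)) for f in frames])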
Acoustic Feature Sequence
• Time slices are translated into acoustic feature vectors (~39 real numbers per slice)
• These are the observations; now we need the hidden states X
State Space
• p(E | X) encodes which acoustic vectors are appropriate for each phoneme (each kind of sound)
• p(X | X') encodes how sounds can be strung together
• We will have one state for each sound in each word
• From some state x, we can only:
  - Stay in the same state (e.g. speaking slowly)
  - Move to the next position in the word
  - At the end of the word, move to the start of the next word
• We build a little state graph for each word and chain them together to form our state space X
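The per-word state graph described above can be sketched as follows; the lexicon format and sound labels are made-up illustrations, and the word-to-word transitions (handled by the bigram model on a later slide) are omitted:

def build_word_states(lexicon):
    # lexicon: dict mapping word -> list of sounds, e.g. {'yes': ['y', 'eh', 's']}.
    states, transitions = [], []
    for word, sounds in lexicon.items():
        for i, sound in enumerate(sounds):
            state = (word, i, sound)
            states.append(state)
            transitions.append((state, state))  # stay in the same state (speaking slowly)
            if i + 1 < len(sounds):
                # move to the next position in the word
                transitions.append((state, (word, i + 1, sounds[i + 1])))
    return states, transitions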
HMMs for Speech
Transitions with Bigrams
Decoding
• While there are some practical issues, finding the words given the acoustics is an HMM inference problem
• We want to know which state sequence x_{1:T} is most likely given the evidence e_{1:T}:
  x*_{1:T} = arg max_{x_{1:T}} p(x_{1:T} | e_{1:T}) = arg max_{x_{1:T}} p(x_{1:T}, e_{1:T})
• From the sequence x, we can simply read off the words
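The arg max above is what the Viterbi algorithm computes; here is a minimal sketch, assuming dictionaries prior[x] = p(x_1), trans[x_prev][x] = p(x | x_prev), and emit[x][e] = p(e | x) (a table layout chosen for illustration):

def viterbi(evidence, states, prior, trans, emit):
    # best[x]: probability of the best path ending in state x at the current time.
    best = {x: prior[x] * emit[x][evidence[0]] for x in states}
    backpointers = []
    for e in evidence[1:]:
        prev = best
        backpointers.append({x: max(states, key=lambda xp: prev[xp] * trans[xp][x])
                             for x in states})
        best = {x: max(prev[xp] * trans[xp][x] for xp in states) * emit[x][e]
                for x in states}
    # Walk the back-pointers from the best final state to recover x*_{1:T}.
    path = [max(best, key=best.get)]
    for ptrs in reversed(backpointers):
        path.append(ptrs[path[-1]])
    return list(reversed(path))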
Machine Learning
• Up until now: how to reason in a model and how to make optimal decisions
• Machine learning: how to acquire a model on the basis of data / experience
  - Learning parameters (e.g. probabilities)
  - Learning structure (e.g. BN graphs)
  - Learning hidden concepts (e.g. clustering)
Parameter Estimation
• Estimating the distribution of a random variable
• Elicitation: ask a human
• Empirically: use training data (learning!)
  - E.g.: for each outcome x, look at the empirical rate of that value:
    p_ML(x) = count(x) / total samples
  - This is the estimate that maximizes the likelihood of the data:
    L(x, θ) = Π_i p(x_i | θ)
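A small sketch of the maximum-likelihood estimate above, computed directly from counts (the example sample at the bottom is made up for illustration):

from collections import Counter

def ml_estimate(samples):
    # p_ML(x) = count(x) / total samples
    counts = Counter(samples)
    return {x: c / len(samples) for x, c in counts.items()}

# e.g. ml_estimate(['r', 'r', 'b']) -> {'r': 2/3, 'b': 1/3}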
Estimation: Smoothing
• Relative frequencies are the maximum likelihood estimates (MLEs):
  θ_ML = arg max_θ p(X | θ) = arg max_θ Π_i p(x_i | θ)
  which gives p_ML(x) = count(x) / total samples
• In Bayesian statistics, we think of the parameters as just another random variable, with its own distribution p(θ):
  θ_MAP = arg max_θ p(θ | X) = arg max_θ p(X | θ) p(θ) / p(X) = arg max_θ p(X | θ) p(θ)
Estimation: Laplace Smoothing
• Laplace's estimate: pretend you saw every outcome once more than you actually did
  p_LAP(x) = (c(x) + 1) / Σ_x [c(x) + 1] = (c(x) + 1) / (N + |X|)
Estimation: Laplace Smoothing
• Laplace's estimate (extended): pretend you saw every outcome k extra times
  p_LAP,k(x) = (c(x) + k) / (N + k|X|)
  - What's Laplace with k = 0?
  - k is the strength of the prior
• Laplace for conditionals: smooth each condition independently:
  p_LAP,k(x | y) = (c(x, y) + k) / (c(y) + k|X|)
• (Slide example compares p_LAP,0(X), p_LAP,1(X), and p_LAP,100(X).)
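Both smoothed estimates above can be sketched as follows; the function names and data layout are my own:

from collections import Counter

def laplace_estimate(samples, domain, k=1):
    # p_LAP,k(x) = (c(x) + k) / (N + k|X|)
    counts = Counter(samples)
    return {x: (counts[x] + k) / (len(samples) + k * len(domain)) for x in domain}

def laplace_conditional(pairs, x_domain, k=1):
    # pairs: list of (x, y) samples; smooth each condition y independently:
    # p_LAP,k(x | y) = (c(x, y) + k) / (c(y) + k|X|)
    joint = Counter(pairs)
    y_counts = Counter(y for _, y in pairs)
    return {(x, y): (joint[(x, y)] + k) / (y_counts[y] + k * len(x_domain))
            for y in y_counts for x in x_domain}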
Example: Spam Filter
• Input: email
• Output: spam/ham
• Setup:
  - Get a large collection of example emails, each labeled "spam" or "ham"
  - Note: someone has to hand-label all this data!
  - Want to learn to predict labels of new, future emails
• Features: the attributes used to make the ham / spam decision
  - Words: FREE!
  - Text patterns: $dd, CAPS
  - Non-text: SenderInContacts
Example: Digit Recognition
• Input: images / pixel grids
• Output: a digit 0-9
• Setup:
  - Get a large collection of example images, each labeled with a digit
  - Note: someone has to hand-label all this data!
  - Want to learn to predict labels of new, future digit images
• Features: the attributes used to make the digit decision
  - Pixels: (6,8) = ON
  - Shape patterns: NumComponents, AspectRatio, NumLoops
A Digit Recognizer
• Input: pixel grids
• Output: a digit 0-9
Naive Bayes for Digits
• Simple version:
  - One feature F_{i,j} for each grid position <i,j>
  - Possible feature values are on / off, based on whether intensity is more or less than 0.5 in the underlying image
  - Each input maps to a feature vector, e.g. F_{0,0} = 0, F_{0,1} = 0, F_{0,2} = 1, F_{0,3} = 1, F_{0,4} = 0, ..., F_{15,15} = 0
  - Here: lots of features, each is binary valued
• Naive Bayes model:
  p(Y | F_{0,0}, ..., F_{15,15}) ∝ p(Y) Π_{i,j} p(F_{i,j} | Y)
• What do we need to learn?
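A tiny sketch of the feature map described above: one binary feature per pixel, on iff the intensity exceeds 0.5 (a grid of intensities in [0, 1] is assumed):

def pixel_features(image):
    # image: 2-D list (or array) of pixel intensities in [0, 1].
    return {(i, j): int(image[i][j] > 0.5)
            for i in range(len(image)) for j in range(len(image[0]))}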
General Naive Bayes
• A general naive Bayes model:
  p(Y, F_1, ..., F_n) = p(Y) Π_i p(F_i | Y)
  - The full joint has |Y| × |F|^n values
  - p(Y): |Y| parameters
  - The p(F_i | Y) tables: n × |F| × |Y| parameters
• We only specify how each feature depends on the class
• Total number of parameters is linear in n
Inference for Naive Bayes
• Goal: compute the posterior over causes
  - Step 1: get the joint probability of causes and evidence:
    p(Y, f_1, ..., f_n) = [ p(y_1, f_1, ..., f_n), ..., p(y_k, f_1, ..., f_n) ]
                        = [ p(y_1) Π_i p(f_i | y_1), ..., p(y_k) Π_i p(f_i | y_k) ]
  - Step 2: get the probability of the evidence, p(f_1, ..., f_n)
  - Step 3: renormalize to get p(Y | f_1, ..., f_n)
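The three steps above as a short Python sketch; the table layout (prior[y] = p(y), cond[y][i][f] = p(F_i = f | y)) is an assumption for illustration:

import math

def naive_bayes_posterior(features, prior, cond):
    # Step 1: joint probability of each cause y with the evidence f_1 ... f_n.
    joint = {y: prior[y] * math.prod(cond[y][i][f] for i, f in enumerate(features))
             for y in prior}
    # Step 2: the probability of the evidence is the sum of the joint over causes.
    z = sum(joint.values())
    # Step 3: renormalize to get the posterior p(Y | f_1 ... f_n).
    return {y: p / z for y, p in joint.items()}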
General Naive Bayes
• What do we need in order to use naive Bayes?
  - Inference (you know this part)
    - Start with a bunch of conditionals: p(Y) and the p(F_i | Y) tables
    - Use standard inference to compute p(Y | F_1, ..., F_n)
    - Nothing new here
  - Estimates of local conditional probability tables
    - p(Y), the prior over labels
    - p(F_i | Y) for each feature (evidence variable)
    - These probabilities are collectively called the parameters of the model and denoted by θ
    - Up until now, we assumed these appeared by magic, but they typically come from training data: we'll look at this now
Examples: CPTs

p(Y):
  Y:  1    2    3    4    5    6    7    8    9    0
  p:  0.1  0.1  0.1  0.1  0.1  0.1  0.1  0.1  0.1  0.1

p(F_{3,1} = on | Y):
  Y:  1     2     3     4     5     6     7     8     9     0
  p:  0.01  0.05  0.05  0.30  0.80  0.90  0.05  0.60  0.50  0.80

p(F_{5,5} = on | Y):
  Y:  1     2     3     4     5     6     7     8     9     0
  p:  0.05  0.01  0.90  0.80  0.90  0.90  0.25  0.85  0.60  0.80
Important Concepts
• Data: labeled instances, e.g. emails marked spam/ham
  - Training set
  - Held-out set
  - Test set
• Features: attribute-value pairs which characterize each x
• Experimentation cycle
  - Learn parameters (e.g. model probabilities) on the training set
  - (Tune hyperparameters on the held-out set)
  - Compute accuracy on the test set
  - Very important: never peek at the test set!
• Evaluation
  - Accuracy: fraction of instances predicted correctly
• Overfitting and generalization
  - Want a classifier which does well on test data
  - Overfitting: fitting the training data very closely, but not generalizing well
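A minimal sketch of the data split and accuracy evaluation described above; the split fractions are illustrative, not prescribed by the slides:

import random

def split_data(data, train_frac=0.8, heldout_frac=0.1):
    data = data[:]
    random.shuffle(data)
    n_train = int(train_frac * len(data))
    n_heldout = int(heldout_frac * len(data))
    # training set, held-out set (for tuning hyperparameters), test set (never peek!)
    return (data[:n_train],
            data[n_train:n_train + n_heldout],
            data[n_train + n_heldout:])

def accuracy(predict, labeled_examples):
    # Fraction of instances whose predicted label matches the true label.
    return sum(predict(x) == y for x, y in labeled_examples) / len(labeled_examples)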